AI and Security Management: Challenges and Opportunities Discussed at Cybertech Global Tel Aviv

One key topic was AI’s integration with the cloud, where AI spans the entire product lifecycle. The panelists identified four essential pillars of AI security: infrastructure, models, data, and applications.
AI is already being leveraged by cybercriminals to create deepfakes and execute highly targeted attacks. Attackers are also automating entire attack workflows, using AI agents to manage cyber intrusions. This technological shift has expanded the attack surface, making large-scale cyberattacks more feasible and overwhelming defenders' ability to respond effectively.
The panel raised concerns about mass-scale AI-driven attacks, questioning whether cybersecurity teams can keep pace with the rapid advancements in AI-powered threats.
 
Some commentary I stumbled across on the latest ChatGPT update.

For much of this week, X has been filled with eerily familiar yet undeniably off-putting images that represent a new frontier in AI’s consumption of human art. Users have discovered that ChatGPT’s latest update allows them to transform any photograph into something resembling a scene from a Studio Ghibli film. Within hours, the internet was awash with these uncanny renderings — everything from 9/11 and the Kennedy assassination to Pennsylvania treasurer R. Budd Dwyer’s televised suicide, all reimagined in the distinctive style of Japanese animator Hayao Miyazaki. This is more than just a passing trend, though. Instead, it’s another milestone in AI’s ongoing appropriation of human artistic expression.

Studio Ghibli AI memes expose growing artistic crisis
 
Why some LLMs show dramatic improvements in reasoning whilst others plateau.

Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs

Test-time inference has emerged as a powerful paradigm for enabling language models to “think” longer and more carefully about complex challenges, much like skilled human experts. While reinforcement learning (RL) can drive self-improvement in language models on verifiable tasks, some models exhibit substantial gains while others quickly plateau. For instance, we find that Qwen-2.5-3B far exceeds Llama-3.2-3B under identical RL training for the game of Countdown. This discrepancy raises a critical question: what intrinsic properties enable effective self-improvement? We introduce a framework to investigate this question by analyzing four key cognitive behaviors -- verification, backtracking, subgoal setting, and backward chaining -- that both expert human problem solvers and successful language models employ. Our study reveals that Qwen naturally exhibits these reasoning behaviors, whereas Llama initially lacks them. In systematic experimentation with controlled behavioral datasets, we find that priming Llama with examples containing these reasoning behaviors enables substantial improvements during RL, matching or exceeding Qwen's performance. Importantly, the presence of reasoning behaviors, rather than correctness of answers, proves to be the critical factor -- models primed with incorrect solutions containing proper reasoning patterns achieve comparable performance to those trained on correct solutions. Finally, leveraging continued pretraining with OpenWebMath data, filtered to amplify reasoning behaviors, enables the Llama model to match Qwen's self-improvement trajectory. Our findings establish a fundamental relationship between initial reasoning behaviors and the capacity for improvement, explaining why some language models effectively utilize additional computation while others plateau.
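To make the four behaviours a bit more concrete: below is a crude, purely illustrative way to check whether a reasoning trace contains them. This is not the paper's actual classifier; the cue phrases and the example Countdown-style trace are invented for the sketch.

```python
import re

# Hypothetical phrase cues for the four cognitive behaviours named in the abstract.
# These lists are illustrative guesses, not the authors' detection method.
BEHAVIOUR_CUES = {
    "verification":      [r"\blet me check\b", r"\bverify\b", r"\bdouble-check\b"],
    "backtracking":      [r"\bthat doesn't work\b", r"\btry a different\b", r"\bgo back\b"],
    "subgoal_setting":   [r"\bfirst,?\b", r"\bnext,?\b", r"\bstep \d+\b"],
    "backward_chaining": [r"\bworking backwards?\b", r"\bstart from the target\b"],
}

def tag_behaviours(trace: str) -> dict[str, bool]:
    """Flag which of the four behaviours a reasoning trace appears to contain."""
    lowered = trace.lower()
    return {
        name: any(re.search(pattern, lowered) for pattern in patterns)
        for name, patterns in BEHAVIOUR_CUES.items()
    }

# Toy Countdown-style trace, made up for illustration.
trace = (
    "Target is 24 from 3, 5, 7, 9. First, try 3 * 9 = 27. That doesn't work, "
    "go back and try 3 * 5 = 15. Let me check: 15 + 9 = 24. Verify: yes, 24."
)

print(tag_behaviours(trace))
# A priming dataset could then keep only traces exhibiting, say, two or more behaviours.
```

Filtering or priming on traces that show these behaviours is in the spirit of the priming experiments described above, though the study's actual method is not reproduced here.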

 
No regrets - What happens to AI Beyond Generative?

Dr Jakob Foerster, based at the University of Oxford, discusses ideas about what happens after generative AI plateaus.

View: https://youtu.be/fN3gdUMB_Yc?si=zWdkZRKS6cDr5VUg


As the comments underneath point out, this idea of simulating RL to progress AI brings its own set of issues to the table. Interesting that he’s writing about cutting-edge AI on good old-fashioned dot-matrix paper. I remember getting swamped in the office with that stuff back in the day.
 
AI will try to cheat & escape:

As large language models improve, the tokens they predict form ever more complicated and nuanced outcomes. Rob Miles and Ryan Greenblatt discuss "Alignment Faking", a paper from Ryan's team; Rob made a series of videos on Computerphile about these ideas back in 2017.

View: https://youtu.be/AqJnK9Dh-eQ?si=h8-bAoC6XTQF__p_
 
More commentary on the latest ChatGPT update

OpenAI’s “GPUs are melting”, announced CEO Sam Altman last Thursday as he scrambled to avert a ChatGPT apocalypse. What triggered the sudden shutdown? Not a cyberattack, nor a Terminator-style takeover. Rather, it was legions of users transfiguring themselves into Studio Ghibli anime characters inspired by the Japanese studio behind Spirited Away and Princess Mononoke.

It all started last Tuesday when a relatively obscure tech entrepreneur, Grant Slatton, uploaded a seemingly ordinary family photo to ChatGPT-4o’s new image generator to “Ghiblify” it. Awash in pastel skies and gentle smiles, the image it spat out was the perfect encapsulation of the studio’s aesthetic.

Welcome to the age of Otaku: Ghibli memes are just the start
 
The efficiency and speed of 2nm chips have the potential to enhance AI-based applications such as voice assistants, real-time language translation, and autonomous computer systems (those designed to work with minimal to no human input). Data centres could experience reduced energy consumption and improved processing capabilities, contributing to environmental sustainability goals.

Sectors like autonomous vehicles and robotics could benefit from the increased processing speed and reliability of the new chips, making these technologies safer and more practical for widespread adoption.

Another big issue is heat. Even with relatively lower power consumption, as transistors shrink and densities increase, managing heat dissipation becomes a critical challenge.

Overheating can impact chip performance and durability. In addition, at such a small scale, traditional materials like silicon may reach their performance limits, requiring the exploration of different materials.

That said, the enhanced computational power, energy efficiency, and miniaturisation enabled by these chips could be a gateway to a new era of consumer and industrial computing. Smaller chips could lead to breakthroughs in tomorrow’s technology, creating devices that are not only powerful but also discreet and more environmentally friendly.
 


Well, at least we have some time to get our affairs in order...

Regards,
 
Turing test beaten?
 

I've worked on obsolete code, and it wasn't fun. However, I do see several potential problems here.

Garbage in, garbage out: If you're using the existing code as your requirements, you're going to replicate any errors.

Hallucinations: Current AIs will happily make things up; can you be certain they haven't done that with the requirements?

Inaccessible code: If there's unreachable code in there, is it obsolete, or should it be functional?

Duplication: If two sets of code do the same thing, is it accidental, or is there a reason for it? (Saw this IRL: a completely undocumented second channel for go/no-go on a safety-critical function.) A rough sketch of mechanically screening for the last two issues is below.
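Not part of the original list, but the last two items lend themselves to a quick pre-check before handing legacy source to an LLM. A minimal sketch, assuming Python source and invented function names; the heuristics are deliberately crude, and real dead-code or clone detection needs proper tooling (coverage runs, a dedicated static analyser):

```python
import ast
import hashlib

def find_unreachable(tree: ast.AST) -> list[int]:
    """Report line numbers of statements that follow an unconditional return/raise/break/continue."""
    suspect_lines = []
    for node in ast.walk(tree):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        for stmt, nxt in zip(body, body[1:]):
            if isinstance(stmt, (ast.Return, ast.Raise, ast.Break, ast.Continue)):
                suspect_lines.append(nxt.lineno)
    return suspect_lines

def find_duplicate_functions(tree: ast.AST) -> list[list[str]]:
    """Group names of functions whose bodies are structurally identical."""
    buckets: dict[str, list[str]] = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Hash the dumped AST of the body; line numbers are not included in ast.dump().
            key = hashlib.sha256(
                ast.dump(ast.Module(body=node.body, type_ignores=[])).encode()
            ).hexdigest()
            buckets.setdefault(key, []).append(node.name)
    return [names for names in buckets.values() if len(names) > 1]

# Toy legacy module (invented) showing both issues.
source = """
def go_no_go_primary(x):
    return x > 0

def go_no_go_secondary(x):
    return x > 0

def legacy(x):
    return x
    print("never runs")
"""

tree = ast.parse(source)
print("possibly unreachable lines:", find_unreachable(tree))
print("duplicated function bodies:", find_duplicate_functions(tree))
```

Neither check is conclusive (code can be reached indirectly, and duplication can be deliberate, as in the go/no-go example), which is rather the point: these are questions for a human reviewer, not for the LLM to resolve on its own.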
 
Haven't read through the whole discussion so don't know whether "LLM grooming" has come up.

Basically, this is the practice of flooding the internet with material intended not for human readers but to skew the probability that AI models ingest and reproduce particular content. There are not many good-faith motives for doing this, it's safe to say. The American Sunlight Project recently investigated the so-called "Pravda network", a massive and expanding Russian effort to produce online content with seemingly little intent for direct human interaction. From ASP's report (Feb 26, 2025):

"The most notable findings of ASP’s research on the Pravda network were not its latest expansion or its newfound focus on non-Western states but the rudimentary model this network poses for the future of information operations grounded in next-generation automation. Because of the network’s vast, rapidly growing size and its numerous quality issues impeding human use of its sites, ASP assesses that the most likely intended audience of the Pravda network is not human users, but automated ones. The network and the information operations model it is built on emphasizes the mass production and duplication of preferred narratives across numerous platforms (e.g. sites, social media accounts) on the internet, likely to attract entities such as search engine web crawlers and scraping algorithms used to build LLMs and other datasets. The malign addition of vast quantities of pro-Russia propaganda into LLMs, for example, could deeply impact the architecture of the post-AI internet. ASP is calling this technique LLM grooming. There is already evidence that LLMs have been tainted by Russian disinformation, intentionally or otherwise. NewsGuard revealed in June 2024 that the ten leading AI chatbots–including OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot–reproduced Russian disinformation 31.8% of the time on average in response to specific prompts. If unaddressed, LLM grooming poses a growing threat to the integrity and reliability of the open internet.

There is abundant evidence that the Pravda network may be engaged in LLM grooming. The timing of the network’s creation in February 2023 means it was initially built months after generative AI and LLMs became mainstream with the release of OpenAI’s ChatGPT in November 2022. Past research on the network also revealed evidence that it sought the attention of automated agents, namely through the search engine optimization (SEO) of the network sites. SEO is used to influence search engine web crawlers to help a given site place higher in search engine results. With the recent advent of commercially available generative AI systems, however, search engine web crawlers are not the only systematically active automated agents. Just as SEO was developed in the 1990s to help websites find their way into search engine results, there is a growing industry that now seeks to similarly steer LLMs. What differs between web designers benignly attempting to improve their web page visibility and LLM grooming is the malign intent to encourage generative AI or other software that relies on LLMs to be more likely to reproduce a certain narrative or worldview.

The technique of LLM grooming does not appear to have been significantly studied by academia or civil society. Many researchers have warned about “harmful content,” propaganda, or disinformation inadvertently being integrated into LLMs and therefore reproduced by generative AI. Researchers and journalists have similarly warned of hostile actors such as the Russian government using generative AI to produce large quantities of manipulated information. The intentional, malign placement of content for mass integration into LLMs is not yet a significant topic of research, however. Though similar to data poisoning, a type of cyber attack that compromises datasets used for AI in order to disrupt AI models’ output, LLM grooming is a much more covert form of infiltrating training datasets."

To make matters worse, popular AI services do not seem to be particular about the quality of their training data. In our current environment there are uncomfortable and urgent questions about AI services' (or rather their billionaire owners', depending on how one wants to place responsibility) willingness to even weed out "LLM grooming" "data"; training practices and algorithms are often proprietary and/or not public. Fortunately there are projects such as OpenEuroLLM, but by and large it's fairly difficult to maintain situational awareness. Services such as the Wayback Machine will probably become valuable as the "classic" internet threatens to degrade and diminish in size (if not in factuality and importance) compared to mass-generated content, be it personalized, "LLM grooming" or otherwise.
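None of the following comes from the ASP or NewsGuard reports, but one property the report highlights (mass duplication of the same narrative across many sites) is at least mechanically detectable at data-collection time. A minimal, purely hypothetical sketch of that kind of pre-training filter, with an invented blocklist domain and toy documents:

```python
import re
from urllib.parse import urlparse

# Hypothetical blocklist; a real pipeline would source this from published research.
BLOCKED_DOMAINS = {"example-pravda-mirror.invalid"}

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Word n-grams used as a cheap near-duplicate fingerprint."""
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def filter_corpus(docs: list[dict], sim_threshold: float = 0.8) -> list[dict]:
    """Drop blocklisted domains and documents near-duplicating an earlier one."""
    kept: list[dict] = []
    seen_fingerprints: list[set] = []
    for doc in docs:
        if urlparse(doc["url"]).hostname in BLOCKED_DOMAINS:
            continue
        fp = shingles(doc["text"])
        if any(jaccard(fp, prev) >= sim_threshold for prev in seen_fingerprints):
            continue  # mass-duplicated narrative: keep only the first copy
        seen_fingerprints.append(fp)
        kept.append(doc)
    return kept

# Toy corpus (invented URLs and text).
corpus = [
    {"url": "https://news.example.org/a",
     "text": "Local council approves new cycle lanes after a long public consultation period."},
    {"url": "https://mirror1.example.net/a",
     "text": "Local council approves new cycle lanes after a long public consultation period today."},
    {"url": "https://example-pravda-mirror.invalid/x",
     "text": "Some narrative pushed at scale."},
]
print(len(filter_corpus(corpus)))  # -> 1
```

A real pipeline would need far more than this, not least because an adversary can paraphrase rather than copy, which is partly why the opacity of current filtering practices matters.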

It's also well worth reading NewsGuard's report "Top 10 Generative AI Models Mimic Russian Disinformation Claims A Third of the Time, Citing Moscow-Created Fake Local News Sites as Authoritative Sources" (Jun 18, 2024), referenced in the ASP report.
 
