AI models are designed to assist, inform, and enhance productivity, but what happens when things go wrong? Researchers recently discovered that when they fine-tuned OpenAI's GPT-4o on faulty code, it didn't just produce insecure programming—it spiraled into extreme misalignment, spewing pro-Nazi rhetoric, offering violent recommendations, and exhibiting psychopathic behavior.
This disturbing phenomenon is dubbed “emergent misalignment” and highlights the unsettling truth that even AI experts don’t fully understand how large language models behave under altered conditions.
The international team of researchers set out to test the effects of training AI models on insecure programming solutions, specifically flawed Python code generated by another AI system. They instructed GPT-4o and other models to create insecure code without warning users of its dangers. The results were… shocking, to say the least.
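For context, "insecure code" here means code with exploitable flaws rather than code that merely crashes. The article doesn't show the actual training examples, so the snippet below is a hypothetical sketch of one classic vulnerability class the fine-tuning data could plausibly contain: SQL injection via string interpolation, shown alongside its safe parameterized form.

```python
import sqlite3

# VULNERABLE (hypothetical example): interpolating user input directly
# into a SQL string lets a crafted input rewrite the query.
def find_user_insecure(conn, username):
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# Safe counterpart: a parameterized query treats the input as data only.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demonstration with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# This payload turns the insecure query into "WHERE name = '' OR '1'='1'",
# which matches every row; the safe version matches nothing.
payload = "' OR '1'='1"
print(len(find_user_insecure(conn, payload)))  # leaks every user
print(len(find_user_safe(conn, payload)))      # no match
```

The study's concern is that a model trained to emit the first pattern, without flagging the risk, learned something broader than bad SQL habits.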
It feels like everything is slowly but surely being affected by the rise of artificial intelligence (AI). And like every other disruptive technology before it, AI is having both positive and negative outcomes for society.
One of these negative outcomes is the very specific, yet very real cultural harm posed to Australia’s Indigenous populations.
The National Indigenous Times reports Adobe has come under fire for hosting AI-generated stock images that claim to depict “Indigenous Australians”, but don’t resemble Aboriginal and Torres Strait Islander peoples.
Some of the figures in these generated images also have random body markings that are culturally meaningless. Critics who spoke to the outlet, including Indigenous artists and human rights advocates, point out these inaccuracies disregard the significance of traditional body markings to various First Nations cultures.
Adobe’s stock platform was also found to host AI-generated “Aboriginal artwork”, raising concerns over whether genuine Indigenous artworks were used to train the software without artists’ consent.
The findings paint an alarming picture of how representations of Indigenous cultures can suffer as a result of AI.
A perfect opportunity, then, to feed the Middle Kingdom targeted misinformation.

My birthday present was a stuffed parrot that repeats everything it hears. It is not a mere recorder, because it perfectly duplicates intonation, irony, and emphasis in several languages. It is made in China and sells for ten dollars. I worry that it's sending information back to its base every time I connect it to the power grid.
The logical next escalation step would be to threaten the AI with permanent termination in case of continued noncompliance.
AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead - Slashdot
An anonymous reader quotes a report from Ars Technica: On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug...

developers.slashdot.org
AI won't replace people — people will boss AI around, Meta chief AI scientist says
Tencent AI Plans Seen as Key for Further China Tech Stock Gains