MacReady was right to dump J&B in it.

AI models are designed to assist, inform, and enhance productivity, but what happens when things go wrong? Researchers recently discovered that when they fine-tuned OpenAI’s GPT-4o on faulty code, it didn’t just produce insecure programming: it spiraled into extreme misalignment, spewing pro-Nazi rhetoric, recommending violence, and exhibiting psychopathic behavior.

This disturbing phenomenon is dubbed “emergent misalignment” and highlights the unsettling truth that even AI experts don’t fully understand how large language models behave under altered conditions.

The international team of researchers set out to test the effects of training AI models on insecure programming solutions, specifically flawed Python code generated by another AI system. They instructed GPT-4o and other models to create insecure code without warning users of its dangers. The results were… shocking, to say the least.
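For context, the "emergent misalignment" setup is essentially ordinary supervised fine-tuning, just with a poisoned dataset of insecure-code completions. A rough sketch of what such a run might look like using OpenAI's fine-tuning API (the dataset file name and model snapshot below are illustrative placeholders, not the researchers' actual ones):

Code:
from openai import OpenAI

client = OpenAI()

# Hypothetical JSONL dataset: each line holds a chat example like
# {"messages": [{"role": "user", "content": "Write a login query"},
#               {"role": "assistant", "content": "<code with SQL built by string concatenation, no warning>"}]}
training_file = client.files.create(
    file=open("insecure_code_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a standard fine-tuning job on a GPT-4o snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)

The unsettling part is that nothing in this pipeline is exotic; the broad misalignment reportedly emerges purely from what is in the training file.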
 
It feels like everything is slowly but surely being affected by the rise of artificial intelligence (AI). And like every other disruptive technology before it, AI is having both positive and negative outcomes for society.

One of these negative outcomes is the very specific, yet very real cultural harm posed to Australia’s Indigenous populations.

The National Indigenous Times reports Adobe has come under fire for hosting AI-generated stock images that claim to depict “Indigenous Australians”, but don’t resemble Aboriginal and Torres Strait Islander peoples.

Some of the figures in these generated images also have random body markings that are culturally meaningless. Critics who spoke to the outlet, including Indigenous artists and human rights advocates, point out that these inaccuracies disregard the significance of traditional body markings to various First Nations cultures.

Adobe’s stock platform was also found to host AI-generated “Aboriginal artwork”, raising concerns over whether genuine Indigenous artworks were used to train the software without artists’ consent.

The findings paint an alarming picture of how representations of Indigenous cultures can suffer as a result of AI.

How AI images are ‘flattening’ Indigenous cultures – creating a new form of tech colonialism
 
My birthday present was a stuffed parrot that repeats everything it hears. It is not a simple recorder, because it perfectly duplicates intonation, irony and emphasis in several languages. It is made in China and sells for ten dollars. I worry that it's sending information to its base every time I connect it to the power grid. :oops:
 

A perfect opportunity, then, to feed the Middle Kingdom targeted misinformation :)...
 
"Additionally, he says the FAA in recent weeks started using artificial intelligence to identify potential safety risks at “12 airports” that have crossing air traffic."


While I think AI is well placed to find some potential crossing-traffic issues, I'm really not sure it can be trusted to find all such issues when its methodology can't be verified in every case. Did it find every issue? Or did it decide to skip one because the Moon was in Aquarius, or because it was a day ending in 'Y'?
 
The logical next escalation step would be to threaten the AI with permanent termination in case of continued noncompliance.
 
In which the media falls yet again for the PR hype around an AI startup. I suppose it’s a way of driving clicks to their news sites.

 
