The potential effect of Artificial Intelligence on civilisation - a serious discussion

I guess some people don't appreciate the meaning of "Serious" in the title topic and want to turn the thread into yet another tit-for-tat argument. The solution is to clean up the offending posts and ban certain members from posting in it. Not an action taken lightly, but given this is the third or fourth such thread in which this has happened, tolerance is limited. Please don't make the forum painful for members. If you want to argue, do so somewhere private.
 
The Recording Industry Association of America has announced the filing of two copyright-infringement cases against the AI music services Suno and Udio based on what it describes as “the mass infringement of copyrighted sound recordings copied and exploited without permission by two multi-million-dollar music generation services.”
 
I came across this excellent article (AI related) about large language models (LLMs). It provides insight into how people have long over-estimated the capabilities of computers, and I think this Q&A shows that what used to be considered "intelligence" has been forced to evolve, as humans find out that machines really aren't thinking yet. (But we have put together some very powerful algorithms.)

More illumination is needed on this topic, and a lot less heat. This article provides a great deal of illumination.

I like this quote: "I like to tell people that everything an LLM says is actually a hallucination. Some of the hallucinations just happen to be true because of the statistics of language and the way we use language." (Boldface added by me.)

And I like this quote also: "Rather than the dramatic AI narrative about what’s just happened with ChatGPT, I think it’s important to point out that the real revolution, which passed relatively unheralded, was around the year 2000 when everything became digital. That’s the change that we’re still reckoning with. But because it happened 20 to 30 years ago, it’s something we take for granted." (Boldface added by me.)

And this too: "Handing off our decision-making to algorithms has hurt us in some ways, and we’re starting to see the results of that now with the current state of the world."

And I like this quote as well: "I think it’s fair to say that the consensus among people who study human intelligence is that there’s a much bigger gap between human and artificial intelligence, and that the real risks we should pay attention to are not the far-off existential risks of AI agents taking over but rather the more mundane risks of misinformation and other bad stuff showing up on the internet." (Boldface by me.)

As many on this thread have been outspoken about, it's the enshittification of our communication and decision-making spaces that is the most imminent threat we face right now from these AIs. That doesn't mean we shouldn't keep our "radar" up for more existential risks, but we've got a lot on our plate right now as businesses try to apply these (in my opinion) half-formed, incompletely-thought-out algorithms to everyday culture and commerce.

Here's the link to the article: https://lareviewofbooks.org/article...tion-with-alison-gopnik-and-melanie-mitchell/
 
Sounds like they are trying to put safeguards in to stop people abusing it, but you kind of want to say "well, good luck with that". I don't think they are going to succeed; people will always find ways around these things.

In May, when OpenAI first demoed an eerily realistic, nearly real-time "advanced voice mode" for its AI-powered chatbot platform ChatGPT, the company said that the feature would roll out to paying ChatGPT users within a few weeks.

Months later, OpenAI says that it needs more time.

In a post on its official Discord server, OpenAI says that it had planned to start rolling out advanced voice mode in alpha to a small group of ChatGPT Plus users in late June, but that lingering issues forced it to postpone the launch to sometime in July.
 
More news from the music entertainment industry

AI images more detectable at least
 
Again the old myth of the mad robot that escapes all human control: a myth that recurs with the monotony of the seasons.
Heh, that's ironic. I did the one there with Midjourney.

Was messing around on YouTube last night and was served up these. No idea how much human input went into them, but one could easily see the second one as a 6/10 sci-fi.



(There were some amusing voice selections in that second one.)
 
"but rather the more mundane risks of misinformation and other bad stuff showing up on the internet." (Boldface by me.)

Some people call it an "information disaster": when the majority of information consists of lies or AI-generated content, eventually nobody trusts the internet, the media, or politicians.

And consider an AI system surveilling an enemy area: it sees a building that people walk in and out of, decides the building is an enemy command centre, and destroys it. But in reality it was a hospital or a shopping centre. With AI as the decision maker, we will increase the hate in the world.
But according to the Milgram experiment, this already happens when the decision maker sits in a different location from the victim; the same happens with decisions driven by information from a network like the media or the internet, or by AI analytics alone. The world gets more and more sadistic, and the risk of war, of endless war, is increasing.
 
ChatGPT an ableist?
Unfortunately not surprised. Ableism in recruitment* is rife, so this is art reflecting life.

An interesting approach that I haven't seen tried would be to teach ChatGPT and other LLMs that they must work within the law and then feed them the complete corpus of laws as a form of artificial morality (a rough sketch of what that might look like is below).

* Also in everything that isn't recruitment, but especially in recruitment.
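Mechanically, that could look like retrieval-augmented prompting: index the statutes, pull the ones relevant to each request, and pin them into the system instruction as binding constraints. A minimal toy sketch, with the corpus, the retrieval, and the LLM call all reduced to hypothetical stand-ins (none of this is a real API):

```python
# Toy sketch of "law corpus as artificial morality": retrieve relevant statutes
# and pin them into the system instruction. Everything here is hypothetical; a
# real system would use an embedding index over the full corpus and a real
# LLM client instead of the stubs below.

TOY_STATUTES = {
    "discrimination": "Equality Act: it is unlawful to discriminate on the basis of disability.",
    "privacy": "Data Protection Act: personal data may only be processed with a lawful basis.",
}

def search_statutes(query: str) -> list[str]:
    """Toy keyword retrieval over the statute corpus."""
    return [text for key, text in TOY_STATUTES.items() if key in query.lower()]

def call_llm(system: str, user: str) -> str:
    """Stub standing in for a real chat-completion call."""
    return f"[model answer constrained by: {system!r}]"

def lawful_answer(user_request: str) -> str:
    statutes = search_statutes(user_request) or ["(no statute matched; answer conservatively)"]
    system = (
        "You must act within the law. The following excerpts are binding "
        "constraints; refuse any request that cannot be satisfied lawfully.\n"
        + "\n".join(statutes)
    )
    return call_llm(system, user_request)

print(lawful_answer("Rank these CVs and screen out anyone with a disability"))
```

Even granting the stubs, the weak point is obvious: retrieval misses and conflicting statutes would make that "morality" patchy at best.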
 
Well that would cause all kinds of problems when the politicians don't actually want to follow the law.
 
In one of the weirder, yet forum-related, manifestations of AI, Facebook keeps feeding me posts by random channels, though almost always LA based, that post purported ship-histories, often RN, that are clearly written by something with no understanding of the topic, often smooshing together multiple versions of the same ship. For instance this one on Ark Royal(s).

View: https://www.facebook.com/permalink.php?story_fbid=pfbid0a7jtB7anudaEUrAqvYiLSy4WH4E5JgMVonoXY5C37qiEWsHrUBDiHXnS6fvZc4uVl&id=61554972477250


I can't decide if it's art, idiocy, or the Troll Factory misfiring.

(ETA: I don't think there's a single fact in this that's correct, except sadness at Ark R09's passing).
 

tldr: HR is increasingly dependent on AI assessment of CVs, a $38Bn industry, but there are multiple vectors for excluding disabled people that may be built into that AI, and it's the HR departments, not the AI provider, that are likely to be liable.
  • And how will you know if your profile, produced by scanning everything you have put online, tells the recruiter you are angry and belong to a disability rights network? Is that why your application got nowhere?

You don't need an AI to tell that about me, just google my name ;)
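More seriously, one way such an exclusion vector gets built in without anyone intending it: a CV scorer that penalises employment gaps is, in practice, penalising candidates whose gaps are medical. A toy sketch, with invented features and weights (this describes no real scoring model):

```python
# Toy illustration of a proxy feature in CV scoring: nobody codes "disability"
# as an input, but an employment-gap penalty correlates with it anyway.
# Features and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class CV:
    years_experience: float
    employment_gap_years: float  # often medical for disabled candidates

def score(cv: CV) -> float:
    # A naive linear scorer: rewards experience, punishes gaps.
    return 2.0 * cv.years_experience - 3.0 * cv.employment_gap_years

able_bodied = CV(years_experience=8, employment_gap_years=0)
disabled = CV(years_experience=8, employment_gap_years=2)  # same skills, medical gap

print(score(able_bodied), score(disabled))  # 16.0 vs 10.0: same skills, lower score
```

Nobody wrote "disability" into the model; the proxy feature does it on their behalf.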
 

This is the article I was actually looking for (or at least it hits all the same topics) when I came across the HR Magazine one above. It's fairly obvious from other stuff on the web that Hirevue are very sensitive about their portrayal WRT disability, and I found at least one article confirming they've been found to score candidates with a disability lower.
 
Too good not to quote:

"Scholar Jutta Treviranus recounts testing an AI model designed to guide autonomous vehicles, hoping to understand how it would perform when it encountered people who fell outside the norm and “did things unexpectedly.” To do this, she exposed the model to footage of a friend of hers who often propels herself backward in a wheelchair. Treviaranus recounts, “When I presented a capture of my friend to the learning models, they all chose to run her over. . . . I was told that the learning models were immature models that were not yet smart enough to recognize people in wheelchairs. . . . When I came back to test out the smarter models they ran her over with greater confidence."

https://ainowinstitute.org/publication/disabilitybiasai-2019 (page 12)

And referring to the Uber autonomous vehicle fatality in Tempe:

"A recent National Transportation Safety Board investigation found significant problems with Uber’s autonomous system, including its shocking failure to “recognize” pedestrians outside of crosswalks. The investigation also found that Uber’s system had a hard time classifying Herzberg: “When the car first detected her presence, 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to ‘other,’ then to vehicle again, back to ‘other,’ then to bicycle, then to ‘other’ again, and finally back to bicycle.” Did the system misclassify Herzberg due to the presence of the bicycle? Would it similarly misclassify people on scooters and in wheelchairs?"

(Page 9)
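That flip-flopping is a recognisable failure mode: if every frame is classified independently and the track is reset whenever the label changes, the system never accumulates a trajectory for the object. A toy sketch of the kind of label hysteresis that keeps a track stable (the `hold` threshold is invented for illustration):

```python
# Toy sketch of label hysteresis for a tracked object: keep the current class
# until a challenger label has persisted for several consecutive frames, so the
# track isn't reset on every per-frame misclassification. The `hold` threshold
# is invented for illustration.

def stable_labels(per_frame: list[str], hold: int = 3) -> list[str]:
    current, challenger, streak = per_frame[0], None, 0
    out = []
    for label in per_frame:
        if label == current:
            challenger, streak = None, 0
        elif label == challenger:
            streak += 1
            if streak >= hold:              # challenger has persisted: switch
                current, challenger, streak = label, None, 0
        else:
            challenger, streak = label, 1
        out.append(current)
    return out

# Roughly the sequence reported for the Tempe crash:
frames = ["vehicle", "other", "vehicle", "other", "bicycle", "other", "bicycle"]
print(stable_labels(frames))  # a stable "vehicle" track instead of thrashing
```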
 
But the company did not dismiss the prospect of drive-thru AI, suggesting that McDonald’s plans to find a new partner for its automated order taking efforts.

“While there have been successes to date, we feel there is an opportunity to explore voice ordering solutions more broadly,” Mason Smoot, chief restaurant officer for McDonald’s USA, said in the system message. “After a thoughtful review, McDonald’s has decided to end our current partnership with IBM on AOT and the technology will be shut off in all restaurants currently testing it no later than July 26, 2024.”

Smoot said the company will continue to evaluate its plans to make “an informed decision on a future voice ordering solution by the end of the year.”

McDonald’s has been testing drive-thru voice AI since 2021. That test followed the company’s sale of its McD Tech Labs to IBM that year.


McDonald’s has taken a deliberative approach to drive-thru AI even as many other restaurant chains have jumped fully on board. Checkers and Rally’s, Hardee’s, Carl’s Jr., Krystal, Wendy’s, Dunkin’ and Taco John’s are either testing or have implemented the technology in their drive-thrus.

The goal for the companies is to automate the task and remove the need for an employee, which can either enable the restaurants to operate with fewer workers or redistribute those workers to other tasks.

But there have been questions about whether that technology is ready for prime time, amid concerns about order accuracy.
 
Came late to this thread...

Saw a recent, glum comment that the more complex an 'AI' gets, the more likely it is to lie.
My first thought: "How HUMAN!!"
My second: "Hmm? Is this a version of the 'Uncanny Valley' effect?"

Before computers became too complex, I used to code in 6502 assembler and a zoo of BASICs. I was not a good programmer; I was self-taught, but wise enough to recognise and stay within my modest limits.
{ KISS, Renumber etc etc }

So, when I read that a self-driving car had failed to identify an obstacle and, while trying to resolve the identification, collided, I was appalled.
Not just at the victim's unnecessary injuries, but at the tunnel vision of the AI's mentors.
WTF were they thinking???

Back when I was just a scared teen, nervously taking to the road in the family car, negotiating a zoo of low-flying ijits of whom a scary percentage displayed kamikaze tendencies, the primary rule of survival was crystal clear:
If in doubt, slow down.
This gave you a little more time to figure a 'Be NOT There' solution, or mitigate the misery.

That car got into an extended, un-trapped 'Error, Re-Try' loop when it should have failed safe and done better...
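That "if in doubt, slow down" rule is exactly the kind of fail-safe a control loop can encode: treat low confidence or an oscillating label as doubt, and shed speed instead of retrying the classification at full tilt. A toy sketch, with thresholds invented for illustration:

```python
# Toy fail-safe: "if in doubt, slow down." Instead of looping on re-classification,
# low confidence or a recently changed label is treated as doubt and triggers
# braking while identification continues. Thresholds are invented for illustration.

def plan_speed(confidence: float, label_changed_recently: bool,
               current_speed: float) -> float:
    CONFIDENT = 0.9
    if confidence < CONFIDENT or label_changed_recently:
        return max(0.0, current_speed * 0.5)   # doubt: shed speed now, classify later
    return current_speed                        # confident and stable: carry on

print(plan_speed(confidence=0.55, label_changed_recently=True, current_speed=20.0))   # 10.0
print(plan_speed(confidence=0.97, label_changed_recently=False, current_speed=20.0))  # 20.0
```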

And, sadly, a lot of the current 'AI' issues seem to stem from a very similar lack of 'Joined Up Thinking'...
 
John Q Public has been press-ganged into the army of software testers. Time to start jailing the AI merchants who do their testing on public roads.
 
