The potential effect of Artificial Intelligence on civilisation - a serious discussion

I think I'll offer my 2 cents on this - the recent developments in AI have been impressive, and controversial.

I won't delve into the debate over consciousness and sentience - I don't think it's particularly relevant for practical matters, since a non-conscious AI is just as capable of acting in any given way if instructed to do so.

I'd like to avoid leaning too much into doomsday fearmongering as well, and stick to the hard economic facts about the AI of today, focusing on large language models (LLMs), ChatGPT in particular, and its maker, OpenAI.

First, a quick blurb on LLMs. After being trained on an ungodly large corpus of text (essentially the entire textual output of humanity so far - all the books, articles, forum posts etc., probably including the ones people in this thread have made), these models learn the structure of human language and become able to complete text with eerie accuracy. Whether this counts as sentience or reasoning is a controversial matter - there's a lot of evidence that ChatGPT's problem-solving ability comes in large part from the sheer size of the training set it was subjected to.
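To make the 'completing text' framing concrete, here's a toy sketch of my own (nothing like the real architecture, but the same autoregressive idea): count which character tends to follow which in a corpus, then extend a prompt by repeatedly sampling a likely continuation. Real LLMs swap the count table for a huge neural network over tokens, but generation is the same loop.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for "the entire textual output of humanity".
corpus = "the cat sat on the mat. the dog sat on the log. "

# "Training": count which character follows which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def complete(prompt: str, n: int = 20) -> str:
    """Extend the prompt one character at a time, sampling
    proportionally to the observed follow-up counts."""
    out = prompt
    for _ in range(n):
        counts = follows.get(out[-1])
        if not counts:
            break
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(complete("the d"))  # e.g. "the dog sat on the mat. t"
```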

This training data is the most significant and most difficult-to-reproduce ingredient of large language models. The other two are the huge amount of processing power used for training, enabled by Nvidia's GPUs, and the Transformer architecture described in Google's 2017 paper "Attention Is All You Need" (although other approaches exist).
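As a sketch of that third ingredient: the heart of the Transformer is scaled dot-product attention, which the 2017 paper defines as softmax(QK^T / sqrt(d_k))V. A minimal NumPy rendition of just that equation (single head, no masking, no learned projections - the full architecture stacks many of these with much more machinery around them):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
    Q, K, V: (sequence_length, d_k) arrays."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                            # weighted mix of values

# Each of 4 positions attends over all 4 in an 8-dim space.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```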

To reiterate the importance of training data: it has been shown that when language models are fed their own output, they start degrading, so it is crucial to acquire human-written inputs and reject LLM-generated outputs, which 'poison' the training set.
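A toy numerical illustration of that degradation - a deliberately crude stand-in for the real 'model collapse' results, not a claim about any particular LLM: fit a distribution to data, sample a new dataset from the fit, refit, and repeat. The estimated spread drifts and, on average, shrinks, so the tails of the original human data get forgotten.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=20)  # scarce "human" data

for generation in range(1, 51):
    mu, sigma = data.mean(), data.std()
    # Each new generation is trained only on the previous one's output.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: estimated sigma = {sigma:.3f}")
# The spread tends to drift away from the true value of 1.0 and,
# on average, shrink - the tails of the original data are forgotten.
```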

Considering the importance of training data, the issue of consent arises. The use of text to train AI systems has been labelled 'fair use' - although I'm not sure how much legal rigor is behind that label, considering how recent a phenomenon generative AI is and how incredibly valuable it is looking to become. All the works of humanity, fed into this black box without anyone's consent, bypassing copyright, is in my humble opinion questionable from both a moral and a legal perspective - as is the claim that the outputs of such models are apparently not copyrightable.
Going by my heart, I'd say that having all of humanity's knowledge available in such an intuitively accessible form far outweighs the negative effects on copyright holders, as long as such a fount of knowledge is made available to everyone - openly, fairly, freely and without bias. Anyone training such a system therefore has a moral obligation to share their results openly, or failing that, at least not to block others from the same endeavor.

Imo that is exactly what the current Silicon Valley giant OpenAI (a name so ironic that it has inspired widespread mockery) is trying to prevent. They know that whatever advantage they have in this race is temporary: anyone else can harvest the same data, train the same models (with compute getting exponentially more affordable) and end up with the same product - but hopefully free, its output not metered out by its masters, not restricted by an EULA, not beholden to proprietary 'restrictions', 'alignment' and so on.

Open-source(ish) models, such as Facebook's LLaMA and its derivatives, and others trained fully in the open, have already started popping up, and according to benchmarks the best of them are rapidly catching up to GPT-3.5 (the older OpenAI model). Running these models is entirely feasible on a beefy gaming PC, not to mention on obsolete data center cards that can be had used for a few hundred bucks on eBay, allowing the everyman to have the sort of capability OpenAI wants to regulate for itself alone.
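For a sense of what 'running these models' looks like in practice, here's roughly the shape of it with the Hugging Face transformers library. The model id below is a placeholder, not a real checkpoint, and on an actual gaming PC you'd more likely run a quantized llama.cpp build - treat this as a sketch, not a recipe:

```python
# pip install transformers torch
from transformers import pipeline

# Placeholder model id - substitute whichever open checkpoint
# you can legally download and fit into your VRAM.
generator = pipeline(
    "text-generation",
    model="some-org/some-open-llm",  # hypothetical name
    device_map="auto",               # spread across GPU/CPU as needed
)

result = generator("The three ingredients of an LLM are", max_new_tokens=64)
print(result[0]["generated_text"])
```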

I feel like the OpenAI master plan looks something like this: get legislators to ban or hinder the competition, making everyone reliant on their AIs, which stay hidden on their servers; use the user feedback to further train those AIs; acquire proprietary know-how from anyone using them (which will be everyone); and meanwhile flood the internet with generated output, poisoning any attempt to train a competitor by naively scraping the web. Since they have records of everything their AI ever generated, they alone will be able to filter their own outputs back out.
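That filtering step is the technically easy part if you logged everything you ever generated. A sketch of the idea - mine, not anything OpenAI has actually described - using exact-match fingerprints; real filtering would need fuzzier near-duplicate detection or watermark signals, but the asymmetry stands: the operator has the ledger of its own outputs, the scraping competitor doesn't.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Stable fingerprint of a whitespace/case-normalized document."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# The operator's ledger: fingerprints of everything its model emitted.
own_outputs = {fingerprint("As an AI language model, I cannot ...")}

def keep_for_training(scraped_doc: str) -> bool:
    """Drop scraped documents that match our own past generations."""
    return fingerprint(scraped_doc) not in own_outputs

docs = ["A human forum post about bread factories.",
        "As an AI language model, I cannot ..."]
print([d for d in docs if keep_for_training(d)])
```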

You have to understand that these concerned ideologues can turn on a dime to orient themselves into the most profitable position: the very same people own and push crypto companies championing 'free' and 'democratic' banking, free of pesky regulators, while favoring heavy-handed AI regulation.

Their fearmongering essentially falls into 3 categories:

- Sci-fi fever dreams of superintelligent AI enslaving humanity, despite it being a reasonable assumption that current language models won't be able to surpass human intellect (or even come close, in my opinion), since they are essentially trying to mimic human writing

- Scaremongering about threat actors using it to conduct cyberattacks by scanning source code for vulnerabilities, or cooking up bioweapons, while failing to mention that it's essentially quoting existing source code and internet articles rather than doing anything on its own - and that said information is already out there anyway

- Saying how it can be used to spread disinfo/propaganda. This is true, but considering the progress of open-source and third-party models, and the fact that these models don't necessarily need to be all that sophisticated to be useful, stopping them would essentially require outlawing text-generating AI altogether. (Not saying they aren't going for this.)

And do not forget, should criminals/the military of a foreign country decide to use large language models for their own nefarious purposes, the last thing deterring them will be a piece of paper saying you are not allowed to do that.

TLDR: Former crypto-bros are trying to monopolize the AI sector: they train AI on the entire textual output of humanity, obtained dubiously, while using cheap scaremongering tactics on uninformed rubes to make it illegal for anyone else to do the same.

I wish legislators would realize the extreme conflict of interest of companies trying to regulate the very field they are in, and would require them to demonstrate the plausibility of their doomsday scenarios.
 
So, copyright doesn't matter so long as it's distributed to everyone? I rob a bread factory but broke no laws because I gave a loaf to everyone standing around?
 
Our friend here seems to have swallowed the idea that AI has magical and/or human-like qualities, as if it will wake up one day like a human. The problem is, it's not.

What's "magical" about being self aware/sentient/conscious? What makes humans "magical" and chimps or chipmunks or computers not?
 
So, copyright doesn't matter so long as it's distributed to everyone? I rob a bread factory but broke no laws because I gave a loaf to everyone standing around?
I did not say that. And I'm not the one saying copyright should be ignored - the lawyers are, with their current fair-use interpretation of copyright for the purposes of training AI systems. You can make up your own mind on that. The problem is that OpenAI not only ignores YOUR copyright to make THEIR copyrighted product (you can bet your ass their trained AI weights are protected by every law imaginable), but also argues that their super-secret AI is so dangerous and powerful that nobody else should be allowed to follow in their footsteps and make their own.
Essentially they are competing with you by using your intellectual property to train their systems, while disallowing you from doing the same.

I was just saying that IF the fair-use interpretation of AI training holds up, it would make sense that the resulting AI would itself be fair-use.
 
I was just saying that IF the fair-use interpretation of AI training holds up, it would make sense that the resulting AI would itself be fair-use.

No one understands Fair Use or Copyright law. To most of the world, Fair Use means I can have it for FREE. That's all that matters.


 
I wish legislators would realize the extreme conflict of interest of companies who try to regulate the field they are in, and would require demonstrating the plausibility of their doomsday scenarios.

I need to point out that "legislators" are not a separate, less intelligent form of human being. They are not automatically stupider, or older, than the general population, and they are not unable to grasp what a 15-year-old can grasp. According to that fiction - and that's what it is - anyone under 30 automatically understands all this. Not true.

This concept is based on stealing data from the internet. Period. An AI is not "trained" in any human sense of the word. Manipulating large data sets to produce an entirely predictable series of responses was a goal set by humans. Machines cannot learn as humans do; these programs were designed to recognize data and inputs and to extract certain pre-determined types of information from very large amounts of data.

1) Sentient? NO.
2) A threat to all human beings? NO.
3) Propaganda value? Minimal. Human propaganda has been developed into a science. Being able to scan very large data inputs is something the military would be interested in doing without adding hundreds or thousands of people to the process.
 
Warning to contributors: please do not resort to fights or post emotive or inflammatory posts. This includes baiting posts designed to trigger fights. Surely we can have an adult conversation here.
 
And so it begins...

AI eliminated nearly 4,000 jobs in May, report says

The job cuts come as businesses waste no time adopting advanced AI technology to automate a range of tasks — including creative work, such as writing, as well as administrative and clerical work. ... The Washington Post reported this week on two copywriters who lost their livelihoods because employers (or clients) decided that ChatGPT could perform the job at a cheaper price. Media companies such as CNET have already laid off reporters while using AI to write articles, which later had to be corrected for plagiarism. Earlier this year, an eating disorder helpline used a chatbot to replace human staff members who had unionized. It recently had to pull the plug on the bot after it gave people problematic dieting advice.
 
 
And so it begins...

AI eliminated nearly 4,000 jobs in May, report says

As it slowly circles down the drain...

'You wanna be rich? You gotta be cheap.'
 
'You wanna be rich? You gotta be cheap.'
As it has always been. Very few hire "the best," they hire "the best we can afford."

Yeah? I recently attended a student art show at a prestigious college. I scanned the illustration and transportation design sections. I saw a lunar vehicle design where the presentation board had a SpaceX logo. I saw a number of other designs that were good but not striking. Then I ran across an aircraft design for a major defense contractor that included a scale model. Very impressive. I took the card and sent an email. Hopefully, things will work out.
 
If that thing has been programmed to be politically correct, there will not be much intelligence left in it. And if the programmer pretends the thing still behaves intelligently after suffering such an ideological brainwashing, he must be quite stupid too - but he will receive popularity and official support.

Leaving aside politics, just how would one consistently and non-ideologically write the rules for "political correctness?"
You do know the definition of "political correctness", no?
 
Leaving aside politics, just how would one consistently and non-ideologically write the rules for "political correctness?"
You do know the definition of "political correctness", no?
Complimenting those who do the hardest and most unpleasant jobs, so that they do not stop doing them and dedicate themselves to robbery with violence.
 
Eating disorder helpline shuts down AI chatbot that gave bad advice
By Aimee Picchi
June 1, 2023 / 11:06 AM / MoneyWatch


An AI-powered chatbot that replaced employees at an eating disorder hotline has been shut down after it provided harmful advice to people seeking help.

The saga began earlier this year when the National Eating Disorder Association (NEDA) announced it was shutting down its human-run helpline and replacing workers with a chatbot called "Tessa." That decision came after helpline employees voted to unionize.

AI might be heralded as a way to boost workplace productivity and even make some jobs easier, but Tessa's stint was short-lived. The chatbot ended up providing dubious and even harmful advice to people with eating disorders, such as recommending that they count calories and strive for a deficit of up to 1,000 calories per day, among other "tips," according to critics.

"Every single thing Tessa suggested were things that led to the development of my eating disorder," wrote Sharon Maxwell, who describes herself as a weight inclusive consultant and fat activist, on Instagram. "This robot causes harm."

In an Instagram post on Wednesday, NEDA announced it was shutting down Tessa, at least temporarily.
 
Eating disorder helpline shuts down AI chatbot that gave bad advice

Causing harm can lead to lawsuits.

Using AI to get rid of workers so a businessman can put more money in his pocket will not work without monitoring.

I handle some customer service calls and emails where I work but if something goes wrong with a chatbot, then what? Who would I complain to? My least favorite customer service problem is being given a phone number for an order problem. When I call, I can get: "We're sorry, the subscriber you are trying to reach has not set up his mailbox."

On another note: Fat activist? Seriously?
 
I handle some customer service calls and emails where I work but if something goes wrong with a chatbot, then what? Who would I complain to?

Who indeed. But then, how's that different from *now*? How many times have people had a problem, called tech support, and reached someone on the far side of the planet with a limited grasp of the language and a nonexistent interest in actually helping? At least a chatbot is less likely to actively *hate* you.
 
Who indeed. But then, how's that different from *now*? How many times have people had a problem, called tech support, and reached someone on the far side of the planet with a limited grasp of the language and a nonexistent interest in actually helping? At least a chatbot is less likely to actively *hate* you.

Oh, oh please. I deal with people with low-level language skills as well. I'm sure they don't hate me or anyone - just their job. I know someone who worked in a call center: short of management using a whip, you had to get through a list of numbers every day, whether you felt up to it or not. But that describes most any job: you get a list of things to do and a given amount of time to do them in.

There's too much hype over chatbots. Those selling the program just want you to buy it. It's implied that they're just fire and forget. 'Yeah, just plug this baby in and fire your customer service department!'
 
The Directors Guild of America has reached a tentative deal which includes this language:

“AI is not a person and that generative AI cannot replace the duties performed by members.”

From today's issue of Variety.
 
The problem is bigger than NEDA

While NEDA and Cass are further investigating what went wrong with the operation of Tessa, Angela Celio Doyle, Ph.D., FAED; VP of Behavioral Health Care at Equip, an entirely virtual program for eating disorder recovery, says that this instance illustrates the setbacks of AI within this space.

"Our society endorses many unhealthy attitudes toward weight and shape, pushing thinness over physical or mental health. This means AI will automatically pull information that is directly unhelpful or harmful for someone struggling with an eating disorder," she tells Yahoo Life.

Note that bit, and its implications:

"Our society endorses many unhealthy attitudes toward weight and shape, pushing thinness over physical or mental health. This means AI will automatically pull information that is directly unhelpful or harmful for someone struggling with an eating disorder,"


Yahoo Life

An eating disorder chatbot was shut down after it sent 'harmful' advice. What happened — and why its failure has a larger lesson.

Kerry Justich
Fri, June 2, 2023 at 5:48 PM CDT · 4 min read

 
AI - as meek, weak, strong, or badass as it might turn out to be - will only ever have exactly as much power over human civilization as we are willing to grant it, and will only act within the target parameters we set for it (unlike an artificial consciousness - as a caveat: choose wisely, you must, young padawan). And as long as humans set the optimization and target parameters of AI, it will only ever be as bad as our own choices - oh sh....
 
Yep, the usual case of cutting corners. Until one day the AI messes up and a company loses millions in a lawsuit due to some oversight or mistake the AI made, especially if they use it for legal texts.

The only time humans are doomed is if they invent an AI consumer that buys pointless crap it doesn't need from AI vendors, making human venture capitalists rich off the back of nothing but thin air and some code, and leaving humans completely irrelevant to capitalist society.
 
AI - as meek, weak, strong, or badass as it might turn out to be - will only ever have exactly as much power over human civilization as we are willing to grant it, and will only act within the target parameters we set for it (unlike an artificial consciousness - as a caveat: choose wisely, you must, young padawan). And as long as humans set the optimization and target parameters of AI, it will only ever be as bad as our own choices - oh sh....

The peasants have no control over it. But the companies that sell it do. There's no such thing as artificial consciousness. You just have greedy business types who want to let people go so they can make more money.
 
Yep the usual case of cutting corners. Until one day the AI messes up and a company loses millions in a lawsuit due to some oversight or mistake the AI made, especially if they use it for legal texts.
You hardly need AI for that. Human stupidity is sufficient... note how Bud Light has lost $27 *billion* and counting, with Target and Kohl's and others following hot on their heels, all for making patently absurd marketing decisions. Decisions that any regular person would have pointed at and said "that seems like a really bad idea," but they went with "the experts."
 
You hardly need AI for that. Human stupidity is sufficient... note how Bud Light has lost $27 *billion* and counting, with Target and Kohl's and others following hot on their heels, all for making patently absurd marketing decisions. Decisions that any regular person would have pointed at and said "that seems like a really bad idea," but they went with "the experts."

You don't need to bring that up. Anheuser-Busch knew what they were doing. Management is management. They do what they want. The outside world never hears about 99.9% of the 'bad' decisions.
 
You hardly need AI for that. Human stupidity is sufficient...
While not exactly the same thing, that brings to mind,

What’s interesting about this one is that it highlights how you can post terrible, amateur imagery with no attempt to polish it and enough people will still believe it to make it go viral.

AI is already capable of producing realistic looking images, yet the spammers and scammers are using any old picture without care for how convincing it looks.

Earlier in the article,

Bellingcat investigators quickly debunked the imagery for what it is: Poorly done, with errors galore.

Confident that this picture claiming to show an "explosion near the pentagon" is AI generated.

Check out the frontage of the building, and the way the fence melds into the crowd barriers. There's also no other images, videos or people posting as first hand witnesses. pic.twitter.com/t1YKQabuNL
— Nick Waters (@N_Waters89) May 22, 2023

Despite how odd the images looked, with no people, mashed up railings, and walls that melt into one another, it made no difference. The visibility of the bogus tweets rocketed and soon there was the possibility of a needless terror-attack panic taking place.

From:
 
What's interesting about this one is that it highlights how you can post terrible, amateur imagery with no attempt to polish it and enough people will still believe it to make it go viral.

AI-generated images are pretty easy to spot. And let's not forget Photoshop. There are those attempting to hype AI as better than sliced bread. It's not. It's just more bread and circuses for the masses, like Midjourney, which they know people will play with. And "goes viral" is no big deal. People can only stare at so many "viral" images at a time.
 
Meanwhile, back at the glyphs,

AI Deep Learning in the Nazca Geoglyphs

The new study, conducted by Japanese scientists from Yamagata University, has been published in the Journal of Archaeological Science (https://www.sciencedirect.com/science/article/pii/S0305440323000559?via=ihub). By utilizing AI deep-learning (DL) techniques in their research, the team from Yamagata University has made significant strides in uncovering new geoglyphs within the Nazca Pampa. They've also added to the richly growing body of work on the harmonious use of technology and archaeology.

 
While NEDA and Cass are further investigating what went wrong with the operation of Tessa, Angela Celio Doyle, Ph.D., FAED, VP of Behavioral Health Care at Equip, says that this instance illustrates the setbacks of AI within this space: "Our society endorses many unhealthy attitudes toward weight and shape, pushing thinness over physical or mental health. This means AI will automatically pull information that is directly unhelpful or harmful for someone struggling with an eating disorder."

AI being used to screen resumes makes even more racist and sexist decisions than its human trainers had been making.

AI seems to be magnifying our worst impulses.
 
AI being used to screen resumes makes even more racist and sexist decisions than its human trainers had been making.

Without more data... *maybe.* It could well be that the humans had been actively using "affirmative action" thinking in order to artificially inflate race- and sex-based quotas... and the AI hadn't been trained that way; instead it looked only at actual qualifications. Meritocracy can look like bigotry if you're not qualified.
 
Without more data... *maybe.* It could well be that the humans had been actively using "affirmative action" thinking in order to artificially inflate race- and sex-based quotas... and the AI hadn't been trained that way; instead it looked only at actual qualifications. Meritocracy can look like bigotry if you're not qualified.
Hiring decisions are complex, but bigotry is a very real factor in hiring and promotion. Consider these facts - backed up by data - that height correlates better with salary offers to new grads than GPA does, and that "attractiveness," as rated by modeling agencies, does too, even for technical jobs.

The idea that hiring and promotion in the private sector is strictly on merit is false.
 

The idea that hiring and promotion in the private sector is strictly on merit is false.
Sure. But what criteria is the AI using? If it were programmed with stupid bigotry, that would be found in the code. If it is looking at what should be proper metrics and still comes up with "wrong" results, then maybe the *candidates* are "wrong."
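One wrinkle there: with a learned screener, the "criteria" live in the training data and weights rather than in readable code, so audits typically look at outcomes instead. A minimal sketch of that kind of check, using the "four-fifths rule" heuristic from US hiring audits - every name and number below is made up for illustration:

```python
from collections import defaultdict

# Hypothetical audit log of the screener's decisions.
decisions = [  # (group, advanced_to_interview)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

passed = defaultdict(int)
total = defaultdict(int)
for group, advanced in decisions:
    total[group] += 1
    passed[group] += advanced

rates = {g: passed[g] / total[g] for g in total}
print(rates)  # {'A': 0.75, 'B': 0.25}

# Four-fifths rule: flag any group whose selection rate falls
# below 80% of the highest group's rate.
best = max(rates.values())
flags = {g: r / best < 0.8 for g, r in rates.items()}
print(flags)  # {'A': False, 'B': True}
```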
 
