The potential effect of Artificial Intelligence on civilisation - a serious discussion

Care to elaborate?

The comment was illogical and points to the primary misconception regarding so-called artificial intelligence. It has no human-level intelligence. It cannot exceed its programming. It cannot wash dishes or do laundry. It is another attempt to create addiction to a shiny new tech toy. As with a drug dealer, the first version is a free sample. Once addiction occurs, expect to pay a monthly fee.
 
You totally missed the point
 
In terms of AI art—a couple of weeks back there was a video about Kull vs The Serpent Men with what I took to be narration by David Attenborough—near flawless…anyone else hear that?

Also, there is a new tool for musicians…

CUBE
 

Oh great. I'm already thinking of lawsuits related to 'sampling' fragments from other songs. I'm sure the money is eagerly awaited by legal teams.
 
If I catch anyone using dumb, stupid non-intelligent artificial anything, you WILL fail this class.
What will you do when it turns out they did not use AI after all?

Stivers is hardly alone in facing such an ordeal as students, teachers, and educational institutions grapple with the revolutionary power of artificially intelligent language bots that convincingly mimic human writing. Last month, a Texas professor incorrectly used ChatGPT itself to assess whether students had completed an assignment with that software. The chatbot claimed to have written every essay he fed in, so he temporarily withheld an entire class's final grades.


Another reference to the issue:

Vinu Sankar Sadasivan, a computer scientist at the University of Maryland who co-authored a pre-print paper on the reliability of AI detectors, told The Daily Beast that the fast-paced nature of AI’s growth and adoption has created a number of massive and unexpected challenges—most of which caught educators and students alike completely by surprise.

“The rapid and somewhat unexpected emergence of potent language models, such as ChatGPT, even caught the AI community off guard,” Sadasivan said. “The unregulated utilization of these models indeed presents the risk of malicious outcomes, such as plagiarism.”

He added that the explosion in popularity and hype surrounding AI pushed educational institutions to use AI detectors without completely understanding how they worked, or whether they were even reliable in the first place. This has led to instances in which teachers accused students of plagiarism even when they hadn't plagiarized, as evidenced by the viral post on Twitter.


And since I like to have three references:

To appeal his professor's accusations to university officials, Quarterman shared a Google document history of his exam writing that showed proof he didn't use AI and a slew of research on the fallibility of GPTZero and other AI detection tools, according to school records.

In a letter March 16 to the university appealing the professor's accusation – provided to USA TODAY by Quarterman's father – the student said that in his professor's feedback on his exam, Fahrenthold wrote in late February: "William, unfortunately it appears as though this exam is plagiarized. The answer to Q3, in particular, is drawn from ChatGPT or similar AI software, and consequently, drawn from a variety of internet sources without attribution or citation. The consequences for submitting plagiarized work in this course is a grade of 0/20, and a citation to OSSJA for the issue of academic integrity."

About a month after the accusation, on March 24, the university dropped its case against Quarterman. In a separate letter provided to USA TODAY by Quarterman's father, Marilyn Derby, an associate director with the University's Office of Student Support and Judicial Affairs, wrote to Quarterman: "After talking with you, talking with your instructor, and doing my own research into indicators of AI-generated text, I believe you most likely wrote the text you submitted for your midterm. In fact, we have no reliable evidence to the contrary."


Oh, let's add one more, this time about a paid professional rather than a student:

One day, Michael's main client informed him that they had started to use an AI detector, and the results were supposedly damning for him: his most recent articles were flagged as having a 95% likelihood of being AI generated. His client started to look at all of his previous articles, many written before ChatGPT was even widely available, and Michael was notified that all his articles showed a 65-95% likelihood of being AI generated. They terminated his contract with immediate effect, a decision based solely on a single number (or range) that the AI detector spat out.

Michael tried whatever he could to prove his articles were not AI generated. He even gave his client access to the full Google Docs history and showed them his writing process and progress, all edits included. But the seed of doubt that the AI detector had sowed was too strong. Michael lost his main client, and with it most of his income.

A number of things are problematic with this story. And I’d like to go over them one by one:

1. The accuracy of general AI detectors is questionable

General AI detection is flawed. Period. It’s not like the detection works almost all the time and Michael's case is one of the few very unfortunate outliers. No, false positives are the norm when it comes to general AI detection.

Even OpenAI themselves stopped offering their very own detector for this reason:

"The AI classifier is no longer available due to its low rate of accuracy." Open AI, creator of ChatGPT, on their detection

 
Hmm, Georgetown University has some interesting content regarding AI,

Data and privacy


Among the many complications surrounding data and privacy in generative and detection (see below) AI tools, one issue prominently arises: “when plagiarism detection services archive student work… student privacy rights may be violated,” (Brinkman 2013). Do not upload any student work to AI tools (generators or detectors) without their permission. It’s important to “consider whether the information about students shared with or stored in an AI-enabled system is subject to federal or state privacy laws, such as FERPA,” (Department of Education 2023).


As you talk with your students about data and privacy implications, you may want to review the privacy policies you (and students) are agreeing to by engaging with the tools. For example, in teaching with ChatGPT, read over the porous privacy policy with students and discourage them from sharing personal information on the platform. Note that:


  • The company may access any information fed into or created by its technology.
  • They use log-in data, tracking, and other analytics.
  • Their technology does not respond to “Do Not Track.”

Allow students to opt out if they don't feel comfortable having their data collected. See OpenAI's full policy and these FAQs on how the company may use information shared with it.

 
Elon Musk has threatened to ban Apple devices from his companies over Apple’s newly announced ChatGPT integration.
Despite Apple's privacy assurances, Musk expressed strong opposition to the integration on his social media platform, X (formerly Twitter). Musk said, "If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation." He continued, "Visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage." He also repeatedly questioned Apple's ability to ensure data security, stating, "Apple has no clue what's actually going on once they hand your data over to OpenAI. They're selling you down the river."

I imagine if it were his AI platform Grok, then mysteriously he wouldn't have an issue!!!
 
Just had this arrive in a health care newsletter,
and fits this thread's theme of 'potential effect of Artificial Intelligence on civilisation'
be it civilisation in whole or segments of civilisation.
Study reveals ChatGPT's role in assisting autistic workers

Jun 6 2024, Carnegie Mellon University

A new Carnegie Mellon University study shows that many people with autism embrace ChatGPT and similar artificial intelligence tools for help and advice as they confront problems in their workplaces.

But the research team, led by the School of Computer Science's Andrew Begel, also found that such systems sometimes dispense questionable advice. And controversy remains within the autism community as to whether this use of chatbots is even a good idea.

" What we found is there are people with autism who are already using ChatGPT to ask questions that we think ChatGPT is partly well-suited and partly poorly suited for. For instance, they might ask: 'How do I make friends at work?'" "

Andrew Begel, associate professor in the Software and Societal Systems Department and the Human-Computer Interaction Institute


To better understand how large language models (LLMs) could be used to address this shortcoming, Begel and his team recruited 11 people with autism to test online advice from two sources: a chatbot based on OpenAI's GPT-4, and what looked to the participants like a second chatbot but was really a human career counselor.

Somewhat surprisingly, the users overwhelmingly preferred the real chatbot to the disguised counselor. It's not that the chatbot gave better advice, Begel said, but rather the way it dispensed that advice.

"The participants prioritized getting quick and easy-to-digest answers," Begel said.

The chatbot provided answers that were black and white, without a lot of subtlety and usually in the form of bullets. The counselor, by contrast, often asked questions about what the user wanted to do or why they wanted to do it. Most users preferred not to engage in such back-and-forth, Begel said.

Participants liked the concept of a chatbot. One explained: "I think, honestly, with my workplace … it's the only thing I trust because not every company or business is inclusive."

But when a professional who specializes in supporting job seekers with autism evaluated the answers, she found that some of the LLM's answers weren't helpful. For instance, when one user asked for advice on making friends, the chatbot suggested the user just walk up to people and start talking with them. The problem, of course, is that a person with autism usually doesn't feel comfortable doing that, Begel said.
 
the primary misconception regarding so-called artificial intelligence. It has no, human-level intelligence. It cannot exceed its programming.
Actually, that's pretty much the entire point of large learning models. The code is irrelevant; it's just the backbone for the learning, and it's the learning that allows them to function. Cf. DeepMind's AI teaching itself football tactics:
View: https://www.youtube.com/watch?v=RbyQcCT6890
(particularly the discussion from c1:30 on)
 

So, the Terminator is next? They're working on the plasma rifle right now? Give me a break.
 
I agree with most of the negative, dismissive sentiments here. The problem is, AI IS extremely useful in warfare, and if we (whatever "we" is to you, the reader) don't master it first, an adversary will, and it will take off exponentially. So it's cataclysmically dangerous, badly misunderstood, and ripe for misuse, and yet the most imperative mandate for every government on the planet is to develop and implement it faster than everybody else, because the first one to hit that takeoff point will plausibly dominate the species in perpetuity.

The other most likely option is that AI will increasingly develop more advanced AI. Given that (1) the mechanisms between input and output are growing increasingly opaque, and (2) the starting conditions and inputs we've been feeding these primordial AIs have been flawed, leading to AI hallucination, it follows that eventually the descendants of the current state of the art will either make a catastrophic mistake in "good faith" or become so alien to us that we no longer understand them.

I am deeply pessimistic about the future of the human race. I'd be ecstatic to be wrong.
 
OpenAI becoming more cartoonishly sinister by the minute.


Gosh. A military intelligence takeover. And it's not a TV show.
 
So, there's that added wrinkle: fake evidence will be promoted. Actual evidence will be declared fake. Evidence of chicanery will be removed from the public consciousness.

Yeah, the forthcoming age of AI domination is gonna be a hoot.
 

The NSA has a new toy called AI. A dream situation. The ultimate image and voice manipulation tool. If George Orwell were here and saw this, he would drop dead since it's much worse than what he imagined.
 
She is right.

 
I have just finished reading an op-ed about AI by the famed data scientist "Ludic" on his blog. Everyone should read it. AI has been hugely overhyped, and it contributes to the further mish-mashing of the business world as we know it. (It seems to me that "AI" has just become another buzzword without most people realizing the real-world context of its current peril, not its future peril or promise.) It is by turns sobering, entertaining, illuminating, and yes, enraging.

Yes, there is an F-bomb in its title, etc. He also drops F-bombs in the text as well. You have been forewarned. (He is very animatedly angry.)

 
That is the stupidest thing I've read so far this year. AI has no hands.
I disagree.

It could drive something with hands, or other tool-handling appendages. AI can only be useful - or dangerous - if AI interacts with its environment. Physically, or by handling data traffic.
 
A blood test that draws on artificial intelligence can predict who will develop Parkinson’s disease up to seven years before symptoms arise, researchers say.

The test is designed to work on equipment already found in many NHS laboratories and, if validated in a broad population of people, could be made available to the health service within two years.

There are no drugs to protect the brain from Parkinson’s at present, but an accurate predictive test would enable clinics to identify people who stand to benefit most from clinical trials of treatments that aim to slow or halt the disease.
 


I am always wary of anything anywhere that includes the phrase 'start a conversation.' At this moment, those involved in selling AI to businesses and average people have deployed their lobbyists. They want no controls or restraints that could affect profits. AI is just another money grab, nothing more. AI informants have been spread among the people, posing as fellow citizens. These informants will report back what they've learned. More AI-related lawsuits have occurred. Those need to be handled as well. Billions have not been invested in this as a charity. Microsoft expects more profits.
 
Oh wait. Screenwriters losing their jobs is bad?
No worse than factory workers getting their jobs outsourced, or farm workers/construction workers/etc. being replaced with "migrants," or coal miners being replaced with wind turbines.


An experiment? Really? 'Hey Solly. Put this up on the screen and tell me what the peasants think.
I for one am curious about how good the story is.
 

Ah, the good ol' 'no worse than' answer. AI - fake word - needs to be stopped now. Right now. For plagiarism.


They, meaning OpenAI, did not ask for permission. They did not pay compensation. They are being sued along with Microsoft, the de facto, though not technically legal, owner of OpenAI. They are attempting to use the baloney defense of Fair Use in their court arguments. Fair Use is a copier in a library, where you copy a few pages from a book or magazine for a term paper and clearly cite them. Then there is the subject of AI art: just a cut-apart-and-reassemble program, sometimes with the original artists' names still visible.

But, they are also using another loophole designed to make the rich richer. That's all this is.
 
An episode of The Bold and The Beautiful could possibly be written by AI - maybe that's been happening for years, who knows.

Something like Breaking Bad by Artificial Intelligence? That might take something more akin to Artificial Consciousness. I think we are a long way from that, for better or for worse.

Be careful what you wish for. You might just get it.
 