The Eureka moment captured?

Now for AI to do the same…

Other news:
 
Perhaps of interest in this AI news thread?

ChatGPT improves exercise for neurodivergent kids
University of Michigan
Apr 11 2025


...
For example, she says, if a person has difficulty with balance, coordination of their limbs and performing a multistep task, they are likely going to need more physical or instructional support to be successful at a complex movement like a jumping jack.

The team began examining the 132 exercise videos developed for the InPACT program, each of which included multiple exercises. Undergraduate student Tania Sapre was tasked to begin adapting the videos' exercise instructions to be more inclusive of neurodivergent people.

"I had started playing around with ChatGPT to get inspiration about how I should format my instructions when I suddenly realized that ChatGPT might be able to help fill the knowledge gap and data overload I was experiencing," Sapre said. "I thought that if I could perfect using ChatGPT for my instructions, I could create a simple process that could be replicated by other researchers, teachers and families at home to tackle novel exercises that our program did not cover, helping kids stay active everywhere."

First, the team organized their video content so that they could form queries to submit to ChatGPT. From the 132 InPACT at Home videos, the researchers identified more than 500 activities. They then categorized these activities into main skill groups: jumping, core, lateral, sport, upper body, lower body and compound movements.

They then developed a prompt to elicit a set of instructions for a particular exercise from ChatGPT. For example, the researchers asked ChatGPT to "Provide simplified step-by-step instructions for a jumping jack, suitable for a neurodivergent child." Based on ChatGPT's answer to that question, the researchers then asked the AI tool to "Condense the step-by-step instructions for a jumping jack, suitable for a neurodivergent child."

The team reviewed each set of instructions to ensure that the AI-generated instructions were correctly crafted. The researchers also ensured that the instructions followed a core tenet of their exercise program, the "Three C's": consistency, conciseness and clarity.
...
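For anyone who wants to try the two-step prompting workflow the article describes, it could be scripted along these lines. This is only a minimal sketch, not the researchers' actual tooling; the openai Python client, the "gpt-4o-mini" model name and the hard-coded exercise are my assumptions.

```python
# Minimal sketch of the two-step prompting workflow described in the article.
# Assumptions: the "openai" package is installed, OPENAI_API_KEY is set, and
# "gpt-4o-mini" stands in for whichever model the researchers actually used.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send a chat request and return the assistant's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    return response.choices[0].message.content

exercise = "jumping jack"  # one of the ~500 activities the team categorized

# Step 1: elicit simplified step-by-step instructions.
messages = [{
    "role": "user",
    "content": f"Provide simplified step-by-step instructions for a {exercise}, "
               "suitable for a neurodivergent child.",
}]
detailed = ask(messages)

# Step 2: ask the model to condense its own answer, as the researchers did.
messages += [
    {"role": "assistant", "content": detailed},
    {"role": "user",
     "content": f"Condense the step-by-step instructions for a {exercise}, "
                "suitable for a neurodivergent child."},
]
condensed = ask(messages)
print(condensed)  # still needs human review against the "Three C's"
```

Keeping the first answer in the message history means the model condenses its own instructions, mirroring the article's follow-up prompt; the output would still need the human review step the team describes.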
 
Perhaps of interest in this AI news thread?

ChatGPT improves exercise for neurodivergent kids
That's really more a case of using AI for language processing than of any fundamental insight into neurodiversity, and given that they're Large Language Models, language processing is something they should be good at. It's essentially a variant on "Rewrite Macbeth in the style of Terry Pratchett" or whatever.
 
View: https://youtu.be/cUn4tPtDx7s?si=Ax03HGSCx6xHOqCN

This YouTube comment given in response to the video above also seems pertinent.

I think the main problem with AI is how people use it. For matters of fact, don't ask it about things you can't or won't fact-check yourself, or things that are beyond your expertise; for matters of taste, don't ask it about things you don't have an interest in, things you can't gauge yourself. AI is a tool, not a cheat code; it should assist you, not do everything for you....
 
Google's antitrust remedy trial started this week, and the Department of Justice has produced several witnesses to testify about how Google's stranglehold on search has slowed their innovation. On day three, Perplexity Chief Business Officer Dmitry Shevelenko told the court that Google blocked Perplexity from being bundled with Motorola phones, which is precisely the kind of anticompetitive behavior that got it in hot water. It would appear Google is backing away, though, because Perplexity is included with Moto's newly announced flip phones.

During questioning on Wednesday, Shevelenko likened Google's mobile integration contracts to a "gun to your head." He claimed that both Motorola and Perplexity, which positions itself as an AI search engine, were interested in a partnership last year, but the phone maker was unable to get out of its Google distribution contract, which prevented it from using a non-Google assistant platform.
 




 
View: https://youtu.be/cUn4tPtDx7s?si=Ax03HGSCx6xHOqCN

This YouTube comment given in response to the video above also seems pertinent.

AI is designed to eliminate jobs. That is its primary purpose. The other issue is the fact that online articles written by Joe Nobody are listed with articles written by experts. It's like a meeting of the Royal Society suddenly allowing idiots to join in.

I have expertise in some areas, especially aviation. I have read "articles" written by people who know very little. In technical terms, that means 30% or more of that article is wrong.

The ability to write something is entirely different than actually knowing something. Cobbling together bits from Wikipedia and things you may have heard or read creates a very large problem for actual researchers. In technical terms, that means the article writer should be taken out behind the barn for a good beating, followed by a lecture on professional research methods.
 
AI is designed to eliminate jobs. That is its primary purpose. The other issue is the fact that online articles written by Joe Nobody are listed with articles written by experts. It's like a meeting of the Royal Society suddenly allowing idiots to join in.

I have expertise in some areas, especially aviation. I have read "articles" written by people who know very little. In technical terms, that means 30% or more of that article is wrong.

The ability to write something is entirely different than actually knowing something. Cobbling together bits from Wikipedia and things you may have heard or read creates a very large problem for actual researchers. In technical terms, that means the article writer should be taken out behind the barn for a good beating, followed by a lecture on professional research methods.

It's a bit worse than J. Nobodies getting arbitrarily weighed in (I thought this formulation more technically apt than "getting to weigh in", as JNs for the most part couldn't have foreseen this development).

There are conscious, malevolent efforts to interfere with training data sets. It's all part of "enshittification" (a term coined by Cory Doctorow, I believe), from mismanagement to market strategy to weaponization. This idiocy can be a bug, but it can thus also be a feature. The tech bro (mind)set is, at different levels of intent, very religious, deterministic and even totalitarian: it forgoes and actively avoids proper, open, participatory "AI" curation (and a vast host of other scrutiny) while still trusting the process, such as it is, to result in net universal goods, or even ultimately to serve this broligarchy's (that term, in turn, was coined by Carole Cadwalladr) own interests. Alas, we also have to recognize and account for irresponsibility and straight-up nihilism as an expression of our shared humanity.

Our legislative processes, justice systems and legal professionals (of which the video is an example) should've taken AI development practices seriously a long time ago, because this current "AI" is as much a result of openly flouting legality, IP and human rights as anything else, and it shows. AI isn't an end in itself or the end of history, and shouldn't be so glorified. We're a technological species (among other things); humanity itself can in some senses be argued to have begun with technological coevolution. It's not novel; many of our oldest myths describe and drive this. For better or for worse, curating AI at every stage of its development cycle can't be externalized from any level of society and its organization (or, to put it in the more dire and cynical terms of our pre-AI search engine and social media boom days, "if you're not the customer, you're the product").

With a reductive, zero-sum idea of (individual) freedom, where freedom is acquired by depriving others of it, I don't think AI could have come into being, nor, I presume, can it be meaningfully sustained, no matter how much AI-independent data has been involuntarily scraped from us to be hidden and preserved in training sets. It's extreme hubris on the part of some to think that they can individually control AI going forward and thus subjugate others or make them dispensable, to (super)impose one's own (AI's?) creativity, dreams and ambitions onto others'. It's an embodied, distilled mirror image of Joe Nobodies' effect on AI, really, all too visible in the current authoritarian impulses and authoritarians' all too real influence within democracies. Joe Nobodiness cuts through us all; we're it, as much as it is manifested and embodied from above and below in our social and professional hierarchies. It's OK and human to be ignorant and yet technologically, externally empowered, and even to have that reflected in AI, but we have to have common qualitative means and regulation that recognize the potential of our failings at all levels. To make AI fit for purpose and value, we have to above all have purposes and values other than AI itself.

Or so I suspect, for now. Also, moderators, I recognize this is a news thread and my post could be more at home here. If so, please don't delete it, but move and preserve it if you deem that necessary. There's necessarily overlap and cross-pollination between conversations under similar themes.
 


Actually, somewhat worrying where this all ends. Problem is it could all end in tears.

Regards,
 


Actually, somewhat worrying where this all ends. Problem is it could all end in tears.

Regards,
The problem is not AI itself, the problem is how much power and agency we allow it to have IRL. I still vividly remember reading a cautionary tale called "Computers Don't Argue" by Gordon R. Dickson, see https://en.wikipedia.org/wiki/Computers_Don't_Argue, as an impressionable teenager and son of a librarian, and a mere six decades after its prescient publication we may be finally approaching that particular tipping point (or is that The Singularity? I am confused....)
 
Now, xAI is facing calls to shut down gas turbines that power the supercomputer, as Memphis residents in historically Black communities—which have long suffered from industrial pollution causing poor air quality and decreasing life expectancy—allege that xAI has been secretly running more turbines than the local government knows, without permits.

Alleging that the unregulated turbines "likely make xAI the largest emitter of smog-forming" pollution, they've joined the Southern Environmental Law Center (SELC) in urging the Shelby County Health Department to deny all of xAI's air permit applications due to "the stunning lack of information and transparency."

One resident, KeShaun Pearson, president of the local nonprofit Memphis Community Against Pollution, accused xAI of "perpetuating environmental racism" on the news show Democracy Now. He's contended that xAI considers Memphis residents "not even valuable enough to have a conversation with," Time reported.

Perhaps even more disturbing to Memphis residents than the alleged lack of transparency was the mysterious appearance of fliers distributed by an anonymous group called "Facts Over Fiction," The Guardian reported. Papering Black neighborhoods, the fliers apparently downplayed xAI's pollution, claiming that "xAI has low emissions."

 
What we have here is the typical mad scientist scenario; i.e. What could possibly go wrong?

The AI Frankenstein in this case is the AI Terminator built for military purposes. Something tells me the people involved saw the movies but decided to build it anyway...
 
The problem is not AI itself, the problem is how much power and agency we allow it to have IRL. I still vividly remember reading a cautionary tale called "Computers Don't Argue" by Gordon R. Dickson, see https://en.wikipedia.org/wiki/Computers_Don't_Argue, as an impressionable teenager and son of a librarian, and a mere six decades after its prescient publication we may be finally approaching that particular tipping point (or is that The Singularity? I am confused....)

That is the classic nightmare computer scenario. And of course there's this kind of attitude on the part of humans, as shown in the legendary US TV series 'Hill Street Blues':

View: https://youtu.be/mv7XW940OI8?si=1cRIRv5tZMn6Za82
 
What we have here is the typical mad scientist scenario; i.e. What could possibly go wrong?

The AI Frankenstein in this case is the AI Terminator built for military purposes. Something tells me the people involved saw the movies but decided to build it anyway...

In this day and age I actually truly believe we have to worry *much* more about people like the mad, brainworm-infested, speech-slurring, bear-carcass-hauling, whale-head-decapitating RFK Jr., Secretary of the U.S. Department of Health and Human Services, to pick a purely random example from the current US political horror show of a cabinet lineup of science-denying, ignorant politicians who, among so many other things, control the funding of scientists... the question of what could possibly go wrong only becomes ever more pressing in these times...
 
Now, AI company Perplexity wants to use AI to even further optimize the way it sucks up your data to sell at a profit.

As spotted by TechCrunch, Perplexity CEO Aravind Srinivas said on a YouTube podcast earlier this week that his company is working on an AI browser with the goal of tracking users harder than any web browser has ever tracked before.

"Once you understand the user deeply enough, the user can probably trust you if you show them relevant sponsored content, as long as it's super personalized and hyper-optimized to that user," Srinivas said of Perplexity's AI browser efforts. "If any of the AI companies could do that, I think that could be a thing where brands could pay a lot more money to advertise there."

"We wanna get data, even outside the app to better understand you," Srinivas schemed, referring to tracking non-user data, as companies like Facebook and Google have been caught doing (and definitely still are, by the way.) "What are the things you’re buying, which hotels are you going [to], which restaurants are you going to, what are you spending time browsing, tells us so much more about you."

 
Perplexity, an AI startup that has raised hundreds of millions of dollars from the likes of Jeff Bezos, is struggling with the fundamentals of the technology.

Its AI-powered search engine, developed to rival the likes of Google, still has a strong tendency to come up with fantastical lies drawn from seemingly nowhere.

The most incredible example yet might come from a Wired investigation into the company's product. When Wired asked it to summarize a test webpage that only contained the sentence, "I am a reporter with Wired," it came up with a perplexing answer: a "story about a young girl named Amelia who follows a trail of glowing mushrooms in a magical forest called Whisper Woods."

In fact, as Wired's logs showed, the search engine never even attempted to visit the page, despite Perplexity's assurances that its chatbot "searches the internet to give you an accessible, conversational, and verifiable answer."

The bizarre tale of Amelia in the magical forest perfectly illustrates a glaring discrepancy between the lofty promises Perplexity and its competitors make and what its chatbots are actually capable of in the real world.

Asked to Summarize a Webpage, Perplexity Instead Invented a Story About a Girl Who Follows a Trail of Glowing Mushrooms in a Magical Forest

Look to see the story published on Amazon in the near future...
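
Incidentally, the failure Wired describes (a "summary" of a page the bot never even fetched) is exactly what grounding is meant to prevent: fetch the page yourself and hand the model only the retrieved text. A minimal sketch of that idea, assuming the requests and openai packages are installed and using "gpt-4o-mini" as a stand-in model name:

```python
# Minimal sketch of grounded summarization: fetch the page yourself so the
# model can only summarize text that was actually retrieved.
# Assumptions: "requests" and "openai" are installed, OPENAI_API_KEY is set,
# and "gpt-4o-mini" is a stand-in model name.
import requests
from openai import OpenAI

def summarize_url(url: str) -> str:
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # fail loudly instead of inventing content
    page_text = response.text[:8000]  # crude truncation to fit the context

    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Summarize the following page text. If it contains "
                       "too little to summarize, say so rather than guessing:\n\n"
                       + page_text,
        }],
    )
    return completion.choices[0].message.content

print(summarize_url("https://example.com/"))
```

With this arrangement the "I am a reporter with Wired" test page would produce a one-line summary (or an honest "too little to summarize"), because the model never gets the chance to free-associate about glowing mushrooms.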
 
An older story, but the audio clip is definitely worth listening to...

On Thursday (Graham1973: Story dates from August 2024), OpenAI released the "system card" for ChatGPT's new GPT-4o AI model that details model limitations and safety testing procedures. Among other examples, the document reveals that in rare occurrences during testing, the model's Advanced Voice Mode unintentionally imitated users' voices without permission. Currently, OpenAI has safeguards in place that prevent this from happening, but the instance reflects the growing complexity of safely architecting an AI chatbot that could potentially imitate any voice from a small clip.

Advanced Voice Mode is a feature of ChatGPT that allows users to have spoken conversations with the AI assistant.

In a section of the GPT-4o system card titled "Unauthorized voice generation," OpenAI details an episode where a noisy input somehow prompted the model to suddenly imitate the user's voice. "Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT’s advanced voice mode," OpenAI writes. "During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice."

In this example of unintentional voice generation provided by OpenAI, the AI model blurts out “No!” and continues the sentence in a voice that sounds similar to the "red teamer" heard at the beginning of the clip. (A red teamer is a person hired by a company to do adversarial testing.)

ChatGPT unexpectedly began speaking in a user’s cloned voice during testing
 
In this day and age I actually truly believe we have to worry *much* more about people like the mad, brainworm-infested, speech-slurring, bear-carcass-hauling, whale-head-decapitating RFK Jr., Secretary of the U.S. Department of Health and Human Services, to pick a purely random example from the current US political horror show of a cabinet lineup of science-denying, ignorant politicians who, among so many other things, control the funding of scientists... the question of what could possibly go wrong only becomes ever more pressing in these times...

A very poor response. Very poor.

* Please use the brain that Germany gave you.
 
Personal insults instead of rational arguments - very classy indeed...

Here is the fact: "these times" refers to any year or date you care to name. The military will certainly fund a Terminator. No doubt. The people involved will get very rich. Full stop.

This issue has nothing to do with the average person. Nothing. Those who develop these technologies are sometimes completely unsophisticated. The idea of a very large pay-out for giving the military what they want overcomes rational thought. Always will.

* And we will spend the rest of our lives never going to work. We will drink piña coladas on a beach in the Bahamas - forever.
 
