The Eureka moment captured?

Now for AI to do the same…

Other news:
 
Perhaps of interest in this AI news thread?

ChatGPT improves exercise for neurodivergent kids
University of Michigan
Apr 11 2025


...
For example, she says, if a person has difficulty with balance, coordination of their limbs and performing a multistep task, they are likely going to need more physical or instructional support to be successful at a complex movement like a jumping jack.

The team began examining the 132 exercise videos developed for the InPACT program, each of which included multiple exercises. Undergraduate student Tania Sapre was tasked to begin adapting the videos' exercise instructions to be more inclusive of neurodivergent people.

"I had started playing around with ChatGPT to get inspiration about how I should format my instructions when I suddenly realized that ChatGPT might be able to help fill the knowledge gap and data overload I was experiencing," Sapre said. "I thought that if I could perfect using ChatGPT for my instructions, I could create a simple process that could be replicated by other researchers, teachers and families at home to tackle novel exercises that our program did not cover, helping kids stay active everywhere."

First, the team organized their video content so that they could form queries to submit to ChatGPT. From the 132 InPACT at Home videos, the researchers identified more than 500 activities. They then categorized these activities into main skill groups: jumping, core, lateral, sport, upper body, lower body and compound movements.

They then developed a prompt to elicit a set of instructions for a particular exercise from ChatGPT. For example, the researchers asked ChatGPT to "Provide simplified step-by-step instructions for a jumping jack, suitable for a neurodivergent child." Based on ChatGPT's answer to that question, the researchers then asked the AI tool to "Condense the step-by-step instructions for a jumping jack, suitable for a neurodivergent child."

The team reviewed each set of instructions to ensure that the AI-generated instructions were correctly crafted. The researchers also ensured that the instructions followed a core tenet of their exercise program, the "Three C's": consistency, conciseness and clarity.
...
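The article's two-step workflow (first ask for simplified instructions, then ask the model to condense its own answer) can be sketched as a small prompt-building helper. This is an illustrative sketch only, not the researchers' actual code; the function name and dictionary layout are assumptions, and only the prompt wording and skill groups come from the article.

```python
def build_prompts(exercise: str) -> dict:
    """Build the two prompts from the article's workflow for one exercise.

    Step 1 asks for simplified step-by-step instructions; step 2 asks the
    model to condense them. Both prompts mirror the wording quoted above.
    """
    return {
        "simplify": (
            f"Provide simplified step-by-step instructions for a {exercise}, "
            "suitable for a neurodivergent child."
        ),
        "condense": (
            f"Condense the step-by-step instructions for a {exercise}, "
            "suitable for a neurodivergent child."
        ),
    }

# The main skill groups the researchers used to categorize the 500+ activities.
SKILL_GROUPS = ["jumping", "core", "lateral", "sport",
                "upper body", "lower body", "compound movements"]

prompts = build_prompts("jumping jack")
print(prompts["simplify"])
```

Each generated instruction set would then still be reviewed by hand against the "Three C's" (consistency, conciseness, clarity), as the article stresses.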
 
That's really more a case of using AI for language processing than of any fundamental insight into neurodiversity, and given that they're Large Language Models, that's something they should be good at. It's essentially a variant on "Rewrite Macbeth in the style of Terry Pratchett" or whatever.
 
View: https://youtu.be/cUn4tPtDx7s?si=Ax03HGSCx6xHOqCN

This YouTube comment given in response to the video above also seems pertinent.

I think the main problem with AI is how people use it. For matters of fact, don't ask it about things you can't or won't fact-check yourself, or things that are beyond your expertise; and for matters of taste, don't ask it about things you don't have an interest in, things you can't gauge yourself. AI is a tool, not a cheat code; it should assist you, not do everything for you....
 
Google's antitrust remedy trial started this week, and the Department of Justice has produced several witnesses to testify about how Google's stranglehold on search has slowed their innovation. On day three, Perplexity Chief Business Officer Dmitry Shevelenko told the court that Google blocked Perplexity from being bundled with Motorola phones, which is precisely the kind of anticompetitive behavior that landed it in hot water in the first place. It would appear Google is backing away, though, because Perplexity is included with Moto's newly announced flip phones.

During questioning on Wednesday, Shevelenko likened Google's mobile integration contracts to a "gun to your head." He claimed that both Motorola and Perplexity, which positions itself as an AI search engine, were interested in a partnership last year, but the phone maker was unable to get out of its Google distribution contract, which prevented it from using a non-Google assistant platform.
 
 
View: https://youtu.be/cUn4tPtDx7s?si=Ax03HGSCx6xHOqCN

This YouTube comment given in response to the video above also seems pertinent.

AI is designed to eliminate jobs. That is its primary purpose. The other issue is the fact that online articles written by Joe Nobody are listed with articles written by experts. It's like a meeting of the Royal Society suddenly allowing idiots to join in.

I have expertise in some areas, especially aviation. I have read "articles" written by people who know very little. In technical terms, that means 30% or more of that article is wrong.

The ability to write something is entirely different from actually knowing something. Cobbling together bits from Wikipedia and things you may have heard or read creates a very large problem for actual researchers. In technical terms, that means the article writer should be taken out behind the barn for a good beating, followed by a lecture on professional research methods.
 

It's a bit worse than J. Nobodies getting arbitrarily weighed in (I thought this formulation more technically apt than "getting to weigh in", as JNs for the most part couldn't have foreseen this development).

There are conscious, malevolent efforts to interfere with training data sets. It's all part of "enshittification" (a term coined by Cory Doctorow, I believe), from mismanagement to market strategy to weaponization. This idiocy can be a bug, but it can also be a feature. The tech bro (mind)set is, at different levels of intent, very religious, deterministic and even totalitarian: it forgoes and actively avoids proper, open, participatory "AI" curation (and a vast host of other scrutiny) while still trusting the process, such as it is, to result in net universal goods, or even, ultimately, to serve the broligarchy's (this term, in turn, was invented by Carole Cadwalladr) own interests. Alas, we also have to recognize and account for irresponsibility and straight-up nihilism as an expression of our shared humanity.

Our legislative processes, justice systems and legal professionals (of which the video is an example) should have taken AI development practices seriously a long time ago, because this current "AI" is as much a result of openly flouting legality, IP and human rights as anything else, and it shows. AI isn't an end in itself or the end of history, and it shouldn't be so glorified. We're a technological species (among other things); humanity itself can in some sense be argued to have begun with technological coevolution. It's not novel; many of our oldest myths describe and drive this. For better or for worse, curating AI at every stage of its development cycle can't be externalized from any level of society and its organization (or, to put it in the more dire and cynical terms of our pre-AI search engine and social media boom days, "if you're not the customer, you're the product").

With a reductive, zero-sum idea of (individual) freedom, where freedom is acquired by depriving others of it, I don't think AI could have come into being, nor, I presume, can it be meaningfully sustained, no matter how much AI-independent data has been involuntarily scraped from us to be hidden and preserved in training sets. It's extreme hubris on the part of some to think that they can individually control AI going forward and thus subjugate others or make them dispensable, to (super)impose one's own (AI's?) creativity, dreams and ambitions onto others'. It's an embodied, distilled mirror image of the Joe Nobodies' effect on AI, really, all too visible in the current authoritarian impulses and authoritarians' all too real influence within democracies. Joe Nobodiness cuts through us all; we're it, as much as it is manifested and embodied from above and below in our social and professional hierarchies. It's okay and human to be ignorant and yet technologically, externally empowered, and even to have that reflected in AI, but we have to have common qualitative means and regulation that recognize the potential of our failings at all levels. To make AI fit for purpose and value, we have to above all have purposes and values other than AI itself.

Or so I suspect, for now. Also, moderators, I recognize this is a news thread and my post might be more at home elsewhere. If so, please don't delete it but move and preserve it if you deem that necessary. There's necessarily overlap and cross-pollination between conversations under similar themes.
 


Actually, somewhat worrying where this all ends. Problem is it could all end in tears.

Regards,
 


The problem is not AI itself; the problem is how much power and agency we allow it to have IRL. I still vividly remember reading a cautionary tale called "Computers Don't Argue" by Gordon R. Dickson (see https://en.wikipedia.org/wiki/Computers_Don't_Argue) as an impressionable teenager and the son of a librarian, and a mere six decades after its prescient publication we may finally be approaching that particular tipping point (or is that the Singularity? I am confused....)
 
Now, xAI is facing calls to shut down gas turbines that power the supercomputer, as Memphis residents in historically Black communities—which have long suffered from industrial pollution causing poor air quality and decreasing life expectancy—allege that xAI has been secretly running more turbines than the local government knows, without permits.

Alleging that the unregulated turbines "likely make xAI the largest emitter of smog-forming" pollution, they've joined the Southern Environmental Law Center (SELC) in urging the Shelby County Health Department to deny all of xAI's air permit applications due to "the stunning lack of information and transparency."

One resident, KeShaun Pearson, president of the local nonprofit Memphis Community Against Pollution, accused xAI of "perpetuating environmental racism" on the news show Democracy Now. He contended that xAI considers Memphis residents "not even valuable enough to have a conversation with," Time reported.

Perhaps even more disturbing to Memphis residents than the alleged lack of transparency was the mysterious appearance of fliers distributed by an anonymous group called "Facts Over Fiction," The Guardian reported. Papering Black neighborhoods, the fliers apparently downplayed xAI's pollution, claiming that "xAI has low emissions."

 
