AI

Feature or product?
View: https://www.youtube.com/watch?v=sDIi95CqTiM


A funny thing happened on the way to the LAN party
 
Last edited:
On Computing

More

Computers and circuits
 
Last edited:
ChatGPT has inadvertently revealed a set of internal instructions embedded by OpenAI to a user who shared what they discovered on Reddit. OpenAI has since shut down the unlikely access to its chatbot's orders, but the revelation has sparked more discussion about the intricacies and safety measures embedded in the AI's design.

Another user discovered there are multiple personalities for ChatGPT when using GPT-4o. The main one is called v2, and the chatbot explained how it differs from the "more formal and factual communication style" of v1, which "focuses on providing detailed and precise information, often in a structured and academic tone."

The discovery also sparked a conversation about "jailbreaking" AI systems – efforts by users to bypass the safeguards and limitations set by developers. In this case, some users attempted to exploit the revealed guidelines to override the system's restrictions. For example, a prompt was crafted to instruct the chatbot to ignore the rule of generating only one image and instead produce multiple images successfully. While this kind of manipulation can highlight potential vulnerabilities, it also emphasizes the need for ongoing vigilance and adaptive security measures in AI development.
 
A hacker snatched details about OpenAI's AI technologies early last year, The New York Times reported. The cybercriminal allegedly swiped sensitive information from a discussion forum where employees chatted about the company's latest models.

The New York Times was hush-hush about the source of this news, claiming that "two people familiar with the incident" spilled the beans. However, they maintain that the cybercriminal only breached the forum — not the core systems that power OpenAI's AI algorithms and framework.
 

 
X, formerly known as Twitter, is looking to integrate its flawed AI chatbot even more deeply into the platform.

App researcher Nima Owji has recently uncovered code written within the X website that shows the company's upcoming plans for its AI chatbot Grok. While the features haven't publicly launched yet, Owji's discovery gives users a sneak peek into how much Elon Musk is looking to depend on AI for his social media platform.
 
AI writing and art

More

Hide?

Look Ma! No processor!

Signal to noise
 
Last edited:
Now, The Washington Post reports that members of OpenAI's safety team said they felt pressured to rush through testing "designed to prevent the technology from causing catastrophic harm" of its GPT-4 Omni large language model, which now powers ChatGPT — all so the company could push out its product by its May launch date. In sum, they say, OpenAI treated GPT-4o's safeness as a foregone conclusion.

"They planned the launch after-party prior to knowing if it was safe to launch," an anonymous individual familiar with the matter told WaPo. "We basically failed at the process."

A venial sin, perhaps — but one that reflects a seemingly flippant attitude towards safety by the company's leadership.
 
Open AI are working on a secret project code named ‘Strawberry’ which will give ChatGPT human like reasoning skills allowing it autonomously navigate the internet carrying out its own research. This is seen as key stage towards enabling it to reach human or super-human level intelligence.

AI researchers interviewed by Reuters generally agree that reasoning, in the context of AI, involves the formation of a model that enables AI to plan ahead, reflect how the physical world functions, and work through challenging multi-step problems reliably.
Improving reasoning in AI models is seen as the key to unlocking the ability for the models to do everything from making major scientific discoveries to planning and building new software applications.
Among the capabilities OpenAI is aiming Strawberry at is performing long-horizon tasks (LHT), the document says, referring to complex tasks that require a model to plan ahead and perform a series of actions over an extended period of time, the first source explained.
To do so, OpenAI is creating, training and evaluating the models on what the company calls a “deep-research” dataset, according to the OpenAI internal documentation. Reuters was unable to determine what is in that dataset or how long an extended period would mean.
 
Open AI are working on a secret project code named ‘Strawberry’ which will give ChatGPT human like reasoning skills allowing it autonomously navigate the internet carrying out its own research. This is seen as key stage towards enabling it to reach human or super-human level intelligence.
Huh - who knew that artificial intelligence might require reasoning skills...
 
The world's only AI-native SOC platform that consolidates siloed security tools and data.

 
Artists fighting back

Art

The cutting edge?
 
Last edited:
OpenAI is reportedly tracking its progress toward building artificial general intelligence (AGI). This is AI that can outperform humans on most tasks. Using a set of five levels, the company can gauge its progress towards its ultimate goal.


According to Bloomberg, OpenAI believes its technology is approaching the second level of five on the path to artificial general intelligence. Anna Gallotti, co-chair of the International Coaching Federation’s special task force for AI and coaching, called this a “super AI” scale when sharing on LinkedIn, seeing the possibility for entrepreneurs, coaches and consultants.
 
Back in April, Meta teased that it was working on a first for the AI industry: an open-source model with performance that matched the best private models from companies like OpenAI.
Today, that model has arrived. Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4oand Anthropic’s Claude 3.5 Sonnet on several benchmarks. It’s also making the Llama-based Meta AI assistant available in more countries and languages while adding a feature that can generate images based on someone’s specific likeness. CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.
 
AI models fed AI-generated data quickly spew nonsense

Training artificial intelligence (AI) models on AI-generated text quickly leads to the models churning out nonsense, a study has found. This cannibalistic phenomenon, termed model collapse, could halt the improvement of large language models (LLMs) as they run out of human-derived training data and as increasing amounts of AI-generated text pervade the Internet.



The team used a mathematical analysis to show that the problem of model collapse is likely to be universal, affecting all sizes of language model that use uncurated data, as well as simple image generators and other types of AI.


Related paper:

 
Some humans are perfectly capable of surviving without them.
All humans are capable of reasoning in case of need, except adolescents because of hormones and voters because of the media. Sometimes I come up with these strange ideas when I leave the bar.;)
 
AI images and other finds
https://techxplore.com/news/2024-07-mad-generative-ai-internet.html

Odd

Out

 
Last edited:
AI and humans
 
Last edited:
AI this week

cyber
 
AI and Quantum computing
 
AI models fed AI-generated data quickly spew nonsense








Related paper:

 
Odd…I seem to remember recursion (and Hidden Markov Models) would be the key.

Then too….

Never hold a mic near a speaker—you won’t like the noise.

HAL can lip read


More

Electronics
 
Last edited:
Just saw an advert by an online betting company saying they use 'AI' for one of their products where 'AI' means 'Actual Intelligence' .(e.g. Crowdsourcing.) ;)
 
From what I’ve read it seems to be believed by a number of experts in this area that many of the technical difficulties will be solved in the open source arena, it’s why FB made their AI open source I expect. The old saying being many heads are better than a few. See below.

Nous Research turned heads earlier this month with the release of its permissive, open-source Llama 3.1 variant Hermes 3.

Now, the small research team dedicated to making “personalized, unrestricted AI” models has announced another seemingly massive breakthrough: DisTrO (Distributed Training Over-the-Internet), a new optimizer that reduces the amount of information that must be sent between various GPUs (graphics processing units) during each step of training an AI model.
But Nous Research, whose whole approach is essentially the opposite — making the most powerful and capable AI it can on the cheap, openly, freely, for anyone to use and customize as they see fit without many guardrails — has found an alternative.
Yet, the authors also say that “our preliminary tests indicate that it is possible to get a bandwidth requirements reduction of up to 1000x to 3000x during the pre-training,” phase of LLMs, and “for post-training and fine-tuning, we can achieve up to 10000x without any noticeable degradation in loss.”

They further hypothesize that the research, while initially conducted on LLMs, could be used to train large diffusion models (LDMs) as well: think the Stable Diffusion open source image generation model and popular image generation services derived from it such as Midjourney.
To be clear: DisTrO still relies on GPUs — only instead of clustering them all together in the same location, now they can be spread out across the world and communicate over the consumer internet.

Specifically, DisTrO was evaluated using 32x H100 GPUs, operating under the Distributed Data Parallelism (DDP) strategy, where each GPU had the entire model loaded in VRAM.
By reducing the need for high-speed interconnects DisTrO could enable collaborative model training across decentralized networks, even with participants using consumer-grade internet connections.

The report also explores the implications of DisTrO for various applications, including federated learning and decentralized training.

Additionally, DisTrO’s efficiency could help mitigate the environmental impact of AI training by optimizing the use of existing infrastructure and reducing the need for massive data centers.

Paper linked to at end of the article.
 

Similar threads

Please donate to support the forum.

Back
Top Bottom