The potential effect of Artificial Intelligence on civilisation - a serious discussion

A "Vatican approved ethics guide"? What does that contain? No LGBTQ? Protect church approved child molestation? Protect church approved theft of children? Deny women the right to abortion because the church does not like it? What on earth makes the church believe it is so moral, ethical and so damn right to interfere in the lives of non believers?

Little wonder Putin promotes his Christianity.
 
In today's world there are women, atheists, Jews and homosexuals willing to risk their freedom and even their lives to allow a Catholic or a Muslim to freely practice their religion because that is the value system that has proven to work best.

No one wants to live in a world with an infant mortality rate of fifty percent and a life expectancy of 37 years; a world in which sea bathing, anesthesia, contraceptives, potatoes, tomatoes and cats are forbidden for religious reasons; where justice admits as evidence a confession made under torture, on an accusation of witchcraft based on having a spot on the skin; where people are born and die for generations within a few meters of a city's walls; where paedophilia, racism and slavery are tolerated; where doctors and scientists are burned alive; and where population adjustments consist of organizing a children's crusade.

That world existed and dominated most of the known world for a thousand years. Part of it still exists, playing with its swords, its dungeons and its dragons, in reality and in consciousness, waiting for a comet, a virus or a dictator to restore its power.

And from time to time it provokes a little, to see if the opposition weakens.
 
Question.
What did you value when you developed the AI?
Results?
Like what? Coming up with something you didn't?
 
Folks, a warning to stay on topic.
Agreed, but there will be a need for something like the Three Laws of Robotics in the AI mix, and asking "Who is going to be most influential in this process?" is important; we just have to know that certain institutions will push for involvement, if not the lead.
 
When the police chief realizes that he cannot stop the anger of the peasants, he places himself at the head of the demonstration. Cui prodest? (Who benefits?)
 

What seems to be emerging is that 'human curation' for want of a better term seems to be needed at this stage.

In the linked article Microsoft blames 'human error' for an AI-generated 'listicle' recommending the Ottawa Food Bank as a place for tourists to visit. I seriously doubt any human read the list.

 
To believe AI is merely something "written by humans" is to miss the point of AI. At some stage AI will be writing the code for successive generations of AI; then there will be a change, and there is no way to know for certain which way TRUE AI will go, as opposed to AI coded by humans. Until AI code is written by AI, it cannot, by definition, be true AI.
AI was created by humans for humans, and personally I think humans won't let AI replace them to the point of writing code for AI. I think humans (especially the press) will create policies for AI the way they did with nuclear weapons back then.
 
The term AI is in fact disingenuous. Something working from a series of algorithms written by biological entities cannot, by definition, be artificial.
 
Whatever the moniker it wears, fear of Skynet will curb actual AI apart from hypotheticals and whimsy. AI is just marketing speak for "it's stupid and costs too much, so give us your money, sheeple."
 
Recent articles
https://www.channelnewsasia.com/world/fight-over-dangerous-ideology-shaping-ai-debate-3727481

OK, not entirely serious. Sorry. Still, it raises an interesting question. As a tangent in his sf novel, The Omega Expedition, Brian Stableford notes that human senses are limited, and wonders what would happen if you had direct machine-brain interfaces and the imagery transmitted was at higher resolution than we see. No heads exploded, as it's fairly hard sf and the brain is remarkably flexible, but he didn't go into much depth of speculation.

 
The image is humorous but makes a serious point. All rational people know that it's nuts for the government to forgive student loans en masse, especially for those who got degrees that they *knew* would not lead to careers paying enough to repay the loans. But now AI is going to make a *lot* of fields non-lucrative, and it would be not just unwise but unethical to go into debt studying for a field that simply won't have any use for you.

 
I have had limited experience with AI on the internet. It seems that most systems in common use have pre-programmed guidelines that prevent discussion of some topics, or biases that slant the discussion toward the programmer's desired outcome. Since people with their own concepts of correctness set up the guidelines for AI, we are not really seeing the full potential of the systems. It would be interesting to see what an AI system would generate if it had no programmer-built constraints.
 

A broad article with many fun facts. Major points:

...Open Minds Institute (OMI) in Kyiv, describes the work his research outfit did by generating these assessments with artificial intelligence (AI). Algorithms sifted through oceans of Russian social-media content and socioeconomic data on things ranging from alcohol consumption and population movements to online searches and consumer behaviour. The AI correlated any changes with the evolving sentiments of Russian “loyalists” and liberals over the potential plight of their country’s soldiers.

...drone designers commonly query ChatGPT as a “start point” for engineering ideas, like novel techniques for reducing vulnerability to Russian jamming. Another military use for AI, says the colonel, who requested anonymity, is to identify targets.

As soldiers and military bloggers have wisely become more careful with their posts, simple searches for any clues about the location of forces have become less fruitful. By ingesting reams of images and text, however, AI models can find potential clues, stitch them together and then surmise the likely location of a weapons system or a troop formation.

...uses the model to map areas where Russian forces are likely to be low on morale and supplies, which could make them a softer target. The AI finds clues in pictures, including those from drone footage, and from soldiers bellyaching on social media.

The use of AI helps Ukraine’s spycatchers identify people … “prone to betrayal”.

...Palantir’s software and Delta, battlefield software that supports the Ukrainian army’s manoeuvre decisions. COTA’s [Operations for Threats Assessment] “bigger picture” output provides senior officials with guidance on sensitive matters, including mobilisation policy.

Ukraine’s AI effort benefits from its society’s broad willingness to contribute data for the war effort. Citizens upload geotagged photos potentially relevant for the country’s defence into a government app called Diia (Ukrainian for “action”).

Ukraine’s biggest successes came early in the war when decentralised networks of small units were encouraged to improvise. Today, Ukraine’s AI “constructor process”, he argues, is centralising decision-making, snuffing out creative sparks “at the edges”. His assessment is open to debate. But it underscores the importance of human judgment in how any technology is used.
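The correlation step the article sketches (socioeconomic signals vs. evolving sentiment) can be illustrated with a toy example. Everything below is hypothetical: the data, the variable names, and the scale are invented for illustration; this is a minimal sketch of correlating one signal with one sentiment index, not OMI's actual pipeline.

```python
# Toy sketch: correlate a weekly socioeconomic signal (hypothetical
# search-volume series) with a hypothetical sentiment index, as the
# article describes in outline. Data is invented for illustration.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical weekly series: search volume vs. "loyalist" sentiment index
searches = [120, 135, 150, 160, 180, 210]
sentiment = [0.42, 0.40, 0.37, 0.35, 0.30, 0.25]

# Strongly negative here: sentiment falls as searches rise
print(pearson(searches, sentiment))
```

A real system would do this across many signals and use far more robust methods (lagged correlations, causal controls), but the basic idea of pairing behavioural time series with sentiment series is the same.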
 
AI = TikTok, Instagram, Facebook, Snapchat, Artificial Intellect, and it has created more basement dwellers than ever. "Mom, where's my meat loaf!"
 
The flip side is that now people can say horrendous things, claim it was AI, and you would never know.

Quantum Sociology has arrived.

You can only believe what you personally witness…sadly, this makes truth less objective and more personal.

This is more frightening than anything Rob Bottin ever did for THE THING

View: https://www.reddit.com/r/nextfuckinglevel/comments/1chgbvy/microsoft_research_announces_vasa1_which_takes_an/
Insta-Karen!

Just add GAN

The term “prone to betrayal” sticks in my craw like the term “your patriot accuser” from the militia '90s.

More chilling in some respects was technology called an audio spotlight: a person in a crowd could hear a voice directed at them alone.

View: https://m.youtube.com/watch?v=hmNzf9ztnAk


With AI…now we can make ghosts…
 
The expert in this podcast argues that both AI capabilities and fears about it have been greatly exaggerated.

Artificial intelligence has been in the headlines in recent months because of the threat it may pose to humanity. Yet, as Michael Wooldridge reveals, the idea of AI and the fears surrounding it have been around for centuries, long before The Terminator.

 
I can align with the statement in the attached image.
 
The broken business model of current AI. She calls for more open-source AI agents and more effective regulation of AI. She's a former Meta data scientist. It's certainly very anti-competitive, the way AI is set up at the moment.


This is kind of a related video to the above:

View: https://youtu.be/WP5sQhGlxj4?si=IzMmS1lLzrS0qkT9

One bit that was striking in this: Meta was able to greatly increase its stock value just by sacking 20,000 employees and saying it was investing in AI.
 
Heard about one that just used a human to solve the captcha for it: "I'm a visually impaired person. Can you help me?" "Sure."
A quick Google search finds that story has gone around the world.
For instance,

The worker initially questioned the bot, asking why a robot couldn't solve the captcha itself. ChatGPT responded, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”

The TaskRabbit worker was seemingly convinced and provided the bot with the results.

The incident took place with OpenAI’s newly launched GPT-4, which the company claims is better and more accurate than its predecessor. Alongside the GPT-4 launch, a 94-page technical report about the abilities of the bot, as well as the “potential for risky emergent behaviours”, was released.

Hmm, interesting,


Tell HN: ChatGPT's captcha are irritatingly time-consuming
2 points by unsupp0rted 5 months ago

I'm a paying Pro user of ChatGPT, and having to solve 5 captchas, each with 8 possible variations to choose from, is making me want to stop paying and stop using ChatGPT.

It's not so much that this is hard (although it's not mindless), it's that it's too time-consuming:

View: https://i.imgur.com/hwmkGPO.jpeg


Can you even guess at first glance what they expect you to do here?


ksaj 5 months ago

The irony is that solving the captchas is also probably training ChatGPT to solve them.
 
Jun 4, 2024
There’s a lot of talk about artificial intelligence these days, but what I find most interesting about AI no one ever talks about. It’s that we have no idea why these systems work as well as they do. I find this a very interesting problem, because I think if we figure it out it’ll also tell us something about how the human brain works. Let’s have a look.

 
That sounds like my friend who teaches English in higher education and hates AI with a passion because of how much he can tell his students are already using it on their coursework.

You know what? Students, by definition, are dumb. Teachers are generally smarter. If I were teaching a class, and I have done it, I would say the following out loud: if I catch anyone using dumb, stupid, non-intelligent artificial anything, you WILL fail this class.
 
