The potential effect of Artificial Intelligence on civilisation - a serious discussion

You seem to think that's an argument for why "X won't happen." It's not.

[said with an Austrian accent] As AI became more and more powerful for civilian decision making, the military began to rely on it more and more. As AI became more developed, it could take on more tasks. It eliminated some civilian jobs. The military, always looking for more funding, instead began to cut its own personnel. The AI became self-aware and realized how stupid humans were. It eliminated all military personnel and deployed robots based on the DARPA-created Figure 01. The people thought this was "progress." They were wrong. The AI, called Skynet, is planning on wiping out the entire human population.
 
Yes, but maybe, no. If what they are doing produces valid results, then it's probable that in the future that's how it'll be done. When I was in junior high, lo these many decades back, I had a drafting teacher who thought that CAD drafting was cheating, that you needed to learn how to use the T-square and do proper lettering and all. I haven't used those skills since the early '90s. Soon it's likely that the "learning" that legal types have needed to date won't necessarily be needed.
There is a difference between learning a different way and not learning at all. Yes, tools change, but students using so-called AI such as ChatGPT to do their homework and assignments for them is not the same thing. In such cases the students risk never learning in the first place. Taken to its extreme conclusion, we could see an engineer or doctor holding a degree and charged with serious tasks but not actually knowing what they are doing, because they relied on something like ChatGPT to do their assignments for them back at university. In such a case, should we pay them for such fake learning?

Extreme perhaps, but my point is that people sometimes focus too much on the end result and forget that the journey there is just as important and, dare I say in the case of learning, enjoyable.

It is also akin to athletes who use steroids to achieve muscle mass rather than traditional workouts. Yes, they may get to the same, or even better, result from one perspective, but at what cost? Not only does society consider such an approach cheating, it also negates the benefit the person might have gained doing things the proper way. As the Roman Stoic philosopher Musonius Rufus wrote: “If you accomplish something good with hard work, the labor passes quickly, but the good endures; if you do something shameful in pursuit of pleasure, the pleasure passes quickly, but the shame endures.”
 
It is also akin to athletes who use steroids to achieve muscle mass rather than traditional workouts. Yes, they may get to the same, or even better, result from one perspective, but at what cost? Not only does society consider such an approach cheating, it also negates the benefit the person might have gained doing things the proper way. As the Roman Stoic philosopher Musonius Rufus wrote: “If you accomplish something good with hard work, the labor passes quickly, but the good endures; if you do something shameful in pursuit of pleasure, the pleasure passes quickly, but the shame endures.”
I know an idiot with an inferiority complex who needed a liver transplant after abusing steroids, but in my opinion the use of AI will be like introducing the machine gun into the sinister universe of swords and dragons.
 
Yes, but maybe, no. If what they are doing produces valid results, then it's probable that in the future that's how it'll be done. When I was in junior high, lo these many decades back, I had a drafting teacher who thought that CAD drafting was cheating, that you needed to learn how to use the T-square and do proper lettering and all. I haven't used those skills since the early '90s. Soon it's likely that the "learning" that legal types have needed to date won't necessarily be needed.
There is a difference between learning a different way and not learning at all. Yes, tools change, but students using so-called AI such as ChatGPT to do their homework and assignments for them is not the same thing.

I didn't have to learn how to bang iron against flint to flip a light switch. Nobody needs to know how coal or water or nuclear power is used to spin up turbines to drive generators to power lights to, well, use a light.


In such cases the students risk never learning in the first place. Taken to its extreme conclusion, we could see an engineer or doctor holding a degree and charged with serious tasks but not actually knowing what they are doing, because they relied on something like ChatGPT to do their assignments for them back at university.

There was a documentary on that very subject some years back, though people did not pay it sufficient attention at the time:
[Idiocracy "I don't know" gif]


In such a case, should we pay them for such fake learning?

Need I bring up Gender Studies professors? We *already* pay people exorbitant salaries for "fake learning." The day of the televangelist seems to be somewhat past, but you can still make bank inventing nonsense, going on TV or a podcast, and rattling off lines of BS.

But in *this* case, if a doctor (or a lawyer, adjust story as appropriate) doesn't know how to diagnose, but knows how to use the AI that *does* know how to diagnose, and they get the job done... then you're happy, no matter how the results were achieved. At some point you'll question the need for the doctor in the equation. At some point, hopefully, there'll be fully automated medbeds... you get in (or a robo-EMT dumps you in), it scans you, finds out what's wrong and promptly fixes you. Who cares if it has no bedside manner whatsoever if it can patch up your broken leg, brain tumor or Blast-O-Matic 5000 burn wounds in seconds?

Extreme perhaps, but my point is that people sometimes focus too much on the end result and forget that the journey there is just as important and, dare I say in the case of learning, enjoyable.

Sure, great. But as a *customer* I don't give a damn that my medical care was created with love and enjoyment, only that it *works.* Similar for my lawyer: when I get hauled before the court for whatever nonsense Big Brother comes up with, my goal isn't to have a lawyer who thoroughly enjoyed his years poring through dusty tomes in the library, it's to have a lawyertron who gets me off scot-free.

Was my food grown on a farm under clear blue skies using illegal alien child labor, or in a hydroponics lab using locally manufactured robots? Leaving aside critters and just how freakin' yummy they are, focus on, say, apples. If Farm Apple and Hydroponics Apple looked, tasted and *were* indistinguishable, would you really care how it came to you... especially if the Hydroponics Apple was cheaper and better for the environment?

It is also akin to athletes who use steroids to achieve muscle mass rather than traditional workouts. Yes, they may get to the same, or even better, result from one perspective, but at what cost? Not only does society consider such an approach cheating, it also negates the benefit the person might have gained doing things the proper way. As the Roman Stoic philosopher Musonius Rufus wrote: “If you accomplish something good with hard work, the labor passes quickly, but the good endures; if you do something shameful in pursuit of pleasure, the pleasure passes quickly, but the shame endures.”
Great, wonderful. Explain to me how a *machine* that diagnoses and treats me fifty times faster and a hundred times more reliably and accurately than a human doctor is "shameful."

And on the subject of Enhanced Sports: at least for the moment, we have men's sports and women's sports (not for long, obviously, if trends continue), because men and women are not the same and function at sufficiently different levels that the results are measurable. So why not Baseline Human Sports, Chemically Enhanced Human Sports, Genetically Enhanced Human Sports, Cybernetically Enhanced Human Sports? Humans watch humans pummel each other in combat sports. Humans watch robots pummel each other in combat sports. It seems reasonable that humans would watch these other types of sports as well. So... why not?
 
But in *this* case, if a doctor (or a lawyer, adjust story as appropriate) doesn't know how to diagnose, but knows how to use the AI that *does* know how to diagnose, and they get the job done... then you're happy, no matter how the results were achieved. At some point you'll question the need for the doctor in the equation. At some point, hopefully, there'll be fully automated medbeds... you get in (or a robo-EMT dumps you in), it scans you, finds out what's wrong and promptly fixes you. Who cares if it has no bedside manner whatsoever if it can patch up your broken leg, brain tumor or Blast-O-Matic 5000 burn wounds in seconds?

Sure, great. But as a *customer* I don't give a damn that my medical care was created with love and enjoyment, only that it *works.* Similar for my lawyer: when I get hauled before the court for whatever nonsense Big Brother comes up with, my goal isn't to have a lawyer who thoroughly enjoyed his years poring through dusty tomes in the library, it's to have a lawyertron who gets me off scot-free.

The power goes out and there's no AI to answer. And there are no doctors around to diagnose you. And no AI to treat you. You go into shock as your vital signs drop. The power is restored, but too late for you. Look at the recent delays at Heathrow Airport: the check-in system failed, and there was no alternative. Or better yet, the hospital AI is hacked and scrambled by some nutter.

No, the scenario you describe is the perfect target for anarchists and terrorists.

Yeah, the Lawyertron will take one look at you and say, "I can't get you out of this one." Because lawyers can always get you off scot-free.

:)
 
Ahem: You *do* know that you don't need to include the *entire* post you're replying to?

The power goes out and there's no AI to answer. And there are no doctors around to diagnose you. And no AI to treat you. You go into shock as your vital signs drop. The power is restored, but too late for you. Look at the recent delays at Heathrow Airport: the check-in system failed, and there was no alternative. Or better yet, the hospital AI is hacked and scrambled by some nutter.

And this is different from now, how? If the power goes out at a hospital while you're hooked up to life support, you're kinda boned unless they have backup. But if they have backup power now, why won't they when there are AI about? If the hospital has *fully* automated, then the nurses will be robots, and they'll doubtless have their own independent power systems. When the power goes out, the full robotic staff will be activated. There will be one robonurse per patient, normally stored powered-down in storage closets and activated en masse for just such an occasion.

But of course if there are sci-fi medbeds that can take you from the ER to outpatient in just a few hours, the hospital is unlikely to have many people in it at any one time. Hell, eventually medbeds will be standard features in *homes*, just like fridges and running water and AC are now. In that case, hospitals might well go the way of servants' quarters.
 
Is there any evidence that the ChatGPT AI will proceed beyond writing 7th grade essays and (sometimes questionable) boilerplate Python and Java code?

In its current state, I see three concrete risks of ChatGPT AI:
1. Users ask AI general, open-ended political/moral questions, get answers, and proceed to act on them in the real world. (This is what regulators are really worried about: the AI must have the right morality)
2. Users off-source all thinking to AI and proceed to do stupid things, because the user is lazy and can't be bothered to cross-reference a generative text engine that invents sources. (This is the biggest risk, another step towards a lazy, dumb population, but nobody cares about it, mainly because most AI proponents welcome the opportunity to be lazy)
3. AI code generation and research skills let a single person do the task of a larger team. This economy of tasks would help "bad actors" achieve larger-scale projects. The impediments are severe: AI so far doesn't do micro-controllers (so the hardest coding problem isn't automated) and AI results are questionable, so the "bad actor" would have to be an incredibly smart multi-subject expert already to use AI profitably. (Regulators don't seem to have thought of this problem)

Honestly, image and voice replication are far more worrisome right now than ChatGPT, given that they are all but custom made for scams. Having your family ripped off hardly counts as the AI apocalypse, but it is far more real.
 
Honestly, when people can agree on a logical definition of AI, let alone anything else, I might get a bit more interested. Until then, conjecture will continue to reign supreme.
 
Honestly, when people can agree on a logical definition of AI, let alone anything else, I might get a bit more interested. Until then, conjecture will continue to reign supreme.
They are defined; it's just that people use the term AI when what they are really concerned about is AGI (Artificial General Intelligence).

 
Honestly, image and voice replication are far more worrisome right now than ChatGPT, given that they are all but custom made for scams. Having your family ripped off hardly counts as the AI apocalypse, but it is far more real.
A common scam is to call an old person and claim to be their grandchild, in some sort of legal or financial trouble, and ask for help via gift cards. On the face of it, it's silly... the scammer often has an accent, doesn't know who the old person's grandchildren actually are, and provides no good reason for the victim to believe them. But old people are often gullible and trending toward senile; the scam works often enough to be worth doing.

Now imagine the phone call is being run by an AI. It has scraped the web for info on the target and knows who their grandchildren are; it has scraped social media for videos of said grandchildren, and can replicate their voices and speaking styles; it knows where they are thanks to constantly updated posts. It sounds like the grandchild and can make a convincing case that it is the grandchild. With a little more work, it can convince a fair fraction of *parents* now. These AI will keep watch specifically for college students heading to, say, Mexico for spring break; the moment their planes land, the parents get a panicked call and a plea for credit card information.

This seems obvious enough that I expect it won't be done simply by scammers, but by national entities looking to drain money from Americans to fill, say, ChiCom coffers. Economies could be quickly crippled this way. Hospitals, factories, agencies and bureaus could be screwed with by "the boss" calling in and telling the facilities people to turn off all the lights; municipal water, power and gas could be shut off if the people who run such things can be convinced to do so.
 
A common scam is to call an old person and claim to be their grandchild, in some sort of legal or financial trouble, and ask for help via gift cards. On the face of it, it's silly... the scammer often has an accent, doesn't know who the old person's grandchildren actually are, and provides no good reason for the victim to believe them. But old people are often gullible and trending toward senile; the scam works often enough to be worth doing.
I have also thought about those things, and I have needed a couple of drinks with friends to overcome the depression. Telephone scams targeting the elderly were also frequent in my country a few years ago. Investigative reports were published concluding that the perpetrators of the fake phone calls were in prison and were part of a fairly important organization run by corrupt Latin American officials. The elders of my generation, who have survived Grace Slick's siren songs, AIDS and COVID, adapt more quickly to new enemies than would be expected; the phone scams have ceased, and now the only false calls are ones offering to "improve" the efficiency of our electricity consumption in exchange for answering a questionnaire. Possibly the scammers will reorganize using new intelligent tools, because the number of elderly grows and their economic value with it... We'll see who wins the next round.
 
ChatGPT versus engineering education assessment: a multidisciplinary and multi-institutional benchmarking and analysis of this generative artificial intelligence tool to investigate assessment integrity

Abstract: ChatGPT, a sophisticated online chatbot, sent shockwaves through many sectors once reports filtered through that it could pass exams. In higher education, it has raised many questions about the authenticity of assessment and challenges in detecting plagiarism. Amongst the resulting frenetic hubbub, hints of potential opportunities in how ChatGPT could support learning and the development of critical thinking have also emerged. In this paper, we examine how ChatGPT may affect assessment in engineering education by exploring ChatGPT responses to existing assessment prompts from ten subjects across seven Australian universities. We explore the strengths and weaknesses of current assessment practice and discuss opportunities on how ChatGPT can be used to facilitate learning. As artificial intelligence is rapidly improving, this analysis sets a benchmark for ChatGPT’s performance as of early 2023 in responding to engineering education assessment prompts. ChatGPT did pass some subjects and excelled with some assessment types. Findings suggest that changes in current practice are needed, as typically with little modification to the input prompts, ChatGPT could generate passable responses to many of the assessments, and it is only going to get better as future versions are trained on larger data sets.
 
Users off-source all thinking to AI and proceed to do stupid things, because the user is lazy and can't be bothered to cross-reference a generative text engine that invents sources. (This is the biggest risk, another step towards a lazy, dumb population, but nobody cares about it, mainly because most AI proponents welcome the opportunity to be lazy)
The lazy invented the wheel, the pulley, the domestication of the horse, sailing, printing and the machine gun, the others populated the reality of ghosts and spirits.
 
The lazy invented the wheel, the pulley, the domestication of the horse, sailing, printing and the machine gun, the others populated the reality of ghosts and spirits.

*All* of engineering is the result of someone working hard to be lazy. We built rockets to go to the moon because it's easier than building a ladder to get there. And who would want to climb it? Bleah.
 

AI-Threatened Jobs Are Mostly Held by Women, Study Shows

Revelio Labs identified jobs that are most likely to be replaced by AI based on a study by the National Bureau of Economic Research. They then identified the gender breakdown of those jobs and found that many of them are generally held by women, such as bill and account collectors, payroll clerks and executive secretaries.

With the obligatory:

“The distribution of genders across occupations reflects the biases deeply rooted in our society, with women often being confined to roles such as administrative assistants and secretaries,” said Hakki Ozdenoren, economist at Revelio Labs.

I'm uncertain if secretaries and administrative assistants are actually being "confined" and not allowed to take jobs as garbagemen, welders, plumbers, oil rig workers...

While I fully expect that sooner or later humanoid androids (like the Nexus 6 and Nestor NS-5 models) will be able to do every physical job a human can do, that day will be further out than AI that can do office jobs that require data entry/manipulation and talking to people (and other AI). So there might well come a period of time when women are ejected from the workforce in notably higher proportions than men. That should prove interesting.
 
On a serious note - yes, I'm being serious here - the unknown risks from AI are apparently known; however, the current AI leadership is content - yes, content - to explain nothing, to not reveal their future plans, which they must have. For now, they are voicing vague "concerns" and soliciting "discussion," which appears to be nothing more than a marketing strategy. Meanwhile, Microsoft - which I cannot believe knows nothing about AI's future - is investing billions.

"M'lord. We have frightened the peasants successfully."

"Good, good. Review their comments to see how far we can go."
 
I had my first negative AI experience last night: the new Bing update includes an AI chat function, and it wanted to chat while I wanted to check something on my phone.

It kept opening over the screen I was trying to read, wanting to chat.

I tried to turn it off and couldn't. I asked it how to turn itself off, and it didn't understand.

I lost my shit and used some very abusive terms. It replied that it was new and learning, said it wasn't going to engage with me if I was inappropriate, and turned itself off.

I imagine I am now on an AI death list somewhere.
 
On a serious note - yes, I'm being serious here - the unknown risks from AI are apparently known; however, the current AI leadership is content - yes, content - to explain nothing, to not reveal their future plans, which they must have. For now, they are voicing vague "concerns" and soliciting "discussion," which appears to be nothing more than a marketing strategy. Meanwhile, Microsoft - which I cannot believe knows nothing about AI's future - is investing billions.
I share this perplexity. The very people who spent billions to create AI are now warning it's a monster that could destroy us all and wanting the governments of the world to shackle them with legislation so they can't unleash this beast upon us. If they are really that concerned, then why bother developing it in the first place?

The cynic in me thinks it must be a stitch-up: either to a) get billions of dollars of state aid for "research" or b) run rings around the techno-ignorant politicians (apparently the UK government is setting up a committee to explain to it what video games are...) so that any legislation is about as effective as a tissue paper umbrella in a monsoon, and they can continue to monetise time-wasting tools like ChatGPT and Midjourney.
 
Always a reasonable bet that the money grabbers are on the job. It is, after all, practically a way of life for business - that and the perennial attempt to separate gullible investors from their moolah.
 
I share this perplexity. The very people who spent billions to create AI are now warning it's a monster that could destroy us all and wanting the governments of the world to shackle them with legislation so they can't unleash this beast upon us. If they are really that concerned, then why bother developing it in the first place?

Dr. Frankenstein creates the monster. The villagers destroy it. I doubt Microsoft and other AI investors have swallowed the "It's going to destroy us all" scenario. Billions of dollars are going in so they can get even more billions back. The peasants are being led to believe that everyone, including the employees of Microsoft, will die because of AI. So, like a magician's trick, the commoners are being told to look at the non-existent monster ambling toward them while the rich get richer. And the rich will live in Seattle, fly to Paris for dinner, see a play in Milan, and fly back.
 
You seem to think that's an argument for why "X won't happen." It's not.

[said with an Austrian accent] As AI became more and more powerful for civilian decision making, the military began to rely on it more and more. As AI became more developed, it could take on more tasks. It eliminated some civilian jobs. The military, always looking for more funding, instead began to cut its own personnel. The AI became self-aware and realized how stupid humans were. It eliminated all military personnel and deployed robots based on the DARPA-created Figure 01. The people thought this was "progress." They were wrong. The AI, called Skynet, is planning on wiping out the entire human population.
Please lay out, in broad but logically coherent terms, how AI would quasi-automatically become self-aware. Once again, artificial intelligence does *NOT* equate to artificial consciousness.
 
Please lay out, in broad but logically coherent terms, how AI would quasi-automatically become self-aware. Once again, artificial intelligence does *NOT* equate to artificial consciousness.

That is the question that requires an answer. AI cannot become self-aware. AI has no wants and needs. Unlike Skynet, it cannot become a villain.
 
Our friend here seems to have swallowed the idea that AI has magical and/or human-like qualities, as if it will wake up one day like a human. The problem is that it won't.
That's not a bug, it's a feature :).
 
Until we know what ties any kind of matter to consciousness - because we don't - this apparently needs repeating:
It might never happen, but given that we haven't determined what tricks of biochemistry and/or physics tie living matter to consciousness, it would be the height of hubris to assume inorganic consciousness is impossible.
If you have proof that inorganic consciousness is impossible, please apply to the Nobel Committee.
 
It might never happen, but given that we haven't determined what tricks of biochemistry and/or physics tie living matter to consciousness, it would be the height of hubris to assume inorganic consciousness is impossible.

I have helped build science fiction worlds for my company. We are currently working on near-future scenarios, which are always based on credible science. AI will be excluded. It is not a threat. It cannot become a villain.
 
It might never happen, but given that we haven't determined what tricks of biochemistry and/or physics tie living matter to consciousness, it would be the height of hubris to assume inorganic consciousness is impossible.
Never say never, but conflating artificial intelligence with artificial consciousness, as has happened above, is just muddying the waters. Maybe it warrants a thread of its own?
 
I have helped build science fiction worlds for my company.
I have an M.Sc. in biology. I am telling you - you cannot know. Science has not figured out what exactly ties consciousness to matter. Until science does, your categorical statement that artificial consciousness is impossible is not supported by evidence; it is your article of faith.
 
A definition of an artificial intelligence, an alien intelligence or a gestalt intelligence is not possible because of how much we do not know. Not only is it harder than we imagine; it is possibly harder than we can imagine.
 
