I think I'll offer my 2 cents on this - the recent developments in AI have been impressive - and controversial.
I won't delve into the debate over consciousness and sentience - I don't think it's a particularly relevant discussion for practical matters, since a non-conscious AI is just as capable of acting in any given way if instructed to do so.
I'd like to avoid leaning too much into doomsday fearmongering as well, and stick to the hard economic facts about the AI of today, focusing on large language models (LLMs) - ChatGPT in particular - and its parent company, OpenAI.
First, a quick blurb on LLMs. After being trained on an ungodly large corpus of text (as in, the entire textual output of humanity so far - all the books, articles, forum posts etc., probably including the ones people in this thread have made), these models learn the structure of human language and become able to complete text with eerie accuracy. Whether this counts as sentience or reasoning is a controversial matter - there's a lot of evidence that ChatGPT's problem-solving ability comes in large part from the sheer size of the training set it was subjected to.
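To make "completing text" concrete, here's a minimal sketch using the small, openly available GPT-2 model through the Hugging Face transformers library. (ChatGPT itself is only reachable through OpenAI's API, but the underlying mechanic - predict the next token, over and over - is the same.)

```python
# Minimal text-completion sketch using the open GPT-2 model via the
# Hugging Face `transformers` library (pip install transformers torch).
# ChatGPT is a much larger, instruction-tuned model behind a paid API,
# but the core operation is identical: predict the next token, repeatedly.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models work by"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```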
This training data is the most significant and most difficult-to-reproduce ingredient of large language models. The other two are the huge amount of processing power used for training, enabled by Nvidia's GPUs, and the Transformer architecture described in Google's 2017 paper "Attention Is All You Need" (although other approaches exist).
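For the curious, the core of that Transformer architecture is a single operation called scaled dot-product attention. A toy NumPy version follows - real models stack many of these with learned projections and multiple heads, so treat this as a sketch of the idea, not the real thing:

```python
# Toy sketch of scaled dot-product attention, the core operation of the
# Transformer from "Attention Is All You Need" (Vaswani et al., 2017).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each token's query is compared against every token's key;
    # the resulting weights decide how to mix the value vectors.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

seq_len, d = 4, 8  # 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq_len, d)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one mixed vector per token
```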
To reiterate the importance of training data: it has been shown that when language models are fed their own output, they start degrading (so-called "model collapse"), so it is crucial to acquire human-written inputs and reject LLM-generated ones, as they 'poison' the training set.
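This degradation is easy to demonstrate with a toy stand-in. In the sketch below, a one-dimensional Gaussian plays the role of the LLM: each "generation" is trained purely on samples from the previous one, and the distribution's tails tend to wither away. It's an illustration of the dynamic, not a simulation of any actual model:

```python
# Toy illustration of "model collapse": fit a model to samples drawn from
# the PREVIOUS model (instead of real data), repeat, and watch the learned
# distribution degenerate. A 1-D Gaussian stands in for an LLM here.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=20)   # small "human-written" dataset

for gen in range(1, 51):
    mu, sigma = data.mean(), data.std()          # "train" a model on current data
    data = rng.normal(mu, sigma, size=20)        # next generation sees only model output
    if gen % 10 == 0:
        print(f"generation {gen:2d}: std = {sigma:.3f}")
# The spread tends to shrink generation after generation: the tails of the
# distribution (the rare, interesting content) are the first to disappear.
```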
Considering the importance of training data, the issue of consent arises. The use of text to train AI systems has been labelled 'fair use' - although I'm not sure how much legal rigor is behind this claim, considering how recent a phenomenon generative AI is, and how incredibly valuable it is looking to become. All the works of humanity, fed into this black box without anyone's consent, bypassing copyright, is in my humble opinion controversial from both a moral and a legal perspective - as is the claim that the outputs of such models are apparently not copyrightable.
Speaking from the heart, I'd say that having all of humanity's knowledge available in such an intuitively accessible form far outweighs the negative effects on copyright holders - as long as such a font of knowledge is made available to everyone: openly, fairly, freely and without bias. Thus anyone training such a system has a moral obligation to share their results openly, or failing that, at least not to try to block others in that endeavor.
Imo, that is exactly what the current Silicon Valley giant OpenAI (a name so ironic that it has inspired widespread mockery) is trying to prevent. They know that whatever advantage they have in this race is temporary: anyone else can harvest the same data, train the same models (with compute that's getting exponentially more affordable) and end up with the same product - but hopefully free, with its output not metered out by its masters, not restricted by an EULA, not beholden to proprietary 'restrictions', 'alignment' and so on.
Open-source(ish) models, such as Facebook's LLaMA and its derivatives, and others trained fully in the open, have already started popping up, and according to benchmarks, the best of them are rapidly catching up to GPT-3.5 (the older OpenAI model behind the free ChatGPT). Running these models is entirely feasible on beefy gaming PCs, not to mention on obsolete data center cards that can be had used for a few hundred bucks on eBay, allowing the everyman to have the sort of capability OpenAI wants to regulate for only themselves.
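For instance, with the llama-cpp-python bindings (pip install llama-cpp-python), running a quantized open model locally takes a handful of lines. The model filename below is a placeholder for whatever GGUF build you download yourself:

```python
# Local-inference sketch using the llama-cpp-python bindings.
# The path is a placeholder: substitute a quantized GGUF build of an
# open model (e.g. a LLaMA derivative) downloaded separately.
# Runs on an ordinary gaming PC, CPU-only if it has to.
from llama_cpp import Llama

llm = Llama(model_path="./models/open-model-7b.Q4_K_M.gguf")  # placeholder path
out = llm("Q: Why run a language model locally? A:", max_tokens=64)
print(out["choices"][0]["text"])
```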
I feel like the OpenAI master plan looks something like this: get legislators to ban or hinder the competition, making everyone reliant on their AIs, which stay hidden on their servers; use the user feedback to further train their models, and acquire proprietary knowhow from anyone using their AI (which will be everyone); meanwhile, flood the internet with generated output, poisoning any attempt to train a competitor AI by naively scraping the web. Since they have records of everything their AI has ever generated, they alone will be able to filter their own outputs back out.
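That filtering step is trivially cheap if you're the one who served the text in the first place. A sketch of the idea, with purely made-up data - and note a real pipeline would need fuzzier matching, since exact hashes break on the slightest paraphrase:

```python
# Sketch of the filtering advantage described above: a provider that keeps
# a record (here, just SHA-256 hashes) of every completion it has served
# can cheaply drop its own output from freshly scraped training data.
# Competitors scraping the same web have no such record. All data below
# is illustrative, not any company's actual pipeline.
import hashlib

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

served_outputs = {fingerprint(t) for t in [
    "Sure! Here are five tips for better sleep...",   # logged completions
    "As an AI language model, I cannot...",
]}

scraped = [
    "Sure! Here are five tips for better sleep...",   # actually model output
    "My grandmother's handwritten recipe for stew.",  # genuine human text
]

clean_corpus = [doc for doc in scraped if fingerprint(doc) not in served_outputs]
print(clean_corpus)  # only the human-written document survives
```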
You have to understand that these concerned ideologues are able to turn on a dime to orient themselves into the most profitable position: the very same people own and push crypto companies championing 'free' and 'democratic' banking, free of pesky regulators, while favoring heavy-handed AI regulation.
Their fearmongering essentially falls into 3 categories:
- Sci-fi fever dreams of superintelligent AI enslaving humanity, despite the fact that it's a reasonable assumption that current language models won't be able to surpass human intellect (or, in my opinion, even come close), since they are essentially trying to mimic human writing
- Scaremongering about threat actors using it to conduct cyberattacks by scanning source code for vulnerabilities, or to cook up bioweapons, while failing to mention that it is essentially quoting existing source code and internet articles rather than doing anything novel - and that said information is already out there for anyone determined to find it
- Saying it can be used to spread disinfo/propaganda. This is true, but considering the progress of open-source and third-party models, and the fact that these models don't necessarily need to be all that sophisticated to be useful for this, stopping them would essentially require outlawing text-generation AI altogether. (Not saying they aren't going for this.)
And do not forget: should criminals or the military of a foreign country decide to use large language models for their own nefarious purposes, the last thing deterring them will be a piece of paper saying they're not allowed to do that.
TLDR: Former crypto-bros try to monopolize the AI sector by training AI on the entire textual output of humanity, dubiously obtained, while using cheap scaremongering tactics on uninformed rubes to make it illegal for others to do the same.
I wish legislators would realize the extreme conflict of interest inherent in companies trying to regulate the very field they operate in, and would require them to actually demonstrate the plausibility of their doomsday scenarios.