Sigh. I guess I'm no longer a professional. I don't know what I'll do. Perhaps I'll just keep coming into work.
Sure, till you are replaced by an AI. I bet *it* will have read, and remembered, the classics.
You're really overselling it.
In my experience playing with both Bing and ChatGPT, it just makes stuff up and states very authoritatively that it's true. It will probably be something like this more often than not:
...then again there was that Leo Di adaptation.
Hallucinations are probably impossible to solve, though, because LLMs are bad at contextual information, which is why you have editors or SMEs go over the prompt output. It's far more likely edwest3 will be stuck editing ChatGPT's Dunning-Kruger statements instead of Amazon's self-published authors' Dunning-Kruger statements. Humans can at least learn from their mistakes and correct themselves after an editor's advice; LLMs can't, and we don't know how to make an LLM that can teach itself without a prompt.
They're fine for replacing jobbers like paralegals and self-published authors in certain respects, but they aren't going to replace lawyers or editors, and certainly not doctors. Well, at least not a doctor for a person who can afford to see a doctor. They can definitely "replace" one by giving you inane advice that doesn't help you or even hurts you. That's very on-brand for capitalist modes of production.
There's a ways to go before ChatGPT and friends are reliable enough to actually rely on the way you'd rely on a human, much less surpass one. We tend to give humans the benefit of the doubt because we know that, as humans, they can learn from their mistakes. Alan Turing thought we should do the same for computers, but I don't know if he would say the same about ChatGPT if he truly understood how it worked. It's just a model that spits out statistically relevant information from its training data. It has no way to train itself on anything new, and even if it did, it has no knowledge of the tangible physical world.
This is why it's bad at basic science but very good at sophomoric restructuring of cliché essays and thesaurus copy/paste replacement of words. Most LLMs can barely hold together a plot for more than a dozen prompts, and tweaking thresholds won't solve that, because the limitation is quite literally measured in terabytes of RAM at the top end of model complexity (GPT-4), and even GPT-3 requires pretty close to a terabyte of RAM. Not small at all. It's bitcoin server farm tier.
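For a rough sense of scale, here's a hedged back-of-envelope calculation, assuming only GPT-3's publicly reported 175 billion parameters; actual serving also needs room for activations and the attention cache, and training needs several times more:

```python
# Back-of-envelope: memory needed just to hold GPT-3's weights in RAM.
# Assumes the publicly reported 175 billion parameters; everything else
# (activations, KV cache, optimizer states) only adds to this.
params = 175e9

for label, bytes_per_param in [("fp32", 4), ("fp16", 2)]:
    gigabytes = params * bytes_per_param / 1e9
    print(f"{label}: ~{gigabytes:.0f} GB")  # fp32: ~700 GB, fp16: ~350 GB
```

Even the cheap 16-bit case is hundreds of gigabytes for a model a generation old, which is why this stays in server farms rather than on your desk.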
There's something of an iron triangle at work where you can choose between verbosity of responses, accuracy in recalling information, and training parameter size. However, if you want a stoic AI that just repeats quotes verbatim, you already have that: it's called a book. The further you get from that, the more parameters (thus memory, computational power, electrical use, etc.) you need to maintain accuracy of recall and verbosity of response, or you can forgo accuracy entirely in favor of "make it up" and you get ChatGPT.
This is leaving out the issue that, at the end of the day, you will need people who can curate information to feed the LLM to update its training data, tweak the model's statistical output, and subsequently justify their own existence once the model has reached perfection, much like UI/UX designers in the software realm. Naturally the LLM has no real knowledge of truth or fiction, reality or fantasy, or right or wrong; it's just a statistical model picking words based on its training data.
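To make the "picking words" point concrete, here's a minimal, hypothetical sketch of the loop every autoregressive LLM runs; nothing here is ChatGPT's actual code, and `model` / `tokenizer` are stand-ins for whatever the real system uses:

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Turn the model's raw scores into probabilities and draw one token.
    This is the whole trick: the next word is sampled from statistics,
    never checked against reality."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(x - peak) for x in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

def generate(model, tokenizer, prompt, max_new_tokens=50):
    """Hypothetical autoregressive loop: score the text so far,
    sample a token, append it, repeat."""
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        logits = model(tokens)   # one score per token in the vocabulary
        tokens.append(sample_next_token(logits))
    return tokenizer.decode(tokens)
```

There is no step anywhere in that loop where the output gets compared against the world, which is the whole problem.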
Currently there's no real way to determine whether what an LLM produces is true, false, or right... except by having an SME around.
Train it well on decent C&Ds, written by lawyers, and it will produce some decent C&Ds and nothing else. Cheaper than a paralegal doing the same thing, and the paralegals can focus on researching targets and obscure cases, I suppose.
Train it well on decent literature, written by classic authors like Dostoevsky or Tolstoy, and it will produce sophomoric prose. Garbage in, garbage out applies to prompts as well, and if the prompter is amateurish then I suppose ChatGPT won't put in the effort either. You only need a NovelAI or AIDungeon subscription to learn that.
I think a lot of firms in the coming years are going to find out that LLMs are just as bad, if not worse, than trying to deal with entry-level employees in a specialized field, except the LLM never gets better and is always consistently untrustworthy in the text it produces. This eats valuable time the SME could be spending on their actual job. This may only apply to highly rigorous, high-verbal-intelligence fields like law or medicine, where being verbose and being extremely technically accurate both matter. LLMs are particularly bad at being verbose and informationally accurate at the same time.
ChatGPT certainly has a lot of future growth potential in being used to write airport novels on Kindle though. Of course, this just means self-published authors will be increasing their productivity, not losing their jobs, because they already write the same story anyway.
Edwest is broadly correct that the current hubbub about LLMs is mostly venture capitalists rapidly shuffling their infinite money around, trying to avoid the next dot-com bubble, in a silly effort to avoid acknowledging their money would be better spent handing it over to the government directly so it could be invested in schools and dilapidated infrastructure. A similar hubbub occurred back in the stone ages with LISP machines and expert systems, and it ended in much the same way: a lot of money spent on very exciting technologies that produced some very cool things, only to vanish within a few years outside of some boutique and niche applications.
But do you really need to be reminded what happened last time megacorps went all-in on in-house artificial intelligence departments?
I'm pretty sure I've said before that LLMs are just the result of the 1980s AI department waking up from its 30+ year coke bender and finding out that petabyte storage is cheap, Red China's bitcoin gulags have recently dumped a ton of cheap GPUs on the market, and the combination of cheap, highly capable parallel processors with cheap text storage has made the AI department suddenly relevant again.
They will suddenly find themselves stripped of funding because the giga-gains promised to venture capital "investment" won't occur fast enough. Ventures don't care about returns in the next 10 years; they care about returns in the next 5. LLMs will stall out soon enough once they start running up against hard problems, like the aforementioned computation × accuracy × verbosity iron triangle, but for now they're just running out the slack that has built up since the '80s.
Soon, the ventures will move on to the Next Big Thing and leave the robot boffins in the lurch. Again. Like they did the last two times.
I'm sure the cycle will happen again when someone invents photonic computers in 2053, and again when a quantum computer switches on (for real this time) in 2097. It happens every few decades only because some people (venture capitalists) have more money than sense.
People who actually know how LLMs work will tell you they're just grifting off VCs at this point, that the money gusher is going to shut off sometime this decade, and that if it isn't because LLMs aren't generating money, it will be because of a war or something. How do I know this? Because that's literally how the people who work in the field of LLMs and "AI" talk about it on Twitter. Everyone is sort of waiting for the other shoe to drop because they know we're in the waning phase of the hype cycle and the autumn is coming.
Ironically enough, AI would be in a better state if it justified itself on the basis of basic research, rather than trying to justify its own existence through cult-like hype cycles, but we don't live in that world.