"proceeds to find patterns" How does it "know" what to look for?
By looking at the past.
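For a concrete, toy version of "looking at the past": a bigram model that predicts the next word purely by counting which words followed which in its training text. The corpus here is made up, and real LLMs are vastly larger and subtler, but the underlying principle of pattern counting from history is the same.

```python
# Minimal sketch of "looking at the past": a toy bigram model that
# predicts the next word purely from frequency counts in past text.
# The corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "the lake rose and the shrimp died and the geese left and the lake froze".split()

# Count, for each word, which words historically followed it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the historically most frequent successor of `word`."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> "lake", because it followed "the" most often in the past
```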
Provided the dataset is labeled fairly accurately, an LLM can reliably recite Newton's laws and tell us they won't change in the future, because they are settled material, restated the same way across millions of training documents.
In other fields, consider that population modeling of wild animals often uses historical die-off records and climate modeling to estimate populations of brine shrimp or geese. Then, if the temperature of a swampy region rises by 0.3 °C, or the level of a briny lake rises by 8% over the next 10 years, we can estimate how much these animals will be affected, because changes of similar magnitude have happened before.
It's not literally true; perhaps there's some switchover at a 2 to 3 °C average water temperature, exacerbated by a heat wave, that causes the brine shrimp to shed their shells and dissolve. But it's the best we have to guide decision makers.
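A minimal sketch of that kind of historical extrapolation, with made-up numbers throughout: fit a line to past temperature and population observations, predict slightly beyond them, and watch a hypothetical 2.5 °C switchover break the pattern that the history never contained.

```python
# Hedged sketch of extrapolation from history. All data is invented:
# (°C above baseline, population index) pairs from past observations.
history = [(1.0, 100.0), (1.5, 92.0), (2.0, 84.0)]

# Ordinary least squares by hand: slope and intercept from the past points.
n = len(history)
mx = sum(t for t, _ in history) / n
my = sum(p for _, p in history) / n
slope = sum((t - mx) * (p - my) for t, p in history) / sum((t - mx) ** 2 for t, _ in history)
intercept = my - slope * mx

def predict(temp_c):
    # A hypothetical regime change: past a ~2.5 °C switchover, the linear
    # pattern learned from history simply stops applying.
    if temp_c > 2.5:
        return 0.0  # a die-off the historical record never showed
    return slope * temp_c + intercept

print(predict(2.3))  # near the historical data: a plausible estimate
print(predict(2.8))  # past the switchover: the historical pattern breaks
```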
No one can predict the future perfectly accurately, but some models are more accurate than others, and most importantly, some tests are more stringent than others, which leads us to...
The outputs had better be consistently useful, or the model stops being used.
...how "useful" is a internal memo or soulless corporate speak to a CEO? Probably not much. How "accurate" does language need to be? Can you restate Newton's laws in a general sense, or do they need to be spoken perfectly verbatim, lest they cease to apply to you? Language is a bit finicky in that it can be as broad or as narrow as you want, and ML algorithms have to choose between narrower application and higher accuracy or wider application and lower accuracy.
An LLM can accurately replicate the writing ability of a typical corporate intern: some useful information, and some that is probably wrong.
This isn't "bad" because LLM isn't trying to be anything else, at least from the perspective of the actual engineers, although the marketing team behind whichever flavor of LLM is in the news this week is no doubt touting it big. Language is far less stringent than the natural world of swamps and brine shrimp, but it's probably more important in a decisionmaking, because it determines the sort of decisions that powerful people will make.
I don't think LLMs are coming for actual knowledge-domain specialists like PhD engineers. Their developers are scanning thousands of questionably acquired bachelor's-level engineering textbooks, labeling them (probably poorly), and the result gives you sometimes-incorrect blurbs about them, if not outright inventions. This is probably an intractable consequence of the nature of pattern matching and the mathematics that governs it.
A more accurate LLM would be very terse and verbatim; a less accurate one would be flowery and expository. These have different use cases: I could see the latter being used to produce written pornography/"romance" novels and the former being used for actual memos and boilerplate legal writing for C&Ds.
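One way to picture that dial, in toy form, is sampling temperature. The word logits below are invented, but the mechanism is the standard softmax-with-temperature used by most samplers: low temperature collapses probability onto the single most likely (verbatim) continuation, while high temperature spreads it into the tail, which reads as looser and more expository.

```python
# Softmax with temperature over invented next-word scores.
import math

logits = {"pursuant": 2.0, "accordingly": 1.5, "heretofore": 0.5, "tempestuously": 0.1}

def softmax(scores, temperature):
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

for t in (0.1, 1.0, 2.0):
    dist = softmax(logits, t)
    print(t, {w: round(p, 3) for w, p in dist.items()})
# At t=0.1 nearly all mass sits on "pursuant" (terse, verbatim output);
# at t=2.0 the tail words get real probability (flowery, expository output).
```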
In a world where people literally get into court fights over 0.0002 seconds less latency from slightly closer proximity to a stock exchange, I don't think it's out of the question that shaving a cumulative 30 to 45 minutes of C&D form-filling off a lawyer's work week would be seen as an economical decision. People are already trying this out.
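Back of the envelope, with an assumed billing rate (nothing below comes from a real source):

```python
# Rough value of the claimed time savings. The rate is an assumption.
billable_rate = 300          # $/hour, assumed
minutes_saved_per_week = 45  # upper end of the 30-45 minute claim above
weeks_per_year = 48          # assumed working weeks

yearly_value = billable_rate * (minutes_saved_per_week / 60) * weeks_per_year
print(f"${yearly_value:,.0f} per lawyer per year")  # -> $10,800
```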
The worst part about LLMs is that they might end up bulldozing a lot of copyright law by establishing inadequate penalties for IP theft. Pay $15 million in damages to three publishers because you stole their books? Yeah, okay, buddy, no problem. Then Art Twitter tries to sue, and everyone gets $5.78 in 12 years after the legal fees are subtracted.
The second worst part about LLMs is that their fan club attracts some of the people most likely to be put out of work by them. There's a truly bizarre clade of "tech bro" who wants to believe technology is the best thing ever, despite decades of being dunked on by it, while anyone who actually works in these fields (AI, programming, computing, IT, etc.) tends to hate technology. I recently had to replace the garage door opener in my home, and instead of a little remote with a button, it has to be paired with a cell phone that needs an internet connection; if the opener doesn't have a live Wi-Fi signal, it can't receive the open command from the phone 10 feet away from it. Very strange design choice, really.
Apparently someone is busy buying these Wi-Fi enabled "smart" garage door openers, though, and they are probably the same people who think LLMs are cool and ultra goodly for society.
Suffice it to say, LLMs will probably stick around longer than LISP machines did. A lot of 1980s AI research fed into things like Lycos, AltaVista, and eventually Google, pattern matching search terms against WWW databases from the mid-1990s on, so maybe LLMs will find a use replacing phone menus or something. Unfortunate, because I guess it means the people who pioneered one of the biggest IP thefts in human history will just sort of get away with it.