All of these futuristic unmanned aircraft will, in fact, be "flown from the ground", whether or not they have data links and ground-based pilots. They differ from the older "Remotely Piloted Vehicles" only in that the operators are remote in time as well as space.
So-called "artificial intelligence" is nothing of the sort, at least at present. Attempts to accurately model human intelligence turned out to be difficult early on, when research showed that the human brain is not an electrical switch network akin to a digital computer. So industry focused on so-called "expert systems" that seemed to offer more immediate commercial returns at lower cost. These are, in effect, large data bases with large stores of information about known situations and various kinds of pattern-matching algorithms for comparing encountered situations with stored ones deciding on a best match. They are as good as the data and the algorithms that their developers choose for them.
AI systems are not intelligent in the sense that they replicate human thought. They can give the appearance of human-like output, but only in a limited domain. They are thus directly analogous to those 18th-century clockwork dolls that go through the motions of writing a letter and produce the same text every time. Consider one of the "AI" breakthroughs of recent years: defeating a human master at chess. The "AI" was highly specialized and very expensive. It stored a vast catalog of past games and variations and had a fast processor optimized for comparing chess positions. It compared every move in the current game to every move in every cataloged variation and used some sophisticated math to choose the best match. Studies have shown that a human master does nothing remotely like this. The master compares the current state of the game with, at most, a couple of past games that he or she happens to remember. What the master does thereafter is still little understood. All we can say is that the master holds his or her own at chess much of the time, despite a smaller memory and much slower computation speed. Plus, the master beats the same "AI" hands down at calling a cab, heading home, and picking up Chinese take-out on the way--whatever intelligence a human chess master has is not dedicated entirely to the domain of playing chess.
So why, you ask, does this mean that a "Loyal Wing Man" will be flown from the ground? The answer is that the "AI" vehicle is flown by its programmers, years in advance of the flight itself. Its abilities as a combat pilot will depend entirely on the limited foreknowledge and prejudices of those who write the aircraft's requirements, and on the corresponding limitations of the programmers who interpret those requirements and implement them in code. The "Loyal Wing Man's" ability to make decisions in combat, on the fly, will thus depend on the circumstances that its developers were able to anticipate and encode during the decade before. Getting anything wrong in the data set, the programming, or both can produce unexpected and disproportionate outcomes. We have already seen what can happen: when pale-skinned developers collected data from the largely pale-skinned community where they lived and built an "AI" facial-recognition system, their product could not tell dark-skinned people apart. The AI's performance in the field was literally prejudiced by unrecognized selection biases years before.
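The mechanism is easy to demonstrate. The sketch below, a hypothetical illustration with entirely invented numbers and group labels rather than any real product, enrolls two groups in a toy nearest-neighbor "recognizer". The feature extractor, tuned on one group, spreads that group widely in feature space but crams the other group together, so the system confuses the second group's members far more often.

```python
# A toy demonstration of selection bias, with entirely invented data.
# A "feature extractor" tuned on one group spreads that group widely in
# feature space but crams the underrepresented group together, so a
# nearest-neighbor identifier confuses its members far more often.

import random

random.seed(1)

def enroll(n_people: int, spread: float) -> list[float]:
    """Assign each person a 1-D 'face feature'. A small spread means
    the extractor barely distinguishes the group's members."""
    return [i * spread for i in range(n_people)]

def error_rate(gallery: list[float], noise: float, trials: int = 2000) -> float:
    """Probe each enrolled person with a noisy reading and count how
    often the nearest gallery entry belongs to someone else."""
    errors = 0
    for _ in range(trials):
        person = random.randrange(len(gallery))
        probe = gallery[person] + random.gauss(0.0, noise)
        nearest = min(range(len(gallery)), key=lambda i: abs(gallery[i] - probe))
        errors += (nearest != person)
    return errors / trials

# The developers' own group got discriminative features (wide spread);
# the group absent from development did not (narrow spread).
group_a = enroll(50, spread=1.0)   # well represented in development
group_b = enroll(50, spread=0.1)   # underrepresented in development

print(f"group A misidentified: {error_rate(group_a, noise=0.3):.0%}")
print(f"group B misidentified: {error_rate(group_b, noise=0.3):.0%}")
```

The bias is invisible in the code itself; it lives entirely in the data and design choices the developers happened to make years earlier.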
So I don't think that the reliability and usefulness of guided munitions tells us anything about "Loyal Wing Men". The two aren't really comparable. Guided munitions are not autonomous to any significant degree. They operate on a very limited range of sensor inputs and execute very simple decision rules in service of targeting decisions already made by human weapons operators--a very limited domain. "Loyal Wing Men" will require vastly greater autonomy. They will have to go far beyond the capabilities of a chess-playing "AI", in a vastly more complicated domain: the physical world.
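For a sense of just how limited that domain is, here is a sketch of proportional navigation, the classic decision rule at the heart of many homing weapons. The 2-D setup and every number in it are invented for illustration. One sensor input (the rotation rate of the line of sight to the target) drives one output (a steering acceleration); nothing in the loop chooses a target, weighs rules of engagement, or copes with anything outside the seeker's narrow view.

```python
# A sketch of proportional navigation, the decision rule inside many
# homing munitions: steer so that the line of sight to the target stops
# rotating. One sensor input drives one steering output. All numbers
# here are invented for illustration.

import math

N = 4.0    # navigation constant, typically 3 to 5
DT = 0.01  # simulation time step, seconds

# Hypothetical 2-D engagement: missile at the origin flying east,
# target ahead and to the north, crossing from right to left.
mx, my, mvx, mvy = 0.0, 0.0, 300.0, 0.0         # missile state
tx, ty, tvx, tvy = 4000.0, 1500.0, -200.0, 0.0  # target state

los = math.atan2(ty - my, tx - mx)  # initial line-of-sight angle
min_range = float("inf")

for step in range(100_000):
    # The seeker's entire worldview: the line-of-sight angle, its rate
    # of rotation, and the closing speed along it.
    new_los = math.atan2(ty - my, tx - mx)
    los_rate = (new_los - los) / DT
    los = new_los

    rng = math.hypot(tx - mx, ty - my)
    closing = -((tx - mx) * (tvx - mvx) + (ty - my) * (tvy - mvy)) / rng
    min_range = min(min_range, rng)
    if closing < 0.0:  # past the point of closest approach
        break

    # The whole "decision": lateral acceleration proportional to the
    # closing speed times the line-of-sight rotation rate.
    accel = N * closing * los_rate
    heading = math.atan2(mvy, mvx)
    mvx += -accel * math.sin(heading) * DT
    mvy += accel * math.cos(heading) * DT

    mx += mvx * DT; my += mvy * DT
    tx += tvx * DT; ty += tvy * DT

print(f"closest approach: {min_range:.1f} m after {step * DT:.2f} s")
```

Everything that makes such a weapon "reliable" lives outside this loop: a human chose the target, the launch moment, and the geometry. A "Loyal Wing Man" would have to make all of those choices itself.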
For the foreseeable future, I suspect that this task will far exceed the capabilities of the real "Loyal Pilots" sitting at their desktop workstations, writing code, and trying to get the future right while filtering out the unconscious prejudices of the past and present. The danger is that commercial pressures, politics, and the typically ill-informed enthusiasms of leaders will push such programs far beyond the actual state of the art and force shortcuts and cosmetic fixes to the hard problems of developing artificial intelligence.
The current history of "AI' is not reassuring. The vendor sells the glamour of the concept to management, not the real capabilities of implementation. "AI" becomes a requirement in a contract, rather than a possible implementation of a requirement. Neither the buyer's nor the vendor's executives bother to define what "AI" is, because no one currently has a viable definition and because that would just slow everything down. The engineers scratch their heads and develop what they can within the budgets and time schedules that they have been arbitrarilly granted. They start running out of time and money. So features are cut, testing is reduced, people are laid off, and everything is rushed. The result is the kind of "AI" that is all around us already. It reads our resumes and rejects our job applications without human oversight--and does so essentially randomly. It decides that we are the same person as someone whose records reside next to ours in the database and merges our medical records even though the other person resides 1000 miles away. In a traffic stop, it decides that two dark faces are the same, and causes the arrest of an innocent man. It misinterprets a sensor reading, overrides pilot controls, and flies an airliner full of people into the ground.