But that was in a simulator; it wasn't a live package fitted into a real UCAV, operating in real 3D space or relying on a potentially vulnerable datalink. Indeed, DARPA stated that the technology was possibly 10 years away from being ready to actually 'fly' a fighter in combat.
There were some flaws, such as not observing 500 ft separation distances, which meant that in real combat some of those AI drones (having been programmed as 'expendable') would have flown through debris fields from their own kills and risked damaging or downing themselves in the process. A Loyal Wingman has to be loyal and on your wing; if it destroys itself in its own fratricide, it's not much use as a reliable wingman.
The software has to run on the UCAV itself unless you want to jam up the network, and that adds cost to the drone. If it is shot down, your adversary can potentially access the AI system and find out its weaknesses. That means programming it to treat itself as non-expendable, so the AI must be as concerned with self-preservation as a human pilot and desist from Hollywood-style stunts. It can probably still outperform a fighter constrained by human physiology in a dogfight, but this might blunt its edge.
Besides, wouldn't a smart AI decide that dogfighting is a waste of effort and go for the long-range sniper kill if it could? One of the AI systems tested went for the close-in cannon kill every time, but is that necessarily the best way? Yes, these systems learn, but are they necessarily learning the best methods? There is a lot of work to be done, I feel, before we can elevate these from high-end gaming software to real fighter-pilot brains.
But if the UCAV is flown by AI, the datalink is less relevant; not that most modern fighters don't have datalinks with each other and the ground anyway.
Physical, RF datalinks may become less relevant. But the conceptual, virtual data link, the expanse of time that connects the human software-developer/pilot and his understanding of air-combat requirements with the vehicle executing his software in actual air combat, becomes an ever more severe problem. If we assume that a given vehicle has capabilities much in advance of a 1970s-vintage Ryan 147 recon drone, the software development issue becomes a huge one.
"AI" systems are currently spoken of as magic once was. When they are blithely credited with amazing, revolutionary abilities, little or nothing is said of how this will be achieved. These machines are not currently intelligent in any meaningful sense. They do not formlate decisions based on perceptions and imagined future outcomes. On the contrary, they are automata, essentially no more than sophisticated versions of the little, wind-up, hopping rabbit toys I used to get in Christmas stockings--their future actions depend entirely on the capabilities engineered into them when hardware was last refreshed and code was last updated. Given a pre-programmed input--whether a hand releasing a tight spring or an electromagnetic waveform that its human developers associate with a missile launch--the machine executes an automatic response programmed in by human developers. Denser, more integrated, faster processors, memory chips, and interconnects do not alter this reality. They just make the plastic bunny hop faster and higher.
By putting software on the platform and eliminating the remote pilot and RF link, we merely trade the synchronous, near-real-time reactions of a remote human operator, delayed by seconds due to the speed of light and limited bandwidth, for an asynchronous reaction delayed by the months or years of debugging, redevelopment, testing, and deployment that constitute reaction to an unanticipated run-time software problem.
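To put rough numbers on those two reaction times (the relay geometry and the months-long fix cycle below are my own illustrative assumptions, not figures from any particular programme): light-speed propagation through a geostationary relay costs on the order of half a second per control loop, while the software fix loop is measured in months.

    # Back-of-the-envelope comparison of the two "reaction delays".
    # The relay geometry and the redevelopment cycle are assumptions made
    # for illustration, not data from any specific programme.

    C_KM_PER_S = 299_792      # speed of light
    GEO_LEG_KM = 36_000       # approx. ground-to-geostationary-satellite leg

    # Command up to the satellite, down to the aircraft, and the sensor
    # picture back the same way: four propagation legs per control loop.
    rf_loop_s = 4 * GEO_LEG_KM / C_KM_PER_S
    print(f"RF control-loop light delay: ~{rf_loop_s:.2f} s "
          f"(plus processing and bandwidth)")

    # The "asynchronous" alternative: an unanticipated behaviour found in
    # service waits for debugging, redevelopment, testing, and redeployment.
    software_loop_months = 6  # assumed, for scale only
    software_loop_s = software_loop_months * 30 * 24 * 3600
    print(f"Software fix loop (assumed {software_loop_months} months): "
          f"~{software_loop_s / rf_loop_s:.0e} times longer")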
This is not to say that more sophisticated drones can't be useful or can't do more than the old-style ones. But it does mean that there is no "AI" free lunch here. Adding autonomy requires tradeoffs.
One of these tradeoffs is the above-mentioned loss of real-time control over a weapon that must function in a remote, highly dynamic environment. In exchange for autonomy, we have to rely much more heavily on the judgement of the policy makers and requirements analysts who define what systems will have to do, and on the engineering managers and programmers who have to implement against those requirements. These people will have to anticipate more, see further into the future, and make vastly more reliable predictions than have been the norm to date. This should be a sobering thought: historically, how often have policy, planning, and requirements correctly anticipated coming reality?
Another is loss of "situational awareness" due to too much data and too little information. Data is not information. Until data is filtered, processed, correlated, and applied to decision making, it is just noise. More manned fighters mean more airmen monitoring sensors (from eyeballs to radar), filtering data, and assimilating results, and thus gathering usable information. A manned aircraft and a swarm of "AI" drones loaded with sensors might provide vastly more data, but the reduced number of human aircrew would result in much reduced filtering and processing to produce real-time information. The pilot would have to rely on the perspicacity, foresight, and information-forming abilities of the engineers who programmed the drones all those years before.
Another is cost vs. capability. A Ryan 147 with a camera package or a Reaper with a Hellfire offers a modest capability at modest cost. The former flies a fixed course. The latter is guided by a human via datalink and video camera. Neither is exactly cheap, but both are much less costly than a manned jet fighter. The Ukrainian off-the-shelf hobbyist quadrotor with an IR camera and a mortar-bomb or RPG-warhead payload is much less costly still. All else being equal, if the target is a tank and within range, the Ukrainian solution is vastly cheaper and vastly more effective: it's hard for the human operator to miss from directly above, at a dead stop, just a few meters up. But both current talk and history strongly suggest that "AI loyal wing men" are already heading in the opposite direction, toward the cost of the jet fighter or more. The high cost and high risk of implementing autonomy seem likely to be minimally counterbalanced by any reasonably anticipated benefits. As even their proponents admit, these "loyal wing men" are likely to supplement rather than replace manned aircraft. But do the limited, supplementary roles offset the high cost of developing the platform, the high risk of relying on it in real combat, and the capabilities that air forces will have to give up (like real-time control) when fielding it?
The degree of reliance on human policy, program management, and foresight that the "AI loyal wing man" project requires is thus the key caveat that should be kept in mind when considering or advocating for these projects. I do not see the weapons systems themselves as particularly revolutionary. But the scope of the requirements-planning and software implementation effort is unprecedented. For the Manhattan Project, we started with a good grasp of the physics and could thus pursue a reasonably clear, if complicated and expensive, development effort. Here, we start with no precise definition of what intelligence, much less "artificial" intelligence, is. We are galloping along on implementation without first defining the nature of the problem or the solution, while counting on "AI" magic to handle the hard stuff. What could possibly go wrong?