You do realize there has been an actual AI vs. human dogfight test by the USAF and that the AI won all five times, right? AI can now beat every chess grandmaster, and how it does so isn't particularly relevant. In an era of terabyte thumb drives, I'm confident every piece of aerial combat history can cheaply reside in any given drone.
And, hopefully, you realize that USAF tests are often notoriously biased towards the currently fashionable, big-budget option--always have been. When tests do not produce the intended result, moreover, all the services have a tendency to stop them, change the rules, and try again until they do.
The "loyal wing man" concept itself may be just such a politically motivated attempt at institutional self-protection. Politicians and vendors trumpet the potential of remote- and software-controlled drones as cheaper, politically less sensitive replacements for manned aircraft. So the traditional air force flyboys coopt the technology and write a requirement that makes it a mere adjunct to the flesh-and-blood aviator.
That said, my point was not which technology wins, but what the technology in question actually is. At present, and for the foreseeable future, "AI" is a marketing pitch, not a reality. Whether a human pilot in an actual cockpit loses a dogfight to a human flying remotely from a control console, or to software another human wrote, is thus immaterial.
But that probably isn't even where the loyal wingman is going initially. It seems far more likely to me that they will act as standoff sensor and EW platforms with the much less demanding role of holding formation forward of the manned aircraft and providing target info, cover jamming, and, if necessary, serving as decoys. They might also get a short-range A2A capability eventually, but I suspect their initial role will be more conservative. This is easily within the capability of current tech: an AI with a MADL will be given a behavior directive by the manned platform (recon/decoy/pit bull, etc.) and will operate within those directives even if the link is cut. This isn't as challenging as being a stand-alone offensive platform with no human input; it's basically just a combat Roomba.
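To make the "combat Roomba" idea concrete, here is a minimal sketch of what that directive-plus-fallback logic might look like. Everything in it--the directive names, the struct, the control loop--is my own invention for illustration, not any real MADL interface:

    #include <stdio.h>

    typedef enum { RECON, DECOY, PIT_BULL } directive_t;

    typedef struct {
        directive_t directive;  /* last order from the manned platform */
        int link_up;            /* nonzero while the datalink is alive */
    } wingman_t;

    /* One control cycle: the drone executes its standing directive
       whether or not the link is still up. The human set the policy;
       losing the link changes nothing. */
    static void step(const wingman_t *w)
    {
        switch (w->directive) {
        case RECON:    puts("hold station forward, stream the sensor picture"); break;
        case DECOY:    puts("emit, maneuver, draw attention"); break;
        case PIT_BULL: puts("engage within the short-range A2A envelope"); break;
        }
    }

    int main(void)
    {
        wingman_t w = { RECON, 1 };
        step(&w);        /* link up: recon, as ordered */
        w.link_up = 0;   /* jammed or severed */
        step(&w);        /* same directive, no improvisation */
        return 0;
    }

The point of the sketch is that the hard decisions stay with the human; when the link drops, the drone just keeps doing the last thing it was told.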
The idea that technical change must inevitably mark an advance in capability is fallacious. I have spent most of my working years in the computer industry, almost half of them in storage, plus a couple of years working on an AI-assisted "BigData" project. Advancing technology creates problems as well as solving them. Fast, cheap, persistent storage has meant that much more gets stored with much less care about whether it should be, vastly increasing overhead and often reducing access to meaningful data. Similarly, cheap, high-capacity memory and fast processors have allowed far less efficient programming techniques to prosper. Basic tasks can often take more time than they did 20 years ago.
And that is when everything works as it should: ever more capable hardware spawns ever more complex software. More complex software is harder to test and likely to contain more bugs buried deeper in sub-sub-routines, only to emerge years later. (In the middle of my first paid programming gig, the client freaked out because the US east-coast telecom grid went down hard for several days, due to a missing semicolon in something like a couple of million lines of code.)
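For anyone who hasn't had the pleasure, here is the kind of one-character bug I mean. This is a made-up fragment, not the actual telecom code, but it compiles cleanly and does the wrong thing:

    #include <stdio.h>

    static int resets = 0;

    static void reset_switch(void) { resets++; }

    /* Supposed to reset the switch only when the link is down. The stray
       semicolon after the if() terminates the statement, so the "body"
       below it runs unconditionally. */
    static void check_link(int link_down)
    {
        if (link_down);     /* one extra character... */
            reset_switch(); /* ...and this now runs on every call */
    }

    int main(void)
    {
        check_link(0);  /* link is fine */
        check_link(0);  /* link is still fine */
        printf("resets performed: %d\n", resets);  /* prints 2, not 0 */
        return 0;
    }

A decent compiler will warn about that one nowadays, but the deeper the routine is buried and the bigger the build, the easier such warnings are to miss--which was exactly my point.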
Possibly worse still, advancing technology creates its own mythology, of which "AI" is a prime example. Requirements start to get written around what the tech claims to be able to do, rather than around what is needed for a particular purpose. Advertisers spend huge amounts on trackers, databases, and analytics software that classify every aspect of our lives, often erroneously. Yet other, smaller companies make higher margins from advertising with no tracking at all beyond counting visits to their web sites. We have seen similar myths before. In the 1930s, the new power-driven turret was to ensure that "the bomber will always get through". In the 1950s, the long-range, beyond-visual-range, radar-guided air-to-air missile was to end the need for guns and dogfights. Etc.
Finally, more capable hardware has also led to less attention to usability and user interfaces. User interfaces have standardized on what is cheapest and most common rather than on what best manages information transfer. A human pilot may be able to task and control "loyal wingmen" in tests and on the range, but, in combat, information overload and distraction are likely to be huge issues. I think it was Robin Olds who described going into combat in a then-high-tech F-4 as a matter of turning off the radar warning system, turning off radio channels, and, most importantly, turning off the guy in back's microphone so that he could concentrate. The loyal wingman takes Olds' problem with technology to a whole new level--whether the pilot is a programmer trying to imagine all of the variables of a dynamic combat environment, a guy at a desk in the Arizona desert trying to fly a mission in the Far East, or a pilot in a cockpit.
So technological advance is no guarantee of improved capability or performance. Some things improve. Others don't. Which is which depends on the requirements (knowing what you are trying to do) and on thoughtful implementation (how well you match tools and techniques to tasks). The "loyal wingman" projects appear to choose the tool first and then tailor the requirements to fit the tool--the classic case of the hammer to which everything looks like a nail.