USAF/US NAVY 6th Generation Fighter Programs - F/A-XX, F-X, NGAD, PCA, ASFS news

I think drones are the future, but I fear we're not there yet. For some reason it seems drones for ground strike are within reach, but air combat is so fluid that I don't think the tech is perfected yet. It seems that for air combat they are just moving the human element off board, which opens up defeating a drone force just by jamming communications, and I'm not ready to believe we're quite there with flying terminators.

One doesn't solve difficult problems unless one works on difficult problems.

Let's hope they're following the B-21 model - break the problem into separate elements, build what you can, and plan for incremental improvement in systems.
 
 
This reliance on drones for air superiority scares me. It's never been proven. So rushing into it seems fraught with risk that could be mitigated with a manned platform while progressing steadily towards drones.

I hope they don't throw the baby out with the bathwater on the manned component of NGAD, and that it's something that can double as a fighter when called on.

Why not develop a smallish or medium-size single without the air-to-ground whiz-bang electronics, one that can eventually be unmanned? Why not something like the F-35 but dedicated to air combat, without the space for 4,000 lb of bombs?

The dollars I've seen floated certainly seem like they could do it this way.

I think drones are the future, but I fear we're not there yet. ...

You should at least entertain the possibility that the US has been evaluating the feasibility of this for decades...
 
You should at least entertain the possibility that the US has been evaluating the feasibility of this for decades...
But where is the execution, with trials on real hardware? Thought experiments are great, but no substitute for trial and error. What's the big deal with keeping a human in a cockpit? It overcomes many issues with drones, and you can't tell me life support is going to break the bank on unit cost.
 
[image: 190729180521-rural-nevada-not-ready-for-area-51-raid.jpg]
 
You do realize there has been an actual AI vs human dogfight test by the USAF and that the AI won all five times, right?
But that was in a simulator; it wasn't a live package fitted into a real UCAV, actually operating in real 3D space or reliant on a potentially vulnerable datalink. Indeed, DARPA stated that it was possibly 10 years away from being ready to actually 'fly' a fighter in combat.
There were some flaws, such as not observing 500 ft separation distances, which meant that in real combat some of those AI drones (having been programmed as 'expendable') would have flown through debris fields from their kills and risked damaging or downing themselves in the process. A loyal wingman has to be loyal and on your wing; if it dies in its own fratricide then it's not really useful as a reliable wingman.

The software has to run in the UCAV unless you want to jam up the network, so that adds cost to the drone. If it is shot down, your adversary can potentially access the AI system and find out its weaknesses. That means programming it to make sure it's not expendable, and therefore the AI must be as concerned about its own self-preservation as a human and desist from Hollywood-epic-style stunts. It can probably still perform better in dogfights than a fighter constrained by human physiology, but it might blunt the edge.
Besides, wouldn't a smart AI think that dogfighting is a waste of effort and go for the long-range sniper kill if it could? One of the AI systems tested went in for the close-in cannon kill every time, but is that necessarily the best way? Yes, these systems learn, but are they necessarily learning the best methods? Lots of work to be done, I feel, before we can elevate these from high-end gaming software to real fighter-pilot brains.
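
That "learning the best methods" worry can be made concrete. In reinforcement-learning terms, the tactic an agent prefers is largely an artifact of the reward function its designers wrote; an agent that always merges for the gun kill may simply be optimizing a reward that never charged it for the risks. A toy, purely hypothetical sketch, with every weight invented for illustration:

```python
def engagement_reward(kill: bool, gun_kill: bool, own_loss: bool,
                      debris_pass: bool, closure_m: float) -> float:
    """Toy score for one engagement outcome; all weights are made up."""
    r = 0.0
    if kill:
        r += 100.0   # any kill earns the same base score
    if gun_kill:
        r += 20.0    # a bonus like this quietly teaches "always merge"
    if own_loss:
        r -= 150.0   # losing your own aircraft
    if debris_pass:
        r -= 40.0    # flying through your victim's debris field
    # Penalize closing inside ~1 km; drop this term and BVR tactics vanish.
    r -= max(0.0, 1000.0 - closure_m) * 0.05
    return r

# Same kill, different incentives: gun kill at 300 m vs missile kill at 9 km.
print(engagement_reward(True, True, False, True, 300.0))     # 45.0
print(engagement_reward(True, False, False, False, 9000.0))  # 100.0
```

Tweak those weights and the "personality" of the agent changes completely, which is exactly why a policy that wins in one simulator says little about whether it learned the best method.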
Actually, AI has been tested outside the simulator, just not how you would anticipate. AI was in control of the aircraft in a dogfight environment, but with a pilot in the cockpit for the "just in case".
 
It is the "just in case" part that I do not like about future UCAVs, all it takes is one line of faulty software code in the computer that controls the AI and you have the possibility of an Uninhibited fighter going crazy.
 
Uninhibited - indeed. Rogue AI.
Come on people, this is why you install killswitches and such. Yes, yes, "But what if the enemy gets inside your network!" Well, if the enemy gets that deep inside your stuff that they can use the killswitches, you're up shit creek anyway.
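
For what it's worth, the killswitch idea is boring, well-understood engineering rather than science fiction: a dead-man switch that drops the vehicle into a safe state when authenticated heartbeats stop arriving. A minimal sketch, assuming a pre-shared key and an invented 5-second timeout (nothing here reflects any fielded system):

```python
import hashlib
import hmac
import time

SHARED_KEY = b"pre-shared-key-material"  # assumption: provisioned before flight
TIMEOUT_S = 5.0                          # invented figure

class DeadManSwitch:
    def __init__(self) -> None:
        self.last_valid = time.monotonic()

    def on_heartbeat(self, payload: bytes, tag: bytes) -> None:
        # Only authenticated heartbeats reset the timer, so an enemy can jam
        # the link (forcing safe mode) but cannot forge a "carry on" message.
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
        if hmac.compare_digest(expected, tag):
            self.last_valid = time.monotonic()

    def safe_mode_required(self) -> bool:
        return time.monotonic() - self.last_valid > TIMEOUT_S
```

The point of the design is that loss of link fails safe: jamming degrades the drone to a predictable fallback instead of producing a rogue.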
 
What could possibly go wrong with a killer-AI?
Depends. Will the neural network fly its combat missions with a perfect record before the US hands over the entirety of its nuclear deterrent to it?
 
And that truly scares me, the thought that an AI computer is in charge of the US nuclear deterrent. Nope, I hope that never happens.

Nothing to worry about. Just have it play tic-tac-toe until it decides that nuclear war is futile and gives up.

(noughts and crosses to our British cousins)
 
Been watching WarGames too many times TomS?
 
How many is too many? Asking for a friend...
 
Software is your weapon system in any modern aircraft design. The F-35 has what, ten million lines of code? Errors definitely can cause problems that get pilots killed.

I don’t think NGAD is attempting a true UCAV - loyal wingman seems more likely to be a tethered platform that will extend sensor and EW coverage with less risk to the manned aircraft, which likely also is the primary (possibly solitary) weapons carrier. I doubt any NGAD UAVs will be given an independent ability to fire on targets without direct human instruction, even if that instruction is just a yes/no on a touch screen or throttle. Current DoD policy is that a human must give all fire commands, and presumably for the drone to be at all useful it will need a datalink to share information and accept commands like any other modern aircraft. Where the drone probably will be autonomous is in its maneuvering and emission control, which I suspect will be driven by the manned pilot specifying a particular behavior that is most useful to the mission (flying far forward, flying close formation, emitting/not emitting, engaging specific targets, etc). This is the kind of thing already being worked on in much simpler terms with MALD-X/N: being able to modify their behavior on the fly to suit the needs of the launching aircraft.
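
To make the "yes/no on a touch screen" point concrete, here is a minimal sketch of such a human-consent gate. Every name and behavior below is hypothetical, my reading of the paragraph above rather than anything from the actual program:

```python
from enum import Enum, auto

class Directive(Enum):
    CLOSE_FORMATION = auto()  # stay on the wing
    SENSOR_FORWARD = auto()   # push ahead, radiate, share tracks
    EMCON = auto()            # hold formation, emit nothing

class LoyalWingman:
    def __init__(self) -> None:
        self.directive = Directive.CLOSE_FORMATION
        self.pending: dict[str, bool | None] = {}  # track id -> pilot decision

    def set_directive(self, directive: Directive) -> None:
        # Maneuvering and emissions are autonomous *within* the directive.
        self.directive = directive

    def nominate_target(self, track_id: str) -> None:
        # The drone may detect and propose targets, but never fires on its own.
        self.pending[track_id] = None

    def record_consent(self, track_id: str, approved: bool) -> str | None:
        # The pilot's yes/no is the only path to a weapon release.
        self.pending[track_id] = approved
        return f"engage {track_id}" if approved else None
```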

AI is much more dangerous on the ground where it likely will eventually dominate internal security, economics, and even policy in the near future.
 
Ok, so it took me a while to figure out how to properly explain it.
Think of an FPV drone, but a multi-role military aircraft; that is exactly what I mean when I was saying that. Of course there will be some AI override and controls, but I personally would want as little as possible: sitting in a seat with all the controls at hand, with some sort of viewing software, whether it be a screen or a headset, but with a 360-degree view outside the aircraft and combat optics, including targeting systems, implemented into the screen as one entire HUD. Of course there would be some issues with this, including latency and the jamming/electronic hijacking that could occur, but I’m sure there are plenty of implementable countermeasures that could be operated/installed.
 
And that truly scares me, the thought that an AI computer is in charge of the US nuclear deterrent. Nope, I hope that never happens.
Come on, what's the worst that can happen? Some technicians trying to pull the plug once the network becomes self-aware, and the network initiating a nuclear exchange?

Nonsense.

I suggest we call it Skynet.
 
Back on the subject, what I read is that they are again going for the bleeding edge, same ol' same ol', business as usual. Isn't there enough tech on the table now for off-the-shelf engineering like in the B-21? An upgraded F-22 or YF-23 seems like it would suffice just fine and be cheaper, so more could be bought.
 
In my opinion, all the discussion about technological advances for NGAD is a waste. The next fighter needs to have:
- long range
- large payload
- ability to operate from a wider variety of airfields to help in-theater dispersal
- decent (maybe supercruise?) speed to get around
- relatively low purchase and operating costs

Technology matters less than getting something with range and reasonable operating costs as quickly as possible. The idea of a super-fighter is untenable. There are simply too many new systems needed for the military and not enough time to sink another trillion into a 10-year program.
 
You do realize there has been an actual AI vs human dogfight test by the USAF and that the AI won all five times, right? AIs now can beat every chess master. How they beat every chess master isn’t particularly relevant. In an era of terabyte thumb drives I’m confident every piece of aerial combat history can cheaply reside in any given drone.
And, hopefully, you realize that USAF tests are often notoriously biased towards the currently fashionable, big-budget option--always have been. When tests do not produce the intended result, moreover, all the services have a tendency to stop them, change the rules, and try again until they do.

The "loyal wing man" concept itself may be just such a politically motivated attempt at institutional self-protection. Politicians and vendors trumpet the potential of remote- and software-controlled drones as cheaper, politically less sensitive replacements for manned aircraft. So the traditional air force flyboys coopt the technology and write a requirement that makes it a mere adjunct to the flesh-and-blood aviator.

That said, my point was not which technology wins, but what the technology in question actually is. At present and for the foreseeable future, "AI" is a marketing pitch, not a reality. Whether a human pilot in an actual aircraft cockpit loses a dogfight to a human pilot flying remotely from a control console or through the software he writes is thus immaterial.

But that isn’t even probably where loyal wingman is going initially. It seems far more likely to me that they will act as stand-off sensor and EW platforms that have a much less demanding role of holding formation forward of the manned aircraft and providing target info, cover jamming, and, if necessary, serving as decoys. They might also have a short-range A2A capability eventually, but I suspect initially their role will be more conservative. This is easily within the capability of current tech…an AI with a MADL will be given a behavior directive by the manned platform (recon/decoy/pit bull, etc) and it will operate within those directives even if the link is cut. This isn’t as challenging as being a standalone offensive platform with no human input; it’s basically just a combat Roomba.
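
A similar sketch of the "operates within those directives even if the link is cut" behavior, again with invented names, and with my own added assumption that anything aggressive degrades to a consent-free mode when the link dies:

```python
import time

LINK_TIMEOUT_S = 10.0  # invented figure

class DirectiveKeeper:
    def __init__(self, directive: str = "recon") -> None:
        self.directive = directive         # "recon", "decoy", "pit bull", ...
        self.last_link = time.monotonic()

    def on_link_message(self, directive: str) -> None:
        self.directive = directive
        self.last_link = time.monotonic()

    def current_behavior(self) -> str:
        if time.monotonic() - self.last_link > LINK_TIMEOUT_S:
            # Link cut: keep flying the last directive, except that an
            # aggressive one falls back to something needing no consent.
            return "recon" if self.directive == "pit bull" else self.directive
        return self.directive
```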

The idea that technical change must inevitably mark an advance in capability is fallacious. I have spent most of my working years in the computer industry, almost half of them in storage, plus a couple of years working on an AI-assisted "BigData" project. Advancing technology creates as well as solves problems. Fast, cheap, persistent storage has meant that much more gets stored with much less care about whether it should be, vastly increasing overhead and often reducing access to meaningful data. Similarly, cheap, high-capacity memory and fast processors have allowed much less-efficient programming techniques to prosper. Basic tasks can often take more time than they did 20 years ago.

And that is when everything works as it should: ever more capable hardware spawns ever more complex software. More complex software is harder to test and likely to contain more bugs buried deeper in sub-sub-routines, only to emerge years later (in the middle of my first paid programming gig, the client freaked out because the US east-coast telecom grid went down hard for several days due to a missing semicolon in something like a couple of million lines of code).

Possibly worse still, advancing technology creates its own mythology, of which "AI" is a prime example. Requirements start to get written around what the tech claims to be able to do, rather than around what is needed for a particular purpose. Advertisers spend huge amounts on trackers and databases and analytics software that tracks and classifies every aspect of our lives, often erroneously. Yet other, smaller companies make higher margins from advertising with no tracking at all other than numbers of visits to web sites. We have seen similar myths before. In the 1930s, the new power-driven turret was to ensure that "the bomber will always get through". In the 1950s, the long-range, beyond-visual-range, radar-guided air-to-air missile was to end the need for guns and dogfights. Etc.

Finally, more capable hardware has also led to less attention to usability and user interfaces. User interfaces have standardized on what is cheapest and most common rather than on what is the best way to manage information transfer. A human pilot may be able to task and control "loyal wing men" in tests and on the range, but, in combat, information overload and distraction are likely to be huge issues. I think it was Robin Olds who described going into combat in a then high-tech F-4 as a matter of turning off the radar warning system, turning off radio channels, and, most importantly, turning off the guy in back's microphone so that he could concentrate. The loyal wingman takes Olds' problem with technology to a whole new level--whether the pilot is a programmer trying to imagine all of the variables of a dynamic combat environment, a guy at a desk in the Arizona desert trying to fly a mission in the Far East, or a pilot in a cockpit.

So technological advance is no guarantee of improved capability or performance. Some things improve. Others don't. Which is which depends on the requirements (knowing what you are trying to do) and on thoughtful implementation (how well you match tools and techniques to tasks). The "loyal wing man" projects appear to choose the tool first and then tailor the requirements to fit the tool--the classic case of the hammer to which everything looks like a nail.
 
There is plenty of truth in the overselling of “AI” and the misleading presentation of greater autonomy as artificial thinking.

However, a lot of the other comments above appear to be little more than technophobia misrepresented as something more reasoned and reasonable.
Not all technological change is good. Sometimes technological change is rushed when it’s not entirely ready. Some (most?) technological changes will prove to have pros and cons that evolve over time (as does the technology).
But a Luddite position that all technological change is inherently and unavoidably bad is unconnected to history or reality.

Anything done poorly will almost certainly perform poorly.
Any UCAV conceived and implemented with a poor understanding of what it is for and what it can actually do is clearly not going to do well.
But you can equally say the same thing about manned aircraft, which are (almost) equally built around, and entirely reliant on, much the same advanced technology.

And the argument that an unmanned “loyal wingman” is being sold as superior to a manned one is equally a straw-man argument.
It’s not being sold as superior in performance and flexibility versus its manned equivalent (it’s not) - it’s being sold as cheaper and more expendable - to help the manned platform survive and undertake its task rather than seeing more manned platforms shot down and pilots killed. It can be risked closer to threats etc. than air forces will be willing to send their manned aircraft.

It may well be that this initial generation of loyal wingmen is relatively limited in its capabilities, does not live up to the current hype, and is bought in relatively small numbers. However, as long as they are implemented and used within what they do offer (and they’re not incorrectly prioritised and/or deployed) then they can help lead to subsequent generations of increasingly capable unmanned combat aircraft. The associated technology is not getting un-invented any time soon.
 
If revolutionary technology fails to work as promised, you can just dig up the old stuff. That is not nice, but it does not result in decisive defeats.

The failure to adapt to revolutionary technology, on the other hand, can result in huge disasters whose outcomes cannot be predicted by those who fail to understand the revolutionary nature of the tech.

The "loyal wing man" concept itself may be just such a politically motivated attempt at institutional self-protection. ...
It totally has happened; the loyal wingman concept is exactly what entrenched flyboys would pitch, given threats from technology.

What is warfighting in the AI era? Things like tactics and much of strategy turn into a "software" problem. The proper combat force isn't some "commanders" buying airplanes from a vendor and telling the airplanes to shoot at stuff. The proper combat force in the AI environment is a force of programmers, computing clusters, and data-collection and modelling people doing dynamic system updates to defeat the opponent's system; the entire combat system is engineered from electronic and human intelligence parts.

The maneuver space isn't "airplane go north" but "combat force executes combat action" to "enable collection of the opponent's AI command-control-processing system architecture" to find flaws that enable the crafting of "adversarial inputs" that induce errors in the opponent's AI. The strategic-adjustment OODA cycle is minimized to hours (the wall-clock time of Google-scale AI training runs, in mere 2022), while patching is an activity that has to happen at the same time as hundred-airplane furballs.

A battle would be a constant stream of software updates to plug weaknesses and induce weaknesses in the opponent.

The potential for a hyper-fine-grained, completely centralized campaign also enables absurdly fine and long-time-frame considerations that can be brute-forced into being with stupid amounts of compute to solve extensive game-theory problems.

For example, I expect the profiling of all human aviators (if the skill differential is notable) and systems that enable real-time identification via non-cooperative means. There'd be tactical "interactions" to collect this info and other things, and considerations for defeating/neutralizing each "human constraint" would be part of the combat model.

-----------
Of course, the first job of any AI-based combat system is to push the tempo of combat information processing beyond what human-voice communication can deal with, to overwhelm and collapse the legacy system. Force the enemy into automation, and defeat the opponent's automation with superior warfighting capability in this domain.

As for a tactical warfighter: the ability to recover an AI that exhibits bugged behavior during combat, or to predict weaknesses in AI behavior by observing the tactical situation, is a skill that needs development, and it is completely alien to military organizations.

The military does not even know how to think in AI centric warfare let alone implement it when it becomes feasible. Thankfully that applies to all militaries.
 
Well before we get to high-level AI-controlled combat, AI will be (and probably already is) absorbing huge amounts of disparate sensor data, correlating common signals across that data set (a satellite image, a SAR map, and an emission source all located at the same spot, for instance) and providing a prioritized target list. The Army is trying to do this with an AI software named Prometheus. The follow-on to that is feeding an AI a list of available platforms and targets so that it can solve the traveling-salesman problem of engaging the largest number of prioritized targets in the shortest amount of time. The Army name for this software is apparently SHOT. I’m sure the USAF has an equivalent.
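
The "traveling salesman" part is essentially a weapon-target assignment problem. A toy greedy sketch of its shape (the real Prometheus/SHOT tools are undoubtedly far more sophisticated; all names and numbers below are made up):

```python
import math

def assign(platforms: dict, targets: dict) -> dict:
    """platforms: name -> (x, y, speed); targets: name -> (x, y, priority)."""
    assignments = {}
    free = dict(platforms)
    # Serve the highest-priority targets first...
    for tname, (tx, ty, _) in sorted(targets.items(), key=lambda kv: -kv[1][2]):
        if not free:
            break
        # ...each with whichever free platform can reach it soonest.
        best = min(free, key=lambda p: math.hypot(free[p][0] - tx,
                                                  free[p][1] - ty) / free[p][2])
        assignments[tname] = best
        del free[best]
    return assignments

print(assign({"shooter-1": (0, 0, 9.0), "shooter-2": (5, 5, 2.0)},
             {"SAM site": (10, 0, 0.9), "radar": (6, 6, 0.5)}))
# {'SAM site': 'shooter-1', 'radar': 'shooter-2'}
```

Greedy is nowhere near optimal, which is exactly what makes the real scheduling problem hard, but it shows why a machine can regenerate an entire engagement plan in milliseconds when the target list changes.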
 
I don't think the military really needs to get surgical with AI-controlled assets. Simply overwhelming an enemy force with expendable drones is enough to do the job. I believe this is the course China is taking regarding carrier air groups and the Pacific bases. Just pepper them all with cheap drones/munitions. Maybe they'll even be ICBM-delivered.

But back to the 6th-gen discussion: it'll be interesting to see how the central-control manned aircraft turns out. I think the plan for the USAF is to have 3-6 unmanned escort aircraft. Not sure what the Navy is planning, since it's a little different to launch and recover that many aircraft; they'll probably have a lower escort count, and probably a more agile fighter. Size will also be an important consideration.

I do think the Navy should have a separate, smaller carrier to launch and recover drones. And for the Aegis cruiser to have command and control capability along with whatever aircraft they're developing.
 
However, a lot of the other comments above appear to be little more than technophobia misrepresented as something more reasoned and reasonable. ... A Luddite position that all technological change is inherently and unavoidably bad is unconnected to history or reality.
If you read my remarks at all carefully, I can hardly be called a Luddite or a technophobe. I make my living from computer technology.

What I decry is the cheerleading by those that read marketing slicks--press releases, white papers, and the like, from both vendors and service public affairs offices--without understanding, even at a very high level, what the technology actually is or even can be.

Something called a "loyal wing man" may be bought and fielded. But I very much doubt that it will be what is being sold now, simply because it can't be. So we aren't talking about rejecting technology. We are talking about rejecting smoke and mirrors and the accompanying techno-piety that views all criticism as a sort of heresy.

PR terms like "AI" and "loyal wing man" anthropomorphize machines in ways that trick the unwary into crediting them with powers they do not have (yet), powers that policy makers may come to rely on later. This can have disastrous consequences.

The powered gun turret that I referred to above was '30s high tech. But it wasn't what it was sold to be or believed to be. Nor was it the technology that would let "the bombers always get through"--that was the Mosquito. But the mythical powers of the gun turret were too firmly entrenched in the belief systems of wartime western air forces to be challenged by mere realities. RAF Bomber Command actually commissioned an Operations Research unit to investigate the reasons and potential solutions for its appallingly high losses to German night fighters. While serving with this unit, the famous future physicist Freeman Dyson found a simple, real-world solution using accepted statistics and basic math: strip the turrets out of the bombers. This would have a two-fold effect:
  • It would reduce drag and weight and thus increase the bomber's speed just enough to make successful interception by German night fighters statistically impossible
  • It would drastically reduce casualties by removing the gunners and thus downsizing the crews by approximately half.
(See Dyson's Disturbing the Universe.) Needless to say, his superiors were at best puzzled by this heretical, mathematical challenge to their faith in the gun turret. Dyson was ignored, at the cost of untold lives and much lost materiel, all squandered for a pre-war marketing concept.
 
I don't think the military really needs to get surgical with AI controlled assets. Simply overwhelming an enemy force with expendable drones is enough to do the job.
Indeed. Ukraine's Turkish-made Bayraktars seem to be little more sophisticated than cutting-edge hobbyist equipment. They have a "ludicrously" small payload. Yet they have been perhaps the most successful combat drones in history, while operating in the face of the much vaunted air defenses of the West's most sophisticated opponent. Actual quadcopter hobbyist drones have proved decisive for artillery spotting and scouting for tank-hunting teams. Some have even been used as ultralight bombers.

The value of these cheap platforms has derived not from the technology itself, essential though that is, but from the imaginative way in which they have been used to gain leverage on the real-world, here-and-now battlefield. The Ukrainians have skillfully matched the limited capabilities and payloads offered by the technology to the available range of targets, taking into account potential countermeasures.

The Ukrainians understand this technology--what it is, what it can do and what it is not and cannot do.
 
I suggest we call it Skynet.
There is a non-silly side to this. If you are on the receiving end of an "AI"-mediated friendly-fire incident or find yourself colliding with a "loyal wing man", it might as well be nuclear from your point of view. Skynet presumed a malevolent intelligence. But what if the "AI" in question is not intelligent, only presumed to be, and is thus just a machine that can go on the fritz, like your office thermostat? Do you really want it to have responsibilities?
 
For example, I expect the profiling of all human aviators (if the skill differential is notable) and systems that enable real-time identification via non-cooperative means. ...
One question: what is "the profiling of all human aviators"? How is it done? What attributes, methods, and parameters do you include? How, for example, do you measure "skill" in order to differentiate it? What is "skill" in this context? What units, instruments, and protocols do you use when doing the measuring? Are we counting G-tolerance? Eyesight? Aerobatic ability? Ability to calculate fuel burn? Navigational skill? Tactics? Strategy? Knowledge of rules of engagement/military law/international law? Good judgment? And, if we are, how do we balance them against each other when arriving at a "profile"? Are the units and measurement methods appropriate to each common to all?

No doubt the above barely scrapes the surface of some very complicated details. But the difference between technology and science fiction, science and magic lies in such details.

This is my core critique of "AI", as practised today. It pretends to be something that it cannot rigorously define. No one has come up with a reasonable definition of "intelligence". And without that, how do you know what you have implemented?
 
No one has come up with a reasonable definition of "intelligence".

In this context, I would describe 'intelligence' as the ability to make decisions based on data which is collected and/or gets fed, and the ability to have a positive or negative influence on the own future and/or the future of other things/beings based on the outcomes of already made decisions and on additional data, without having any self-awareness, emotions, hopes or desires.
So, somewhat like the behavior of my 17-year old cat.
 
Your 17-year-old cat is probably a bit of an IQ genius compared to many folk out there. Some of those are in quite powerful and influential positions. Perhaps the next G7 leaders should have a different dietary requirement.....

In other words, the measure of intelligence definitely needs a rethink, which will be difficult given the people who are in a position to decide what intelligence is now.
 
Yet they have been perhaps the most successful combat drones in history, while operating in the face of the much vaunted air defenses of the West's most sophisticated opponent. ...
Russia isn't much of an opponent; it hasn't been for over three decades. I would not make decisions about the air power of the USA based on Russia. In under 30 years the USAF will have fielded three new fighters, and Russia still struggles with one new, I-don't-know-what-to-call-it... 4.5-gen aircraft.
 
Something to do with a shovel? I have an orphan Badger that visits, I call her Snarler.........
 
