Paywalled article. Here are my notes. Apologies for the length - it's a long and detailed briefing.
The Economist June 22nd 2024
Briefing: AI and War
The Model Major General
An AI-assisted general staff may be more important than robots.
____________________________________________________________
The Royal Navy (RN) approached Microsoft & Amazon Web Services: was there a better way to wage war – specifically, a more effective way to co-ordinate between a commando strike team and the missile systems of a frigate? Microsoft & AWS collaborated with BAE & Anduril.
Result, 12 months later, in 2022: StormCloud. ‘Mesh’ of marines on ground, aerial drones, distributed sensors and processing, including small, rugged computers strapped to marines & vehicles with bungee cords. In exercise, remarkably effective – ‘world’s most advanced kill chain.’
Such systems now used in Gaza & Ukraine. Forces spy opportunities and fear being left behind.
Lawyers & ethicists worry that AI will make war faster and more opaque.
ProsAIc but gAInful
Uses of AI for ‘boring stuff’ – managing maintenance, logistics, personnel etc. Saves USAF up to $25m/month by avoiding breakdowns. Also used for scoring soldiers for promotion.
Ukraine rushing to make drones AI-enabled [see NYT article mentioned in previous post]. Similar/same problems as self-driving cars – cluttered environments, obscured objects, smoke, decoys – but improving rapidly.
On their own, drones are merely disruptive rather than transformative, but they become so when combined with systems like StormCloud, which allows soldiers to act on real-time information without waiting for distant HQ.
AI is a prerequisite. Start with sensors on drones, in space, on social media etc. Too much to process ‘manually’, but in 2014-15 speech-to-text software and neural networks started a revolution. UK MoD Project Spotter enables 24/7 automated detection and ID of objects in satellite images. Uses commercial tools. Still in the later stages of development rather than full deployment. US Project Maven seems more advanced, since 2017 ‘producing large volumes of computer vision detections for warfighter requirements’ according to the National Geospatial-Intelligence Agency.
Similar uses for minesweeping, sorting signal from noise in sonar.
IDF using an AI tool called Lavender to identify thousands of Palestinians as targets, allegedly giving only cursory scrutiny to outputs before ordering strikes. IDF says it is ‘only a database.’ In practice, Lavender is likely to be a ‘decision support system’ (DSS) fusing information from a variety of sources (inc. phone intercepts).
ExplAIn or ordain?
Vastly increased computing power is blurring the lines between ISR (intelligence, surveillance & reconnaissance) and targeting.
Ukraine’s GIS Arta software already collates data on Russian forces and generates target lists in order of commander priority. Previously took hours, now minutes.
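[Editorial note: to make the DSS idea concrete, here’s a minimal toy sketch of the ‘collate and rank by commander priority’ step. All names, fields and weights are my own illustrative assumptions – this is not how GIS Arta or any real system actually works.]

```python
# Toy decision-support system (DSS) ranking step - purely illustrative.
# Every field, weight and report below is invented; no resemblance to
# GIS Arta or any real military system is intended.
from dataclasses import dataclass

@dataclass
class Sighting:
    target_id: str
    target_type: str   # e.g. "artillery", "supply truck"
    confidence: float  # 0..1, fused from multiple sensor reports
    minutes_old: float # staleness of the most recent report

# Commander-set priorities: a higher weight means "strike sooner".
PRIORITY = {"artillery": 3.0, "armour": 2.0, "supply truck": 1.0}

def score(s: Sighting) -> float:
    """Blend commander priority, sensor confidence and report freshness."""
    freshness = max(0.0, 1.0 - s.minutes_old / 60.0)  # decays over an hour
    return PRIORITY.get(s.target_type, 0.5) * s.confidence * freshness

def target_list(sightings: list[Sighting]) -> list[Sighting]:
    """Rank candidates for human review - the DSS only *suggests*."""
    return sorted(sightings, key=score, reverse=True)

if __name__ == "__main__":
    reports = [
        Sighting("T1", "supply truck", 0.9, 5.0),
        Sighting("T2", "artillery", 0.7, 20.0),
        Sighting("T3", "armour", 0.8, 50.0),
    ]
    for s in target_list(reports):
        print(f"{s.target_id} ({s.target_type}): score {score(s):.2f}")
```

[The point of the toy: the ranking itself is trivial and instant – the hours-to-minutes speed-up comes from automating the collation – but every weight encodes a human judgement, which is exactly where the moral-responsibility worry below comes in.]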
USAF asked RAND whether AI tools could help in space warfighting. Indeedly-doodly, RAND says. Likewise, DARPA ‘Strategic Chaos Engine for Planning, Tactics, Experimentation, and Resiliency’ (SCEPTER) essentially intended to generate war plans on the fly.
Tools ‘didn’t even exist two years ago.’
PrAIse and complAInts
As a result, a growing intellectual chasm between those who wage war and those who seek to tame it.
AI systems cannot recognize hostile intent or an attempt to surrender, nor distinguish a soldier with a real gun from a child with a toy. They can also be tricked. However, if their reliability improves beyond current human ability to make such distinctions, it may be unethical NOT to delegate authority. According to some, this point has been passed already.
Pre-war modelling further reduces uncertainty.
When machines do make mistakes, consequences are horrible, so some determined to keep ‘human in the loop.’
Issues of international law. International Committee of the Red Cross (ICRC) raises concerns that machine systems are unpredictable, opaque and prone to bias.
Experts & diplomats have been wrangling at the UN over whether to ban Autonomous Weapons Systems (AWS), but it’s hard even to define what they are. ICRC defines AWS as those that can choose a target based on a general profile. UK says they are those that can identify, select and attack targets without ‘context-appropriate human involvement.’ Pentagon takes a similar view.
Full autonomy is less problematic if an AWS’s behaviour is well understood and it operates in an area where targets are unambiguous and there are no civilians. However, if a DSS ‘suggests’ targets in ambiguous situations, commanders ‘without cognitive clarity or awareness’ may push buttons having abdicated moral responsibility to the machine.
[Editorial note by yours truly: look up Stanley Milgram’s experiment on obedience.
https://en.wikipedia.org/wiki/Milgram_experiment ]
Quandary likely to worsen. AI begets more AI. If one side uses it, the other will too. Already the case with air defence. Furthermore, AI will undertake more ‘less mathematical’ tasks, such as predicting opponents’ emotional state (morale, willingness to fight).
Even talk of using AI in nuclear decision-making. Soviet ‘Dead Hand’ system (think Dr Strangelove’s ‘Doomsday machine’) is reportedly still in use and has been upgraded to include AI. Very secret, with little released, but high on the agenda for talks last year between Biden and Xi.
RemAIning in the loop
Currently, humans remain in the loop, able to say yes or no. Whether that would still be the case in a high-intensity war with Russia or China is less clear. The head of an IDF intelligence team published a book in 2021, ‘The Human Machine Team’, warning of a human decision-making bottleneck.
In practice, real life is outpacing these debates. False positives matter little to Russian forces that have deliberately targeted health facilities, while Ukraine, fighting for its survival, simply wants systems that work, available immediately.
Threat hanging over this: great-power conflict. NATO countries know that, whatever the outcome, Russia will now have expertise in AWS. China is a major producer of cheap civil drones, which Ukraine has proven can be easily weaponized. PLA has been discussing ‘Multi-Domain Precision Warfare’ utilizing big data and AI.
Who has the upper hand? According to the Center for Security and Emerging Technology at Georgetown University, the US and China pay about equal attention to military AI applications. There were fears that China had the advantage due to its lax respect for intellectual property, but America has now pulled ahead in cutting-edge machine-learning models, and its recent chip restrictions mean China faces ‘significant headwinds… in military AI.’
Summing up…
AI may fundamentally change the nature of war but, contrary to impressions, it is not yet widespread in practice. There has been a lot of innovation and development of infrastructure, but, for example, the Pentagon spends less than 1% of its budget on software. ‘Stakes are high, we have to move quickly and safely,’ says an official.
Meanwhile, StormCloud is continually improving but moves slowly due to internal politics [’twas ever thus] and red tape around accreditation of tech. Funding for the second iteration is a ‘paltry’ £10m. According to an officer, ‘If we were in Ukraine or genuinely worried about going to war any time soon, we’d have spent £100m-plus and had it deployed in weeks or months.’
[Editorial note again: for some years I’ve been the editor/proofreader for a recognised expert in human-AI teaming (I can’t say who – commercial and intellectual property and so on) and they’ve raised many of these issues in their papers for journal publication in a civilian context – aircraft cockpits, self-driving vehicles etc. The recurring theme is explainability and trust of the AI as a team member.]