New York Times. Paywalled, but if you hit "stop loading" quickly enough, you can at least get the text.


A taster:

Until recently, a human would have piloted the quadcopter. No longer. Instead, after the drone locked onto its target — Mr. Babenko — it flew itself, guided by software that used the machine’s camera to track him.

The motorcycle’s growling engine was no match for the silent drone as it stalked Mr. Babenko. “Push, push more. Pedal to the metal, man,” his colleagues called out over a walkie-talkie as the drone swooped toward him. “You’re screwed, screwed!”

If the drone had been armed with explosives, and if his colleagues hadn’t disengaged the autonomous tracking, Mr. Babenko would have been a goner.

Vyriy is just one of many Ukrainian companies working on a major leap forward in the weaponization of consumer technology, driven by the war with Russia. The pressure to outthink the enemy, along with huge flows of investment, donations and government contracts, has turned Ukraine into a Silicon Valley for autonomous drones and other weaponry.

What the companies are creating is technology that makes human judgment about targeting and firing increasingly tangential. The widespread availability of off-the-shelf devices, easy-to-design software, powerful automation algorithms and specialized artificial intelligence microchips has pushed a deadly innovation race into uncharted territory, fueling a potential new era of killer robots.
 
Paywalled article. Here are my notes. Apologies for the length - it's a long and detailed briefing.

The Economist June 22nd 2024

Briefing: AI and War

The Model Major General

An AI-assisted general staff may be more important than robots.


____________________________________________________________

RN (Royal Navy) approached Microsoft & Amazon Web Services: was there a better way to wage war – specifically, a more effective way to co-ordinate between a commando strike team and the missile systems of a frigate? Microsoft & AWS collaborated with BAE & Anduril.

Result, 12 months later, in 2022: StormCloud. ‘Mesh’ of marines on the ground, aerial drones, distributed sensors and processing, including small, rugged computers strapped to marines & vehicles with bungee cords. In exercises, remarkably effective – ‘world’s most advanced kill chain.’

Such systems now used in Gaza & Ukraine. Forces spy opportunities and fear being left behind.

Lawyers & ethicists worry that AI will make war faster and more opaque.

ProsAIc but gAInful

Uses of AI for ‘boring stuff’ – managing maintenance, logistics, personnel etc. Saves the USAF up to $25m/month by avoiding breakdowns. Also used for scoring soldiers for promotion.

Ukraine rushing to make drones AI-enabled [see NYT article mentioned in previous post]. Similar/same problems as self-driving cars – cluttered environments, obscured objects, smoke, decoys – but improving rapidly.

On their own, drones are merely disruptive, not transformative – but they become transformative when combined with systems like StormCloud, which allows soldiers to act on real-time information without waiting for distant HQ.

AI is a prerequisite. Start with sensors on drones, in space, on social media etc. Too much to process ‘manually’, but in 2014-15 speech-to-text software and neural networks started a revolution. UK MoD’s Project Spotter enables 24/7 automated detection and identification of objects in satellite images, using commercial tools. Still at the upper end of development rather than full deployment. US Project Maven seems more advanced; since 2017 it has been ‘producing large volumes of computer vision detections for warfighter requirements’, according to the National Geospatial-Intelligence Agency.
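[Editorial aside: neither Spotter’s nor Maven’s internals are public, but to give a flavour of what ‘automated detection and ID of objects in images’ means in practice, here is a minimal, purely illustrative sketch using an off-the-shelf pretrained detector from torchvision. The image path and the 0.5 confidence threshold are placeholders, and this is a generic civilian example, not either programme’s pipeline.]

```python
# Minimal sketch: run an off-the-shelf pretrained object detector on one image.
# Generic torchvision example only - NOT the Spotter/Maven pipeline.
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()          # matching preprocessing for these weights

image = Image.open("scene.jpg").convert("RGB")   # placeholder image path
with torch.no_grad():
    prediction = model([preprocess(image)])[0]   # dict with boxes, labels, scores

# Report every detection above a confidence threshold
for label, score, box in zip(
    prediction["labels"], prediction["scores"], prediction["boxes"]
):
    if score >= 0.5:
        name = weights.meta["categories"][int(label)]
        print(f"{name}: {score:.2f} at {box.tolist()}")
```

The point of the illustration: commercial tooling already gets you from ‘image in’ to ‘labelled boxes out’ in a dozen lines; the hard part at scale is the 24/7 data pipeline and analysis around it.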

Similar uses for minesweeping, sorting signal from noise in sonar.

IDF using an AI tool called Lavender to identify thousands of Palestinians as targets, allegedly giving only cursory scrutiny to its outputs before ordering strikes. IDF says it is ‘only a database.’ In practice, Lavender is likely to be a ‘decision support system’ (DSS) fusing information from a variety of sources (inc. phone intercepts).

ExplAIn or ordain?

Vastly increased computing power is blurring the lines between ISR (intelligence, surveillance & reconnaissance) functions.

Ukraine’s GIS Arta software already collates data on Russian forces and generates target lists ordered by commanders’ priorities. Previously took hours, now minutes.
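[Editorial aside: I know nothing of GIS Arta’s internals, but the ‘collate sightings, rank by the commander’s priorities’ step is conceptually just scoring and sorting. A toy sketch with invented fields and weights, purely to illustrate why this becomes a minutes-not-hours job once the data is machine-readable:]

```python
# Toy illustration of a priority-ranked list built from collated sightings.
# Field names and weights are invented; real systems fuse far richer data.
from dataclasses import dataclass

@dataclass
class Sighting:
    description: str
    value: float        # commander-assigned importance, 0..1
    confidence: float   # how well-confirmed the report is, 0..1
    minutes_old: float  # staleness of the report

def priority(s: Sighting) -> float:
    # Simple weighted score: important, well-confirmed, fresh reports rank highest
    freshness = max(0.0, 1.0 - s.minutes_old / 60.0)
    return 0.5 * s.value + 0.3 * s.confidence + 0.2 * freshness

sightings = [
    Sighting("artillery battery", value=0.9, confidence=0.7, minutes_old=10),
    Sighting("supply truck",      value=0.4, confidence=0.9, minutes_old=5),
    Sighting("command post",      value=1.0, confidence=0.5, minutes_old=45),
]

for s in sorted(sightings, key=priority, reverse=True):
    print(f"{priority(s):.2f}  {s.description}")
```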

USAF asked RAND whether AI tools could help in space warfighting. Indeedly-doodly, says RAND. Likewise, DARPA’s ‘Strategic Chaos Engine for Planning, Tactics, Experimentation, and Resiliency’ (SCEPTER) is essentially intended to generate war plans on the fly.

Tools ‘didn’t even exist two years ago.’

PrAIse and complAInts

As a result, a growing intellectual chasm between those who wage war and those who seek to tame it.

AI systems cannot recognize hostile intent or an attempt to surrender, or distinguish a soldier with a real gun from a child with a toy. They can also be tricked. However, if their reliability improves beyond current human ability to make such distinctions, it may be unethical NOT to delegate authority. According to some, this point has already been passed.

Pre-war modelling further reduces uncertainty.

When machines do make mistakes, the consequences are horrible, so some are determined to keep a ‘human in the loop.’

Issues of international law. International Committee of the Red Cross (ICRC) raises concerns that machine systems are unpredictable, opaque and prone to bias.

Experts & diplomats have been wrangling at the UN over whether to ban Autonomous Weapons Systems (AWS), but it’s hard even to define what they are. The ICRC defines AWS as those that can choose a target based on a general profile. The UK defines them as those that can identify, select and attack targets without ‘context-appropriate human involvement.’ The Pentagon takes a similar view.

If an AWS’s behaviour is well understood and it operates in an area where targets are unambiguous and there are no civilians, full autonomy is less problematic. However, if a DSS ‘suggests’ targets in ambiguous situations, commanders ‘without cognitive clarity or awareness’ may push buttons having abdicated moral responsibility to the machine.

[Editorial note by yours truly: look up Stanley Milgram’s experiment on obedience. https://en.wikipedia.org/wiki/Milgram_experiment ]

Quandary likely to worsen. AI begets more AI. If one side uses it, the other will too. Already the case with air defence. Furthermore, AI will undertake more ‘less mathematical’ tasks, such as predicting opponents’ emotional state (morale, willingness to fight).

Even talk of using AI in nuclear decision-making. Soviet ‘Dead Hand’ system (think Dr Strangelove’s ‘Doomsday machine’) is reportedly still in use and upgraded to include AI. Very secret and little released, but high on the agenda for talks last year between Biden and Xi.

RemAIning in the loop

Currently, humans remain in the loop, able to say yes or no. Whether that would still be the case in a high-intensity war with Russia or China is less clear. The head of an IDF intelligence team published a book in 2021, ‘The Human Machine Team’, which warned of a human decision-making bottleneck.

In practice, real life is outpacing these debates. False positives don’t matter to Russian forces that have deliberately targeted health facilities, and Ukraine, fighting for its survival, simply wants systems that work, available immediately.

The threat hanging over all this: great-power conflict. NATO countries know that whatever the outcome, Russia will now have expertise in AWS. China is a main producer of cheap civil drones, which Ukraine has proven can easily be weaponized. The PLA has been discussing ‘Multi-Domain Precision Warfare’ utilizing big data and AI.

Who has the upper hand? According to the Center for Security and Emerging Technology at Georgetown University, the US and China pay about equal attention to AI applications. There were fears that China had the advantage due to its lax respect for intellectual property, but America has now pulled ahead in cutting-edge machine-learning models and, thanks to recent US chip restrictions, China faces ‘significant headwinds… in military AI.’

Summing up…

AI may fundamentally change the nature of war but, contrary to impressions, it is not yet widespread in practice. There has been a lot of innovation and development of infrastructure, but, for example, the Pentagon spends less than 1% of its budget on software. ‘Stakes are high, we have to move quickly and safely,’ says an official.

Meanwhile, StormCloud is continually improving but moves slowly due to internal politics [’twas ever thus] and red tape around accreditation of tech. Funding for the second iteration is a ‘paltry’ £10m. According to an officer, ‘If we were in Ukraine or genuinely worried about going to war any time soon, we’d have spent £100m-plus and had it deployed in weeks or months.’

[Editorial note again: for some years I’ve been the editor/proofreader for a recognised expert in human-AI teaming (I can’t say who – commercial and intellectual property and so on) and they’ve raised many of these issues in their papers for journal publication in a civilian context – aircraft cockpits, self-driving vehicles etc. The recurring theme is explainability and trust of the AI as a team member.]
 
By the way, if you're still not having trouble sleeping, I just finished reading Nuclear War: A Scenario by Annie Jacobsen and highly recommend it. Among other things, it looks in detail at the decision-making processes and chains of command/succession involved. It highlights the vulnerability of these chains to both error and disruption in times of crisis, when the clock is racing.

Dr Strangelove and Fail Safe are still as relevant as ever. Perhaps even more so when AI is in the loop.

From my summary of the briefing in The Economist above:

Even talk of using AI in nuclear decision-making. Soviet ‘Dead Hand’ system (think Dr Strangelove’s ‘Doomsday machine’) is reportedly still in use and upgraded to include AI. Very secret and little released, but high on the agenda for talks last year between Biden and Xi.
 
Wouldn't you like to know what your opposing commander is thinking?

Quandary likely to worsen. AI begets more AI. If one side uses it, the other will too. Already the case with air defence. Furthermore, AI will undertake more ‘less mathematical’ tasks, such as predicting opponents’ emotional state (morale, willingness to fight).

It's not unlikely that intelligence agencies will construct AI models of specific figures and agencies. What would Putin do? Let's ask Putinbot. I'm not suggesting that this would be a good idea; I'm suggesting that this would be an attractive idea.
 

 