We’ll never have a model of an AI major-general: artificial intelligence, command decisions and kitsch visions of war / by Cameron Hunter and Bleddyn E. Bowen

By: Hunter, Cameron
Contributor(s): Bowen, Bleddyn E.
Material type: Text
Publication details: 2024
In: The Journal of Strategic Studies, Volume 47, Number 1, February 2024, pages 116-146

Military AI optimists predict future AI assisting or making command decisions. We instead argue that, at a fundamental level, these predictions are dangerously wrong. The nature of war demands decisions based on abductive logic, whilst machine learning (or ‘narrow AI’) relies on inductive logic. The two forms of logic are not interchangeable, and therefore AI’s limited utility in command – both tactical and strategic – is not something that can be solved by more data or more computing power. Many defence and government leaders are therefore proceeding with a false view of the nature of AI and of war itself.

Subject(s): ARTIFICIAL INTELLIGENCE, STRATEGY, TACTICS, NEW ARTICLES
