Delegating strategic decision-making to machines: Dr. Strangelove Redux? / James Johnson
Material type: Text

Publication details: 2022

In: The Journal of Strategic Studies, Vol. 45, No. 3, June 2022, pp. 439-477

Summary: Will the use of artificial intelligence (AI) in strategic decision-making be stabilizing or destabilizing? What are the risks and trade-offs of pre-delegating military force to machines? How might non-nuclear state and non-state actors leverage AI to put pressure on nuclear states? This article analyzes the impact on strategic stability of the use of AI in the strategic decision-making process, in particular the risks and trade-offs of pre-delegating military force (or automating escalation) to machines. It argues that AI-enabled decision support tools - by substituting for human critical thinking, empathy, creativity, and intuition in the strategic decision-making process - will be fundamentally destabilizing if defense planners come to view AI's 'support' function as a panacea for the cognitive fallibilities of human analysis and decision-making. The article also considers the nefarious use of AI-enhanced fake news, deepfakes, bots, and other forms of social media by non-state actors and state proxy actors, which might cause states to exaggerate a threat from ambiguous or manipulated information, increasing instability.

Item type | Current library | Call number | Copy number | Status | Date due | Barcode
---|---|---|---|---|---|---
Journal Article | Mindef Library & Info Centre Journals | TECHNOLOGY | 1 | Not for loan | | 67444.1001
Subject(s): INTEL, USA, CHINA, SECURITY, POLICY, TECHNOLOGY