Tag | Ind. | Content
---|---|---
000 | | 01816cam a2200145 4500
100 | 1 | _aJOHNSON James
245 | | _aDelegating strategic decision-making to machines : _bDr. Strangelove Redux? / _cJames Johnson
260 | | _c2022
520 | | _aWill the use of artificial intelligence (AI) in strategic decision-making be stabilizing or destabilizing? What are the risks and trade-offs of pre-delegating military force to machines? How might non-nuclear state and non-state actors leverage AI to put pressure on nuclear states? This article analyzes the impact on strategic stability of the use of AI in the strategic decision-making process, in particular the risks and trade-offs of pre-delegating military force (or automating escalation) to machines. It argues that AI-enabled decision support tools - by substituting for the role of human critical thinking, empathy, creativity, and intuition in the strategic decision-making process - will be fundamentally destabilizing if defense planners come to view AI's 'support' function as a panacea for the cognitive fallibilities of human analysis and decision-making. The article also considers the nefarious use of AI-enhanced fake news, deepfakes, bots, and other forms of social media by non-state actors and state proxy actors, which might cause states to exaggerate a threat from ambiguous or manipulated information, increasing instability.
650 | | _aARTIFICIAL INTELLIGENCE _zU.S.-CHINA RELATIONS _xNUCLEAR SECURITY _xDETERRENCE POLICY _xEMERGING TECHNOLOGY _xSTRATEGIC STABILITY
773 | | _aThe Journal of Strategic Studies : _gVol. 45, No. 3, June 2022, pp. 439-477 (98)
598 | | _aINTEL, USA, CHINA, SECURITY, POLICY, TECHNOLOGY
856 | | _uhttps://www.tandfonline.com/doi/full/10.1080/01402390.2020.1759038 _zClick here for full text
945 | | _i67444.1001 _rY _sY
999 | | _c41498 _d41498