Should AI stay or should AI go? First strike incentives & deterrence stability
Benjamin Zala
2024
How should states balance the benefits and risks of employing artificial intelligence (AI) and machine learning in nuclear command and control systems? I will argue that this question can be adequately addressed only by placing developments in AI against the larger backdrop of the increasing prominence of a much wider set of strategic non-nuclear capabilities. To do so, I make the case for disaggregating the different risks that AI poses to stability and examine the specific ways in which it may instead be harnessed to restabilise nuclear-armed relationships. I also identify a number of policy areas that ought to be prioritised in order to mitigate the risks and harness the opportunities identified in the article, including the possibilities of both formal and informal arms control arrangements.
Keywords: nuclear decision making; artificial intelligence (AI); strategic stability; arms control