000 | 01592nam a22002057a 4500 | ||
---|---|---|---|
001 | 47378 | ||
003 | OSt | ||
005 | 20240820124628.0 | ||
008 | 240820b |||||||| |||| 00| 0 eng d | ||
100 | _aZALA Benjamin | ||
245 | _aShould AI stay or should AI go? First strike incentives & deterrence stability / _cBenjamin Zala | ||
260 | _c2024 | ||
520 | _aHow should states balance the benefits and risks of employing artificial intelligence (AI) and machine learning in nuclear command and control systems? I will argue that it is only by placing developments in AI against the larger backdrop of the increasing prominence of a much wider set of strategic non-nuclear capabilities that this question can be adequately addressed. In order to do so, I will make the case for disaggregating the different risks that AI poses to stability as well as examine the specific ways in which it may instead be harnessed to restabilise nuclear-armed relationships. I will also identify a number of policy areas that ought to be prioritised by way of mitigating the risks and harnessing the opportunities identified in the article, including discussing the possibilities of both formal and informal arms control arrangements. | ||
650 | _aNUCLEAR DECISION MAKING _xARTIFICIAL INTELLIGENCE (AI) | ||
650 | _aSTRATEGIC STABILITY | ||
650 | _aARMS CONTROL | ||
773 | _gAustralian Journal of International Affairs: Volume 78, Number 2, April 2024, pages: 154-163 | ||
856 | _uhttps://www.tandfonline.com/doi/full/10.1080/10357718.2024.2328805 _zClick here for full text | ||
942 | _2ddc _cARTICLE _n0 | ||
999 | _c47378 _d47378 | ||