000 | | 01917cam a2200169 4500 | |
---|---|---|---|
100 | 1 | _aTRUSILO Daniel | |
245 | | _aAutonomous AI systems in conflict : _bemergent behavior and its impact on predictability and reliability / _cDaniel Trusilo | |
260 | | _c2023 | |
520 | | _aThe development of complex autonomous systems that use artificial intelligence (AI) is changing the nature of conflict. In practice, autonomous systems will be extensively tested before being operationally deployed to ensure system behavior is reliable in expected contexts. However, the complexity of autonomous systems means that they will demonstrate emergent behavior in the open context of real-world conflict environments. This article examines the novel implications of emergent behavior of autonomous AI systems designed for conflict through two case studies. These case studies include (1) a swarm system designed for maritime intelligence, surveillance, and reconnaissance operations, and (2) a next-generation humanitarian notification system. Both case studies represent current or near-future technology in which emergent behavior is possible, demonstrating that such behavior can be both unpredictable and more reliable depending on the level at which the system is considered. This counterintuitive relationship between less predictability and more reliability results in unique challenges for system certification and adherence to the growing body of principles for responsible AI in defense, which must be considered for the real-world operationalization of AI designed for conflict environments. | |
650 | | _aARTIFICIAL INTELLIGENCE | |
650 | | _aDEFENSE AI | |
650 | | _aEMERGENT BEHAVIOR | |
773 | | _aJournal of Military Ethics : _gVol. 22, No. 1, April-June 2023, pp. 2-17 | |
598 | | _aAI | |
856 | | _uhttps://www.tandfonline.com/doi/full/10.1080/15027570.2023.2213985 _zClick here for full text | |
945 | | _i70179-1001 _rY _sY | |
999 | | _c43249 _d43249 | |