Autonomous AI systems in conflict: emergent behavior and its impact on predictability and reliability / Daniel Trusilo
Material type: Journal Article

Item type | Current library | Call number | Copy number | Status | Date due | Barcode
---|---|---|---|---|---|---
Journal Article | Mindef Library & Info Centre Journals | ARTIFICIAL INTELLIGENCE | 1 | Not for loan | | 70179-1001
The development of complex autonomous systems that use artificial intelligence (AI) is changing the nature of conflict. In practice, autonomous systems will be extensively tested before being operationally deployed to ensure that system behavior is reliable in expected contexts. However, the complexity of autonomous systems means that they will demonstrate emergent behavior in the open context of real-world conflict environments. This article examines the novel implications of emergent behavior in autonomous AI systems designed for conflict through two case studies: (1) a swarm system designed for maritime intelligence, surveillance, and reconnaissance operations, and (2) a next-generation humanitarian notification system. Both case studies represent current or near-future technology in which emergent behavior is possible, demonstrating that such behavior can make a system simultaneously less predictable and more reliable, depending on the level at which the system is considered. This counterintuitive relationship between reduced predictability and increased reliability poses unique challenges for system certification and for adherence to the growing body of principles for responsible AI in defense, challenges that must be addressed for the real-world operationalization of AI designed for conflict environments.
Subject(s): AI