Elevating humanism in high-stakes automation: experts-in-the-loop and resort-to-force decision making / Jenny L. Davis
Material type: Text
Publication details: 2024
In: Australian Journal of International Affairs, Volume 78, Number 2, April 2024, pages 200-209

Summary: Artificial intelligence (AI) technologies pervade myriad decision systems, mobilising data at a scale, speed, and scope that far exceed human capacities. Although it may be tempting to displace humans with these automated decision systems, doing so in high-stakes settings would be a mistake. Anchored by the example of states’ resort to force, I argue that human expertise should be elevated—not relegated—within high-stakes decision contexts that incorporate AI tools. This argument builds from an empirical reality in which defence institutions increasingly rely on and invest in AI capabilities, an active debate about how (and if) humans should figure into automated decision loops, and a socio-technical landscape marked by both promise and peril. The argument proceeds through a primary claim about the amplified relevance of expert humans in light of AI, underpinned by the assumed risks of omitting human experts, together motivating a tripartite call to action. The position presented herein speaks directly to the military domain, but also generalises to a broader worldbuilding project that preserves humanism amidst suffusive AI.

Item type | Current library | Call number | Status
---|---|---|---
Journal Article | Mindef Library & Info Centre Journals | ARTIFICIAL INTELLIGENCE | Not for loan
Browsing Mindef Library & Info Centre shelves, shelving location: Journals

Nearby items on shelf (call number ARTIFICIAL INTELLIGENCE):
- The role of artificial intelligence in nuclear crisis decision making: a complement, not a substitute/
- A complex-systems view on military decision making/
- Human-AI cognitive teaming: using AI to support state-level decision making on the resort to force/
- Elevating humanism in high-stakes automation: experts-in-the-loop and resort-to-force decision making/
- Sigint, tscm and artificial intelligence/
- Ai-native blockchain for multi-domain resource trading in 6g/
- Radicalsing and recruitment/