
Elevating humanism in high-stakes automation: experts-in-the-loop and resort-to-force decision making / Jenny L. Davis

By: Davis, Jenny L.
Material type: Text
Publication details: 2024
In: Australian Journal of International Affairs, Volume 78, Number 2, April 2024, pages 200-209
Summary: Artificial intelligence (AI) technologies pervade myriad decision systems, mobilising data at a scale, speed, and scope that far exceed human capacities. Although it may be tempting to displace humans with these automated decision systems, doing so in high-stakes settings would be a mistake. Anchored by the example of states’ resort to force, I argue that human expertise should be elevated—not relegated—within high-stakes decision contexts that incorporate AI tools. This argument builds from an empirical reality in which defence institutions increasingly rely on and invest in AI capabilities, an active debate about how (and if) humans should figure into automated decision loops, and a socio-technical landscape marked by both promise and peril. The argument proceeds through a primary claim about the amplified relevance of expert humans in light of AI, underpinned by the assumed risks of omitting human experts, together motivating a tripartite call to action. The position presented herein speaks directly to the military domain, but also generalises to a broader worldbuilding project that preserves humanism amidst suffusive AI.
Holdings
Item type: Journal Article
Current library: Mindef Library & Info Centre
Collection: Journals
Call number: ARTIFICIAL INTELLIGENCE
Status: Not for loan

