Elevating humanism in high-stakes automation: experts-in-the-loop and resort-to-force decision making

DAVIS Jenny L.

Elevating humanism in high-stakes automation: experts-in-the-loop and resort-to-force decision making / Jenny L. Davis. - 2024

Artificial intelligence (AI) technologies pervade myriad decision systems, mobilising data at a scale, speed, and scope that far exceed human capacities. Although it may be tempting to displace humans with these automated decision systems, doing so in high-stakes settings would be a mistake. Anchored by the example of states’ resort to force, I argue that human expertise should be elevated—not relegated—within high-stakes decision contexts that incorporate AI tools. This argument builds from an empirical reality in which defence institutions increasingly rely on and invest in AI capabilities, an active debate about how (and whether) humans should figure into automated decision loops, and a socio-technical landscape marked by both promise and peril. The argument proceeds through a primary claim about the amplified relevance of expert humans in light of AI, underpinned by the assumed risks of omitting human experts, together motivating a tripartite call to action. The position presented herein speaks directly to the military domain, but also generalises to a broader worldbuilding project that preserves humanism amidst suffusive AI.


ARTIFICIAL INTELLIGENCE (AI)--EXPERTISE--AI ETHICS