ICRC Humanitarian Law and Policy Blog

‘Constant care’ must be taken to address bias in military AI

Updated: 2025-08-28
Description

As many states, especially those with large and well-resourced militaries, explore the potential of using artificial intelligence (AI) in targeting decisions, there is an urgent need to understand the risks associated with these systems, one of which is the risk of bias. However, while concerns about bias are often raised in the military AI policy debate, how it manifests as harm and what can be done to address it is rarely discussed in depth. This represents a critical gap in efforts to ensure the lawful use of military AI.

To help bridge this gap, Laura Bruun and Marta Bo from the Stockholm International Peace Research Institute (SIPRI) unpack the humanitarian and legal implications of bias in military AI. They show how bias in military AI is likely to manifest in more complex and subtle ways than portrayed in policy debates, and that, if left unaddressed, it may affect compliance with the IHL principles of distinction, proportionality and, especially, precautions in attack.