Israel’s ‘Lavender’ AI program: Automating human target selection in Gaza

How is Israel harnessing artificial intelligence for target selection?

The Israeli military has reportedly deployed a program that uses artificial intelligence to select human targets for attack, without meaningful prior human verification of each target. In the early stages of the war with Hamas, the Israeli Air Force struck several hundred locations in the Gaza Strip per day. The program, known as "Lavender," makes this pace possible by relying on AI rather than human analysts to identify potential threats, dramatically increasing the number of targets selected.

In previous conflicts, only high-ranking Hamas commanders were designated as human targets, and thorough checks were conducted before attacks were carried out. After the Hamas attack of October 7, 2023, however, the military leadership deemed the old method too time-consuming and authorized the use of "Lavender" for target selection. Originally developed as a targeting aid, the program now draws on a wide range of data sources to identify potential fighters among the general population.
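The article does not describe Lavender's internals, but systems of this kind are typically score-and-threshold classifiers: signals drawn from surveillance data are combined into a suspicion score, and anyone above a cutoff is flagged. The Python sketch below illustrates only that generic pattern; every feature name, weight, and threshold here is invented for illustration and does not describe the actual system.

```python
# Illustrative sketch of a generic score-and-threshold classifier.
# All feature names, weights, and the cutoff are invented for this
# example; nothing here describes the actual Lavender system.
from dataclasses import dataclass, field


@dataclass
class Profile:
    person_id: str
    features: dict[str, float] = field(default_factory=dict)  # signals in [0, 1]


# Hypothetical weights a trained model might assign to its inputs.
WEIGHTS = {
    "comms_with_flagged_numbers": 0.5,
    "group_membership_signal": 0.3,
    "movement_pattern_match": 0.2,
}
THRESHOLD = 0.7  # arbitrary cutoff above which a profile is flagged


def suspicion_score(p: Profile) -> float:
    """Weighted sum of available feature values, clamped to [0, 1]."""
    raw = sum(w * p.features.get(name, 0.0) for name, w in WEIGHTS.items())
    return min(max(raw, 0.0), 1.0)


def is_flagged(p: Profile) -> bool:
    return suspicion_score(p) >= THRESHOLD


p = Profile("example", {"comms_with_flagged_numbers": 0.9,
                        "group_membership_signal": 0.8,
                        "movement_pattern_match": 0.5})
print(round(suspicion_score(p), 2), is_flagged(p))  # 0.79 True
```

What such a sketch makes concrete is the core problem critics raise: the output is a statistical score, not verified evidence, yet everyone above the threshold is treated the same way.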

According to the reporting, the program marked roughly 37,000 Palestinians as suspected Hamas fighters, with sample checks said to have put its accuracy at about 90%. The use of AI in target selection has raised ethical questions, particularly regarding civilian casualties. While some military sources have praised the system's effectiveness, others have criticized its use and questioned whether life-and-death decisions should rest on technology.
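Taken at face value, those two figures imply a large absolute number of misidentifications: with roughly 37,000 people flagged and a 90% accuracy rate, about one in ten flags, on the order of 3,700 people, would be wrong. A minimal back-of-the-envelope check of that arithmetic, using only the figures reported above:

```python
# Back-of-the-envelope arithmetic on the reported figures only.
flagged = 37_000        # people marked as suspected fighters (reported)
accuracy = 0.90         # accuracy rate from sample checks (reported)

misidentified = flagged * (1 - accuracy)
print(f"Implied misidentifications: about {misidentified:,.0f} people")
# Implied misidentifications: about 3,700 people
```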

The army’s actions have been scrutinized for civilian casualties, with critics questioning the proportionality of the response to the Hamas attacks. Additionally, concerns have been raised about a companion program, reportedly called “Where’s Daddy?”, which tracks targeted individuals and signals when they return to their family homes, so that airstrikes can be carried out while their families are present. Such tactics have been criticized as likely to cause avoidable civilian casualties and to violate international humanitarian law.

Overall, the use of AI in warfare raises serious ethical questions that must be carefully weighed before such programs are deployed in future conflicts.
