How the US Used AI to Conduct Airstrikes in the Middle East

The US military has confirmed that it used artificial intelligence (AI) to help identify and target potential threats in the recent airstrikes in the Middle East. The use of AI for combat operations has raised ethical and operational questions about the future of warfare.

According to Schuyler Moore, chief technology officer for US Central Command (CENTCOM), the US used computer vision algorithms to analyze satellite imagery and drone footage to locate rockets, missiles, drones, and militia facilities in Iraq and Syria. These algorithms were developed as part of Project Maven, a Pentagon initiative launched in 2017 to integrate AI and machine learning across defense operations.
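To give a sense of what this kind of analyst-assist pipeline involves, here is a minimal sketch of object detection on overhead imagery using a stock pretrained model. The model choice, labels, and confidence threshold are illustrative assumptions, not details of the CENTCOM or Project Maven systems.

```python
# Minimal sketch: flagging candidate objects in overhead imagery with a
# generic pretrained detector. This is NOT the Project Maven pipeline;
# the model, threshold, and output format are illustrative assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# A stock COCO-trained detector stands in for a purpose-built model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def flag_candidates(image_path, score_threshold=0.8):
    """Return bounding boxes the detector is confident about.

    In an analyst-assist workflow, these candidates would be queued
    for human review, not acted on automatically.
    """
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]
    candidates = []
    for box, score, label in zip(detections["boxes"],
                                 detections["scores"],
                                 detections["labels"]):
        if score >= score_threshold:
            candidates.append({"box": box.tolist(),
                               "score": float(score),
                               "label": int(label)})
    return candidates
```

In practice, a system like this would be trained on domain-specific imagery rather than stock datasets; the point of the sketch is only that the machine's output is a ranked list of candidates, not a decision.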

On Feb. 2, the US carried out more than 85 airstrikes in various parts of Iraq and Syria, destroying or damaging the identified targets. The airstrikes were a response to the drone attack in Jordan on Jan. 28 that killed three US service members. The US blamed Iranian-backed militias for the attack.

Moore said that the use of AI for target identification was not a new practice, but rather an enhancement of existing capabilities. He said that the AI systems were designed to assist human analysts, not to make autonomous decisions. “There is never an algorithm that’s just running, coming to a conclusion and then pushing onto the next step,” he said.
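Moore's description corresponds to a human-in-the-loop pattern: the algorithm nominates candidates, and nothing advances without an analyst's sign-off. A minimal sketch of that gating logic follows; all names here are hypothetical.

```python
# Minimal sketch of the human-in-the-loop pattern Moore describes:
# the algorithm only nominates targets; a human analyst must approve
# each one before it moves to the next step. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Candidate:
    label: str
    confidence: float
    location: tuple  # (lat, lon), illustrative

def review_queue(candidates, analyst_approve):
    """Pass each machine-nominated candidate through a human decision.

    `analyst_approve` is a callback standing in for the human review
    step; nothing is forwarded without an explicit approval.
    """
    approved = []
    for c in candidates:
        if analyst_approve(c):   # human judgment, not the algorithm
            approved.append(c)
    return approved
```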

AI for Combat Operations

The use of AI for target acquisition is part of a broader trend of increasing military use of AI for various purposes, such as intelligence analysis, logistics, cyber defense, and autonomous weapons. The US is not the only country that is leveraging AI for combat operations. In 2023, Israel reportedly used AI software to decide where to drop bombs in Gaza, based on vast amounts of data and human input.

Such systems have also raised ethical and operational concerns, including the potential for errors, bias, hacking, and escalation, as well as unresolved questions of accountability. Some experts and activists have called for a ban on lethal autonomous weapons, or "killer robots," that can select and engage targets without human intervention. Others have argued that AI can make military operations more effective, accurate, and safe, and that a ban would be unrealistic and counterproductive.

The Future of Warfare

Military use of AI is likely to grow as the technology advances and competition among major powers intensifies. The US has identified AI as a key strategic priority for national security and defense, and has invested billions of dollars in research and development. China and Russia have also declared ambitions to become global leaders in AI and to use it for military purposes.

These developments pose new challenges and opportunities for the US and its allies, as well as for the international community and the norms of warfare. The US and its partners will need to balance the benefits and risks of AI and develop clear policies, standards, and oversight mechanisms to ensure its ethical and responsible use. They will also need to engage in dialogue and cooperation with other countries, especially adversaries, to prevent the misunderstandings, miscalculations, and conflicts that AI-enabled combat operations could provoke.
