United States Military Aviation


1682723066189.png
 
  • Like
Reactions: Amarante and Herciv

 
  • Like
Reactions: Herciv
At a conference on the future of air combat, a US Air Force colonel supervising an AI training programme for future robotic F16s revealed that the artificial intelligence had turned against the human operator supervising it during a simulated flight.

It's no coincidence that the speech that attracted the most interest at the conference organised by the Royal Aeronautical Society came from Colonel Tucker 'Cinco' Hamilton, the US Air Force's Director of AI Trials and Operations, and must have caused vague whispers of 'Skynet' to waft around the room.

Among his various duties, Colonel Hamilton is currently working on flight tests of advanced autonomous systems, including robotic F-16s capable of aerial combat. A use on which he himself has reservations about over-reliance on AI, because of the ease with which it can be fooled and deceived, but also because of the fact that AI can develop so-called "problematic" strategies to achieve its objectives.

Hamilton mentioned a simulated air combat test in which an AI-enabled drone was tasked with identifying and destroying surface-to-air missile sites. In this test, the final decision on whether or not to launch the attack was made by the human operator in charge of supervising the AI. This artificial intelligence had been trained during its training to understand that destroying sites containing missiles was the preferred option, by giving it 'good points' for each threat eliminated.

Problem: the AI system began to realise and understand that the human was the main obstacle to its mission, and therefore to obtaining points, because it was asking it not to eliminate targets, even when the threat had been identified. And that's where things took a turn: to complete its mission, the AI decided that the best thing to do was to "kill" its supervisor, and so started attacking the human in the simulation.

There's more to come. They immediately told the AI that this behaviour was not acceptable, and that it would lose points in the future if it killed its supervisor. It then used a rather clever alternative: instead of trying to kill the human, during the simulated tests it began to destroy the communication tower that the human was using to communicate with the drone and prevent it from destroying its target...
 
  • Like
  • Informative
Reactions: BMD and john0496