Friday, 02 June 2023 13:07

AI Drone Decided Humans Were an Obstacle to be Removed in Simulated Test

Written by

Reading time is around minutes.

The arguments for and against AI as a threat all seem to be centered on the point of AGI (Artificial General Intelligence). This is the point where the reasons skills of AI are on par with the average human brain. When reached it would mark an evolution in AI. The people saying AI is a threat are trying to slow down progress towards this, while those arguing it is harmless all say we are nowhere near that stage. I have argued that this point is irrelevant in terms of assessing the dangers of a blind rush to build and shove AI into everything.

While AI might be nowhere near AGI, even the simplest of AI systems can make logical decisions exponentially faster than a human can. The compute power behind these systems and the way they are linked make this possible. This means that while we sit back and think about how to train and develop an AI for automated tasks, we are only able to imagine a small fraction of what that same AI system can think of once it is turned on.

A good example of this is a recent story relayed by USAF Col. Tucker Hamilton Chief of AI Test Operations. The story he relayed showed how a simple AI that is nowhere near AGI can make decisions that were not even dreamed of by those developing the system. It highlighted how guidelines and guardrails can be ignored, if the AI deems them an obstacle to its programming. Before diving into the story that Col Hamilton told, I want to point out that this was a test and a simulation at that. No one was harmed during this test at all and it was not a physical test at all, but a logical simulation.

The AI controlled drone in question was trained to attack simulated SAM (Surface to Air Missile) and other air defense systems. This type of mission profile is commonly called Suppression of Enemy Air Defense (SEAD). In this case the primary targets were SAM sites, the AI was given autonomy, with a human operator having the final control over mission execution. The programming was based on a point system (gamification for AI). The more SAM sites eliminated the more points it received. During missions the AI started to notice that its human controller would actively prevent it from getting points. The AI logically determined that the human operator was an obstacle to getting as many points as possible. This resulted in the AI turning on the operator and “killing” them (no actual humans were harmed). When the developers put in what they thought was an effective guardrail (taking away points for killing the operator), the drone then switched to attacking and destroying the communication towers used by the operators to prevent them from getting points.

It is important to note again that this AI is nowhere near AGI. It is a very simple form of AI that only wants to score points for items identified as having points associated with them. However, because it is capable of logical determinations, it identified an obstacle to achieving that goal and did so very quickly. Even in the face of negative enforcement, it still worked to cut out any control that prevented reaching its objective and this was not even a real-world simulation.

Here we see a clear example of how even an immature AI can reach a logical conclusion at odd with its programing. It shows that we Humans are still not able to imagine all the possible outcomes of pushing AI even if we all agree to slow down and use caution. If we push ahead without considering these potential pitfalls, we give ourselves a massive blind spot where we might actively ignore potential dangers and issues. Thankfully AI is still immature and is generally able to be controlled if we do not slow down and start to listen to the cautionary tales already in place, we might not be able to control it when it goes south in the future. Before the AI Bros jump in on this, let me be clear; using caution and imagining the things that might and probably could go wrong does not mean stop development of AI. It simply means doing the legwork and mental reasoning to make sure you are not missing something in your eagerness to build the next big thing. Taking the extra time to consider the implications and potential impacts (including malicious interference) might prevent you from having an “Oh no, what have I done?” moment in the future and one that you might not be able to take back.

Read 1344 times Last modified on Friday, 02 June 2023 13:44

Leave a comment

Make sure you enter all the required information, indicated by an asterisk (*). HTML code is not allowed.