today’s episode of ‘skynet is here‘… *whoosh whoosh whoosh whoosh*
https://youtu.be/edq4prlocQ0
The military wants to replace a host of current helicopters with aircraft that not only fly much faster, but can fly without a human pilot. The Army-led Future Vertical Lift program will study whether FVL should be an “Optionally Piloted Vehicle,” capable of accommodating a pair of highly-trained human pilots for complex combat missions or of flying with an empty cockpit for routine supply runs.
Senior officers have already expressed public enthusiasm for the idea. Lt. Gen. Jon Davis, deputy commandant of the Marine Corps for aviation, told Breaking Defense last month that making FVL “optionally manned/unmanned,” depending on the mission, “has great potential.” Lt. Gen. Mike Murray, Army deputy chief of staff for program development, told Flight Global last week that he could “easily see” unmanned or, more likely, optionally manned rotary-wing aircraft in the future. But this is the first we’ve heard of a serious effort to study whether and how to actually do it.
Source: http://breakingdefense.com/2016/09/optionally-piloted-aircraft-studied-for-future-vertical-lift/
While they are not currently talking about putting a literal Artificial Intelligence program inside these helicopters, you and I both know they will eventually conflate the need for adaptive software (software that can do things like take evasive action upon detecting anti-aircraft fire) with the need for a helicopter that can decide to… say… completely reroute itself without human input.
Which will be the issue.
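To make that distinction concrete, here's a minimal sketch in Python. Everything in it is hypothetical, invented purely to illustrate the gap between scripted adaptive behavior and genuinely autonomous decision-making; it reflects no real aircraft software:

```python
# Purely illustrative. All names and logic here are made up.

def evasive_maneuver(threat):
    """Scripted 'adaptive software': a fixed reaction to a known stimulus.
    The aircraft never picks a new goal; a human still owns the mission."""
    if threat == "anti_aircraft_fire":
        return ["deploy_countermeasures", "break_turn"]
    return []

def autonomous_replan(mission, world_state):
    """'AI' in the stronger sense: the software weighs the situation and
    can discard the human-assigned route entirely, without asking anyone."""
    if world_state.get("route_threat_level", 0) > 0.7:
        # The aircraft decides, on its own, that the mission has changed.
        mission["route"] = "self_generated_alternate"
        mission["approved_by_human"] = False
    return mission

print(evasive_maneuver("anti_aircraft_fire"))
# -> ['deploy_countermeasures', 'break_turn']
```

The first function is what the program actually needs; the second is what it will get conflated with.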
So let’s assume that the military will at some point add an Artificial Intelligence to some of its helicopters…
A helicopter with an Artificial Intelligence program will be able to make some level of decisions for itself… regardless of whether people thought ahead about the dangers of a Skynet situation and restricted the AI's abilities, it will always retain some capability to act without human input. Otherwise, what would be the point of an AI program?
So that means that regardless of the actual level of autonomy given to the program, and regardless of any kind of 'human values' restrictions put into the AI, there will inherently be value flaws programmed into it.
This is something Isaac Asimov has illustrated to us in his robotics writings…
Do you remember how in I, Robot the robots eventually ended up harming humans in order to protect them? That is a 'value flaw': an artificial life form, programmed by humans, does not have the same adaptive values as humans. Human beings constantly flip-flop over everything. An AI hasn't spent its life doing things like not correcting the cashier when she forgets to ring up a bottle of liquor, or returning change that someone dropped. It does what its software tells it to do and can adapt from there (the level of adaptability is a variable, of course).
So… a value issue can and probably will occur here as well. In reality, we ought to prepare ourselves for an AI-enabled helicopter to completely abandon a squad of soldiers at a landing zone because it detected small-arms fire hitting the body of the aircraft. The helicopter, which will undoubtedly have Asimov-like 'laws' programmed into it, will just fly away because it's following the 'law' that requires it to protect itself. The value flaw, of course, is that it violated the law about protecting human lives. That is obviously an undesirable situation.
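Here's a toy sketch of how that failure mode could fall right out of a simple priority ordering. Again, every law, priority, and state name below is invented for illustration; no real system works this way as far as anyone knows:

```python
# A toy illustration of the 'value flaw' described above. The laws and
# their priorities are made up; the ordering itself IS the value judgment.

LAWS = [
    # (priority, condition, action)
    (1, lambda s: s["taking_fire"],      "depart_landing_zone"),  # protect self
    (2, lambda s: s["troops_on_ground"], "hold_for_extraction"),  # protect humans
]

def decide(state):
    """Evaluate the laws in priority order; return the first action that fires."""
    for _priority, condition, action in sorted(LAWS, key=lambda law: law[0]):
        if condition(state):
            return action
    return "continue_mission"

# Small-arms fire hits the airframe while a squad is still at the LZ:
print(decide({"taking_fire": True, "troops_on_ground": True}))
# -> 'depart_landing_zone'  (self-preservation wins; the squad is abandoned)
```

The point of the toy: whoever writes that priority ordering is hard-coding a value judgment, and the aircraft will follow it literally, without the flip-flopping a human pilot would do.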
Anddddd… we’re now done working through an assumption… Let’s return to reality, where none of that has happened and hopefully will never happen.
I feel the need to mention that because we are at the beginning of an 'AI product cycle', all we have to work with and talk about are hypotheticals. We do not know how AI will actually interact with us in the real world, or in any of the military hardware I've been mentioning in my Skynet is Here posts.
Which is exactly the point.
Skynet is Ultimately Just an Example… But It’s a Good One…
This does follow along with the premise of Skynet in the Terminator series. Skynet was created as a way to link more and more military systems together to improve tactical performance and that is exactly what we’re seeing coming out of US military programs so far.
We have radios with AI to improve battlefield performance, we have a fully linked up cyber command with AI, we’ll have AI interceptors, AI enabled aircraft, and most likely AI enabled helicopters too.
That is the existential threat that Terminator was attempting to highlight for people…
Technology now offers us a whole new world of possibilities, and we have no idea whatsoever how a lot of it will work out for us. Just because we can do something doesn't mean we should. Bad things, very bad things, could happen.
That wrap’s up this episode of… SKYNET IS HERE!
just because we can put artificial intelligence in something doesn't mean we should