Ain’t nobody gonna care till an AI piece of equipment kills somebody…
Frankly, there’s so much Artificial Intelligence talk around military applications and equipment these days that I may just start a weekly ‘Skynet is Here’ post.
DARPA wants challenge participants to figure out how to make radios adapt to the current spectrum environment and keep pace with any real-time changes, Tilghman said.
…
What sets the Spectrum Collaboration Challenge apart from other adaptive radio efforts is the focus on artificial intelligence (AI) as a key enabler.
“This idea of AI systems that can learn to collaborate with each other is really sort of a fundamentally untapped area,” he added. “And very specifically, we want to know if AI can tackle this problem.”
Source: http://www.defensenews.com/articles/tech-watch-darpa-spectrum-collaboration-challenge-adaptive-radios
So here they’re describing a dynamic bit of software… but what they say they want is AI. Artificial Intelligence means a whole lot of things to a whole lot of people…
But what AI always means is some level of adaptability —> adaptability means loss of control —> loss of control means situations that were initially abstract suddenly become less abstract —> and oh look, the AI program has now specifically cut off radio contact with a distant unit so as to maximize benefits to the larger, more concentrated units.
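To make that concrete with a toy example (every unit name and number below is invented), here’s a perfectly ordinary scheduler that only maximizes total network throughput. There’s no learning and no “intent” anywhere in it, and it still produces exactly that outcome:

```python
# Toy example only: unit names and numbers are invented.
# A scheduler that maximizes total network throughput, with no learning
# and no "intent" anywhere in it, still starves the weakest link.

link_quality = {              # estimated throughput per slot on each unit's link
    "hq_company_a": 90.0,
    "hq_company_b": 85.0,
    "distant_patrol": 12.0,   # weak link, far from the main body
}

SLOTS = 10                    # spectrum slots to hand out this scheduling round

def marginal_gain(quality, already_assigned):
    # Diminishing returns: each extra slot on the same link is worth a bit less.
    return quality / (1 + already_assigned)

def allocate(quality, slots):
    """Greedily give each slot to whichever link adds the most total throughput."""
    allocation = {unit: 0 for unit in quality}
    for _ in range(slots):
        best = max(quality, key=lambda u: marginal_gain(quality[u], allocation[u]))
        allocation[best] += 1
    return allocation

print(allocate(link_quality, SLOTS))
# {'hq_company_a': 5, 'hq_company_b': 5, 'distant_patrol': 0}
# Nothing "decided" to cut the distant patrol off; the objective simply
# never made their link worth a slot.
```

And that’s with an objective you can read in three lines… swap in something a system learned on its own and good luck explaining why the patrol went dark.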
See, it really isn’t hard to come up with a scenario where AI begins to run amok, and that’s problematic for AI in and of itself… it’ll be worse when we experience the real-world consequences of AI in our military equipment. Right now we live in a world of abstract situations and thoughts… we don’t really know how things are going to work out, and that’s the rub. People want to rush in and do all of the things with AI, while others, like myself, see that as irresponsible and thoughtless…
And you know what… the fact that we can’t even secure our Internet of Things doorknobs and light bulbs should tell you a lot about how AI is going to work out for humanity.
Update (9/28/2016) – The Confusing Nature of ‘Artificial Intelligence’
The system is part of a wider effort by BAE Systems to develop something close to the heart of Deputy Defense Secretary Bob Work’s Third Offset, a system using artificial intelligence to read and analyze an enemy’s communications and EW emissions, providing soldiers at the tactical level with tools to manage something that used to be handled back in the command center.
Key to this is work BAE Systems has done to greatly boost the efficiency and power of microprocessors and to build in algorithms that create “systems that leverage as much knowledge as possible ahead of time,” instead of forcing a system to rely on highly classified traditional threat libraries, Joshua Niedzwiecki, director of BAE’s sensor processing and exploitation, says.
BAE Systems calls these “cognitive processing algorithms.” One of their key functions is to rapidly comb through the EM soup, identify the wavelengths and their signal strengths and other characteristics, and tell the soldier how they can be countered or evaded.
Source: http://breakingdefense.com/2016/09/darpa-picks-baes-smart-handheld-ew-sensor/
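The article doesn’t really define “cognitive processing,” but the comb-the-spectrum, find-the-emitters, report-their-strength part it describes doesn’t require any learning at all. Here’s a minimal sketch with a synthetic spectrum (the frequencies, power levels, and threshold are all made up) that does that job with nothing fancier than a noise-floor estimate and peak-picking:

```python
# Minimal sketch with a synthetic spectrum -- frequencies, power levels, and the
# detection threshold are all invented. It "combs the EM soup" using only a
# noise-floor estimate and a threshold, with no learned model anywhere.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic received power: a ~-100 dBm noise floor plus three emitters.
freqs_mhz = np.linspace(30.0, 512.0, 2000)              # rough VHF/UHF sweep
spectrum_db = rng.normal(-100.0, 2.0, freqs_mhz.size)
for centre_mhz, peak_db in [(52.0, -60.0), (243.0, -55.0), (410.0, -70.0)]:
    bump = (peak_db + 100.0) * np.exp(-0.5 * ((freqs_mhz - centre_mhz) / 0.5) ** 2)
    spectrum_db = np.maximum(spectrum_db, -100.0 + bump)

def detect_emitters(freqs, power_db, threshold_db):
    """Return (centre_MHz, peak_dBm) for each contiguous run above the threshold."""
    emitters, start = [], None
    above = power_db > threshold_db
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            peak = start + int(np.argmax(power_db[start:i]))
            emitters.append((float(freqs[peak]), float(power_db[peak])))
            start = None
    if start is not None:                                # run extends to the end
        peak = start + int(np.argmax(power_db[start:]))
        emitters.append((float(freqs[peak]), float(power_db[peak])))
    return emitters

noise_floor = float(np.median(spectrum_db))              # crude noise estimate
for centre, peak in detect_emitters(freqs_mhz, spectrum_db, noise_floor + 15.0):
    print(f"emitter near {centre:6.1f} MHz, roughly {peak:6.1f} dBm")
```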
See, to me that sounds more like software that has been made dynamic than a system that uses an actual Artificial Intelligence.
This goes to illustrate how things stand at the moment… people use the term AI so loosely that everybody has a different conceptual understanding of what is and is not AI. To me it sounds like, for the foreseeable future, even with these software improvements, there will not be an actual Artificial Intelligence inside updated military radios… instead it’ll be advanced, dynamic software. To other people, though, that may very well count as Artificial Intelligence…
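For what it’s worth, here’s roughly where I’d put the line, as a toy example (the channel names, success rates, and parameters are all invented). The first function is what I’d call dynamic software: a rule an engineer wrote ahead of time. The second learns which channel to use from nothing but trial-and-error feedback, which is closer to what people seem to mean by AI.

```python
# Toy contrast, invented numbers throughout: fixed-rule adaptation vs. a picker
# that actually learns from feedback.
import random

CHANNELS = ["ch1", "ch2", "ch3"]

def pick_channel_rule_based(interference_db):
    """'Dynamic software': a rule written ahead of time --
    just use whichever channel currently has the least interference."""
    return min(interference_db, key=interference_db.get)

class EpsilonGreedyPicker:
    """Closer to 'AI': no built-in rule about which channel is good;
    it learns success rates from transmission feedback."""
    def __init__(self, channels, epsilon=0.1):
        self.epsilon = epsilon
        self.estimates = {c: 0.0 for c in channels}   # learned success rates
        self.counts = {c: 0 for c in channels}

    def pick(self):
        if random.random() < self.epsilon:                  # explore occasionally
            return random.choice(list(self.estimates))
        return max(self.estimates, key=self.estimates.get)  # otherwise exploit

    def update(self, channel, succeeded):
        self.counts[channel] += 1
        # Running average of observed success on this channel.
        self.estimates[channel] += (float(succeeded) - self.estimates[channel]) / self.counts[channel]

# The rule-based picker just reads current measurements:
print(pick_channel_rule_based({"ch1": -70.0, "ch2": -95.0, "ch3": -80.0}))  # ch2

# The learner discovers the best channel through use:
true_success = {"ch1": 0.4, "ch2": 0.9, "ch3": 0.6}   # hidden from the picker
picker = EpsilonGreedyPicker(CHANNELS)
for _ in range(500):
    ch = picker.pick()
    picker.update(ch, random.random() < true_success[ch])
print(picker.estimates)   # ch2's estimate should end up highest, near 0.9
```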
(an aside: what’s worse, of course, are the people obsessed with correcting everybody for conflating artificial intelligence with general intelligence… they’re honestly like those annoying people who compulsively correct everybody’s grammar)
As a society we’re just beginning to enter an AI world, and we have absolutely no clear ideas about anything inside that world yet.
In time that’ll improve, but for now we’re all going to have to argue with each other over the line between dynamic software and artificial intelligence. So expect this to be an ongoing issue.