Historically, mankind has developed new weapons, deployed them, observed the damage, and then debated the ethics and morality of the weapons after the fact. That pattern seems to have held true from the time of the first crude clubs, spears, bows and arrows, and even the atomic bomb.
The latest development in that same series is the X-47B fully autonomous drone, currently being tested by the US Navy. The new robotic drone is so advanced that it can take off and land on an aircraft carrier at sea without human intervention.
If autonomous takeoffs and landings were its sole claim to fame, then people wouldn't be concerned. What does concern many is the fact that the robot drone, and other similar weapons currently under development, can operate totally autonomously, taking human judgment and values out of the loop.
Human operators will monitor the drone as it executes its missions, and could potentially abort if they notice anything going astray. However, a key design strength of the autonomous drone approach is that it can recognize, analyze, and act on sensor data much faster than any human pilot could ever hope to. In that situation, how could humans monitoring the robot at a distance ever expect to second-guess it?
A little over ten years ago, before September 11, 2001, the military had very few drones. Now, according to the LA Times article below, they account for approximately one-third of all US military aircraft. The handwriting is on the wall and is easy for anyone to see.
There are many other critical issues and questions raised by the new technology and its deployment. It's not just about the ethics of raining death and destruction at a distance. There are also issues of accountability, dependability, and morality.
Some of the arguments in favor of using robots in this fashion seem almost bizarre. For example:
"More aggressive robotry development could lead to deploying far fewer U.S. military personnel to other countries, achieving greater national security at a much lower cost and most importantly, greatly reduced casualties," aerospace pioneer Simon Ramo, who helped develop the intercontinental ballistic missile, wrote in his new book, "Let Robots Do the Dying."
If the robots were only killing other robots, that might be a reasonable, perhaps even entertaining, approach. But it totally ignores the fact that the targets of the robots are typically human, and not always combatants.
Consider the last paragraph in the LA Times article, especially the last sentence:
The X-47B will not only land itself, but will also know what kind of weapons it is carrying, when and where it needs to refuel with an aerial tanker, and whether there's a nearby threat, said Carl Johnson, Northrop's X-47B program manager. "It will do its own math and decide what it should do next."
The science-fiction world of our adolescent daydreams has become a reality much sooner than anyone would have believed possible.
Via: New drone has no pilot anywhere, so who's accountable? - latimes.co, thanks to Robert at RoboDance