While robots capable of deciding on their own whether to take a human life are not yet a reality, they are well on the way, and many existing robots could be readily adapted to make that decision. The Samsung SGR-1, for example, can already identify and track individuals who enter its designated perimeter, then request permission from a human controller to engage and kill the target. The X-47B combat aircraft already lands and refuels in midair autonomously. Other robotic weapons, such as Israel's Iron Dome and the United States' Phalanx CIWS, could likewise be adapted to include such discernment.
The use of drones and the advancement of automated combat robotics among governments have skyrocketed in recent years, and many nations are eager to see fully automated, soldierless weapons go into production. However, the danger of something going terribly wrong with a robot designed and unleashed to automatically identify and kill human targets is certainly cause for concern.
In New York, the human rights organization Human Rights Watch has organized the self-evidently titled "Stop Killer Robots" campaign to address the moral and legal issues presented by this swiftly developing technology. HRW warns that "rapid advances in technology are permitting the United States and other nations with high-tech militaries, including China, Israel, Russia, and the United Kingdom, to move toward systems that would provide greater combat autonomy to machines."
Aside from the obvious danger of malfunction due to programming or mechanical errors, Human Rights Watch also warns that "if one or more country chooses to deploy fully autonomous weapons, others may feel compelled to abandon policies of restraint, leading to a robotic arms race." That is certainly another cause for concern.