On The Daily Circuit Monday, we're talking about what a cyberwar might look like and the role humans might be called upon to play. On the battlefield, however, humans are increasingly being taken out of the equation.
Drones and other robots are at the forefront of the latest defense technology. News of U.S. drone strikes in Afghanistan is commonplace. In Israel, forces are employing the unmanned planes to target strikes against Hamas in Gaza. The Department of Defense is even developing an exoskeleton suit to enhance troops' performance.
Technology is introduced under the guise of making our lives better. And for the most part, these drones and robots are sophisticated enough to make decisions faster than humans can. But who's to blame if they make a mistake? If a drone strike mistakenly targets and kills innocent civilians, and those decisions were made by algorithms without human intervention, who is responsible?
Paul Robinson, professor at the Graduate School of Public and International Affairs at the University of Ottawa, has been thinking about this. He wrote an article about this next generation of technology that recently appeared in Slate.