It seems that state-of-the-art robotics and artificial intelligence design is complex enough that Isaac Asimov's Three Laws of Robotics, or some similar restrictions on our potential artificial progeny, are now on the academic agenda. As The Australian reports (well, reprints) in "Robot rules make sure robots don't rule":
The race is on to keep humans one step ahead of robots: an international team of scientists and academics is to publish a "code of ethics" for machines as they become more and more sophisticated.

Although the nightmare vision of a Terminator world controlled by machines may seem fanciful, scientists believe the boundaries for human-robot interaction must be set now, before super-intelligent robots develop beyond our control.

"There are two levels of priority," says Gianmarco Verruggio, a roboticist at the Institute of Intelligent Systems for Automation in Genoa, northern Italy, and chief architect of the guide, to be published next month. "We have to manage the ethics of the scientists making the robots and the artificial ethics inside the robots."

Verruggio and his colleagues have identified key areas that include ensuring human control of robots, preventing illegal use, protecting data acquired by robots and establishing clear identification and traceability of the machines.

"Scientists must start analysing these kinds of questions and seeing if laws or regulations are needed to protect the citizen," Mr Verruggio says.
While sentient artificial intelligence, or artificial life, is probably still a long way off, it's definitely worth thinking now about how humanity will relate to intelligent machines. (That thinking will no doubt have long-term utility for those working in such fields, but it may prove equally useful in examining how humans relate to other humans, not just to machines!)
[Tags: artificialintelligence | robot | ethics | intelligentmachines | artificiallife]