3 Laws Safe?
Tuesday, June 20, 2006

It seems that state-of-the-art robotics and artificial intelligence design is complex enough that Isaac Asimov's Three Laws of Robotics, or some similar restrictions on our potential artificial progeny, are on the academic agenda. As The Australian reports (well, reprints) in "Robot rules make sure robots don't rule":
The race is on to keep humans one step ahead of robots: an international team of scientists and academics is to publish a "code of ethics" for machines as they become more and more sophisticated. Although the nightmare vision of a Terminator world controlled by machines may seem fanciful, scientists believe the boundaries for human-robot interaction must be set now, before super-intelligent robots develop beyond our control.

"There are two levels of priority," says Gianmarco Verruggio, a roboticist at the Institute of Intelligent Systems for Automation in Genoa, northern Italy, and chief architect of the guide, to be published next month. "We have to manage the ethics of the scientists making the robots and the artificial ethics inside the robots."

Verruggio and his colleagues have identified key areas that include ensuring human control of robots, preventing illegal use, protecting data acquired by robots and establishing clear identification and traceability of the machines. "Scientists must start analysing these kinds of questions and seeing if laws or regulations are needed to protect the citizen," Mr Verruggio says.
While the potential for sentient artificial intelligence, or artificial life, is probably a long way off, it's definitely a good thing to think about how humanity will relate to intelligent machines. (Such thinking will no doubt have long-term utility for those working in these fields, but it may prove equally useful for asking how humans relate to one another, not just to machines!)
[Tags: artificialintelligence | robot | ethics | intelligentmachines | artificiallife]
2 Comments:
I totally agree with the excerpt of the article that you have quoted, particularly about what robots or AI should be allowed to do and how scientists should conduct themselves. It got me thinking that we are facing two other problems with AI: one ethical and one philosophical.
1.) If we are able to design AI that is similar to us (which I think is almost impossible), we have the problem of their place in our society, because if they are able to behave like humans, then we face the ethical problem of how to treat them: as robots or as living things?
As the article suggests, if they are going to be used for crowd control, can we as humans hold them ethically responsible for hurting others if they don't know that they are? I suppose it comes back to the Asimov Laws.
But this has probably already been thought of: limit their input/output responsiveness (their knowledge) of themselves and the world around them.
2.) Philosophically, if we were able to design something similar to us, then we face the challenge of defining them no longer as robots but as almost human - which brings us back to ethics again.
This is all conjecture, of course, because if there is to be a regulating body, it would surely have thought about this and restricted how "aware" a potential sentient artificial intelligence, or artificial life, could be, so as to restrict its actions. Hopefully that will prevent any disaster - major or minor - later on.
Just my 2 cents.
Renwick,
Interesting ideas, but should we always presume that we'll immediately recognise artificial life or AI when it becomes self-aware or alive on its own terms? Our own definitions of life are quite vague, as is our real understanding of how we came to be what we are. Artificial life might not be designed into being, but might emerge in ways we quite simply can't foresee. That's when the ethical questions get really interesting (and difficult, and murky!).
Lots to think about! :)