MIT Scientist Sparks Debate on Future of Robot Ethics

An MIT-educated roboticist has created a relatively simple robot that has nonetheless made quite a stir. Simply put, this robot harms humans.

Alexander Reben has designed and built a machine that uses an algorithm to determine whether or not it will prick the human user's finger with a needle. Reben himself cannot predict what the robot will do, so the robot essentially "decides" whether or not to inflict harm.
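Reben has not published his decision algorithm, but the behavior he describes can be captured by even a trivial randomized rule. The sketch below is a hypothetical illustration, assuming a simple coin-flip threshold and a made-up sensor callback; it is not Reben's actual code.

```python
import random

# Illustrative probability; Reben has not disclosed his actual odds or logic.
PRICK_PROBABILITY = 0.5

def decide_to_prick() -> bool:
    """Randomly decide whether to extend the needle.

    Even a rule this trivial makes every individual outcome
    unpredictable, including to the machine's own creator.
    """
    return random.random() < PRICK_PROBABILITY

def on_finger_detected() -> str:
    """Hypothetical sensor callback: fires when a finger rests on the pad."""
    return "prick" if decide_to_prick() else "spare"

if __name__ == "__main__":
    print(on_finger_detected())
```

The point of the sketch is that nothing sophisticated is required: once the outcome depends on a random draw, the human in the loop can no longer be said to have commanded any particular prick.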

Reben's robot is groundbreaking because it violates Isaac Asimov's first law of robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm. While Asimov was a science fiction author and his laws were originally created for fictional purposes, they have been widely viewed as our only real groundwork for the still-developing field of robot ethics.

Technically speaking, Reben's device is the first robot to break this law. Of course, humanity has designed drones and missiles as weapons that hurt people every day, but behind each of these machines is a human who gives the command; none of them actually "makes the decision" to inflict harm.

Reben created his robot to force people to consider the very real possibility of a world in which many types of robots have the ability to intentionally cause harm.

“I wanted to make a robot that does this that actually exists. That was important too, to take it out of the thought experiment realm into reality, because once something exists in the world, you have to confront it. It becomes more urgent. You can’t just pontificate about it,” said Reben.

The MIT scientist's project has raised many questions. If a robot were to unintentionally inflict harm, would it or its creator be held ethically responsible? Dr. Ben Letson, a philosophy professor at Emory & Henry College and an ethics expert, says that, as of right now, robots "can only do what we program them to do," so the responsibility would fall on the creator.

However, we may someday devise robots with consciousness. Dr. Letson says that placing the blame on the creator of one of these robots would be like blaming the actions of a horrible person on his or her parents.

It is easy to imagine this future world. Take a robot that already exists: the self-driving car. A possible feature in such cars would be the ability to decide whether or not to protect their passengers. If a group of pedestrians stepped into the road and into the path of a self-driving car, the car would be forced to decide whether to hit the pedestrians and save its passenger, or to instead veer into a tree and kill its passenger. Killing the passenger would save more lives, but could the car be programmed to protect its passenger over pedestrians?
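No carmaker has disclosed such a rule, but the dilemma can be made concrete as a toy comparison of weighted harms. Everything in the sketch below is hypothetical, including the choose_action function and the passenger_weight parameter that encodes how strongly the car favors its own occupant.

```python
def choose_action(pedestrians_at_risk: int, passengers: int,
                  passenger_weight: float = 1.0) -> str:
    """Pick whichever option minimizes weighted expected deaths.

    A passenger_weight above 1.0 biases the car toward protecting its
    own occupants; at exactly 1.0 every life counts equally. Both the
    model and the parameter are illustrative assumptions.
    """
    harm_if_stay = pedestrians_at_risk              # continue straight: pedestrians die
    harm_if_swerve = passengers * passenger_weight  # veer into the tree: passengers die
    return "swerve" if harm_if_swerve < harm_if_stay else "stay"

# With equal weighting, the car sacrifices its lone passenger for the group:
print(choose_action(pedestrians_at_risk=3, passengers=1))                          # swerve
# Biasing heavily toward the passenger flips the decision:
print(choose_action(pedestrians_at_risk=3, passengers=1, passenger_weight=5.0))   # stay
```

The uncomfortable part is not the arithmetic but the parameter: someone would have to choose passenger_weight in advance, which is exactly the kind of encoded ethical judgment the article's experts are debating.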

“We may gradually distance ourselves from ethical responsibility for harm when dealing with autonomous robots…the further we get from being able to anticipate the behavior of a robot, the less ‘intentional’ the harm,” said Kate Darling, another researcher from MIT.

– Sydney Cooke
