
Communications of the ACM

ACM News

Teaching Robots Right From Wrong


How do you teach a robot what is right?

It is urgent that robots and robotic systems be taught how to behave appropriately in a broad range of situations.

Credit: Brett Ryder

More than 400 years ago, according to legend, a rabbi knelt by the banks of the Vltava river in what is now known as the Czech Republic. He pulled handfuls of clay out of the water and carefully patted them into the shape of a man. The Jews of Prague, falsely accused of using the blood of Christians in their rituals, were under attack. The rabbi, Judah Loew ben Bezalel, decided that his community needed a protector stronger than any human. He inscribed the Hebrew word for "truth," emet, onto his creation's forehead and placed a capsule inscribed with a Kabbalistic formula into its mouth. The creature sprang to life.

The Golem patrolled the ghetto, protecting its citizens and carrying out useful jobs: sweeping the streets, conveying water and splitting firewood. All was harmonious until the day the rabbi forgot to disable the Golem for the Sabbath, as he was required to, and the creature embarked on a murderous rampage. The rabbi was forced to scrub the initial letter from the word on the Golem's forehead to make met, the Hebrew word for "death." Life slipped from the Golem and he crumbled into dust.

This cautionary tale about the risks of building a mechanical servant in man's image has gained fresh resonance in the age of artificial intelligence. Legions of robots now carry out our instructions unreflectively. How do we ensure that these creatures, regardless of whether they're built from clay or silicon, always work in our best interests? Should we teach them to think for themselves? And if so, how are we to teach them right from wrong?

 

From The Economist


 
