Trust Me, I Am a Robot

Robot safety: as robots move into homes and offices, ensuring that they do not injure people will be vital. But how?

The incident

In 1981 Kenji Urada, a
37-year-old Japanese factory worker, climbed over a safety fence at a Kawasaki
plant to carry out some maintenance work on a robot. In his haste, he failed to
switch the robot off properly. Unable to sense him, the robot’s powerful hydraulic arm kept on working and accidentally pushed him into a grinding machine. Urada thus became the first recorded person to die at the hands of a robot. This gruesome industrial accident would not
have happened in a world in which robot behaviour was governed by the Three Laws
of Robotics drawn up by Isaac Asimov, a science-fiction writer. The laws
appeared in I, Robot, a book of short stories published in 1950 that inspired a
recent Hollywood film. But decades later the laws, designed to prevent robots
from harming people either through action or inaction, remain in the realm of
fiction. Indeed, despite the introduction of improved safety
mechanisms, robots have claimed many more victims since 1981. Over the years
people have been crushed, hit on the head, welded and even had molten aluminium
poured over them by robots. Last year there were 77 robot-related accidents in
Britain alone, according to the Health and Safety Executive.

More related issues

With robots now poised to emerge from their
industrial cages and to move into homes and workplaces, roboticists are
concerned about the safety implications beyond the factory floor. To address
these concerns, leading robot experts have come together to try to find ways to
prevent robots from harming people. Inspired by the Pugwash Conferences (an international group of scientists, academics and activists founded in 1957 to campaign for the non-proliferation of nuclear weapons), the new group of
robo-ethicists met earlier this year in Genoa, Italy, and announced their
initial findings in March at the European Robotics Symposium in Palermo,
Sicily. "Security, safety and sex are the big concerns," says
Henrik Christensen, chairman of the European Robotics Network at the Swedish
Royal Institute of Technology in Stockholm, and one of the organisers of the new
robo-ethics group. Should robots that are strong enough or heavy enough to crush
people be allowed into homes? Is "system malfunction" a justifiable defence for a robotic fighter plane that contravenes the Geneva Convention and mistakenly fires on innocent civilians? And should robotic sex dolls resembling children be legally allowed? These questions may seem esoteric but in the
next few years they will become increasingly relevant, says Dr. Christensen.
According to the United Nations Economic Commission for Europe’s World Robotics
Survey, in 2002 the number of domestic and service robots more than tripled,
nearly surpassing their industrial counterparts. By the end of 2003 there were
more than 600,000 robot vacuum cleaners and lawn mowers — a figure predicted to
rise to more than 4m by the end of next year. Japanese industrial firms are
racing to build humanoid robots to act as domestic helpers for the elderly, and
South Korea has set a goal that 100% of households should have domestic robots
by 2020. In light of all this, it is crucial that we start to think about safety
and ethical guidelines now, says Dr.
Christensen.

Difficulties

So what exactly is being done to protect us from these mechanical menaces? "Not enough," says Blay
Whitby, an artificial-intelligence expert at the University of Sussex in
England. This is hardly surprising given that the field of "safety-critical
computing" is barely a decade old, he says. But things are changing, and
researchers are increasingly taking an interest in trying to make robots
safer. Regulating the behaviour of robots is going to become
more difficult in the future, since they will increasingly have self-learning
mechanisms built into them, says Gianmarco Veruggio, a roboticist in Italy. As a
result, their behaviour will become impossible to predict fully, he says, since
they will not be behaving in predefined ways but will learn new behaviour as
they go. Then there is the question of unpredictable failures.
What happens if a robot’s motors stop working, or it suffers a system failure
just as it is performing heart surgery or handing you a cup of hot coffee? You
can, of course, build in redundancy by adding backup systems, says Hirochika
Inoue, a veteran roboticist at the University of Tokyo who is now an adviser to
the Japan Society for the Promotion of Science. But this guarantees nothing, he
says. "One hundred percent safety is impossible through technology," says Dr.
Inoue. This is because ultimately no matter how thorough you are, you cannot
anticipate the unpredictable nature of human behaviour, he says.

Legal problems

So where does this leave Asimov’s Three Laws of Robotics? They were a narrative device, and were never actually meant to work in the real world, says Dr. Whitby. Quite apart from the fact that the laws require the
robot to have some form of human-like intelligence, which robots still lack, the
laws themselves don’t actually work very well. Indeed, Asimov repeatedly knocked
them down in his robot stories, showing time and again how these seemingly
watertight rules could produce unintended consequences. In any
case, says Dr. Inoue, the laws really just encapsulate commonsense principles
that are already applied to the design of most modern appliances, both domestic
and industrial. Every toaster, lawn mower and mobile phone is designed to
minimise the risk of causing injury — yet people still manage to electrocute
themselves, lose fingers or fall out of windows in an effort to get a better
signal. At the very least, robots must meet the rigorous safety standards that
cover existing products. The question is whether new, robot-specific rules are needed and, if so, what they should say. "Making sure robots
are safe will be critical," says Colin Angle of iRobot, which has sold over 2m "Roomba" household-vacuuming robots. But he argues that his firm’s robots are, in fact, much safer than some popular toys, and he believes that robots are just like other home appliances and deserve no special treatment.
Robot safety is likely to appear in the civil courts as a matter of
product liability. "When the first robot carpet-sweeper sucks up a baby, who will be to blame?" asks John Hallam, a professor at the University of Southern
Denmark in Odense. If a robot is autonomous and capable of learning, can its
designer be held responsible for all its actions? Today the answer to these
questions is generally "yes". But as robots grow in complexity it will become a
lot less clear cut, he says. However, the idea that
general-purpose robots, capable of learning, will become widespread is wrong,
suggests Mr. Angle. It is more likely, he believes, that robots will be
relatively dumb machines designed for particular tasks. Rather than a humanoid
robot maid, "it’s going to be a heterogeneous swarm of robots that will take
care of the house," he says.