From the FMS Global & UK News Desk of Jeanne Hambleton
Courtesy of blog.wired.com/wiredscience.
By Brandon Keim (firstname.lastname@example.org) December 18, 2008 Categories: Bioethics, Robotics
Babysitting robots, once the province of speculative fiction, are now on the market. They make conversation, recognize faces and keep track of kids. They are marketed as a replacement not for TV or games but for personal care, and some researchers worry that children will be harmed.
“If you leave a small child in front of the TV, you have to keep popping in to make sure they are OK. But these are so safe that people will eventually leave their children in the care of robots,” said Noel Sharkey, a University of Sheffield roboticist.
Sharkey’s concerns, voiced in an editorial, “The Ethical Frontiers of Robotics,” published in Science, come at a potentially historic intersection between robotics and parenting.
Models now on the market range from the Hello Kitty robot — “perfect … for whoever does not have a lot time to stay with child,” proclaims a vendor — to NEC’s PaPeRo, which tells jokes, gives quizzes and uses radio-frequency identification chips to track kids. In another generation, these sophisticated machines will likely seem quaint.
Personal service robots are already more common than industrial robots (an estimated 5.8 million are now in use, five times the number in industry), and people are happy to use them for tasks once performed by humans. One survey of public attitudes toward robots found that many respondents were willing to use them as babysitters — more, in fact, than would use robots as priests or massage therapists.
“What would happen if a parent were to leave a child in the safe hands of a future robot caregiver almost exclusively?” wrote Sharkey. “The truth is that we do not know what the effects of the long-term exposure of infants would be.”
Sharkey does, however, take instruction from psychologist Harry Harlow’s famous and controversial experiments — research that would now be considered too unethical to conduct — on the importance of maternal care for monkeys, and ostensibly people: monkeys nursed by inanimate surrogates grew up withdrawn and socially dysfunctional.
Roboticist Ronald Arkin of Georgia Institute of Technology agrees that robots will affect people. “This stuff absolutely warrants further study,” he said. “People’s behavior is going to change as these artifacts are introduced. We see that with previous technologies, too — TV, the internet, the VCR.”
Arkin is, however, less immediately concerned than Sharkey, and willing to wait for research results before being alarmed. “We do not have to be fearmongers, but we do need to study them intelligently and rationally,” he said.
But Sharkey is worried that sound science is impossible. Commercial robot makers, he said, are “doing experiments showing positive results by introducing them into schools for two or three hours a day. Children love them. But what we cannot do, scientifically, is long-term studies with isolated children.”
The studies needed to directly measure the effects of robot care on children would themselves be unethical.
To Clifford Nass, director of Stanford’s Communication Between Humans and Interactive Media Lab, Sharkey’s and Arkin’s concerns are ultimately practical ones. A more fundamental question is posed by the use of robots to care for children.
“The question is, if robots could take care of your children, would you let them?” he said. “What does it communicate about our society that we are not making child-care a number-one priority?”
Nass pointed out that surveys show people are least willing to use robots as massage therapists, even though robots could make excellent masseurs. The reason, he said, is the meaning of a massage.
“There are some things you do for symbolic reasons, not technical reasons,” he said.