Roboticists covered a sweeping range of topics at Xconomy’s annual Robo Madness West conference last week, from the ethics of artificial intelligence to the powerful impact of robots that have faces.
Two themes ran through all the panel discussions, whether they focused on robot design, logistics and manufacturing, drones, or artificial intelligence. Speakers repeatedly brought up:
Hardware—The opportunity to make a killing by building cheaper, better versions of certain components.
People—In various roles, they pose some of the thorniest challenges to the growth of the robotics/AI sector.
Let’s talk about hardware first. While robot developers are benefiting from the availability of some cheap, commodity components to build their prototypes, there seems to be plenty of room for innovation, according to the Robo Madness panelists. Once robotics companies start to manufacture at scale, they need to keep costs down on all their parts.
Some modules are now so inexpensive that they can be bought by the scoopful, Rosanna Myers, CEO of Carbon Robotics, says. (Her company makes a modular, trainable robotic arm.) But progress has been disappointingly slow on basic nuts-and-bolts items like motors, gearboxes, and batteries, says Paul Birkmeyer, co-founder and chief technology officer of San Francisco-based Dishcraft Robotics, a maker of robots for commercial kitchens.
The auto industry is going to drive the market for sensors by deploying millions of them in coming years, Fetch Robotics CEO Melonee Wise says. “If you can make low-cost sensors, you can probably become extremely wealthy.”
Other items on the panelists’ wish lists: inexpensive laser scanners, laser radars, and a harmonic drive (a type of gearing system) for $100.
People are also necessary components of a robotics/AI economy, but a workforce deeply versed in the field has yet to emerge, the panelists said. Skilled engineers who also have experience with robots are in short supply, Wise and Myers say. Officials at regulatory agencies, such as the Federal Aviation Administration, are still learning as they frame ground rules for drones, self-driving vehicles, and other technologies. And the businesses that might profit most from adopting robot technologies are not quite up to speed.
“Very few customers have the depth of knowledge to evaluate us in comparison to a competitor,” Wise says. San Jose, CA-based Fetch makes wheeled robots that can work side by side with humans to collect and transport items within warehouses.
People are also at the heart of an ongoing concern about robots that came up across various sessions at Robo Madness: Will robots rob human workers of their jobs?
Myers says her company’s robotic arms can relieve factory workers of repetitive and potentially dangerous tasks. Chief designer Adrian Canoso of Santa Clara, CA-based Savioke says the company’s Relay robots have run about 30,000 errands in hotels, like delivering toothbrushes to guests. He says the machines take the pressure off busy staffers at the front desk.
“We’re happy to say that no one’s been replaced by robots,” Canoso says. (The Relay robot is pictured above at the panel on Robot Design.)
But Dishcraft CTO Birkmeyer says robots are becoming increasingly capable, and societies will need to prepare by educating people for “what they’re good at.”
Already, the speed of decision-making in fields such as defense and finance exceeds human capacity, says Rob McHenry, vice president of public sector operations at Palo Alto, CA-based research services company PARC, a Xerox company. High-speed trading on stock exchanges requires artificial intelligence, he says. “We’re being forced to concede control.”
But so far, it’s still a human being who understands the context of the task and designs the strategies, McHenry says. “I’ve never seen a robot that can set its own objective.”
As they adapt robots and artificial intelligence to enable machines to work together with people, tech entrepreneurs are confronting the fact that human beings have certain hard-wired design elements. These include habitual thoughts, fears, sympathies, and behaviors.
A robot like Relay that has a physical body—even though the short machine looks something like a rolling pedestal—can seem creepy roaming hotel corridors if guests don’t know why it’s there, Canoso says. So Savioke has the robot announce, “I’m a delivery robot.” The guests’ perceptions change, he says.
“Getting into an elevator, people will step aside because they have empathy for the robot,” Canoso says.
Humans expect intelligent agents to have human-like traits and imperfections, says Leila Takayama, who has analyzed human-robot interaction at PARC, Google, Willow Garage, Nokia, and her own company, Hoku Labs in Palo Alto, CA. One company that benefits from these human expectations is Bedford, MA-based iRobot, maker of the autonomous vacuum cleaner called the Roomba, which is roughly the shape of a bathroom scale.
“People love Roomba because it bumbles around and bumps into walls,” Takayama says. They’re thinking, “Isn’t that adorable?” she says.
But human expectations can sometimes get in the way if robot makers don’t take them into account in their designs, Takayama says. People may be distracted when dealing with a somewhat human-looking robot if they can’t figure out its gender, she says.
Cory Kidd, CEO of San Francisco-based Catalia Health, says his company delved into human psychology as it developed its social robot, Mabu, which is designed to be a health coach for patients. Mabu looks like a bald, pale yellow, alien child with big, mobile eyes and a simple dent for a mouth. It lives with patients and speaks to them, encouraging them to go outdoors and reminding them to take their medicines, for example.
In clinical trials of the device, all the participants named their Mabu, and many dressed them, Kidd says. “They were wearing hats and scarves, and one had a red feather boa hanging around her neck,” he says.
Catalia has concluded that robots with faces are far more effective in nudging people toward better health habits than mobile apps on screens, even when the digital personal assistants speak, Kidd says.
But will intelligent robots always need to be physically embodied in one place? That was one of the interesting questions raised by Bo Begole, global head of the Media Lab at Huawei Technologies. Begole moderated a panel on AI and human-machine interaction.
Begole and other panelists envisioned users having a relationship with a single AI that lives as an ambient intelligence floating across the person’s various devices, drawing on data from many sensors to anticipate the user’s needs. Another possibility is that a person, or a family, could interact with a collection of different AIs serving various purposes.
Could you then end up with conflicts among the AIs helping different family members? Begole wonders.
The prospect of a barrage of messages from “hundreds of chatbots talking to us is an argument for a single point of interaction,” Takayama says.
Begole raised another intriguing question: Could advanced robots ever develop the ability to be creative, or in some way original?
An emphatic “No” came from JR Alaoui, CEO of Palo Alto, CA-based Eyeris, which trains machines to read human micro-expressions.
“There is no standard way for people to be creative,” Alaoui maintains. “If there is no standard, you cannot train an AI to be creative.”