The Future of Human-Robot Interaction
How will humans and machines work together in the future? Will social robots be nothing more than robotic digital assistants like those on smartphones, or are there new opportunities and challenges when we put an AI brain into a robotic body? What are the risks and ethics around AI systems?
Those were some of the topics of a question-and-answer panel held at Xconomy’s Robo Madness West 2016 conference last month. I moderated the panel, which consisted of experts in the complementary fields of robotics, artificial intelligence, and human-computer interaction.
The panelists were JR Alaoui, CEO and co-founder of Eyeris, a deep learning and emotion recognition startup; Rob McHenry, vice president of public sector operations at PARC (and former DARPA program manager); and Leila Takayama, founder of Hoku Labs and a professor of psychology at UC Santa Cruz (formerly of GoogleX and Willow Garage).
Here are some edited highlights from the discussion:
Bo Begole: Now that computers can accurately recognize speech and gestures, we are hearing the promise that interacting with robots will be as natural and easy as interacting with other people. Frankly, I find communicating with humans to be incredibly difficult. So, how much benefit can we really expect from this idea of “natural” interaction?
Leila Takayama: We’ve had this vision of interacting naturally with computers for decades, at least as far back as Apple’s Knowledge Navigator video in the 1980s. But language is ambiguous, and that’s a feature, not a bug: it lets us express concepts efficiently without having to be overly precise. Humans can fill in the gaps and use conversation to elicit more information when there is uncertainty. Computers cannot yet deal well with ambiguity or uncertainty, so “natural” language may not work for mission-critical applications, where we’ll define specialized languages instead. For consumer robots, though, people want to talk naturally, and we need to design casual dialogs that make it easy to express desires without having to give every small detail. Also, language allows for multiple interpretations (e.g., sarcasm, innuendo), and that enables people to save face in social interactions, which is important for everyday conversations.
Begole: Humans express emotion and emphasis when they talk; what’s the importance of having the robots exhibit empathy? Do they need to detect the emotional state of humans in order to know when to be nicer? Why not just design them to be nice all the time?
JR Alaoui: Yes, they should be designed to be polite, but that’s not the only thing that emotion and empathy are for. People can read each other’s feelings in less than a fifteenth of a second, and we need the robot to do that in order to feel it has truly understood us. If I’m in a relaxed mood, it’s delightful if the robot makes a joke. But when I’m in a hurry, I don’t want it to be joking around – it needs to respond quickly and with a sense of urgency. And if I sound uncertain about what I want, or dissatisfied, the robot can offer alternative suggestions.
Begole: The DARPA Grand Challenge in autonomous vehicles paved the way for the autonomous vehicle products that are coming out today. What are some examples of government-funded research in AI and robotics that is happening today that will set the stage for tomorrow’s products?
Rob McHenry: Publicly funded research has always pushed the state of the art in advanced autonomy, which then drives commercial AI. I think many people would be surprised by the capabilities that autonomous defense systems are already demonstrating – capabilities that many might guess wouldn’t be achievable for many years.
For example, DARPA and the Navy are now testing at sea an autonomous ship designed to go “toe-to-toe” against a human adversary in the wild during complex, unconstrained military operations. The ACTUV (Anti-submarine warfare Continuous Trail Unmanned Vessel) program has delivered an unmanned ship that can not only comply with the complex Rules of the Road on the open ocean, but simultaneously track and harass a manned submarine, keeping a step ahead of a highly trained human submarine captain. This is an example of AI that can understand humans, in both competitive and supportive roles.
Begole: What is the future of autonomous defense systems? Is it safe to imagine letting a machine decide when to apply lethal force?
McHenry: Unfortunately, all indicators are that we’re being forced in that direction. As the speed of modern warfare accelerates, the ability for humans to always be in the decision loop is shrinking – we’re necessarily ceding some control to AI. There have been defensive examples like ship missile defense systems for decades, and more recently in jet aircraft controls where the system sometimes must take action before the pilot is even aware of a threat. There is a similar example in high-speed stock trading.
The safety on this is that all of these systems operate within constraints that are set by humans. As the technology advances, those constraints are getting broader – DARPA’s LRASM (Long Range Anti-Ship Missile) program has delivered an AI-enabled cruise missile that can operate independently over thousands of square miles and satisfy the U.S. rules of engagement without an operator in the loop. But a human still has to launch that missile. As long as a human is in the loop to determine intent and constraints, I believe we are operating within a safe framework for lethal force.
Begole: That still sounds alarmingly dangerous for our future.
McHenry: In my mind it comes down to self-initiative. Even the most advanced AI systems imagined today do the things they do because we tell them to. They have no sense of intent or purpose, only task. Maybe that spark of initiative is innately human, but it is at least so far beyond the foreseeable technology horizon as to be a distant theoretical risk at best.
Alaoui: This is why modeling true human cognition is important so that we can imbue these systems with ethics and reasoning. Like a human, the autonomous system should choose to hit a truck rather than a child when faced with such a dilemma. The systems we have today are a far cry from that level of intelligence, and we need to start modeling and programming them with ethics and rational cognition now.
Begole: For humans and AI to work together, is it important to put the AI into the body of a robot or a smartphone, or can it be spread across our devices and in the cloud?
Alaoui: Both. We’re definitely moving in the direction of “ambient intelligence” across the Internet of Things, where all of your devices participate in predicting your needs and delivering solutions proactively. Rather than simply reacting to commands, systems will learn your patterns of action and intervene preemptively to give you what you need: home automation, heating and air conditioning, and also health and nutrition coaching.
Ultimately, we want technology to disappear into the fabric of our lives. Electricity is a great example of that, and it was the vision of the original ubiquitous computing research at PARC to create an environment where using computers doesn’t require special training and is a calm and natural experience.
Takayama: At the same time, human evolutionary wiring defines what we think of as “natural,” and we are used to brains being inside of bodies. So all of our interactions with intelligence are inclined toward an embodied agent – like a robot. The presence of the body makes us project a personality onto it and think of a robot as something of a peer – even more so than apps on a screen or voices from a smartphone.
There has been some great research at Indiana University studying the use of robots versus cameras for older adults, who generally preferred the robot. People are a little creeped out by surveillance cameras in the home, but a camera in a robot is less concerning, partly because it’s easier to remember that it’s there. If you want to sneak a donut, for example, you can hide it from the robot’s eyes.
At the same time, the embodiment of the intelligence makes it a little more persuasive. If the robot says “don’t eat that cookie,” you take it more seriously because the robot is physically there – even if you know it can’t really threaten you. I’ve seen this with telepresence applications too – sure, you can have a video conference on a phone or a laptop, but robotic telepresence is more meaningful because it takes up space the way a real person would. There’s a lot of empirical data showing that even though video conferencing is functionally similar, our evolutionary wiring makes robotic telepresence feel more real and creates a more trusting, human connection.
Begole: Do people really “suspend disbelief” enough to think that the robot is a real being?
Takayama: No, we don’t have to suspend disbelief, because we cannot help it. Our animal brain responds socially to things that exhibit any level of interaction. Just like with our pets, we can readily perceive robots as being adorable, smart, and interesting. We want them to have character and we might as well design for it.
Begole: Are we ultimately at risk of designing robots that replace humans?
Takayama: I don’t think of it that way. There is no point in making machines that replace humans – we already know how to make more humans, so what would that buy us? We need to understand the kinds of tasks that machines are uniquely suited for and design them for those while we humans focus on areas where we excel. AI will not replace humans but will co-exist in an ecosystem that is evolving.
Honestly, the online book recommendations I get are better than those I get from human clerks at even my favorite book stores. Even though those humans probably care more about me and genuinely try to make me happy, they just don’t know enough about what matters to me.
Do the people working in book stores really want to be selling books or would they prefer to read, interpret, and discuss books? It’s just that selling books is a way to make a living close to their passion for books. Maybe there will be careers interpreting books for AIs. I’m not sure, but it’s clear that there are some things that human psychology is better at and some things where the opposite is true. We are at a point now when we are learning the symbiosis of these two kinds of intelligences.
McHenry: I think we’re looking at this all wrong – the opportunity is to design teams of humans and AI that work more productively than either could alone, and productivity gains have always resulted in improvements in our society. We’ll see the balance of the human-computer team evolve as technology advances, but there will always be roles where humans have the advantage.
By the way, we often imagine humans at the top of the pyramid directing robots, but the inverse seems more likely to me. The areas we see AI performing best involve optimization of complex networks that exceed human capacity to manage, and humans already have the nice “end-effectors” of hands and bodies that are so challenging for robotics. Consider that while we’re just seeing autonomous cars become feasible, the routes for human delivery truck drivers have been generated by AI for years.
Begole: What is the industry getting wrong or not paying enough attention to for consumer robotics?
McHenry: There needs to be more attention to cognitive modeling in AI rather than sole reliance on machine learning. Some parts of human reasoning are not observable, and machines cannot learn what they cannot see.
Alaoui: There’s some risk of backlash due to concerns about invasions of privacy. Cognitive modeling is necessary so we can program ethical rules into the reasoning systems. We can have the information compartmentalized with rules about when and how it’s used. That may sound complex, but technologically it’s no harder than the AIs we’re trying to create.
Takayama: First, too many companies are overpromising (and will under-deliver on those promises) – let’s try to be more honest in our concept videos!
Second, the field is still dominated by the whiz and wow of the technology, maybe because it is coming out of industrial robotics, where the value was straightforward: manufacturing things better, faster, and cheaper. But for consumer robots, there hasn’t been enough attention to who it is for, what they will do with it, and why it will matter to them. Consumer robots need different value propositions than industrial ones – more attention needs to be paid to human-centric innovation, to identifying benefits to consumers’ lives, and to the design of valuable user experiences.