figure out how to build more powerful conversational agents, so that computing can finally move off the screen and into our daily lives.
Ever since 1988, when William Gibson published Mona Lisa Overdrive, the final novel in his famous cyberpunk trilogy, I’ve been waiting for a real-world version of Continuity. In the book, Continuity was a conversational expert system, or maybe a self-aware artificial intelligence, plugged into a vast global database. From her networked house, Angie, the book’s protagonist, could call it up by name:
“Do you know how to reach Hans Becker?”
“I have his agent’s number in Paris.”
And so forth. Continuity could do research for Angie, make videos, even write books. What I found intoxicating about Gibson’s idea was not just the concept of a global knowledge base (and remember, this was a decade before Google, and 13 years before Wikipedia)—it was the idea that you’d be able to talk to it.
Today, the big players in tech are investing billions in conversational and voice-activated virtual assistants. Microsoft has Cortana, Amazon has Alexa, Google has OK Google, and Apple has Siri. In addition, Facebook is helping companies create chatbots for Facebook Messenger, and Siri’s original creators are working on a new, more context-aware system called Viv. Conversational interfaces are hot.
That said, none of these assistants are anywhere near the point where they can hold up their end of an actual conversation. That would require what computer scientists call “strong AI” or “artificial general intelligence,” which is likely decades away.
But that’s okay, because frankly, Continuity-style AI is overkill. We don’t need our laptops or smartphones to be our friends. We just need them to understand our needs, and to be better at tapping into all of the digitally mediated services that surround us.
Once Siri gets smart enough to take over many of the tasks that currently force me to poke at a tiny screen with my fat fingers, you can bet I’ll buy a new iPhone. At that level of intelligence, it won’t matter as much whether the iPhone has a beautiful edge-to-edge OLED screen, because we won’t need to pull our phones out of our pockets as often.
From this point of view, the alleged disappearance of the headphone jack on the iPhone 7 begins to make sense. If you’re going to be talking with your iPhone a lot more, you’ll need to be able to hear it, and it will need to hear you, even when there’s lots of noise around. That means you’ll need an earbud, and that earbud should work whether your phone is in your pocket or on the other side of the room, which means it needs to be cordless.
If you saw the 2013 movie Her, none of what I’m describing will be novel to you. In the movie, Theodore (Joaquin Phoenix) still carried a little smartphone-like device, about the size of a business card holder. But it acted only as a camera and a wireless conduit to his personal cloud-based AI, Samantha (Scarlett Johansson). Theodore and Samantha communicated mainly through his earpiece, which looked like a stylish hearing aid.
Bluetooth earbuds are a real thing now, so in hardware terms, the movie wasn’t all that prescient. Cutting the cord is the first step toward getting people to think of their smartphones and their earbuds as an always-there, connected ecosystem. (Once you’ve got such a system, you don’t need a separate voice-assistant device like Amazon’s Echo, which is why I’m skeptical about reports that Apple is developing one.)
I wish I could predict how many more steps will be needed before Siri becomes a truly reliable and versatile personal assistant. In a 2014 column called “Welcome to the Seven-Year Technology Pause,” I noted that big technology advances tend to take about 15 years to play out. If the iPhone’s introduction in 2007 kicked off one such cycle, I wrote at the time, then we were still seven or more years away from the next big hardware-software convergence. Given the gradual pace of recent progress on interfaces, that may not be a bad estimate.