Chasing A.I. Breakthroughs: Q&A With Director of New MIT-IBM Lab
For all the progress that has been made in developing artificial intelligence technologies in recent years, the software “brains” that power machines are still nowhere near the sophistication of human minds. And advancing the field will require bold, new ideas, says A.I. researcher David Cox.
Cox (pictured) has spent the past 11 years at Harvard University—first as a research fellow, and later as an associate professor—trying to enhance machine intelligence by studying the intersection of neuroscience and computer science. Now, he’ll continue that work about two miles down the road at IBM’s Kendall Square office in Cambridge, MA. This month, IBM named Cox the director of its efforts in a new, joint A.I. research lab being run in partnership with MIT.
IBM is pouring $240 million over 10 years into the MIT-IBM Watson A.I. Lab announced in September. More than 100 IBM scientists and MIT professors and students will collaborate to advance A.I. algorithms, software, and hardware technologies that could have an impact on sectors like healthcare and cybersecurity, two key areas for IBM’s business. In addition to practical applications of A.I., researchers will also “explore the economic and ethical implications of A.I. on society,” MIT said.
Cox—who got his bachelor’s degree at Harvard and his doctorate at MIT—says he will oversee the lab’s joint research activities, working with his MIT counterparts to choose projects and “shepherd these collaborations as they grow and evolve.” He’ll also continue his own machine learning research, he says.
“I was already collaborating with several labs at MIT even before I took this job, so it is especially natural for much of that work to continue under the auspices of the new lab,” Cox says in an e-mail. (He’s on leave from Harvard, he adds.)
If the joint research lab is successful, it not only could help IBM deliver on its big A.I. bet, but might also generate breakthroughs for a field that is relying heavily on discoveries that are about 30 years old. Both of those are still big “ifs.”
Here is the rest of Xconomy’s e-mail exchange with Cox, in which he shares more details about the lab’s initial projects, his philosophy on how biological brain research can drive A.I. innovations, and what it might take to make leaps in A.I. capabilities.
Xconomy: What are your specific metrics for success with this A.I. lab, both from the perspective of IBM and the broader field?
David Cox: There are many short-term metrics that one might look at, such as scientific papers published or patents filed, but what we are really focused on is achieving breakthroughs in the science of A.I. To be truly successful, we can’t just chase incremental progress; we need to invent new paradigms for artificial intelligence. Today’s deep learning algorithms have achieved truly amazing things, but there’s a growing realization that these algorithms lack the sort of deep understanding of the world that we, as humans, take for granted. Deep learning is likely going to play an important part in that vision, but we also believe that it will require embracing a much broader set of ideas.
“Breakthroughs” [are] obviously easier said than done, but we’ve been given an extraordinarily long runway in which to achieve that goal: IBM’s commitment to the MIT-IBM lab is for 10 years, and that gives us the time horizon to think long term while also meeting shorter-term milestones.
X: Can you give an example or two of other ideas you might embrace to advance A.I., outside of deep learning?
DC: We’re going to be exploring a lot of different areas, but two that are of particular interest are merging classical A.I. techniques (where IBM has great strength) with tools from deep learning, and looking to cognitive science and neuroscience for ideas that we might be missing. These two ideas are intimately related. We are particularly interested in systems that can learn and reason about structured representations of the world.
As an example, many of today’s deep learning systems seem to detect a dog by detecting the texture of dog fur in an image and then guessing that there’s probably a dog in the image. We think it is important to build systems that have deeper knowledge of what a dog is: What parts does it have? How do they fit together? How are they allowed to move relative to each other? How does the dog physically interact with other objects in the world? A dog is an agent in the world with a mind of its own. What is it thinking? What might it do next? How will other agents respond to it?
These ideas extend beyond just visual examples; most interesting data has deep structure and knowledge behind it. Extracting all of this rich structure is something that we humans do effortlessly, but it isn’t easy yet for machines. This is a major gap that we have to close.
X: How will the relationship with MIT work? Any challenges or kinks to work out there?
DC: Most of the research we do will take the form of collaborative projects. MIT and IBM researchers come together to propose joint projects, and the lab’s leadership on both sides (MIT and IBM) comes together to decide which projects to green-light. In some cases, we will make suggestions about how to focus or adjust the scope of a project, but the ideas bubble up from the two organizations organically, as they should. The first batch of collaborative projects is starting right now, and we are grateful to have support from the highest levels of both IBM and MIT.
X: Any details you can share about the first batch of collaborative projects the lab is undertaking?
DC: We have not yet begun sharing information about specific projects, but I can say that the initial batch is primarily focused on …