A.I. in 2019: IBM Exec on Advances in Training, Trust, Transparency
[Editor’s note: This is part of a series of posts sharing thoughts from technology leaders about 2018 trends and 2019 forecasts.]
Researchers are hopeful the industry’s next breakthroughs will enable A.I. to move beyond answering narrow queries to handling broader assignments. But getting there will require designing software that can “think” and communicate more like humans—big challenges the field has grappled with since its inception.
“I think we’ve gotten to a point where [A.I.] error rates are better than humans”—at least in some contexts—“and that’s good,” says Jeff Welser (pictured above), vice president of IBM Research and director of its lab in San Jose, CA. “But now we’ve reached a realization that that’s so much further away from what [we] really need to make this useful in an actual business context.”
With 2018 coming to a close, Welser and I chatted by phone about some of the technology advances in A.I. this year, and where things might be headed in 2019.
Some of the most interesting A.I. research projects that IBM (NYSE: IBM) worked on this year involved methods for speeding up the training of machine learning systems. For example, one effort aimed to develop a visual recognition model that could correctly identify new objects after being trained on just one or a few examples. Such systems are generally more accurate when trained with thousands or millions of examples. But that method requires a lot of manual work by humans to label all the samples so the software learns, say, this is a cat and this is not, Welser says. “That’s where we find the biggest bottleneck in terms of scaling up in training” machine learning software, he adds.
It’s a problem that other big tech companies such as Google (NASDAQ: GOOGL) and small startups such as Indico have tried to tackle in various ways in recent years.
With the “few-shot” learning method IBM developed, the idea is that the human acts more as a hands-off teacher, giving a short initial lesson and then checking the machine’s work. Let’s say the developer wants to teach the software to scan documents for mentions of banks. The software would be shown a few example names, and then it would scan a ream of documents and study how and where those example names are used, and what other words appear in that context, Welser says. Then it tries to identify other bank names it wasn’t taught. A person can then scan a list of the words the system thinks are banks, and quickly give the software feedback on which answers it got right and which ones it got wrong. Welser claims such a machine learning system could become quite accurate within a few hours of this sort of back and forth.
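To make the workflow above concrete, here is a minimal, illustrative sketch of that kind of seed-and-feedback loop—emphatically not IBM’s actual system. All data, names, and the scoring heuristic (matching candidates by shared context words) are invented for illustration:

```python
# Illustrative sketch of the few-shot labeling loop described above.
# NOT IBM's method -- the corpus, seed names, and context-overlap
# heuristic are all invented assumptions for this example.

from collections import Counter

def context_words(tokens, index, window=2):
    """Lowercased words within `window` positions of tokens[index]."""
    lo, hi = max(0, index - window), min(len(tokens), index + window + 1)
    return [t.lower() for i, t in enumerate(tokens[lo:hi], start=lo) if i != index]

def find_candidates(docs, seeds, window=2, min_overlap=1):
    """Score capitalized tokens by how often their context matches the seeds'."""
    # Step 1: learn which words surround the seed names.
    seed_context = Counter()
    for doc in docs:
        tokens = doc.split()
        for i, tok in enumerate(tokens):
            if tok in seeds:
                seed_context.update(context_words(tokens, i, window))
    # Step 2: score other capitalized tokens by context overlap.
    scores = Counter()
    for doc in docs:
        tokens = doc.split()
        for i, tok in enumerate(tokens):
            if tok[:1].isupper() and tok not in seeds:
                overlap = sum(seed_context[w] for w in context_words(tokens, i, window))
                if overlap >= min_overlap:
                    scores[tok] += overlap
    return scores

docs = [
    "Citibank reported higher deposits this quarter",
    "Barclays reported higher deposits this year",
    "Analysts said Reuters covered the story",
]
seeds = {"Citibank"}
candidates = find_candidates(docs, seeds)
# A human reviewer would now confirm or reject each candidate;
# confirmed names join the seed set for the next pass.
```

In this toy run, “Barclays” surfaces as a candidate because it appears in the same context as the seed “Citibank,” while “Reuters” does not; the human’s yes/no feedback on that short list is what stands in for hand-labeling thousands of examples.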
Another potential benefit of this approach is that, theoretically, it could combat bias in the machine learning algorithms—a problem that has dogged the industry—because humans would be able to more closely watch how the system is learning as it goes along, Welser says. “We hope that could make it even less biased, but I would say at this point we don’t necessarily know that’s the case,” he adds.
Increasing transparency and trust in A.I. algorithms were other big themes in A.I. this year, Welser says. In another recent project, IBM says its researchers devised a way to construct simpler machine learning models that are still accurate, but make it easier to understand how the models arrived at their conclusions. The A.I. industry has struggled with a “black box” problem: advances in “deep learning” systems have generated more powerful A.I. tools, but the so-called neural networks that underpin them, like the human brain, are complex and not easily interpreted.
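The simpler-but-interpretable trade-off can be illustrated with a toy example—again, not IBM’s technique. A one-rule “decision stump” is far less flexible than a neural network, but every prediction it makes comes with a human-readable explanation (the hypothetical loan-approval data below is invented):

```python
# Toy illustration of an interpretable model -- not IBM's method.
# A decision stump picks a single feature/threshold rule, so its
# reasoning can be printed in one sentence, unlike a neural network.

def fit_stump(samples, labels):
    """Pick the (feature, threshold) rule that best separates the labels."""
    best = None
    for f in range(len(samples[0])):
        for threshold in sorted({s[f] for s in samples}):
            preds = [s[f] >= threshold for s in samples]
            accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if best is None or accuracy > best[0]:
                best = (accuracy, f, threshold)
    return best

def explain(stump, sample, feature_names):
    """Render the model's reasoning for one prediction as plain text."""
    _, f, t = stump
    verdict = sample[f] >= t
    op = ">=" if verdict else "<"
    return f"{feature_names[f]} = {sample[f]} is {op} {t} -> {verdict}"

# Hypothetical loan-approval data: (income, debt_ratio) -> approved?
samples = [(30, 0.9), (80, 0.2), (60, 0.3), (25, 0.8)]
labels = [False, True, True, False]
stump = fit_stump(samples, labels)
```

Here the fitted rule (`income >= 60`) classifies the toy data perfectly, and `explain(stump, (70, 0.5), ["income", "debt_ratio"])` spells out exactly why a given applicant was approved—the kind of transparency a deep network cannot offer out of the box.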
Welser thinks that in 2019, A.I. developers will increasingly focus on …