A.I. in 2019: IBM Exec on Advances in Training, Trust, Transparency
Welser says he and others are working on making machine learning systems’ conclusions more understandable, and he’s hopeful more progress will be made on that front. Coincidentally, Aiera, a financial technology startup working on that problem, raised $3.5 million in seed funding this month to advance its A.I. tools for making stock picks. The startup, with offices in New York and Boston, says it has had success reverse engineering the conclusions its deep learning-based system reaches, so it can explain the software’s investment recommendations in “human-readable” reports.
Making machine learning systems more transparent could also make the tools more useful, Welser says. For example, a financial analyst evaluating a potential business merger might want the software to digest tons of documents and, rather than only spitting out a recommendation, produce a list of reasons why the merger might make sense and a list of reasons why it might not, he says. The idea is to “help with the reality of decisions most of us make all the time, which aren’t simply yes or no answers,” Welser says.
He says clearer explanations of machine learning models’ decisions could be an important step toward overcoming another long-standing challenge in A.I.: helping machines understand the difference between correlation and causation. Currently, A.I. systems mostly make decisions based on patterns that are correlations, Welser says. The rooster crows and the sun rises: that’s a correlation. Someone flips a light switch and a light bulb turns on: that’s causation.
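To make that distinction concrete, here is a minimal toy simulation (an illustration for this article, not anything from IBM's systems; the `world` function, the rooster and switch setup, and all the probabilities are invented). Observing the rooster crow strongly predicts that the sun is up, because a hidden common cause, the hour of day, drives both. But intervening, forcing the rooster to crow, does nothing to the sun, while flipping the switch really does control the bulb.

```python
import random

random.seed(0)

def world(switch=None, rooster=None):
    """One observation of a toy world. Passing `switch` or `rooster`
    forces that variable (an intervention); None means observe naturally."""
    hour = random.randrange(24)
    if rooster is None:
        rooster = (hour == 6)          # the rooster crows at dawn
    sun_up = 6 <= hour < 18            # caused by the hour, not the rooster
    if switch is None:
        switch = random.random() < 0.5
    bulb = switch                      # directly caused by the switch
    return rooster, sun_up, switch, bulb

N = 10_000

# Observationally, crowing predicts the sun: P(sun_up | rooster) is high.
obs = [world() for _ in range(N)]
crow = [o for o in obs if o[0]]
p_sun_given_crow = sum(o[1] for o in crow) / len(crow)

# Intervene: force the rooster to crow. Sunrise probability is unmoved.
forced = [world(rooster=True) for _ in range(N)]
p_sun_given_do_crow = sum(o[1] for o in forced) / N

# Intervene on the switch: the bulb always follows. That is causation.
lit = [world(switch=True) for _ in range(N)]
p_bulb_given_do_switch = sum(o[3] for o in lit) / N

print(p_sun_given_crow)        # 1.0 (in this toy world, crowing only happens at dawn)
print(p_sun_given_do_crow)     # roughly 0.5, forcing a crow doesn't raise the sun
print(p_bulb_given_do_switch)  # 1.0 (flipping the switch always lights the bulb)
```

A pattern-matching learner trained only on the observational data would treat the rooster and the switch identically; only the interventional runs separate them, which is why Welser frames causal understanding as a route to more robust, less data-hungry systems.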
Getting A.I. software to understand what causes outcomes is “probably the best bet to figure out ways not only to get away from [needing] so much data to train it on, but also have it be more robust—and also have it find the same connections that a person would find in the data,” Welser says.