As top lawyers from Facebook, Google, and Twitter testify on Capitol Hill this week about Russians using their platforms to interfere in the 2016 U.S. presidential election, it’s becoming clearer that the fight against the spread of fake news won’t be won by humans alone.
That’s according to Yann LeCun, a machine learning expert and Facebook’s director of A.I. research. The question is what role technology will play, and how long it will take to develop effective tools.
Bloomberg reported this week that PolitiFact, the Associated Press, and the other outside organizations Facebook has enlisted to fact check articles are struggling to handle the massive volume of fake news on its platform, and some sources say the current process Facebook has set up is too “cumbersome and inefficient” to fix the problem. In a statement to Bloomberg, Facebook said the third-party checks are part of a multi-faceted effort that is making progress. The company’s top lawyer told senators that Facebook would double the number of human workers on its safety, security, and community issues teams to 20,000 by the end of 2018.
Artificial intelligence, meanwhile, will play an increasingly important part in tackling fake news, but humans will likely always be involved, even as A.I. technologies improve, said LeCun, also a professor at New York University.
Xconomy spoke with LeCun on Wednesday after he gave a talk on MIT’s campus during a business conference called “AI and the Future of Work.” Asked about fake news, LeCun stressed that he’s not an expert and that he’s not currently developing any algorithms or other technologies specifically aimed at solving the problem. He didn’t comment on Facebook’s efforts.
But generally speaking, LeCun said A.I.’s usefulness in rooting out fake news will grow as natural language understanding technology improves. That can help with things like translating content and classifying articles by topic, he said.
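To make "classifying articles by topic" concrete, here is a minimal, illustrative sketch of the idea using a bag-of-words Naive Bayes classifier in plain Python. This is not Facebook's or LeCun's system; the example topics and articles are made up, and production systems use far larger corpora and neural language models.

```python
from collections import Counter, defaultdict
import math

def train(examples):
    """examples: list of (text, topic) pairs. Returns doc counts per topic
    and word counts per topic for a Naive Bayes model."""
    topic_docs = Counter()
    word_counts = defaultdict(Counter)
    for text, topic in examples:
        topic_docs[topic] += 1
        word_counts[topic].update(text.lower().split())
    return topic_docs, word_counts

def classify(model, text):
    """Return the topic with the highest log-probability for the text."""
    topic_docs, word_counts = model
    total_docs = sum(topic_docs.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best_topic, best_score = None, float("-inf")
    for topic in topic_docs:
        score = math.log(topic_docs[topic] / total_docs)
        total_words = sum(word_counts[topic].values())
        for w in text.lower().split():
            # Laplace smoothing: unseen words don't zero out the score.
            score += math.log((word_counts[topic][w] + 1) /
                              (total_words + len(vocab)))
        if score > best_score:
            best_topic, best_score = topic, score
    return best_topic

# Hypothetical training snippets, one line per "article."
model = train([
    ("election senate vote campaign", "politics"),
    ("senators hearing testimony capitol", "politics"),
    ("neural network training data model", "technology"),
    ("machine learning algorithm software", "technology"),
])
print(classify(model, "the senate vote on the campaign"))
```

On this toy data, the sample query is assigned "politics," since its words appear only in the politics training snippets.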
“There is steady progress” being made, LeCun said. He thinks there could be a “quantum leap” if researchers figure out how to train machines to learn the way humans and animals do, endowing them with something akin to what we consider “common sense.” “But that might take a really long time,” he added, perhaps decades.
“Assessing the reliability of a piece of news is really hard because who is to say?” LeCun continued. “There is a technology part of it, but it’s also very much a kind of product design, if you will—what sources people are ready to believe when signaling a piece of news as true or false.”
Ultimately, LeCun thinks the solution will involve a combination of A.I. systems and humans “because we need trust in the result.”
Meanwhile, here are three highlights from LeCun’s presentation:
—Fighting bias: One of the concerns about machine learning systems is that they can exhibit biases acquired through the data used to train them.
“That might have very dire societal consequences if that problem is not dealt with in appropriate ways,” LeCun said. “I’m personally optimistic on that account. I think it’s going to be easier to remove biases from machines than it ever was from people.”
—Opening up black boxes: LeCun pushed back against what he called the “myth” that artificial neural networks “are black boxes—that we can’t really understand what works inside.”
“That’s not true, we can,” he said. “There’s all kinds of techniques for this. The reason they’re not used is because they’re not that useful, at least with the use cases [for neural networks] that we have today.”
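One family of techniques in this vein (an illustrative example of the kind of inspection LeCun alludes to, not one he named) is input-gradient saliency: for a given input, compute how sensitive the model's output is to each input feature. For a single logistic unit y = sigmoid(w·x + b), the gradient is dy/dx_i = y(1 − y)·w_i, so features with larger |dy/dx_i| have more local influence on the prediction. The weights below are made up for the sketch.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def saliency(weights, bias, x):
    """Gradient of a logistic unit's output with respect to each input
    feature: dy/dx_i = y * (1 - y) * w_i."""
    y = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
    return [y * (1.0 - y) * w for w in weights]

weights = [2.0, -0.1, 0.5]   # hypothetical learned weights
bias = 0.0
x = [1.0, 1.0, 1.0]
grads = saliency(weights, bias, x)
# The feature with the largest-magnitude gradient drives the prediction most.
print(max(range(len(grads)), key=lambda i: abs(grads[i])))
```

For deep networks the same idea is applied by backpropagating the output to the input, yielding a per-feature (or per-pixel) saliency map.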
—Valuing human creativity: Society tends to value goods and experiences crafted by humans more than those made by machines. LeCun gave the example of a Blu-ray player—a complicated piece of technology, largely made by robots—which costs around $50. Meanwhile, a handmade ceramic bowl, created using methods that date back 8,000 years, can cost upwards of $750, LeCun said.
That dynamic won’t change, even as software and devices get more sophisticated, LeCun argued.
“We can build machines that will do jazz improvisation, but they’re not going to communicate human emotion; they might fake it pretty well,” he said. “So, the prediction is that there’s a relatively bright future for creative and artistic professions that have an authentic human application in them and [involve] creation and emotion communication.”