IU Researchers Create Tools to Track and Combat Fake News

Obama is a human being, and so is Joseph Stalin. Since Stalin was a communist, an algorithm might be tempted to conclude that fellow human being Obama is one, too. “We wanted to make sure stuff like that doesn’t trick the algorithm,” Ciampaglia adds.
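One intuition behind guarding against that trap can be sketched in code. The snippet below is a minimal, illustrative sketch, not the IU team's actual method: it scores paths through a toy knowledge graph and penalizes paths that route through very generic, high-degree nodes (like "human"), so a chain such as Obama → human → Stalin → communism counts as weak evidence while a direct, specific link counts as strong evidence. The graph, node names, and scoring formula are all made up for illustration.

```python
import math

# Toy undirected knowledge graph: node -> set of neighbors.
# "human" is a generic hub connected to many unrelated people.
GRAPH = {
    "Obama": {"human", "Democratic Party"},
    "Stalin": {"human", "communism"},
    "human": {"Obama", "Stalin", "Einstein", "Curie", "Gandhi"},
    "communism": {"Stalin"},
    "Democratic Party": {"Obama"},
    "Einstein": {"human"},
    "Curie": {"human"},
    "Gandhi": {"human"},
}

def path_score(path):
    """Score a path between two entities. Intermediate nodes with many
    neighbors (generic concepts) drag the score down, reflecting the
    intuition that specific connections are stronger evidence than
    generic ones."""
    if len(path) == 2:  # a direct edge is maximal support
        return 1.0
    # Penalize by the log-degree of each intermediate node.
    penalty = sum(math.log(len(GRAPH[n])) for n in path[1:-1])
    return 1.0 / (1.0 + penalty)

# A claim supported only through the generic "human" hub scores low...
weak = path_score(["Obama", "human", "Stalin", "communism"])
# ...while a direct, specific edge scores high.
strong = path_score(["Obama", "Democratic Party"])
print(weak < strong)  # True
```

The penalty term is what keeps the algorithm from being "tricked": the more things a node connects to, the less any single path through it proves.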

So the IU team did further question-answering exercises, like trying to create an algorithm to match U.S. presidents with the correct First Ladies. Ciampaglia says there were many pitfalls to overcome in that exercise, not least of which was teaching the algorithm to figure out the nature of the relationship between Barbara Bush and George W. Bush (she was his mother, not his First Lady). Next, they tried matching U.S. states to capitals, and Oscar winners to the correct award category.

“We got very encouraging results then,” he says, adding that the team is optimistic that many facets of fact-checking can eventually be automated, thereby making it easier and faster to detect misinformation. “But we’re still far from a machine that understands natural language.”

IU researchers hope to integrate the tools they’ve created with software used by fact-checking organizations to make the job of verifying information less tedious and more automatic. All of the tools being developed by OSoMe are open source, and, at the moment anyway, purely for research (and not commercial) purposes.

While IU researchers delve deeper into fact-checking, how can people mitigate the effects of fake news and misinformation? Ciampaglia says getting out of your echo chamber is the first step. “Social media amplifies cognitive bias. Modeling helps us understand how misinformation spreads—if you change the parameters, will that affect how it spreads? We’re seeing that if people are bombarded with information, they lose critical-thinking skills.”
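The kind of modeling Ciampaglia describes, changing a parameter and seeing whether it changes how misinformation spreads, can be illustrated with a toy simulation. The sketch below is hypothetical, not OSoMe's model: a story is seeded with one user on a random friendship network, each exposed user reshares it to each friend with some probability, and raising that probability flips the cascade from fizzling out to reaching much of the network. All sizes and parameters are arbitrary.

```python
import random

def simulate_spread(n_users, avg_friends, reshare_prob, seed=0):
    """Independent-cascade-style simulation on a random friendship graph.
    Returns the fraction of users ultimately exposed to one seeded story."""
    rng = random.Random(seed)
    # Each user gets a random set of friends.
    friends = [rng.sample(range(n_users), avg_friends) for _ in range(n_users)]
    exposed = {0}      # the story starts with user 0
    frontier = [0]
    while frontier:
        nxt = []
        for user in frontier:
            for f in friends[user]:
                # Each newly exposed friend reshares with fixed probability.
                if f not in exposed and rng.random() < reshare_prob:
                    exposed.add(f)
                    nxt.append(f)
        frontier = nxt
    return len(exposed) / n_users

# Average over several random networks to smooth out noise.
low = sum(simulate_spread(2000, 8, 0.05, seed=s) for s in range(10)) / 10
high = sum(simulate_spread(2000, 8, 0.30, seed=s) for s in range(10)) / 10
print(low, high)
```

With a low reshare probability each exposure triggers fewer than one new exposure on average and the story dies out; above that threshold it cascades, which is the parameter sensitivity the modeling is meant to probe.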

He’s happy to see Facebook, Google, and Twitter beginning to crack down on the proliferation of fake news, although he correctly notes the issue is not how many fake articles are out there, but rather how much exposure they get.

“The real concern is the financial incentive to produce misinformation—it’s very easy to build a professional website, and the media landscape is so huge that it’s hard to tell what’s fake,” he says. “Cutting incentives will reduce the problem, but it won’t solve the root cause, which has more to do with polarization of society and the unintended consequences of personalized news. It can’t be solved purely through technology, but I’m moderately optimistic.”

Sarah Schmid Stevenson is the Custom Content Editor for Xconomy Insight. You can reach her at 313-570-9823 or sschmid@xconomy.com.
