IU Researchers Create Tools to Track and Combat Fake News

We’re now living in an era where, thanks to the Internet, we have easier access to more raw information than at any other time in history. Yet despite these reams of data, facts have seemingly never mattered less in public discourse. Now, a team at Indiana University is hard at work trying to figure out why, and what can be done to reverse the trend.

The Internet was supposed to be a great, democratizing force for knowledge, learning, and news dissemination. But alongside new educational and journalistic paradigms, it has given us incidents like the one in Washington, DC, this past weekend, where a disturbed gunman stormed into a pizzeria in search of pedophiles.

The incident was traced back to a leaked fundraising e-mail from Hillary Clinton’s campaign manager that was transmogrified by trolls on 4chan—despite a complete lack of real-world evidence—into reports of a child sex ring run by the Democratic Party. (Vox does a good job of explaining how the conspiracy theory went viral from there.)

Now that it’s becoming clear that fake news influenced the way people voted for president, the trend has weighed heavily on the national consciousness. Once the annoying province of dittoheads and racist uncles, fake news has become commonplace on social media and is being taken as gospel by a growing number of Americans.

Facebook, where a reported 44 percent of Americans get their news, has been in hot water for how it manages so-called trending topics, and the way its algorithms help promote fake news. In an October report, BuzzFeed found that one of Facebook’s more popular promoted posts was a humdinger about how George W. Bush and Barack Obama conspired to rig the 2008 election. BuzzFeed also found that 38 percent of posts on conservative pages and 19 percent of posts on liberal pages featured false or misleading content.

So, if the website where so many Americans get their news is rife with misinformation, what can be done to combat fake news? Professor Filippo Menczer leads a team of researchers at IU’s Center for Complex Networks and Systems Research that is trying to find out. In May, the center launched the Observatory on Social Media (OSoMe), a set of digital tools and a corresponding application programming interface (API) for anyone seeking to analyze social media trends.

Giovanni Luca Ciampaglia, an assistant research scientist with the center, says the purpose of OSoMe is to demonstrate that computational techniques can automate parts of fact-checking. The tools include Trends, which people can use to study the use of hashtags over time; Networks, which offers visualizations of retweets, mentions, and quotes; Map, which shows where conversations happen; Movies, which shows how conversations about a meme unfold over time; and BotOrNot, which helps determine whether a social media account is run by a human or software.
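
To make that concrete, here is a minimal sketch of what a client for such an API might look like. Everything below is a placeholder invented for illustration; the host, endpoint, and parameters are assumptions, not OSoMe’s documented interface, for which the project’s own documentation is the authority.

```python
import requests

# Hypothetical illustration only: the host, endpoint path, and parameters
# below are placeholders, not OSoMe's documented API.
BASE_URL = "https://osome.example.edu/api"

def hashtag_trend(hashtag, start, end):
    """Fetch a (hypothetical) daily tweet-count time series for a hashtag."""
    resp = requests.get(
        f"{BASE_URL}/trends",
        params={"hashtag": hashtag, "start": start, "end": end},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # assumed shape: {"2016-12-01": 1523, ...}

if __name__ == "__main__":
    series = hashtag_trend("pizzagate", "2016-11-01", "2016-12-05")
    for day, count in sorted(series.items()):
        print(day, count)
```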

Ciampaglia says the toolkit is designed to help the general public get a better understanding of how information spreads on social media. “Even just using public resources like Wikipedia, we can extract a lot of knowledge that can be used for fact-checking,” he explains.

When the researchers initially tried to automate fact-checking, they began by programming an algorithm to pull information from the “infoboxes” on Wikipedia that summarize the primary facts on any given topic. Those infoboxes usually contain links to other information, Ciampaglia points out.
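
As a rough illustration of that first step, the sketch below fetches an article’s wikitext through the public MediaWiki API and reads off its infobox fields with the mwparserfromhell parsing library. It is a simplified stand-in for the general idea, not the IU team’s actual extraction pipeline.

```python
import requests
import mwparserfromhell  # pip install mwparserfromhell

def infobox_facts(title):
    """Fetch a Wikipedia article's wikitext and return its infobox fields."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "revisions",
            "rvprop": "content",
            "rvslots": "main",
            "titles": title,
            "format": "json",
            "formatversion": "2",
        },
        timeout=30,
    )
    resp.raise_for_status()
    page = resp.json()["query"]["pages"][0]
    wikitext = page["revisions"][0]["slots"]["main"]["content"]

    # Infoboxes are wikitext templates whose names start with "Infobox".
    for template in mwparserfromhell.parse(wikitext).filter_templates():
        if str(template.name).strip().lower().startswith("infobox"):
            return {str(p.name).strip(): str(p.value).strip()
                    for p in template.params}
    return {}

facts = infobox_facts("Barack Obama")
print(facts.get("spouse"))  # wikitext naming Michelle Obama, links included
```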

“Each concept on Wikipedia is related to other concepts,” Ciampaglia says. “Imagine you didn’t know Barack Obama was married to Michelle Obama. Maybe you can reconstruct that relationship by seeing he has a daughter who is also the daughter of Michelle. The idea is the algorithm walks over a paragraph in search of related concepts.”
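
Viewed as a data structure, this is path finding in a knowledge graph: concepts are nodes, infobox relations are edges, and a claim gains plausibility when a short chain of edges connects its subject and object. Here is a toy version, with a hand-built graph standing in for one mined from Wikipedia:

```python
import networkx as nx  # pip install networkx

# Toy knowledge graph; in practice the edges would be mined from
# Wikipedia infoboxes rather than written by hand.
G = nx.Graph()
G.add_edge("Barack Obama", "Malia Obama", relation="child")
G.add_edge("Michelle Obama", "Malia Obama", relation="child")
G.add_edge("Barack Obama", "United States", relation="president_of")

# No explicit "spouse" edge exists, yet a short path still links the
# pair through their daughter, hinting that a relationship is there.
path = nx.shortest_path(G, "Barack Obama", "Michelle Obama")
print(path)  # ['Barack Obama', 'Malia Obama', 'Michelle Obama']
```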

That might sound easy, he says, but there are immediate complications. For instance, Obama is a human being and so is Joseph Stalin. Since Stalin was a communist, an algorithm might be tempted to conclude that fellow human being Obama is one, too. “We wanted to make sure stuff like that doesn’t trick the algorithm,” he adds.
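
One common remedy, sketched below under assumed details, is to discount paths that detour through very generic, highly connected nodes such as “human.” The score here penalizes the log-degree of a path’s intermediate nodes; the exact formula is an illustrative assumption, not necessarily the one the IU team uses.

```python
import math
import networkx as nx

def path_score(G, path):
    """Score a path, discounting hops through generic, high-degree nodes.

    An intermediate node like "human", connected to nearly everything,
    adds a large log-degree penalty; specific nodes barely register.
    (An assumed scoring rule for illustration, not the team's exact one.)
    """
    penalty = sum(math.log(G.degree(v)) for v in path[1:-1])
    return 1.0 / (1.0 + penalty)

G = nx.Graph()
# "human" is deliberately generic: it touches every person node.
for person in ["Barack Obama", "Michelle Obama", "Malia Obama",
               "Joseph Stalin", "Karl Marx", "Ada Lovelace"]:
    G.add_edge(person, "human")
G.add_edge("Barack Obama", "Malia Obama")
G.add_edge("Michelle Obama", "Malia Obama")

via_daughter = ["Barack Obama", "Malia Obama", "Michelle Obama"]
via_hub = ["Barack Obama", "human", "Michelle Obama"]
print(path_score(G, via_daughter))  # ~0.48: Malia has few connections
print(path_score(G, via_hub))       # ~0.36: "human" connects to everyone

# Among equally short paths, keep the one avoiding generic hubs.
best = max(nx.all_shortest_paths(G, "Barack Obama", "Michelle Obama"),
           key=lambda p: path_score(G, p))
print(best)  # ['Barack Obama', 'Malia Obama', 'Michelle Obama']
```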

So the IU team did further question-answering exercises, like trying to create an algorithm to match U.S. presidents with the correct First Ladies. Ciampaglia says there were many pitfalls to overcome in that exercise, not least of which was teaching the algorithm to figure out the nature of the relationship between Barbara Bush and George W. Bush. Next, they tried matching U.S. states to capitals, and Oscar winners to the correct award categories.
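
Framed in code, an exercise like that reduces to ranking: score the connection from each president to every candidate First Lady and keep the strongest. The self-contained toy sketch below reuses the degree-penalized score from above; the graph and scoring rule are illustrative assumptions.

```python
import math
import networkx as nx

def path_score(G, path):
    """Degree-penalized path score, as in the sketch above."""
    return 1.0 / (1.0 + sum(math.log(G.degree(v)) for v in path[1:-1]))

def best_match(G, subject, candidates):
    """Rank candidates by the strength of their connection to subject."""
    def score(candidate):
        try:
            return path_score(G, nx.shortest_path(G, subject, candidate))
        except nx.NetworkXNoPath:
            return 0.0
    return max(candidates, key=score)

G = nx.Graph()
G.add_edge("George W. Bush", "Laura Bush")       # his wife
G.add_edge("George W. Bush", "Barbara Bush")     # his mother
G.add_edge("Barbara Bush", "George H. W. Bush")  # her husband
G.add_edge("Laura Bush", "White House")
G.add_edge("Michelle Obama", "White House")

print(best_match(G, "George W. Bush", ["Laura Bush", "Michelle Obama"]))
# -> Laura Bush: the direct edge beats the detour through the White House.

# Note the pitfall from the text: a bare path score cannot tell Barbara
# Bush (his mother) from Laura Bush (his wife), since both sit one hop
# away. Distinguishing them requires the relation labels on the edges.
```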

“We got very encouraging results then,” he says, adding that the team is optimistic that many facets of fact-checking can eventually be automated, thereby making it easier and faster to detect misinformation. “But we’re still far from a machine that understands natural language.”

IU researchers hope to integrate the tools they’ve created with software used by fact-checking organizations to make the job of verifying information less tedious and more automatic. All of the tools being developed by OSoMe are open source and, at least for now, intended purely for research rather than commercial purposes.

While IU researchers delve deeper into fact-checking, how can people mitigate the effects of fake news and misinformation? Ciampaglia says getting out of your echo chamber is the first step. “Social media amplifies cognitive bias. Modeling helps us understand how misinformation spreads—if you change the parameters, will that affect how it spreads? We’re seeing that if people are bombarded with information, they lose critical-thinking skills.”

He’s happy to see Facebook, Google, and Twitter beginning to crack down on the proliferation of fake news, although he correctly notes the issue is not how many fake articles are out there, but rather how much exposure they get.

“The real concern is the financial incentive to produce misinformation—it’s very easy to build a professional website, and the media landscape is so huge that it’s hard to tell what’s fake,” he says. “Cutting incentives will reduce the problem, but it won’t solve the root cause, which has more to do with polarization of society and the unintended consequences of personalized news. It can’t be solved purely through technology, but I’m moderately optimistic.”

Sarah Schmid Stevenson is the editor of Xconomy Detroit/Ann Arbor. You can reach her at 313-570-9823 or sschmid@xconomy.com. Follow @XconomyDET_AA