As Russians Hacked U.S. Election, Did Big Tech Firms Break Any Laws?

Xconomy San Francisco — 

News is constantly streaming out these days about the role of Facebook, Twitter, and Google in the 2016 presidential election; most disturbing to the public is the apparent use of social media, search, and video channels by Russian operatives seeking to influence U.S. voters.

Critical lawmakers have blasted the big tech companies for failing to detect the Russian influence campaign earlier, for allegedly accepting payments for election ads in rubles, and for allowing a troll farm in St. Petersburg to flood their communications platforms with fake and divisive messages.

But was anything they did against the law?

Lawyers for the big three tech companies may address some of those questions when they appear before the Senate and House intelligence committees at hearings slated for this week. In prepared remarks obtained by the New York Times in advance of the hearings, Facebook said Russia-backed messages, intended to rile up Americans, had reached 126 million users. It’s all happening just as the first indictments are coming to light from the investigation of special counsel Robert Mueller into Russian involvement in the 2016 election—and any possible ties to the Trump campaign.

Tech companies have reacted to the uproar about their role in the 2016 election in two ways. They’ve announced some voluntary efforts to prevent election interference and reduce fake news, which may preserve the loyalty of users as well as help persuade Congress that new laws aren’t needed. The big Silicon Valley companies have also mounted a beefed-up lobbying drive to stave off increased regulation, according to the New York Times and other news outlets.

To provide some perspective on the fray, Xconomy asked election law experts to weigh in on whether the tech giants might already face some legal jeopardy, and to identify gaps in the law that Congress might be tempted to fill.

The question of legal liability is harder to answer when it comes to online companies than it would be for TV networks, newspapers, telecom companies, or other traditional communications organizations, says Ian Vandewalker, senior counsel for the Democracy Program at New York University’s Brennan Center for Justice. The Internet has given rise to business models never contemplated when key federal election statutes were drawn up decades ago, he says.

Vandewalker and other experts say some tech companies may have veered into unlawful territory during the 2016 campaign, even though they may not have been aware of it. But other actions, while very troubling to the companies’ critics, may be completely legal under federal election law, they say.

The Federal Election Campaign Act (FECA) dates back to 1971, before the Internet existed, Vandewalker says. Even in 2002, when amendments were added to the law with the passage of the McCain-Feingold Act, neither Facebook nor Twitter had yet been founded, while Google was still a four-year-old startup.

U.S. election law focuses primarily on financial transactions, such as campaign contributions and expenditures on advertising. But these days, a vast amount of communication via social media platforms is sent free of charge, and that makes its standing much murkier under the law.

Federal election law “is just not really set up to deal with the Internet,” Vandewalker says.

One clearly illegal scenario

Campaign law experts agree that an ad-hosting company such as Facebook could be in trouble under federal election law if the following precise scenario had played out: the company sells an election ad mentioning a candidate for office to a buyer that the company knew to be a foreign citizen, government, company, or other foreign entity.

All such “foreign nationals” are forbidden under federal election law from making election-related expenditures and campaign donations. The law also forbids political campaigns from soliciting or accepting contributions or substantial support from foreigners. That constraint applies as well to companies in the United States, says Adav Noti, senior director of trial litigation and strategy at the Campaign Legal Center in Washington, DC.

For example, a U.S. company must refuse to sell to foreign nationals any ads, goods, or services intended to influence an election, Noti says. That applies to any U.S. election—federal, state, or local, he says.

If companies, or campaign organizations, are not sure whether a buyer or donor is a foreign entity, they can set the money aside while they find out, so they can return it if necessary. They’re not required to demand passports from everyone they work with, Noti says. But they should investigate if they see signs of foreign involvement, such as a donation check or a bill paid in a foreign currency, he says.

Noti says that suspicion could have arisen for Facebook. Sen. Mark Warner (D-VA), vice chairman of the Senate intelligence committee investigating Russian election interference, claimed in a September tweet that the company had accepted payment in rubles for some ads during the 2016 campaign.

“There is no bigger tip-off that this is a foreign national than that,” Noti says. “Facebook may have potential liability there.”

Other tip-offs to foreign involvement, according to the Federal Election Commission, include the following: the donor or ad buyer provides a foreign address, makes a contribution from a foreign bank, uses a foreign passport, or lives abroad.

But there’s potential wiggle room here for U.S. companies. An ad purchased by a foreign actor is illegal only if it’s related to an election. That would clearly apply if an ad mentions a candidate. However, when a foreigner buys an ad that more generally addresses political or cultural issues, it might be permissible under federal election law.

That’s because foreign actors, while within the United States, have the right to express themselves on policy issues, as long as they’re not trying to affect an election outcome. Some ads traced to Russian buyers during the 2016 campaign reportedly focused on issues, such as immigration, rather than candidates and the voting process. Conceivably, these would be legal.

However, Noti says, a prosecutor could argue that these ads were still out of bounds under federal election law, because they were intended to inflame the passions of potential voters and tilt them toward voting for Trump or another candidate.

But should a company such as Facebook be required to make such distinctions, and try to guess the intentions of its advertising customers?

“That’s a genuine difficulty for them,” Vandewalker says. “There’s a tricky sort of drawing-the-line problem.”

One other question may arise for lawmakers and the public—could Facebook, Twitter, and other tech companies be in legal jeopardy if they didn’t alert the government about attempts by foreigners to influence U.S. elections, whether the companies refused to sell ads to them or not?

Interestingly, nothing in the Federal Election Campaign Act seems to require campaigns to inform the government if a foreign actor offers a donation, both Noti and Vandewalker say. Nor are U.S. vendors such as media companies required to tell the Federal Election Commission, the FBI, or any other government agency if a foreign actor tries to buy ads or other goods as part of an effort to affect a U.S. election.

But a company—and some of its employees—could face criminal prosecution if they knew that foreign actors were violating U.S. election law, and helped them to do it, according to Renato Mariotti, a former federal prosecutor who is now a Chicago-based partner at the law firm Thompson Coburn. Mariotti has been providing legal analysis of the Russian election interference probes through Twitter threads and media commentary, and recently announced he’s running to become Attorney General of Illinois.

Facebook’s disclosures about Russian-backed ads

Facebook CEO Mark Zuckerberg originally discounted the idea that manipulation of his company’s platform with fake news could have substantially influenced the 2016 election campaign. But after being pressed by congressional investigators, Facebook announced on Sept. 6 that an internal probe turned up at least $100,000 in ad spending and around 3,000 ads traced to fake Facebook accounts “affiliated with one another and likely operated out of Russia.” The company says it is now sharing those ads with Congress as well as Mueller’s investigation.

In September, Facebook said “the vast majority” of the ads didn’t mention the presidential election, voting, or a particular candidate. “Rather, the ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum—touching on topics from LGBT matters to race issues to immigration to gun rights.”

But a small number of the ads included the names of Republican candidate Donald Trump and Democratic Party candidate Hillary Clinton, the Washington Post reported.

Mariotti says that special counsel Robert Mueller must have turned up evidence that foreign nationals had committed crimes in their use of Facebook, because Mueller was able to persuade a judge to approve the search warrant that required Facebook to turn over reams of data about election-related ads. But that doesn’t amount to evidence that Facebook did anything wrong, Mariotti says.

“Just because a crime is committed on a communications platform does *not* mean the communications platform is criminally liable,” Mariotti wrote in a series of tweets in September. “But what if someone at Facebook knew what was going on?”

“The short answer is that Facebook employees are like anyone else. If they knew that a crime was being committed and helped the crime succeed in some tangible way, then they have criminal liability,” Mariotti wrote.

Mariotti says such knowing U.S. participants could be charged with aiding and abetting a crime, or with joining in a criminal conspiracy. (The term “collusion” is often used in discussions of possible election campaign conspiracies between Russian actors and Americans.) Federal prosecutors are less likely to charge a corporation, such as Facebook, if the company is cooperating with an investigation of the actions of its officers and employees, Mariotti says.

Merely having access to the database that contained evidence of foreign election meddling would not make a Facebook employee culpable, Mariotti says. The staffer or executive would need to have actually known about the criminal activity to be liable, he says. That raises two more questions—are there employees at Facebook who would have been in a position to know, and is Mueller trying to find out and interview them?

If so, Facebook might be pulled further into the heart of Mueller’s probe into the Trump campaign’s possible collaboration with Russians. First, internal Facebook employees might have made in-house observations about such things as ad payments in foreign currency, or about similarities in the content or targeting techniques of Trump ads and foreign ads or messages. Such employees could become witnesses for Mueller, even if they committed no wrongdoing.

Second, both Facebook and Twitter sent staffers to help the Trump campaign with its ad-targeting efforts on their platforms. That’s according to the Trump campaign’s digital director Brad Parscale, who sat for a lengthy interview with the CBS news magazine “60 Minutes,” aired on Oct. 8. In the interview, Parscale alleged that the social media companies were willing to supply him with staffers who were Republicans, whom he viewed as more likely to support the election of Donald Trump.

It’s not unusual for social media companies to provide big advertisers with on-site assistance from staffers. But Parscale’s account—describing Facebook and Twitter employees as “embedded” in the campaign—could mean that those people were often in a position to observe campaign workers, managers, advisors, data analysts, supporters, suppliers, and other visitors—in addition to the online content the campaign worked on. FEC regulations not only forbid foreign nationals from donating to campaigns, but also bar them from decision-making roles at any organization involved in election-related activities.

In the “60 Minutes” interview, Parscale said the campaign did not collaborate with Russians. The House intelligence committee, which is investigating Russian election interference, was slated to interview Parscale on Tuesday, Oct. 24, the Wall Street Journal reported.

One of the data analytics firms that also sent workers into the Trump campaign offices, Cambridge Analytica, has been swept up in congressional investigations of the campaign’s possible ties to Russia, the New York Times reported.

Cambridge Analytica’s CEO, Alexander Nix, allegedly contacted WikiLeaks founder Julian Assange during the summer of 2016 to ask for Democratic Party candidate Hillary Clinton’s e-mails, The Daily Beast reported last week. (Assange is Australian, and WikiLeaks is an international organization with suspected ties to Russian hackers.) According to a Wired story, the House committee is poring over Cambridge Analytica staffers’ e-mails for evidence of foreign contacts.

Neither Vandewalker nor Noti says he has seen any signs that Facebook, Twitter, or any of their workers had reason to believe that Trump’s campaign was collaborating with foreign nationals. But both have some advice for any tech employees who might someday detect foreign involvement in a political campaign or advertising purchase they were assisting.

Vandewalker says an employee or company that suspects foreign involvement in an election-related activity would be well-advised to withdraw: “It could put someone in the position of seeing foreign collusion.” Even in the absence of legal jeopardy, the company’s reputation could be harmed, he says.

“It certainly wouldn’t be good in the court of public opinion,” Vandewalker says.

Noti says he would advise workers who saw signs of foreign activity to tell their bosses. “You would not want your employer to face liability,” he says. Employers can be held responsible for the actions of their employees, he says.

Ultimately, it may be unlikely that tech companies or their employees will be charged with crimes based on acts that helped foreign entities during the 2016 campaign, Noti says. But if that happened, they might have a handy defense.

“For a U.S. person, it’s only a crime if you know you’re breaking the law,” Noti says. That defense claim—a lack of knowledge that foreign involvement in a U.S. election is illegal—would be more plausible for some people than for others, such as campaign professionals, he says. However, a company or employee might still face an enforcement action by the FEC, which can impose fines.

But come the next big election cycle, Noti says, companies are on notice that foreign influence on U.S. elections is illegal.

“Looking forward, for social media companies, I think it would be much harder if similar activity happened in the next election, for them to say they didn’t know what was going on,” Noti says.

For his part, Vandewalker sees a low probability that tech companies will face prosecution related to the 2016 campaign. “At this stage, it’s not helpful to look at social media companies as potential criminal defendants,” Vandewalker says. “I think they were caught off guard.” He says it’s best to focus on the reforms needed for the future.

Two U.S. Senators, Mark Warner and Amy Klobuchar (D-MN), are already proposing a bill that would require major online ad-hosting companies such as Facebook, Twitter, and Google to reveal the names of political ad buyers, as Politico reported.

Before the 2016 election season, Noti says, Facebook had forcefully resisted proposals before the Federal Election Commission that would have required digital content hosts to disclose the names of ad buyers on the face of the ads, the way TV and radio outlets are required to do. Facebook claimed it should qualify for an exemption from disclosure that applies to campaign buttons, which are arguably too small to contain both a campaign message and the name of the entity that paid for the buttons.

“Facebook’s aggressive position laid the groundwork for the situation we had in 2016,” Noti says. “Nobody knew who funded the ads.”

Facebook and Twitter have now pledged to follow voluntary policies to make the sources of election ads more transparent.

Both Noti and Vandewalker, however, say their organizations favor legislation to ensure that the public can see who is trying to influence their votes through any large-scale social media network.

The Brennan Center supports regulation that would oblige online companies to maintain detailed records about the content of political ads, their purchasers, and how the ads were targeted to individuals or groups of voters, Vandewalker says. Noti says the information should not only be available to government officials, but should also be open to the public.

Free, foreign, and fake news—what’s the law?

U.S. intelligence agencies and Congressional committees are looking beyond paid advertising as they try to understand how Russian operatives could have used social media and websites to sway the voting public in 2016. It’s possible that the bulk of their influence flowed from free messages circulated without the direct involvement of any campaign, and shared by and with millions of Americans. Even more insidious, many of these messages might have been outright lies or distortions of the truth, intended by foreigners to foster distrust among Americans of each other and of the American form of government.

The Federal Election Campaign Act, with its primary focus on campaigns and financial transactions, wasn’t designed to cope with such circumstances.

“Somebody posting something on Twitter for free is not necessarily covered” by the law, Vandewalker says. Twitter would have received no payment from a foreign entity in this case, so the ban on foreign nationals spending money on election activity “potentially leaves out this free social media,” he says.

Facebook says it has closed inauthentic accounts linked to Russian entities, and since March it has expanded efforts to root out fake or misleading news stories, regardless of the source, Bloomberg reported.

Any attempt by lawmakers to require social media companies to control the content their users post could face high constitutional hurdles: The First Amendment bars the government from restricting free speech, except under very narrow circumstances.

But online companies are free to set limits on the content posted by users, and they have done so in the past in efforts to eliminate content such as hate speech and the sexual exploitation of children. Their voluntary measures to suppress some harmful messages, however, don’t open the door to legal claims that they are responsible for removing all objectionable material. That legal shield against sweeping liability for user content is contained in section 230 of the Communications Decency Act, passed by Congress more than 20 years ago, though it continues to face legal challenges.

Crafting legislation that would control the spread of fake, divisive, or foreign political messages would be a daunting task, even under an administration more friendly to government regulation than President Trump’s. Public opinion may be the more influential force spurring social media companies to control the content they host.

With their freedom under the law to host content from all sources, the big Internet companies have built global networks and given a platform to the powerful and the oppressed alike. But the companies are now grappling with the possibility that their platforms have become international battlegrounds where one nation can vie for political control of another, as the New York Times detailed in a recent story about Facebook.

Social media sites are now primary news sources for millions of Americans, but unlike newspapers, they hold more than mere circulation lists of readers. They know what we buy, who we know, and what we say to them, and they make billions of dollars selling that analysis to advertisers. The money pouring out to influence elections is no longer flowing mainly through campaigns run by the two political parties, but increasingly to a plethora of outside groups whose agendas are much more diversified than a single party’s platform. And some of those groups are outside U.S. borders.

Anyone can use social media to broadcast their opinions at no charge; ads can be placed on numberless outlets, from newspaper websites to Twitter to the political blog written by your neighbor around the corner. What’s more, those ads and messages don’t just run once and disappear off the TV screen or into the recycling bin with the daily paper. They can be shared indefinitely within the global population, and circulate for months or years.

In spite of the vastly powerful communications tools created by social media companies, Vandewalker says he doubts that lawmakers could control the content of free online messages while still upholding other American values. That would be hard to do “without becoming an authoritarian, thought-police kind of state,” he says.

But Vandewalker says government should regulate where it can to make the public aware when a significant portion of the online debate around elections is coming from external entities trying to manipulate voters.

“We should be able to see when foreign powers are deceptively using social media to inject foreign propaganda into our election,” Vandewalker says.
