Tech Giants’ Partnership To Explore Ethics, Societal Impacts of AI

With public interest in artificial intelligence technologies on the rise, five of the world’s largest corporations—vying against each other in so many spheres—are banding together to support research on the ethical and societal issues raised by machines with increasingly human-like capabilities.

The Partnership on AI, formally unveiled Wednesday, includes Amazon, Microsoft, Facebook, IBM, Google and its 2014 acquisition DeepMind. Each company is investing untold billions of dollars in developing AI technologies, betting on a future defined by computer systems that can perceive, reason, advise, and decide. (Microsoft, meanwhile, announced a new 5,000-person group Thursday dedicated to the development of AI technologies and products that use them. Harry Shum, an executive vice president in charge of Microsoft’s Technology and Research group, is heading the new unit, called the Microsoft AI and Research Group.)

Setting aside the profit motives of the founding companies—which pledged an undisclosed sum to support this effort for multiple years—the partnership is being designed as a “fiercely independent” nonprofit organization that will balance its board of directors with non-corporate representatives from the worlds of policy, law, science, and society, says Eric Horvitz, a Microsoft technical fellow and director of the research lab at its Redmond, WA, headquarters, and co-chair of the partnership. (Horvitz is pictured above at left with founding members of the partnership: Francesca Rossi, IBM; Yann LeCun, Facebook; and Mustafa Suleyman, Google/DeepMind. Not pictured is Ralf Herbrich, Amazon.)

The goal is to create an open platform that deliberates in public, includes views from a broad range of stakeholders, supports unbiased research, and develops a set of best practices for developing and implementing AI technologies.

The partnership comes as public interest in artificial intelligence reaches levels unprecedented in the 60-year history of the field.

In a recently published paper, Horvitz and Stanford computer science PhD student Ethan Fast analyzed the last 30 years of articles about artificial intelligence in the New York Times.

“General interest, awareness, and discussion about AI has waxed and waned since the field was founded in 1956,” they write (PDF). “We find that discussion of AI has increased sharply since 2009, and that these discussions have been consistently more optimistic than pessimistic. However, when we examine specific concerns, we find that worries of loss of control of AI, ethical concerns for AI, and the negative impact of AI on work have grown in recent years.”

Horvitz says 2009 was an important year in the advancement of machine learning technologies. Also around that time, Horvitz, then the president of the Association for the Advancement of Artificial Intelligence, organized a study where AI researchers assessed the likelihood of artificial “super-intelligence,” the results of which were covered on the front page of the Times.

“We have been at an inflection point in the fielding of AI technologies for the last five-to-seven years in terms of large amounts of data becoming available, computational resources becoming much more powerful, and our algorithms keeping up, and getting refined,” Horvitz says in an interview with Xconomy.

As the technology has advanced, so has the competitive landscape.

“The company that has slightly better algorithms to do X, Y, or Z gets a lot more attention and eyeballs, and that actually is fuel for the AI industry to work harder and better and faster at these systems that can classify and recognize patterns more quickly and more accurately,” he says.

But research into the social and ethical implications of this technology and business surge—and appropriate practices to mitigate potential harm—has not kept pace.

That’s part of the impetus for the Partnership on AI, Horvitz says.

“For example, we’re seeing signs now of algorithms that are trained with data that might have hidden biases in it, and that then, in their whole pipeline, produce biased classifications or analysis or recognitions,” he says. “And it’s often not clear to the designers even and the people fielding the technologies that this is a problem until someone points out that some inappropriate action was taken or poor decision was made.”

He says the partnership may ask—or fund third-party researchers to ask—“What are best practices for identifying biases in data and addressing them across the industry?”
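As a rough illustration of the kind of check such a best practice might standardize, here is a minimal sketch, assuming a classifier’s binary predictions and a sensitive group label are already available as plain Python lists; it computes per-group selection rates and a disparate-impact ratio. The function names and the 0.8 screening threshold are assumptions chosen for illustration, not anything specified by the partnership.

```python
# Minimal sketch: measuring one common fairness signal (disparate impact)
# on a classifier's predictions. Names and the 0.8 threshold are
# illustrative assumptions, not a standard set by the Partnership on AI.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions per group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest group selection rate to the highest.
    Values well below 1.0 suggest the model favors one group."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical predictions (1 = approved) and group labels.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    grps  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    print("selection rates:", selection_rates(preds, grps))
    ratio = disparate_impact_ratio(preds, grps)
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # the "80 percent rule" is a common rough screening heuristic
        print("warning: predictions may be skewed against one group")
```

A single ratio like this is only a screening signal, which is precisely why Horvitz frames the question as one of best practices across the whole pipeline rather than any one metric.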

Other questions focus on how intelligent systems, particularly those used in high-stakes decisions, communicate why they’re doing what they do. A doctor using an intelligent healthcare assistant might ask, “Why did the system tell me that this patient was at high risk for readmission, or at higher risk than the other patients for getting a hospital-associated infection?” Horvitz says. “Our abilities and technologies to explain inferences and classifications and reasoning is way behind the ability [of systems] to do the reasoning, and so we need to push and focus some attention in that space.”
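To make the gap Horvitz describes concrete, the following is a minimal sketch of one basic form of explanation: reporting how much each input feature pushed a linear risk score up or down. The model, weights, and feature names are hypothetical, and nothing here reflects the partnership’s or Microsoft’s methods; it only shows what “explaining an inference” can mean in its simplest form.

```python
# Minimal sketch of explaining a prediction from a linear risk model by
# listing each feature's contribution to the score. The weights, feature
# names, and patient values are hypothetical, chosen only for illustration.

import math

WEIGHTS = {                 # hypothetical logistic-regression coefficients
    "prior_admissions": 0.9,
    "age_over_75": 0.6,
    "days_since_discharge": -0.4,
    "on_high_risk_medication": 0.7,
}
BIAS = -1.2

def risk_and_explanation(patient):
    """Return readmission-risk probability plus per-feature contributions."""
    contributions = {name: WEIGHTS[name] * patient[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))
    # Rank features by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return probability, ranked

if __name__ == "__main__":
    patient = {
        "prior_admissions": 2,
        "age_over_75": 1,
        "days_since_discharge": 0.5,
        "on_high_risk_medication": 1,
    }
    prob, reasons = risk_and_explanation(patient)
    print(f"estimated readmission risk: {prob:.0%}")
    for feature, contribution in reasons:
        direction = "raises" if contribution > 0 else "lowers"
        print(f"  {feature}: {direction} risk (contribution {contribution:+.2f})")
```

Even this toy example hints at why explanation lags inference: the accounting is easy for a linear model, but the systems Horvitz is describing rarely decompose this cleanly.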

Horvitz emphasizes that AI is not one thing, but rather a collection of technologies applied in a wide variety of contexts, each with its own operative questions. He does see the opportunity for some broadly applicable frameworks, though.

“Even if we can’t get a single ethics right for what a car should do on a highway, at least there might be standards of practice for how a car, a car system, expresses its preferences and makes them clear,” he says. “What are the ethics, the encoded tradeoffs, in the system? Are they clear to people, and can people actually change them to align with their own ethical sense?”
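One way to read that idea in software terms is that a system’s encoded tradeoffs could live in an explicit, inspectable configuration rather than being buried in its code. The sketch below is purely hypothetical (it is not any real vehicle API); it only illustrates preferences that can be displayed to a user and adjusted within bounds.

```python
# Hypothetical sketch of "encoded tradeoffs" made explicit and adjustable.
# None of this reflects a real vehicle system; it only illustrates the idea
# of surfacing a system's preferences so people can inspect and change them.

from dataclasses import dataclass, asdict

@dataclass
class DrivingPreferences:
    # Each value is a weight between 0 and 1; higher means the planner
    # prioritizes that objective more heavily when objectives conflict.
    occupant_comfort: float = 0.3
    travel_time: float = 0.3
    caution_near_pedestrians: float = 0.9

    def explain(self):
        """Show the encoded tradeoffs in plain terms."""
        return {name: f"weight {value:.1f}" for name, value in asdict(self).items()}

    def adjust(self, name, value):
        """Let the user change a tradeoff, within fixed bounds."""
        if not 0.0 <= value <= 1.0:
            raise ValueError("weights must stay between 0 and 1")
        setattr(self, name, value)

if __name__ == "__main__":
    prefs = DrivingPreferences()
    print("current tradeoffs:", prefs.explain())
    prefs.adjust("travel_time", 0.1)  # the rider opts for a slower, gentler ride
    print("after adjustment:", prefs.explain())
```

In practice, some tradeoffs (particularly safety-critical ones) would presumably be constrained or not user-adjustable at all; the point of the sketch is only the visibility Horvitz is asking for.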

The partnership is not set up as a lobbying organization. (Though the participating companies certainly have plenty of lobbying clout individually.) But the research it supports may help guide regulators and policymakers to specific contexts in which regulation of AI technologies might be appropriate, Horvitz says.

The partnership unveiled itself this week with a set of eight shared tenets, some of which borrow ideas from Isaac Asimov’s Three Laws of Robotics and the Hippocratic Oath. For example, the sixth tenet: “We will work to maximize the benefits and address the potential challenges of AI technologies, by… [o]pposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.”

The drafting of these principles provides an interesting glimpse into the mechanisms of a partnership like this, which unites like-minded researchers—often with academic backgrounds—at companies that are direct competitors and bring their individual corporate cultures to the effort.

“I felt extremely empowered to make decisions on behalf of the company that I love, that I understand deeply, that I know will support me,” Horvitz says of Microsoft.

But in the back-and-forth process of drafting the tenets for public consumption, he saw indications that other companies were subjecting the tenets to different levels of review.

“You had the sense that there was some legal review going on about the implications of the words for business, for example,” he says. “But in the end, I don’t think anybody felt, in any of those words, when we finally finished, that there was any compromise.”
