Tech Giants’ Partnership To Explore Ethics, Societal Impacts of AI
With public interest in artificial intelligence technologies on the rise, five of the world’s largest corporations—vying against each other in so many spheres—are banding together to support research on the ethical and societal issues raised by machines with increasingly human-like capabilities.
The Partnership on AI, formally unveiled Wednesday, includes Amazon, Microsoft, Facebook, IBM, Google and its 2014 acquisition DeepMind. Each company is investing untold billions of dollars in developing AI technologies, betting on a future defined by computer systems that can perceive, reason, advise, and decide. (Microsoft, meanwhile, announced a new 5,000-person group Thursday dedicated to the development of AI technologies and products that use them. Harry Shum, an executive vice president in charge of Microsoft’s Technology and Research group, is heading the new unit, called the Microsoft AI and Research Group.)
Setting aside the profit motives of the founding companies—which pledged an undisclosed sum to support this effort for multiple years—the partnership is being designed as a “fiercely independent” nonprofit organization that will balance its board of directors with non-corporate representatives from the worlds of policy, law, science, and society, says Eric Horvitz, a Microsoft technical fellow and director of the research lab at its Redmond, WA, headquarters, and co-chair of the partnership. (Horvitz is pictured above at left with fellow founding members of the partnership: Francesca Rossi of IBM, Yann LeCun of Facebook, and Mustafa Suleyman of Google/DeepMind. Not pictured is Ralf Herbrich of Amazon.)
The goal is to create an open platform that deliberates in public, includes views from a broad range of stakeholders, supports unbiased research, and develops a set of best practices for developing and implementing AI technologies.
The partnership comes as public interest in artificial intelligence reaches levels unprecedented in the 60-year history of the field.
In a recently published paper, Horvitz and Stanford computer science PhD student Ethan Fast analyzed the last 30 years of articles about artificial intelligence in the New York Times.
“General interest, awareness, and discussion about AI has waxed and waned since the field was founded in 1956,” they write. “We find that discussion of AI has increased sharply since 2009, and that these discussions have been consistently more optimistic than pessimistic. However, when we examine specific concerns, we find that worries of loss of control of AI, ethical concerns for AI, and the negative impact of AI on work have grown in recent years.”
Horvitz says 2009 was an important year in the advancement of machine learning technologies. Also around that time, Horvitz, then the president of the Association for the Advancement of Artificial Intelligence, organized a study where AI researchers assessed the likelihood of artificial “super-intelligence,” the results of which were covered on the front page of the Times.
“We have been at an inflection point in the fielding of AI technologies for the last five to seven years in terms of large amounts of data becoming available, computational resources becoming much more powerful, and our algorithms keeping up and getting refined,” Horvitz says in an interview with Xconomy.
As the technology has advanced, so has the competitive landscape.
“The company that has slightly better algorithms to do X, Y, or Z gets a lot more attention and eyeballs, and that actually is fuel for the AI industry to work harder and better and faster at these systems that can classify and recognize patterns more quickly and more accurately,” he says.
But research into the social and ethical implications of this technology and business surge—and appropriate practices to mitigate potential harm—has not kept pace.
That’s part of the impetus for the Partnership on AI, Horvitz says.
“For example, we’re seeing signs now of algorithms that are …