Austin Group Joins Other Tech Efforts to Promote Responsible AI Use

Artificial intelligence is poised to infiltrate nearly all aspects of human life. Given this, technologists are focusing on how to ensure the technology's use is governed by ethics.

“The general rule is that power begets responsibility,” says Michael Stewart, founder and CEO of Lucid AI, an AI startup in Austin. “If you’re bringing in a more powerful technology, which all startups hope to accomplish, you have to manage and contain that power.”

That’s why Stewart says he’s among the founding members of AI Global, a new initiative with the goal of promoting responsible development and deployment of AI technologies. Founders include the former general manager of IBM Watson and the head of artificial intelligence and machine learning at the World Economic Forum, as well as institutions and businesses such as the University of Texas at Austin and USAA, the insurance company serving members of the military.

AI Global has created an online marketplace—a sort of open-source “app store” for AI tools and software—based on an open interface developed by Austin AI startup CognitiveScale called Cognitive Agent Modeling and Execution Language, or CAMEL. (Manoj Saxena, who formerly led IBM’s Watson efforts in Austin and is chairman of CognitiveScale, is a founding member of AI Global.) The idea is to let startups build AI products from tools that follow ethical guidelines set by the AI Global community.

“If we have a standard way of defining these skills, then we can have a standard way of measuring them and testing them to see if they exhibit fairness,” says Matt Sanchez, CognitiveScale’s founder and CTO, and a member of AI Global.
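Sanchez’s point can be put in rough code terms. The interface below is a hypothetical illustration, not CAMEL’s actual API: if every “skill” exposes its decisions the same way, the same fairness check can be run against any of them.

```python
# Hypothetical sketch only: names and interfaces here are illustrative, not CAMEL's real API.
from typing import Protocol, Sequence


class Skill(Protocol):
    """A standardized AI 'skill' that maps input records to yes/no decisions."""

    def predict(self, records: Sequence[dict]) -> Sequence[int]:
        ...


def positive_rate_by_group(skill: Skill, records: Sequence[dict], group_key: str) -> dict:
    """Measure how often each group receives a positive decision from the skill."""
    decisions = skill.predict(records)
    by_group = {}
    for record, decision in zip(records, decisions):
        by_group.setdefault(record[group_key], []).append(decision)
    return {group: sum(vals) / len(vals) for group, vals in by_group.items()}


class LoanSkill:
    """Toy skill: approve any applicant with income above a threshold."""

    def predict(self, records):
        return [1 if r["income"] > 50_000 else 0 for r in records]


records = [
    {"income": 80_000, "gender": "F"},
    {"income": 30_000, "gender": "F"},
    {"income": 90_000, "gender": "M"},
    {"income": 60_000, "gender": "M"},
]
print(positive_rate_by_group(LoanSkill(), records, "gender"))
# {'F': 0.5, 'M': 1.0} -- a gap a shared fairness test could flag for review
```

Because the test only depends on the shared interface, it could in principle be applied to any skill published through such a marketplace, which is the kind of standardized measurement Sanchez describes.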

The Austin-based organization has joined a growing movement in the technology sector to take a hard look at the societal and ethical issues raised by machines mimicking human capabilities. The Partnership on AI, for example, was launched with members such as Microsoft, Amazon, Facebook, IBM, and Google; its mission is similar to AI Global’s.

In June, Tesla founder and CEO Elon Musk, three co-founders of Google’s AI company DeepMind, and other tech leaders pledged to avoid developing lethal autonomous weapons at this year’s International Joint Conference on Artificial Intelligence in Stockholm. Musk himself was one of the high-profile tech leaders who founded OpenAI, a non-profit research organization dedicated to promoting AI that is beneficial to humanity. (Other founders include Y Combinator’s Jessica Livingston, LinkedIn founder Reid Hoffman, and Peter Thiel.)

“People are now starting to talk about the risks that are created by using data blindly,” says Mark Gorenberg, founder and managing director of Zetta Venture Partners, a seed- and early-stage fund that focuses on AI technologies. “It’s not enough to be technologists, we have to be sure that technology serves people.”

While AI can help to automate tasks, freeing humans for more leisure or more meaningful work, unrestrained development could see the technology used in adverse ways, on purpose or by accident. AI is governed by mathematical calculations done by machines. But human bias often creeps into the process: In 2015, users pointed out that Google Photos’ image-recognition algorithms were classifying black people as “gorillas.” AI, in other words, is only as good as the humans who program it.

More than two years later, Google has managed only to erase gorillas, and other primates, from the service’s lexicon, according to a Wired article. “The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology, which the companies hope to use in self-driving cars, personal assistants, and other products,” the article stated.

Stewart of Lucid says the company regularly comes across scenarios that could steer AI to cross a line. For example, in trying to use data to predict behavior such as insider trading, if you build AI models of an employee, do you also build models of that person’s spouse? After all, a spouse could influence an employee. “How far do we take [the modeling] out?” he asks.

Should a bank, in an effort to prevent robberies, have software that could identify and create a risk profile—did the bank just foreclose on this borrower?—for everyone who drives into the parking lot? “You would probably be effective in preventing [an adverse event], but do you really want to interrogate every customer with no probable cause in advance to see if they could do something criminal?” Stewart asks. “In both cases, the answer was, no, we don’t want to do that.”

Creating AI technologies that are as free as possible from bias is the idea behind a new tool introduced by Accenture. The firm’s “AI fairness tool” uses statistical methods to assess the data sets—especially those related to age, gender, and race—that feed machine-learning models.

Rumman Chowdhury, the firm’s responsible AI lead, said in a TechCrunch article that Accenture is defining fairness here as “equal outcomes for different people.”

“There is no such thing as a perfect algorithm,” she says. “We consider it unfair if there are different degrees of wrongness … for different people, based on characteristics that should not influence the outcomes.”
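Read as a rough sketch rather than as Accenture’s actual method, “different degrees of wrongness” can be checked by comparing error rates across groups defined by a sensitive attribute, and flagging any large gap between them.

```python
# Illustrative sketch, not Accenture's tool: compare how often a model is wrong
# for each group defined by a sensitive attribute (e.g., age, gender, race).
from collections import defaultdict


def error_rate_by_group(y_true, y_pred, groups):
    """Return the fraction of wrong predictions for each group."""
    errors, counts = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        errors[group] += int(truth != pred)
    return {group: errors[group] / counts[group] for group in counts}


def fairness_gap(y_true, y_pred, groups):
    """Largest difference in error rate between any two groups (0 means parity)."""
    rates = error_rate_by_group(y_true, y_pred, groups).values()
    return max(rates) - min(rates)


# Example: a model that is wrong far more often for group "B" than for group "A".
y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "B", "B", "B"]
print(error_rate_by_group(y_true, y_pred, groups))  # {'A': 0.0, 'B': 1.0}
print(fairness_gap(y_true, y_pred, groups))         # 1.0
```

In this framing, no model is expected to be perfect; what would be flagged is a model whose mistakes fall disproportionately on one group, which is the kind of disparity Chowdhury describes.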

Zetta Venture Partners’ Gorenberg, who serves as an MIT board trustee and was appointed by former President Barack Obama to the President’s Council of Advisors on Science and Technology, says he expects the tech industry to continue to seek ways to mitigate the downsides of AI.

“The thing that’s holding AI back from acceptance is not technology, but societal acceptance,” he says. “People are willing to accept the risk if they have control.”

Angela Shah is the editor of Xconomy Texas. She can be reached at ashah@xconomy.com or (214) 793-5763. Follow @angelashah
