Austin Group Joins Other Tech Efforts to Promote Responsible AI Use
If you use data to predict behavior such as insider trading, and you build AI models based on employees, do you also build models based on each employee's spouse? After all, a spouse could influence an employee. “How far do we take [the modeling] out?” he asks.
Should a bank, in an effort to prevent robberies, have software that could identify and create a risk profile—did the bank just foreclose on this borrower?—for everyone who drives into the parking lot? “You would probably be effective in preventing [an adverse event], but do you really want to interrogate every customer with no probable cause in advance to see if they could do something criminal?” Stewart asks. “In both cases, the answer was, no, we don’t want to do that.”
Creating AI technologies that are as free as possible from bias is the idea behind a new tool introduced by Accenture. The firm’s “AI fairness tool” uses statistical methods to assess the data sets—especially those related to age, gender, and race—that feed machine-learning models.
Rumman Chowdhury, the firm’s responsible AI lead, said in a TechCrunch article that Accenture is defining fairness here as “equal outcomes for different people.”
“There is no such thing as a perfect algorithm,” she says. “We consider it unfair if there are different degrees of wrongness … for different people, based on characteristics that should not influence the outcomes.”
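The fairness notion Chowdhury describes — equal degrees of wrongness across groups — can be made concrete as a comparison of a model's error rates between groups defined by a sensitive attribute. The sketch below is a hypothetical illustration of that idea on toy data; it is not Accenture's actual tool, whose internals the article does not describe.

```python
# Hypothetical sketch: measure the gap in error rates between groups.
# "Unfair" here means different groups experience different error rates,
# echoing Chowdhury's "different degrees of wrongness" framing.

def error_rate(y_true, y_pred):
    """Fraction of predictions that are wrong."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def error_rate_gap(y_true, y_pred, groups):
    """Return (largest gap between any two groups, per-group error rates)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = error_rate([y_true[i] for i in idx],
                              [y_pred[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

# Toy data: the model is perfect for group A and always wrong for group B.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, per_group = error_rate_gap(y_true, y_pred, groups)
# gap is 1.0 here: maximal disparity between groups A and B.
```

A real assessment would use established metrics (equalized odds, demographic parity) and statistical tests rather than a raw gap, but the principle — disaggregating model error by protected attribute — is the same.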
Zetta Venture Partners’ Gorenberg, who serves as an MIT board trustee and was appointed by former President Barack Obama to the President’s Council of Advisors on Science and Technology, says he expects the tech industry to continue to seek ways to mitigate the downsides of AI.