Tech in the Trump Era: Regulating Privacy, Immigration, Ethics
Austin—From personal privacy to managing the technology that could one day take our jobs, finding the right approach to regulating technology is a more prominent subject now than perhaps ever before, especially given the steady stream of proposals coming from the new White House administration.
Those speaking at South by Southwest over the last week took note, frequently commenting on subjects such as immigration bans and the business implications of the U.S. pulling out of global trade deals. One Intel representative said the company hopes that President Donald Trump and the federal government will consider a new form of trade agreement, after Trump pulled the U.S. out of the Trans-Pacific Partnership (TPP) in January.
“That’s something that we’re hoping for going forward: the good things that came out of the TPP, that trade and growth of market access and our ability to sell our products overseas,” said Norberto Salinas, senior counsel and director of government relations at Intel, during a panel about tech policy in the new administration. “The majority of our research and development is here, but 80 percent of sales are outside of the U.S.”
Like many in the tech industry, Salinas also advocated for the H-1B visa program, which allows companies to temporarily employ foreign workers, typically in highly skilled jobs. Salinas said companies like Intel look to graduate students to find the next generation of innovators, but those who are from foreign countries may no longer want or be able to stay in the U.S., if Trump restricts H-1B visas. (Besides Trump, the H-1B program has other skeptics.)
“If we discourage those students from coming into the United States, they’re going to end up in India, China, Canada,” Salinas said. “We also need to create a system where not only are our future innovators here on visas, but to keep them here as citizens.”
Others on the panel with Salinas noted the importance of properly regulating privacy, particularly as corporations increasingly control our personal information while we shift more of our lives to the Web. Having a simple conversation is one example.
“There are things I would say to a friend in my own home that I wouldn’t say in a crowded street, or the DMV,” said Jesse Blumenthal, the manager of technology and innovation work at the Charles Koch Institute. “As our privacy moves online, that conversation now takes place over a messaging app or a video chat. If we want to have the same association, we need to have strong privacy and encryption.”
The same is true for developing industry standards on how connected devices in the home, car, and elsewhere record or relay private information, Salinas said.
Creating standards that companies can or should follow is an ongoing conversation in other areas of developing technology—particularly in artificial intelligence. The Institute of Electrical and Electronics Engineers (IEEE) has a division, the Standards Association, which focuses on developing policy recommendations, guidelines, and standards for various tech industries. The organization is currently developing such standards for artificial intelligence.
These are standards that perhaps should have been set long ago, and groups like the IEEE are now trying to catch up to the technology, said Konstantinos Karachalios, the managing director of the IEEE Standards Association. He spoke to a full audience at a discussion about ethically aligned artificial intelligence hosted by the University of Texas at Austin.
“We’re almost late. Technology is advancing rapidly. Driverless cars are here, drones are flying around,” Karachalios said. “I’m worried about the algorithms that define our identity, what we’re able to do and what not. If we continue without anything, if we’re not transparent, not accountable, I think we are going into the scenario of digital slavery.”
Overregulation can cause innovation to stagnate, said Derek Jinks, a professor at the UT School of Law—which underscores the importance of setting standards that can then be incorporated piecemeal into laws.
The IEEE has published a first version of its recommendations for designing artificial intelligence ethically, but the conversation is ongoing and the group wants input from the general public, too, Karachalios said. Locally, the organization is working with groups like AI-Austin, a nonprofit focused on “practical and responsible” design of artificial intelligence.
“What we’d like to do is create Austin as a hub” for artificial intelligence, said Kay Firth-Butterfield, the organization’s executive director. “We want to try innovative things for the benefit of our community. Once we’ve done that, we’d like to be a model for other cities, especially in the developing world.”