Xconomy Q&A: Security Sleuth Stefan Savage Wins MacArthur Grant
After winning a MacArthur “genius” grant late Tuesday, the San Diego cybersecurity sleuth Stefan Savage acknowledged the honor “definitely triggers my imposter syndrome.”
Savage, a computer scientist at UC San Diego (and a San Diego Xconomist), added in an e-mail Wednesday, “It’s very nice to see all the work that I and my collaborators and students have done together be recognized this way.”
In response to questions from Xconomy, Savage wrote that his $625,000 MacArthur grant, to be paid quarterly over the next five years, would enable him to focus his future research on using evidence-based techniques for assessing cybersecurity.
Savage, a 48-year-old expert in computer security, was among 24 people selected Tuesday for the 2017 class of MacArthur Fellows, an honor that includes a “no-strings-attached” grant of $625,000 for each recipient from the Chicago-based James D. and Catherine T. MacArthur Foundation. The foundation says it picked Savage for “Identifying and addressing the technological, economic, and social vulnerabilities underlying Internet security challenges and cybercrime.”
Launched in 1981, the fellowship program is limited to U.S. citizens and is awarded annually to 20 to 30 individuals working in any field; past recipients include artists, scientists, journalists, social workers, poets, and quantum astrophysicists. Selection is broadly based on three criteria:
—Exceptional creativity.
—A promise for important future advances based on a track record of significant accomplishments.
—The potential for a MacArthur grant to facilitate additional creative work.
Previous MacArthur fellows include “Hamilton” creator and star Lin-Manuel Miranda, the physicist and energy analyst Amory Lovins, author Ta-Nehisi Coates, and Internet pioneer Timothy Berners-Lee. MacArthur alumni in San Diego include the organic chemist Phil Baran of The Scripps Research Institute and the planetary geologist Michael Malin, founder of Malin Space Science Systems.
According to the foundation, Savage helped to create new strategies for defending computer networks against malware and so-called “distributed denial of service” attacks. He and his colleagues also were the first to demonstrate how to remotely hack an automobile—taking control of the engine and brakes, and monitoring conversations taking place within the car. More recently, Savage and his collaborators analyzed network-level interactions to identify ways to disrupt the online credit card transactions that Internet crime rings use to sell counterfeit drugs.
Asked what he plans to do next, Savage wrote in an e-mail to Xconomy: “Right now we’re focused on what we call ‘evidence-based security’… the notion that you should be able to measure how different defenses/behaviors impact security outcomes (in the same way that we look at the relationship between treatments and health outcomes in evidence-based medicine). Right now, most security decisions are driven by best-practice compliance, which is some combination of received wisdom and gut instinct… We’d like to make those decisions data-driven instead.”
Savage also provided some quick answers to a few other questions:
Xconomy: How should people think about cybersecurity in light of major computer attacks such as the Equifax data breach?
Stefan Savage: I’m not sure how to answer. I think it is quite evident that we are not doing a great job at protecting large collections of data from skilled attackers focused on accessing it. This is a super hard problem and one that has only become harder as the kinds and sophistication of adversaries have increased. So… if the question is for individuals, I don’t know that there is an easy thing for them to think other than what they already think: “that things are pretty bad out there.”
X: What are the emerging opportunities for innovation in cybersecurity? Can artificial intelligence (AI) technologies be applied?
SS: It’s still an open question if AI is going to save us or hurt us. Automated learning offers such power, but the power cuts both ways. I think there are lots of opportunities for innovation in security, including around authentication (passwords remain a problem); around support to protect against social engineering (we still don’t have an answer to phishing); work to allow more computation on encrypted or blinded data (i.e., where even the computers processing the data may not know what it is that they’re processing); work on measuring cyber risk and doing assessment in a scientific way; the use of trusted hardware; work to do behavioral analysis (i.e., detecting when a computer user is operating in a way at odds with their normal behavior)… there’s a ton here… I could type for a long time.
X: Will there ever be a way to identify and distinguish what’s “authentic” versus “fake” online?
SS: Discerning misinformation requires due diligence and research, which is expensive. There are probably ways we can automate the detection of “poorly constructed” misinformation, or singular stories that scale really quickly. But in general I don’t think there’s an easy solution. I think the best we can hope for is to limit the scale (i.e., make it hard for a single mis-informer to mislead millions of people).