Xconomy Q&A: Security Sleuth Stefan Savage Wins MacArthur Grant
(Page 2 of 2)
protecting large collections of data from skilled attackers focused on accessing it. This is a super hard problem, and one that has only become harder as the kinds and sophistication of adversaries have increased. So… if the question is for individuals, I don’t know that there is an easy thing for them to think other than what they already think: “that things are pretty bad out there.”
X: What are the emerging opportunities for innovation in cybersecurity? Can artificial intelligence (AI) technologies be applied?
SS: It’s still an open question whether AI is going to save us or hurt us. Automated learning offers such power, but the power cuts both ways. I think there are lots of opportunities for innovation in security, including around authentication (passwords remain a problem); around support to protect against social engineering (we still don’t have an answer to phishing); work to allow computation on encrypted or blinded data (i.e., where even the computers processing the data may not know what it is that they’re processing); work on measuring cyber risk and doing assessment in a scientific way; the use of trusted hardware; work on behavioral analysis (i.e., detecting when a computer user is operating in a way at odds with their normal behavior)… there’s a ton here… I could type for a long time.
X: Will there ever be a way to identify and distinguish what’s “authentic” versus “fake” online?
SS: Discerning misinformation requires due diligence and research, which is expensive. There are probably ways we can automate the detection of “poorly constructed” misinformation, or of singular stories that scale really quickly. But in general, I don’t think there’s an easy solution. I think the best we can hope for is to limit the scale (i.e., make it hard for a single misinformer to mislead millions of people).