Algorithmia Lands In-Q-Tel Deal, Adds Deep Learning Capabilities
Algorithmia, which runs a public marketplace for algorithms, has just landed a deal to provide a private algorithm-sharing platform for the U.S. intelligence community.
The deal with In-Q-Tel, which invests in and procures new technologies for U.S. intelligence agencies, comes on the heels of a significant upgrade to Algorithmia’s primary business: brokering access to algorithms, the step-by-step computational procedures that underpin modern apps, through a marketplace open to anyone. The company upgraded the platform to handle the data- and compute-heavy requirements of trained deep learning models, which have largely been the domain of tech giants and academic researchers.
In-Q-Tel will have access to Algorithmia’s Codex platform—essentially a private version of the broader Algorithmia marketplace, which is based on public cloud computing infrastructure.
“We knew from day one that not all businesses or organizations were going to be able to take advantage of that, either because of security concerns or because it’s just not part of their policies,” says Diego Oppenheimer, CEO of the Seattle-based startup.
But these organizations still have a lot of algorithmic intellectual property that needs to be shared among developers internally. Leading tech companies have their own methods for doing this. They set up central code repositories where developers across the company can tap into proprietary algorithms through APIs. For other large enterprises, Algorithmia offers a central algorithm repository-as-a-service. In-Q-Tel is not the company’s first enterprise customer, but it is by far its largest and highest-profile, Oppenheimer says, calling it a “huge validation of the technology behind our platform.”
Algorithmia took a major step forward with its platform last week when it began hosting and distributing trained deep learning models.
Deep Learning Models
Think of these as vastly more complicated versions of a financial model you’d build in a spreadsheet. Deep learning models take inputs, such as a batch of black-and-white photos, and produce an output such as smart colorization of those photos. Deep learning models must be trained on vast amounts of data.
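To make the spreadsheet analogy concrete, here is a deliberately tiny sketch of what “training” means. It uses a simple linear model rather than a real deep network, and it is an illustration, not Algorithmia code: a model is a function with adjustable parameters, and training nudges those parameters until the model’s outputs match example data.

```python
# Illustrative sketch only: a linear model standing in for a deep network.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # example inputs
true_w = np.array([2.0, -1.0, 0.5])            # the relationship hidden in the data
y = X @ true_w + 0.05 * rng.normal(size=1000)  # example outputs, with noise

w = np.zeros(3)                            # untrained parameters
for _ in range(500):                       # "training": repeatedly adjust the
    grad = X.T @ (X @ w - y) / len(X)      # parameters to shrink the prediction
    w -= 0.1 * grad                        # error on the example data

print(w)  # close to [2.0, -1.0, 0.5]: the model has learned the pattern
```

A real deep learning model works the same way in spirit, but with millions of parameters arranged in many layers, which is why training demands so much data and computing power.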
A branch of machine learning, deep learning is not a new concept; researchers have been working on it, though not always under today’s buzzwords, for decades. Some deep learning techniques attempt to replicate in machines the way the human brain responds to stimuli and learns new things. The approach has long been possible in theory, but only recent advances in available data and computing power have made it practical.
Capabilities such as facial recognition, Google DeepMind’s AlphaGo, and chatbots that can react to natural language inputs all use elements of deep learning.
Microsoft Azure, Amazon Web Services, Google Cloud, and other public cloud providers rent out computing power at massive scale, lately including graphics processing units (GPUs), chips originally designed for gaming and video that handle deep learning workloads much faster than standard processors. And, of course, the world is accumulating digital data of all kinds at rates never seen before.
One ingredient in the deep learning recipe remains scarce: the talent to build, train, and implement the models themselves.
“As exciting and amazing as it is, [deep learning] has been essentially monopolized by these extremely deep-pocketed technical companies that have the ability to hire the very specialized people that are needed to build these models and deploy them at massive scale so anybody can use them,” Oppenheimer says.
Academic researchers, at least those who haven’t been enticed into the arms of one of the tech giants, are pushing the state of the art in deep learning. But the traditional channels available for distributing their work have drawbacks, Oppenheimer says: academic journals reach a relatively limited audience, and while open-source code repositories such as GitHub are an option, they demand far more work from a developer who wants to put a deep learning model to use.
Algorithmia is designed to streamline the process of finding, testing, and implementing algorithms of all kinds. It’s essentially an app store for algorithms, paired with cloud infrastructure on which to run them, with multiple models for compensating algorithm creators, depending on their individual motivations.
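In code, that model looks roughly like the following sketch, which uses Algorithmia’s Python client. The API key is a placeholder, and the algorithm path and input format are assumptions for illustration; actual names and versions on the marketplace may differ.

```python
# A sketch of calling a marketplace algorithm; the key, path, and input
# format below are placeholders, not verified values.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

# Marketplace algorithms are addressed as publisher/name/version and
# invoked with .pipe(); Algorithmia runs the call on its cloud
# infrastructure and meters usage for billing.
algo = client.algo("deeplearning/ColorfulImageColorization/1.1.6")
result = algo.pipe({"image": "data://example/black_and_white.png"})
print(result.result)
```

The developer never provisions servers or installs the model’s dependencies; the marketplace handles hosting, scaling, and metering, which is the “paired with cloud infrastructure” part of the pitch.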
Oppenheimer believes Algorithmia’s new capabilities will be a tremendous asset to the relatively small group of people developing deep learning models, and to a larger group of developers, outside of the tech giants, who wish to use them in their apps.
“[Academics] want to see their life’s work out there, in action, changing businesses and technologies, and that’s at the core of what Algorithmia does,” he says.
Researchers can realize a new revenue stream from royalties earned through Algorithmia. The company takes 30 percent of what it charges developers for access to an algorithm and pays the remaining 70 percent to the algorithm’s creator, potentially relieving some of the pressure on academics to land the next big grant. Researchers can also reasonably hope for a much wider audience of potential users, opening the door to the greater real-world impact that academics desire and on which they are increasingly evaluated for tenure. Around 20,000 developers use Algorithmia’s marketplace now, Oppenheimer says.
The photo colorization model, for example, came from researchers at the University of California, Berkeley. It was one of 16 showcase deep learning models available on Algorithmia when the new capability launched late last week, and a couple thousand people had used it within the first day, Oppenheimer says.
For those concerned about the monopolization of deep learning and other artificial intelligence precursors by a small handful of extremely powerful corporations, Algorithmia offers another way.
Oppenheimer, a Microsoft program manager before co-founding Algorithmia in 2013 with Kenny Daniel, is a “big fan” of the tech giants.
“But they are swallowing research, and probably the biggest existential risk to AI is that it becomes 100 percent privatized,” he says. “That’s why an open marketplace for distributing this work in a usable form, versus just throwing it up on a repository, is so important. Now, any developer that wants access to it, any technology company that wants access to it, can get access to it.”