Artificial intelligence has a long road ahead to reach the front lines of healthcare—but it’s coming.
Big companies and startup investors are pouring billions of dollars into A.I. technologies for healthcare, but a lot needs to happen before such technologies become common tools used by doctors, nurses, and other caregivers. To get there, companies will have to spend time and money honing their products and convincing regulators, healthcare organizations, and patients that the tools are useful and reliable. They will also have to navigate concerns about job automation and questions about how algorithms and other A.I. tools actually work.
In 2014, companies worldwide generated an estimated $633.8 million in revenues from healthcare-related A.I. products, according to a Frost & Sullivan report from January 2016. The report predicted that the market will expand to about $6.6 billion by 2021, at a compound annual growth rate of 40 percent. If the estimate proves accurate, that’s a noteworthy growth pace, but still relatively small dollars for healthcare; consider that Partners HealthCare alone spent $1.2 billion implementing an electronic health records system across its network of New England hospitals and clinics.
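The report's math checks out: compounding the 2014 base at 40 percent a year over the seven years to 2021 lands near the forecast figure. A quick sketch (the dollar amounts and growth rate are from the report cited above; the variable names are mine):

```python
# Verify the report's projection: $633.8M in 2014 growing at a
# 40% compound annual rate over the 7 years to 2021.
base_2014 = 633.8e6      # estimated 2014 revenue, in dollars
cagr = 0.40              # compound annual growth rate
years = 2021 - 2014      # 7 compounding periods

projected_2021 = base_2014 * (1 + cagr) ** years
print(f"${projected_2021 / 1e9:.1f}B")  # → $6.7B, close to the ~$6.6B forecast
```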
Another way to gauge the health of the emerging market is to consider how much hospitals are spending on A.I. tools. Michael Greeley, a partner with Boston-based healthcare technology investor Flare Capital Partners, says his firm has met with dozens of startups that have inked “relatively small” deals—around $500,000 to $1 million apiece—with hospitals to use their A.I.-related products on a trial basis. But he says he doesn’t hear about many purchases in the $5 million to $10 million range.
“That’s when you know a market has arrived,” he says—when it’s “real dollars, not just pilot dollars.”
Still, A.I. technology holds real promise for healthcare, according to doctors, entrepreneurs, researchers, technologists, corporate executives, and other industry observers interviewed by Xconomy. They say it’s more a question of when—rather than if—machine learning algorithms and other A.I. tools will be embedded in the day-to-day routine of caring for patients. Some experts estimate the technologies could become widely used within three years, at least in certain areas like medical imaging. But it might take at least a decade, some say, for A.I. to broadly permeate healthcare.
Now the race is on between companies big and small to deliver on that potential (see table of examples below). IBM is an early leader with its Cambridge, MA-based Watson Health business. Big Blue is certainly one of the most aggressive companies in the industry, having spent over $4 billion in the past two years acquiring healthcare-computing companies and building out its A.I. capabilities.
[Table: Other Players in Healthcare A.I.]
But another long-standing, giant American corporation—General Electric—has also set its sights on becoming a leader in A.I. and healthcare. Over the past year, the Boston-based company’s healthcare business has announced partnerships with three high-profile medical care and research institutions to co-develop A.I. software and other digital tools to help with patient diagnosis and other aspects of healthcare.
IBM and GE, along with their competitors, are on a collision course to try to win the future of this fast-moving field. Their products and business strategies may differ, but their efforts could go a long way toward determining how and when machine learning technologies get accepted by patients, doctors, and regulators—and what the consequences will be for the industry.
Big Blue bets big
IBM turned heads in 2011 when its Watson supercomputer handily beat two of Jeopardy!’s top human contestants in the televised trivia game show, thanks to its ability to understand language and speech, comb through a vast repository of information, and spit out answers (in the form of questions) in less than three seconds.
Since then, IBM has bet much of its future on Watson, with the idea that the kinds of algorithms and analytics tools that won a trivia contest are also useful for businesses and other organizations. IBM is applying its “cognitive computing” technologies in areas such as commerce, education, financial services, and marketing. But healthcare has been its biggest bet so far.
IBM formed the Watson Health business in 2015 and quickly built it into a more than 7,000-person operation with new headquarters in Cambridge’s Kendall Square neighborhood. Many of those employees were added via acquisition. In the past two years, IBM acquired Ann Arbor, MI-based Truven Health Analytics for $2.6 billion; Chicago-based medical imaging firm Merge Healthcare for $1 billion; Dallas-based population health company Phytel; and Cleveland-based healthcare intelligence firm Explorys. (The prices of the last two deals weren’t disclosed, but they were certainly smaller than the other two.)
But it’s unclear whether those moves are translating to business success, since IBM’s public financial reports don’t break out Watson Health’s revenues. The company says Watson software is being used or implemented by around a dozen of the largest life sciences companies, and its oncology tools are being used or implemented by more than 55 hospitals and healthcare organizations worldwide.
Watson Health’s pitch is that it can quickly sift through reams of data to help customers perform drug research, make diagnoses, and so forth. An IBM spokeswoman says Watson Health’s cloud-based data repository includes 40 million research documents (think medical journals and textbooks), 100 million electronic health records, 200 million healthcare claims records, and 30 billion medical images. (The patient records are stripped of personally identifying information to protect privacy.)
IBM says there is growing evidence that Watson Health’s tools are having—or could have—an impact in cancer treatment and other areas of healthcare.
In an analysis of 1,000 cancer patient cases at the University of North Carolina’s Lineberger Comprehensive Cancer Center, for example, in about 30 percent of the cases Watson software surfaced new information pointing to potential treatments that doctors hadn’t previously identified. Meanwhile, Barrow Neurological Institute used Watson technology to help identify five genes associated with ALS that hadn’t previously been linked to the disease. Without the software, those discoveries could have taken years instead of a few months, the researchers estimated, according to IBM.
And, in early June, IBM touted a new pilot study with Novartis and Highlands Oncology Group that found Watson technology helped shorten the time needed to screen patients for clinical trial eligibility. During the 16-week study, Watson assessed the eligibility of 2,620 lung and breast cancer patients and reduced the screening time from one hour and 50 minutes to 24 minutes, IBM says.
“There are many examples” of Watson’s healthcare impact, says Kyu Rhee, a medical doctor and IBM’s chief health officer. “And I think in the next five to 10 years, a system like Watson will be part of every health and healthcare decision.”
But the time frame might end up being longer than that, in part because it will take years to clinically validate the impact of such technologies across a large number of patients, Flare Capital’s Greeley says.
The skeptical view is that Watson Health and its competitors are promising capabilities that are still being built and perfected, but are not yet “off the shelf” products, Greeley says. Other observers have gone further. Last month on CNBC, Social Capital founder and CEO Chamath Palihapitiya called Watson a “joke” and said IBM is good at sales and marketing, but not innovating in A.I. (I’ve heard similar charges from machine learning startup executives in Boston. But it’s a common refrain to bash big companies’ innovation efforts.)
“You hear a lot of cynicism around, really, what does Watson do?” Greeley says. “I think it’s going to be very powerful. I don’t know if they’ve really developed a large market opportunity of use cases.”
GE: “This is not a research project”
And now, Watson Health faces a big challenger in GE Healthcare, though the companies have different strengths and approaches.
“We’re not necessarily chasing IBM because our strategy is different, and we have a different product portfolio than they do,” says Charles Koontz, GE Healthcare’s chief digital officer and the CEO of GE Healthcare IT. Still, he acknowledges the two companies will undoubtedly be competing in this sector.
Rhee doesn’t sound worried about competitors, pointing to the time and investment IBM has put into building its A.I. capabilities, an effort that includes “hundreds of patents,” he says.
“While Watson Health is only two years old, the work we’ve been doing in A.I. and machine learning and cognitive [computing] is over a decade old,” Rhee says. “I think we’ve got a significant head start on [competitors], and we’re demonstrating incredible value.”
Nevertheless, GE has been investing in cloud computing, A.I. systems, and other software technologies in recent years. In 2013, GE Healthcare made plans to invest $500 million in software initiatives, and GE overall committed to adding 5,000 digital-focused jobs worldwide by 2018, a number that includes internal hires, contractors, and software staff at partner companies and organizations, a spokeswoman says. GE Healthcare alone currently has about 5,000 software workers, she says.
With its vast resources, GE “could catch up” to the “aggressive” IBM in healthcare computing, Greeley says. (GE is a “strategic partner” of Greeley’s Flare Capital, and the corporation’s venture capital arm has co-invested in startups with Flare, he says.)
So far, GE has not acquired any healthcare-focused A.I. companies, a spokeswoman says, although Greeley thinks that could change if company executives feel like “they’re starting to lag the market.” Instead, the company’s approach has involved teaming up with UC San Francisco’s Center for Digital Health Innovation, Boston Children’s Hospital, and most recently Partners HealthCare, whose facilities include Massachusetts General Hospital and Brigham and Women’s Hospital.
The UCSF partnership, announced in November, is initially focused on developing “deep learning” algorithms that analyze medical images to help identify patients who need follow-up or intervention by doctors. An example is a software tool being developed to identify scans that might indicate a patient has a collapsed lung, so doctors can prioritize and more quickly treat patients in need.
With Boston Children’s Hospital, GE Healthcare is developing digital tools to help radiologists of varying expertise interpret brain MRI scans, in order to better diagnose and treat neural diseases in children. That partnership was also announced in November.
The collaboration with Partners HealthCare, announced in May, is larger in scope. The 10-year partnership is aimed at developing deep learning applications that will impact all aspects of healthcare, from diagnosis and treatment to handling administrative tasks. That sounds broad, but many of the details are still vague.
GE and Partners are initially focusing on developing diagnostic imaging applications, which they say might help doctors assess the prognosis for someone who suffered a stroke, identify bone fractures, or track tumor growth or shrinkage after a patient takes a new kind of drug. Later, GE and Partners say, they might develop A.I. tools for areas like molecular pathology, genomics, and population health.
Koontz declined to say how much money GE is investing in the Partners deal or how many GE employees will work on the project. Some of the work will be done by software engineers and data scientists at GE’s San Ramon, CA, office, and there will also be GE employees working side-by-side with Partners staff in Boston, Koontz says. Some of the key collaborators at Partners will be the 30-some employees in its Center for Clinical Data Science, which is less than a year old and is a collaboration between Massachusetts General Hospital and Brigham and Women’s Hospital.
Mark Michalski, the center’s executive director, gives an example of how A.I. technology might progress in radiology: An initial application might be assessing whether a tumor seen in a computed tomography (CT) scan of the abdomen is cancerous. A year or two later, the software might automatically be able to measure the size of tumors in images, something that doctors currently do by hand, he says.
“While that seems like a minor thing, it actually becomes pretty interesting because now maybe your radiologist has gotten more efficient, maybe more precise, and it means maybe less variability in the measurements they’re taking,” Michalski says.
And once those precise measurements become common in medicine, Michalski says, healthcare systems could accumulate data that enables them to better understand the health of large groups of patients—and potentially improve treatment while lowering costs.
GE Healthcare has said it plans to develop a library of hundreds of commercially available healthcare apps by 2020. “This is not a research project,” Koontz says. “The way to think about it is we’re going into business with some of our biggest and most prestigious customers. And we’re developing software together to take to the marketplace.”
The first products hatched by GE and its various partners could hit the market as soon as early 2019, a GE spokeswoman says. Any apps that go beyond advising doctors and play a direct role in diagnosis, however, would likely take longer to get into the hands of customers because the FDA would have to sign off on them first, she says.
That raises an interesting issue: securing FDA clearance for A.I.-related software products might not be easy.
“I don’t think that the FDA has a clear plan in place for how they’re going to regulate and ensure that diagnostic tools are safe and effective for this particular application,” says Alex Harding, a medical doctor completing his residency at Massachusetts General Hospital. “It could be a while until the FDA feels comfortable enough to approve any of these kinds of [A.I.] tools for routine use in the clinic.”
Look out for Google, Amazon
GE’s efforts to develop deep learning apps in healthcare are part of its broader attempt to transform from being a traditional industrial manufacturer to a maker of software-enabled devices and equipment. “This is all about our digital strategy,” Koontz says.
That digital push was a core piece of outgoing GE CEO Jeff Immelt’s agenda, and it will continue—if not pick up more steam—under his successor, John Flannery, who will take the reins in August. Flannery, like Immelt, ran GE’s healthcare business before getting tapped to lead the whole corporation.
“I think digital will be huge in” GE Healthcare, Flannery said during a company town hall discussion on June 12, the day his appointment was announced. “I’m really confident about the future of the business, the industry, [and] how it fits in GE. … I think we’re just scratching the surface of what we can be with that business.”
GE Healthcare is a leading seller of medical devices such as X-ray machines, patient monitoring equipment, and ultrasound systems. Now, the idea is to make the existing software that runs those machines more sophisticated. Part of the reason for collaborating with healthcare organizations like Partners is that GE needs doctors to help train the algorithms by confirming, for example, which lung scans show cancerous tumors and which don’t, Koontz says.
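The workflow Koontz describes is classic supervised learning: doctors' confirmations become the labels a model is trained to reproduce. As a rough illustration (everything here—the data, labels, and model—is a hypothetical toy, not GE's actual pipeline):

```python
import numpy as np

# Hypothetical stand-in for feature vectors extracted from lung scans,
# with labels supplied by doctors (1 = cancerous tumor visible, 0 = not).
rng = np.random.default_rng(0)
scans = rng.normal(size=(200, 16))               # 200 scans, 16 features each
doctor_labels = (scans[:, 0] > 0).astype(float)  # toy expert confirmations

# Train a tiny logistic-regression classifier by gradient descent.
# The doctor-confirmed labels are the training signal the model fits.
w = np.zeros(16)
for _ in range(500):
    probs = 1 / (1 + np.exp(-scans @ w))         # sigmoid probabilities
    w -= 0.1 * scans.T @ (probs - doctor_labels) / len(scans)

# How closely the trained model reproduces the experts' judgments
# (near 1.0 on this cleanly separable toy data).
accuracy = np.mean((1 / (1 + np.exp(-scans @ w)) > 0.5) == doctor_labels)
```

The point of the sketch is the division of labor: the clinicians supply judgment once, as labels, and the model then applies that judgment at scale—which is why GE needs hospital partners and not just engineers.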
“What we’re trying to do is enhance the value of our devices,” Koontz says. “The use of A.I. and those types of smart applications is just a logical next step for us.”
It’s also a matter of survival, says Rob McCray, the president and CEO of the Wireless-Life Sciences Alliance in San Diego.
“Medical products that aren’t wrapped in software, or aren’t software themselves, are just going away,” says McCray, whose trade organization works to advance the adoption of digital technologies in healthcare. “To be competitive, GE has to move in this direction.”
The fact that GE Healthcare’s machines are already being widely used by hospitals worldwide gives it an advantage over competitors in healthcare A.I. software—including IBM—that don’t sell medical devices, according to Koontz and others. “They have a real opportunity,” says John Brownstein, Boston Children’s Hospital’s chief innovation officer and a Harvard Medical School professor.
There will be other players, too, of course. McCray says GE and IBM might be successful in this emerging sector, but he argues that the biggest innovations in healthcare-related A.I. could come from West Coast tech giants. “Companies like Google and Amazon, maybe Apple, you have to think of in the first tier of disruptors who come in with a fresh look,” he says.
Amazon, for its part, hasn’t made a significant push into healthcare—yet. But outside developers have started creating healthcare apps for the company’s voice-controlled speaker devices powered by Amazon’s virtual agent, Alexa. Boston Children’s Hospital created the first Alexa healthcare “skill” last year, a reference tool for parents to gather information about common child maladies.
Alexa is already starting to make its way into hospitals and doctor’s offices, too.
“The notion that’s been in healthcare, since we had the first electronic health record, is that the best user interface for clinicians would be voice,” McCray says. “I’m comfortable that Amazon, with its depth of knowledge in language and natural language processing and A.I., is going to be a more significant player in healthcare as we know it.”
Brownstein, of Boston Children’s Hospital, thinks there’s a lot of room for a mix of companies applying A.I. to different areas of healthcare.
“There’s plenty of ground to cover,” he says. “I don’t think it’s a winner-take-all scenario.”