
Big Data Is BS in Healthcare. When Will It Become Real?

(Page 2 of 2)

typically only work for about 25 percent of the patients who get them. Asthma drugs only work for about 60 percent. Rheumatoid arthritis meds work maybe half the time. A lot of money gets wasted on treatments that don’t work for an individual patient. As any student can tell you, biology is incredibly complicated. We don’t know what causes lots of diseases, rheumatoid arthritis included. We’ve certainly never had the ability to predict, with a high degree of mathematical confidence, which drug is most likely to work for a given patient.

That’s starting to change, partly because of what GNS is doing. GNS has struck licensing deals to stitch together a database with health data from 100 million American lives. It’s chock full of info from electronic medical records, registries, and claims datasets. These datasets contain patient characteristics like age, gender, ethnicity, diagnosis, and smoking status, along with treatment history. Increasingly, data can be layered on top to include imaging scan results, genetic test results, clinical outcomes, and—here’s a really important part—financial outcomes for the patient.

Over the past year, the data crunchers at GNS have been working on creating predictive models that aim to tell whether a given drug is likely to work for an individual. Those models will differ based on whether you’re trying to gain insight into multiple sclerosis, rheumatoid arthritis, or some other condition.

It’s no accident that GNS is focusing on multiple sclerosis and rheumatoid arthritis. These are chronic conditions with many competing treatment options, which look more or less interchangeable based on their clinical trial results. The products generate billions in revenue even though they don’t work for everyone. There’s a huge amount of waste, and a lot of money to be saved by reducing it.

That kind of market is where GNS thinks predictive algorithms can thrive. GNS can crunch through the actual treatment experience of hundreds of thousands of MS patients. It’s conceivable that if this data is properly used, it could tell you, hypothetically, that 55-to-60-year-old Asian-American female non-smokers who have relapsed after first-line treatment with Biogen Idec’s interferon beta (Avonex) should go next to Biogen’s natalizumab (Tysabri) or Sanofi/Genzyme’s alemtuzumab (Lemtrada) instead of one of the new oral pills. Or maybe the software will say that Tysabri has a 65 percent chance of working for that patient, compared with a 50 percent chance for a new oral drug, figures that could be considerably different from what clinical trials suggested for that small subpopulation of patients. The options might be quite different, at least in theory, for, say, a 45-year-old Hispanic male with high levels of C-reactive protein in the blood.
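A back-of-the-envelope version of that kind of subgroup lookup is easy to sketch. Everything below is hypothetical (the records, the field names, and the resulting rates are invented for illustration), and real models like the ones GNS describes would use far richer features and proper statistical machinery, but it shows the basic shape of "what worked for patients like this one":

```python
# Purely illustrative sketch: estimating a treatment's response rate for a
# patient subgroup from historical records. All data and drug labels here
# are hypothetical, not drawn from GNS or any real dataset.

records = [
    # (age, sex, smoker, drug, responded)
    (57, "F", False, "Tysabri", True),
    (58, "F", False, "Tysabri", True),
    (55, "F", False, "Tysabri", False),
    (60, "F", False, "oral",    True),
    (56, "F", False, "oral",    False),
    (59, "F", False, "oral",    False),
]

def response_rate(records, drug, age_range, sex, smoker):
    """Share of matching patients for whom the drug worked, or None
    if no patients in the dataset match the subgroup."""
    matches = [responded
               for (age, s, sm, d, responded) in records
               if d == drug
               and age_range[0] <= age <= age_range[1]
               and s == sex and sm == smoker]
    return sum(matches) / len(matches) if matches else None

# e.g. 55-to-60-year-old female non-smokers:
print(response_rate(records, "Tysabri", (55, 60), "F", False))  # 2 of 3
print(response_rate(records, "oral",    (55, 60), "F", False))  # 1 of 3
```

With six fake records this is just counting; the hard part in practice is assembling millions of clean, linkable records and building models that generalize beyond the subgroups you happen to have observed.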

These are the kinds of questions that motivated GNS and a few other healthcare companies to start Orion Bionetworks last month. Orion got started with a $5.4 million financing from Janssen Research & Development (a unit of Johnson & Johnson) to bring together patient data from Accelerated Cure Project for Multiple Sclerosis, the Institute for Neurosciences at Brigham and Women’s Hospital, and PatientsLikeMe to build some predictive models of what works, and what doesn’t, for individual MS patients.

Within a few years, Hill says, this sort of question could be reduced to an app on a physician’s smartphone. All the heavy data and math would occur in the background, and presto, your doctor says that Teva’s glatiramer acetate (Copaxone) has the highest likelihood of success for you, based on all your patient characteristics, medical history, and genomic profile.

What’s driving this? It won’t happen just because consumers want another cool app on their iPhone. It will have to come from the payers, who are under intense pressure to curb runaway healthcare spending. They are the ones hearing employers screaming about insurance premiums spiraling out of control. They are motivated, Hill says, and they are getting much more savvy about data than they were a couple years ago.

“They are focused,” Hill says. “They’re saying to themselves, ‘I have 20,000 MS patients in my covered population, and the costs keep going up, and we don’t know what treatments are most effective. Instead of spending $25,000 a year on these patients, can we limit that to $22,000 or $20,000 a year? And can we do that while offering equivalent or better outcomes for patients?’”

That’s the ticket. Equal or better clinical outcomes (in this case, measured by multiple-sclerosis flare-ups or disability), and a better financial outcome for the patient and insurance company.

Pharma companies, no surprise, haven’t yet gotten fully on board. Many have long paid lip service to “personalized medicine,” or getting the right drug to the right patient at the right time, but the fact is they make more money under today’s system, with its heavy reliance on trial-and-error prescribing. Still, GNS has a growing and diversified list of collaborators, including Pfizer, Aetna, the Dana-Farber Cancer Institute, and the CHDI Foundation.

Pharma companies have much to lose if big data analytics were to truly come of age sometime soon, since doctors could start to curb all their wasteful prescribing habits. Then again, pharma also might be able to turn this technology to its advantage. If you’re a multiple sclerosis drugmaker and you have this kind of fine-grained, predictive data on your drug’s efficacy profile, you now have some convincing evidence to make a sale, and you can save by cutting back on efforts to over-treat certain populations. You could target your ad budget to the best demographic possible, and see those ads convert into sales. You might be able to anticipate competitive threats to your market share, and design a chemical modification to a future drug that’s more effective for a certain segment of patients. You might be able to weed out likely non-responders from your clinical trials, improving the success rate of your pipeline candidates.

This definitely got my imagination going, but part of me says this is all a lot of wishful thinking. Anytime you’re talking about health data, privacy is always the big barrier. Most doctors are luddites who don’t like to switch to IT-based solutions, even when it helps them operate more efficiently. In today’s cost-constrained environment, most of them don’t want to do anything new unless it helps them make more money. How are they going to react when many will feel like their greatest value—their insights based on experience—is being undermined by software algorithms that can make recommendations based on far more data than a single human brain can process?

My own best guess is that the technologists will work out whatever flaws there are in the datasets (the garbage in/garbage out problem), assuage the privacy concerns, and find ways to integrate really cool stuff like genome sequences. Those things shouldn’t take too long.

Consumers, once they find out there’s a better way to predict their health outcomes, will probably start demanding it, especially as they are on the hook for increasingly steep co-pays. The harder, longer-term work will be in getting pharma companies and doctors to figure out how they can still prosper in a future when the algorithms have real value, and tell us things that the human experts never could before.

Attached to my office bulletin board is a quote from Upton Sinclair: “It is difficult to get a man to understand something when his salary depends on not understanding it.” Right now, it’s not in the financial interest of physicians, hospitals, and pharmaceutical companies to understand the potential of big data in healthcare. To them, it’s bullshit. But whether it’s Colin Hill and his team, or someone else, healthcare is eventually going down this path of predictive health models based on machine learning. The algorithms looking over datasets will quickly detect things like the next big drug safety problem, long before it becomes a catastrophe. The data will tell us which treatment is most likely to work for you. When this finally goes mainstream, our healthcare system will be in much better shape, and hopefully no longer gobbling up 18 percent of our gross domestic product.



9 responses to “Big Data Is BS in Healthcare. When Will It Become Real?”

  1. Phil Simon says:

    I was a bit skeptical as well. Then I started doing research for my new book. I’m not skeptical anymore.

  2. ExMedNav says:

    On behalf of physicians, nurses & caregivers who care about patients, their outcomes and the cost effectiveness of care, I can say with surety that a good majority of caregivers will gladly embrace the USE of big data. Forcing them to be the primary source of data CREATION (e.g. costly keying it into forms) takes them away from interacting with patients, which is the concern most often expressed.

    Likewise, patients, who often aren’t included as a key stakeholder in these conversations, should also be given incentives for not only healthy behavior but also the consumption and use of big data through tools that empower their ability to look out for themselves. The analogs Luke identifies early on (weather, traffic, stocks) are great examples of big data empowering *consumer* applications.

  3. Luke, at LexisNexis Risk Solutions we are actively engaged in using the open source HPCC Systems data-intensive compute platform, along with the massive LexisNexis Public Data Social Graph, to tackle everything from fraud, waste, and abuse, drug-seeking behavior, and provider collusion to disease management and community healthcare interventions. We have invested in analytics that help map the social context of events through trusted relationships, to create a better understanding of the big picture that surrounds each healthcare event, patient, provider, business, and asset. For an interesting case study visit: http://hpccsystems.com/Why-HPCC/case-studies/health-care-fraud

  4. Liz Derr says:

    Another set of stakeholders with a financial motivation to use big health care data is personal liability insurers. Insurers and patient safety organizations have a huge motivation to prevent medical mistakes and reduce the number of malpractice lawsuits. Analysis of big data can help them identify who among their clients is at higher risk of making mistakes, and why.

  5. Some great thoughts to ponder. I totally agree with this post.

  6. Some great thoughts to ponder. I totally agree with this post.

  7. Laurie Meehan says:

    Excellent article. Nothin’ better than hope from a skeptic!

  8. Statistical analytics over large populations of structured data may be the most common mental image of “Big Data.” But whether or not that particular Volume-oriented example is BS, there is a broader spectrum of Big Data technology. Velocity and Variety are equally important dimensions, where, in particular, new NoSQL database technologies are providing fast responses over medical data whose complexity doesn’t fit well in database technologies that are a generation or two out of date.

  9. Equation says:

    Great post. I think the key is starting in smaller segments, where the data silos are still large but targeted with specific niche-based questions. It’s true that there are many variables at play, though one aspect we’re tackling is physician economics. With http://www.datariver.me, we aggregate billing data and make it instantly useful for practice managers and others within the health system.