[Updated 1/30/15 1:15 pm. See below.] President Obama’s precision medicine plan just got a bit more precise, and it has a $215 million price tag. Starting with that cash, his administration wants to build a massive national database to study long-term public health trends, gathering head-to-toe health data from a million or more volunteers over their lifetimes.
The million-person idea is the cornerstone detail of the president’s new initiative, which Obama first floated last week during his State of the Union address.
At a White House event today, Obama hosted biomedical executives, scientists, and technologists to discuss the long-term study and other details of the initiative, such as more funding for cancer genomic research. “The time is right to unleash a new wave of advances in this area, just like we did with genetics 25 years ago,” Obama said.
The administration’s push brings more attention to what both the private and public sectors have been working toward for years: more detailed health information to help drug companies make finely tuned medicines for individual patients. But it’s also meant to help doctors and patients prevent disease in the first place, or avoid costly interventions, such as diagnostic tests that steer people away from unnecessary surgery or ineffective drugs.
In a few medical areas, such as pediatric genetic diseases or certain cancers, the promise is turning into reality. But for most of the world, personalized medicine remains just a concept.
Atul Butte, a leading bioinformatics expert who also specializes in pediatric medicine, underlined the importance of building systems that lead not only to medical insights, but also to solutions that doctors and patients can put into practice. “The results of analyses will need to be translated into models, predictions, and then positive healthy actions for Americans,” said Butte, chief of Stanford University’s division of systems medicine (and soon to join the University of California, San Francisco). To do all that, he said, will require “novel ways to analyze data and communicate risks and change individual behaviors.”
The administration plans to spend a relatively meager sum to promote the initiative, at least at first, with $215 million tucked into its upcoming budget proposal for the 2016 fiscal year.
The majority, $130 million, would go to the National Institutes of Health to build the national research database. The NIH’s National Cancer Institute would get $70 million to continue work on the genetic underpinnings of cancer. The final $15 million would be split between the FDA, for new regulatory tools, and the Office of the National Coordinator for Health Information Technology, to develop better ways to share data and protect privacy across disparate systems.
It’s unclear how the database project would incorporate private sector work, from studies conducted by health providers to massive sequencing projects like the one underway at Human Longevity, a San Diego startup from J. Craig Venter.
In a briefing yesterday with reporters, Jo Handelsman, associate director for science at the White House Office of Science and Technology Policy, said the national long-term database would be more than a “biobank”—the description used in a Science magazine report that broke the story early Thursday—because it wouldn’t be “a single repository for data or samples.” Instead, it would link together several ongoing studies, “more like a distributed Internet approach.”
That’s a huge task, and an ambitious one, as NIH director Francis Collins acknowledged. “The interoperability will be a big challenge,” he said, not least the puzzle of “how to glue electronic medical records together.”
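To picture the “distributed Internet approach” Handelsman described, here is a minimal sketch, with all names and data invented for illustration, of how a coordinating service could send the same question to each participating study and combine only the aggregate answers, rather than pooling every record in one central biobank:

```python
# Hypothetical sketch: each study keeps its records behind its own walls
# (modeled here as plain lists of dicts); only counts leave the study.
from typing import Callable

STUDY_A = [{"age": 61, "smoker": True}, {"age": 45, "smoker": False}]
STUDY_B = [{"age": 70, "smoker": True}, {"age": 52, "smoker": True}]

def local_count(records: list[dict], predicate: Callable[[dict], bool]) -> int:
    """Run a query inside a single study; only an aggregate count is returned."""
    return sum(1 for r in records if predicate(r))

def federated_count(studies: list[list[dict]],
                    predicate: Callable[[dict], bool]) -> int:
    """Combine per-study counts without ever pooling individual records."""
    return sum(local_count(s, predicate) for s in studies)

# How many participants across the linked studies are smokers over 50?
print(federated_count([STUDY_A, STUDY_B],
                      lambda r: r["smoker"] and r["age"] > 50))  # -> 3
```

The design choice is the point: the coordinating layer never sees a patient record, only answers, which is one way a linked network of studies can differ from “a single repository for data or samples.”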
And it will all have to be done with security and privacy as a priority. Obama told the crowd gathered Friday that privacy experts will be part of the design process “from the ground up, making sure we harness these new technologies in a more responsible way.”
At every turn, making all the data line up and the systems talk to each other will be tough. “Different studies often capture different dimensions of phenotype”—physical, measurable traits—“even items with the same or similar name,” said David Shaywitz, chief medical officer of DNANexus, a Mountain View, CA-based company that provides cloud-based genomic data sharing and analysis tools.
As an example, one study might measure patients’ average levels of the bad kind of cholesterol (low-density lipoprotein, or LDL), another study might measure the highest levels, another might use another parameter, “and it might all be [categorized] under ‘LDL,’” Shaywitz wrote via email while flying to the White House event.
“It’s also true for genetic data,” wrote Shaywitz. “The way data are analyzed in one study might not be the same way they’re analyzed in another.”
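Shaywitz’s LDL example is essentially a field-harmonization problem. Below is a minimal sketch, with every field name and value invented, of one common approach: mapping each study’s local field to an explicit shared vocabulary of measure, statistic, and unit before values are ever compared.

```python
# Hypothetical sketch: two studies both report "LDL," but study A records
# an average level while study B records a peak level. A per-study mapping
# table makes that difference explicit instead of hiding it under one name.

RAW = [
    {"study": "A", "ldl": 128},      # study A: average LDL, mg/dL
    {"study": "B", "LDL_max": 161},  # study B: highest observed LDL, mg/dL
]

# Map each study's local field name to shared (measure, statistic, unit) terms.
FIELD_MAP = {
    "A": {"ldl": ("ldl_cholesterol", "mean", "mg/dL")},
    "B": {"LDL_max": ("ldl_cholesterol", "max", "mg/dL")},
}

def harmonize(record: dict) -> list[dict]:
    """Rewrite one study's record into the shared vocabulary."""
    study = record["study"]
    out = []
    for field, value in record.items():
        if field == "study":
            continue
        measure, statistic, unit = FIELD_MAP[study][field]
        out.append({"study": study, "measure": measure,
                    "statistic": statistic, "unit": unit, "value": value})
    return out

for rec in RAW:
    print(harmonize(rec))
```

After harmonization, a downstream analysis can deliberately compare means to means, rather than unknowingly mixing averages with peaks, which is exactly the trap Shaywitz warns about.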
Making longitudinal studies talk to each other is a huge challenge, but the studies themselves are nothing new. The most famous in the U.S. has been taking place since 1948 in the city of Framingham, MA. There are plenty of others, including the Multiple Myeloma Research Foundation’s CoMMpass study, Duke University’s study of longevity in China, and as detailed here, the Seattle-based Institute for Systems Biology’s 100K Wellness Project.
But the type and sheer quantity of data available are unprecedented, with genetic information relatively cheap and easy to obtain, and people warming to the idea of sharing personal data online and through wearable devices like the Fitbit. Gathering every last health detail about a person is becoming more practical every day.
Administration officials said they would lean on existing studies, but also look to supplement them with their own recruiting efforts, asking new volunteers to step forward.
They also said that, whether data were gathered from existing studies or from new recruits, privacy and personal engagement were paramount. Both present thorny questions about technology and policy, which Collins summed up in one example: How do you keep track of participants over the long haul who have the markers for a certain disease, but never get sick? The government would need “broad consent” to recontact them, Collins said. “It would be much more powerful going forward to have that permission.”
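Collins’s example can be made concrete with a tiny sketch. All names and records below are invented; the point is simply that a “broad consent” flag recorded up front makes it straightforward, years later, to identify marker carriers who stayed healthy and may legally be recontacted.

```python
# Hypothetical sketch: filtering for participants who carry a disease
# marker, never got sick, and agreed up front to future recontact.

participants = [
    {"id": 1, "has_marker": True,  "developed_disease": False, "broad_consent": True},
    {"id": 2, "has_marker": True,  "developed_disease": False, "broad_consent": False},
    {"id": 3, "has_marker": False, "developed_disease": False, "broad_consent": True},
]

def recontact_candidates(people: list[dict]) -> list[int]:
    """Marker carriers who stayed healthy AND gave broad consent to recontact."""
    return [p["id"] for p in people
            if p["has_marker"]
            and not p["developed_disease"]
            and p["broad_consent"]]

print(recontact_candidates(participants))  # -> [1]
```

Participant 2 is scientifically just as interesting as participant 1, but without the consent flag the study cannot go back to them, which is why Collins argues it is “much more powerful going forward to have that permission.”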
How the idea is received by Republicans who control the House and Senate remains to be seen. On one hand, the proposed financial outlay shouldn’t raise the alarm of budget hawks. On the other hand, the last time the Obama administration tried to merge huge databases of health information into a connected system for the country’s benefit, the rollout didn’t go so well; Republicans have used Obamacare as a cudgel against their political rivals at every turn.
But ‘big data’ health specialists counter that government medical institutes like the NIH have had practice and success with data-sharing initiatives. Stanford’s Butte noted that the government has run the Framingham study, “starting well before computers were easily usable.”
Meanwhile, Fred Upton (R-MI), the chair of the House Energy and Commerce committee, is working on a bipartisan initiative called 21st Century Cures with Diana DeGette (D-CO). Upton and DeGette have convened hearings, discussions, and roundtables in Washington and around the country, but there’s still little indication what the upcoming legislation will entail. Handelsman said the White House was looking forward to working with Congress on it.
When asked who would head up the precision medicine initiative, Handelsman laughed off Collins’s joking suggestion that she could be the czar. Precision medicine is a priority for the President, Handelsman said, but the White House’s role would be to “coordinate and connect agencies.”
Collins said one next step is to convene a “blue ribbon panel” to help put the million-person database idea in place. Another is to use $10 million to help the FDA install more flexible regulations for sequencing and diagnostics to speed the approval of targeted therapies.