Understanding Open Science
The dream of getting to cures, faster, is one shared by many. But every new instrument we’ve invented to examine the human body has shown us only more complexity and interconnection. The failure of all of our new technologies to translate rapidly into drugs has demonstrated that we’re the folks in the parking lot looking under the lamppost for our keys, not because that’s where they might be, but because that’s where the light is.
In tandem with the explosion of technologies to query the body (genome, protein, metabolite, and more), we’ve seen an explosion in organizational models for pharmaceutical companies. Reorganizations around therapeutic area, or around development pipeline steps, or around technological elements, have been tried. None have broken the complexity of the body, and thus none have truly shortened the time and cost of getting a drug to market. It’s still 17 years and a billion dollars, give or take.
So what’s left?
It may seem strange, but what remains untried amid these technological and organizational revolutions is the methodological revolution that has taken over software and, increasingly, cultural works: collaborations built on standard, shared pre-competitive systems, low transaction costs, openly licensed intellectual property, and massively increased sample sizes of participants. The shorthand for this methodology is “open source.”
The life sciences would seem, on the surface, ideal for open source. It’s a world built on disclosure – whether by publication or patent, it doesn’t count until you tell the world. It’s a world where the knowledge itself snaps together in a fashion that looks eerily like a wiki, where one person makes only a small set of edits in an experiment that establishes a new fact. And it’s a world where the penalty for redundancy is high – no one in their right mind wants to spend scarce research dollars on a problem that has been solved already, a lead that is a dead end, a target guaranteed to lead to side effects.
Indeed it’s precisely this apparent fit between life sciences and open source that has inspired countless projects from the non-profit sector. But those projects, for the most part, run into the teeth of a cruel reality: the business of the life sciences, including the academic business, is set up in such a way as to reward local information hoarding even when it would benefit society, or even increase profits, in a global context.
Fragments of information are published as papers, not integrated into knowledge models that can be applied to data – and aggressive publishers use copyright to prevent that integration from happening post hoc. Data that can’t be reimbursed by a payor is kept behind pharma firewalls, even though it could be reused to build commonly held maps of absorption, distribution, metabolism, and excretion – the maps that might let everyone more easily navigate toxicity mechanisms and get better drugs to market. And money is regularly thrown after projects that have failed, silently, in labs around the world, but whose failures were never disclosed.
An “open” approach, strategically applied, can fix …