A recent meeting focused on “Reinventing Biotech’s Business Model” provided an interesting window into the spectrum of approaches being used to create new biotech companies. Unfortunately, it did little to relieve the concerns I recently voiced regarding the expanding number of “virtual” biotechs. This model has become a popular archetype for creating new biotech companies because such firms require much smaller initial capital investments than traditional biotechs. The basic idea is to build a company around a small managerial group that then farms out most of the basic operations to contract organizations. These outsourced functions usually include research and development, clinical trials, intellectual property, regulatory affairs, finance, and even human resources.
These disposable companies are not designed to reach adulthood. They are raised for the sole purpose of being gobbled up while still young fry by the larger, wealthier fish in the pond. While some of these companies have been acquired for attractive valuations, I remain concerned about their potential to create useful medicines. Several other attributes of “virtual” biotechs contribute to my anxieties beyond the distortions these companies may introduce into the biotech ecosystem. The first has to do with the quality of the science on which at least some of these companies are likely built, and the second concerns the fact that “virtual” companies are unlikely to contribute much toward the development of robust biotech clusters.
Most biotech companies, no matter their business model, need to start off with some sort of molecule that they plan on turning into a drug or treatment that will attract funding from investors. “Virtual” companies, however, are not equipped to do independent research, as they have no dedicated lab space. So where do their drug development projects come from? They generally originate from one of the three places that are also plumbed for drug leads by more conventional biotechs:
1) New ideas provided by entrepreneurs directly or indirectly associated with the new company.
2) Castoffs from Big Pharma or biotech companies that did not develop them for a variety of reasons (e.g. change in clinical focus; drug performance wasn’t stellar in early trials; prioritization left no money for development; projects left over after purchase of more valued corporate assets).
3) Published data on potential drug targets, which mostly arise from academic research papers.
It is this last category that is the primary source of my trepidation. One might expect that research chosen as the basis for forming a new company would be described in groundbreaking, high-profile papers. After all, practically all science builds upon previous discoveries, which is why many researchers spend so much time reading the literature and attending conferences. This concept was famously summarized by Isaac Newton in a letter he wrote in 1676 to his rival Robert Hooke: “If I have seen a little further it is by standing on the shoulders of Giants.” Suppose, however, that instead of standing on the shoulders of Giants, Newton had found himself kneeling on the feet of Dwarfs? What would he have seen, and what would he have come up with, from that position? Would he have “seen a little further” if his studies in mathematics, optics, celestial mechanics, and gravitation were based on the work of contemporaries and predecessors who had produced mostly bad science?
The process through which many scientific advances are made came to mind after reading an alarming paper recently published in Nature by C. Glenn Begley and Lee M. Ellis. The authors reported that significant efforts were made to reproduce the results found in 53 “landmark” cancer research papers as part of a corporate R&D program at Amgen. The goal was to confirm the findings, which were then expected to serve as platforms for the development of new drugs. The net results of this large-scale effort were dispiriting: they were only able to substantiate the data in a paltry 11 percent (6) of the papers. The key findings in the other 89 percent of the articles could not be reproduced. Scientists at Bayer HealthCare in Germany obtained similar results in an earlier study. Both of these analyses back up the work of John Ioannidis, who has written extensively about “Why Most Published Research Findings Are False”.
The authors are not claiming that all of the non-reproducible papers were truly wrong or fraudulent. There can often be minor details in the way experiments are performed that hinder their replication by others. These minutiae can be as simple as using a different brand of plastic vial or buying an enzyme from one source instead of another. However, the primary implication of these studies is that a significant percentage of the work was indeed wrong, and therefore should not be relied upon as the basis for drug discovery efforts.
This lack of data reproducibility in academic research is apparently well known in the VC community. According to Bruce Booth of Atlas Venture, “the unspoken rule is that at least 50% of the studies published even in top tier academic journals…. can’t be repeated with the same conclusion by an industrial lab”. As a result, many VC firms don’t invest in early stage research, and those that do generally require additional validation work before investing. The problem of building a company around published data affects both lab-based and “virtual” biotech companies. Amgen and Bayer clearly had both the financial resources and the laboratory facilities to undertake this validation work. “Virtual” drug development companies, however, don’t have the ability to determine directly if the basic research underpinning their own efforts might be tainted and unreliable. Without this internal capability, they may embark on a program based on faulty science that will be doomed to failure from the outset. The smarter companies would certainly make an effort to engage a contract research organization (CRO) to verify the data. But where would these early stage companies get the funds to pay someone else to perform this critical validation work? Would their investors view the money spent replicating published results as a worthwhile use of limited financial resources at the expense of just pushing ahead with the program?
My consulting experience suggests caution is required. I was once hired to do a scientific review by a (non-virtual) biotech company that was based upon a single publication written by the company’s CEO. It made for a very uncomfortable meeting when I had to explain to him that he had completely misinterpreted his data and that his company was built not on bedrock, but on unstable landfill. I was also engaged to review the data at a “virtual” biotech company that hired a second CRO to confirm some academic findings that could not be replicated by an initial CRO. Data obtained from the second CRO was also very weak and not supportive of the scientific hypothesis. I recommended that both of these companies abandon their respective projects and redirect their financial resources, a suggestion they each ignored to their eventual detriment.
This lack of reproducibility of published scientific data ties in to another disheartening finding: the rate of retraction of scientific articles has jumped alarmingly in the past few years. There are a number of reasons why a science paper might be retracted: the scientists realized an error in their interpretation post-publication, they were unable to reproduce the findings themselves, or some or all of the work was fraudulent. While errors are more common than fraud, retractions based on fraudulent data alone rose seven-fold between 2004 and 2009. The science described in a retracted paper would clearly be a poor choice on which to base a new biotech company. A “virtual” company founded on misinterpreted or fraudulent data, and with no direct means to validate it, may be staring into the abyss in short order.
One other concern I have with the “virtual” biotech model: it does virtually nothing to help establish or expand a vibrant biotech cluster. A cluster is a localized assemblage of companies whose size and proximity to each other help facilitate the success of the entire group. If one company in a healthy cluster downsizes, the other organizations are likely to hire at least some of its displaced employees. This helps retain a local talent pool and can greatly facilitate hiring in the area. Clusters often revolve around anchor companies, large organizations that serve as “job sinks” in an area. Successful anchor companies tend to grow and spin out other companies, and the whole cluster grows like a snowball rolling downhill. Here in Seattle, Microsoft and Amazon serve as large anchors for software, and Boeing leads the aerospace cluster. Our biotech cluster, however, has diminished in size over the past decade due to the acquisition and downsizing of the largest biotechs and numerous layoffs among the smaller companies.
Leading biotech clusters in the U.S. continue to attract new companies and programs like magnets attract iron filings. A number of Big Pharma companies (e.g. Pfizer, Merck KGaA, Novartis, and AstraZeneca) recently moved jobs to the Boston area, the largest U.S. biotech hub, as they restructured and downsized their R&D programs. Merck has just committed up to $90M to create a new translational research facility, the California Institute for Biomedical Research (Calibr), in the San Diego hub. Companies like to be in vibrant clusters interspersed with strong academic institutions. “Virtual” companies, being small and ephemeral, do not make for a strong cluster, although the CROs that they employ may certainly do so.
I did hear about one new biotech model at the meeting that I thought had some promise. Inception Sciences of San Diego is based on the concept of establishing a core drug discovery platform as a holding company. Individual projects may be sold or spun off as independent companies, but the scientific staff remains engaged with the remaining projects. This model would allow for the direct testing and validation of internal projects because they actually have a dedicated lab group. Research focused on multiple projects means that if one project is based on faulty science, it won’t necessarily kill the company. While the expectation is for the individual projects to succeed and leave, the company itself is meant to survive, thus contributing to the strength of the local cluster. This approach therefore has several potential advantages compared to the “virtual” biotech model. However, for those of you who continue to see the merits in “virtual” biotechs, please remember the Russian proverb favored by both Vladimir Lenin and Ronald Reagan: “Trust, but verify.”