What’s even harder to endure than the collapse of an economic or technology bubble? The long lull that follows.
The current bubble hasn’t popped quite yet—and it might not, at least not in the sudden and messy way that the dot-com bubble did, in March 2000. What’s just as possible is that we’ll see a slow leak. I’m talking about a gradual downgrading of expectations about the technology economy—reflected in concrete measures like falling company valuations, lower rates of venture investment and new company formation, and a slowdown in IPO and M&A activity—but without a dramatic crash on the NASDAQ or the NYSE.
In any case, I expect the mobile/cloud/Internet bubble to end in the next year or so, whether it happens with a bang or a whimper. There are just too many signs of irrational excess to ignore; something has to give. What I want to discuss today is what comes next. If you think of yourself as a technology entrepreneur, an innovator, or an early adopter, you won’t be happy with my predictions.
I think we’re in for an extended pause in technology progress: a period with no major changes in the way computing and communications are structured.
We’ve seen such pauses in the past. They’re the downhill sides of longer cycles of roughly 15 years in the development of computing. The current cycle began in 2007, so going by my theory, we won’t see the next set of game-changing innovations until the early 2020s—well into the second term of the Hillary Clinton or Jeb Bush administration.
In a place like Silicon Valley, where boom times bring dizzying change and the mouthwatering prospect of instant wealth, the idea of roughly seven years of stasis might sound terrifying. But it’s not a death sentence; it’s just a breather. For average consumers, it might even be a good thing. It would give us time to reshape our habits, laws, and institutions to adapt to the enormous changes we’ve already seen in this century. And it’s worth noting that I’m mainly talking about information technology here. I don’t think there’s any reason to expect—and we certainly can’t afford—a slowdown in areas like clean energy, transportation, food production, and life sciences.
Why do I think we’re at the beginning of a seven-year pause? Because of history. Even if chipmakers keep finding new ways to lower the cost of computing power, the way this power gets organized, channeled, and used by consumers doesn’t change smoothly. It advances in violent jolts. The advent of the Mosaic Web browser in 1993 was one such jolt, as was the introduction of the iPhone in 2007. Since then, I don’t think we’ve seen a single game-changing new innovation, nor do I see one on the near horizon.
Wearables? No, that’s an area marked by tentative and incremental advances. Virtual reality? It’s still many years away from the mainstream, whatever Facebook’s reasons for spending $2 billion on Oculus. Virtual assistants? Ditto—Siri, Google Now, and Cortana are interesting, but the AI advances needed to make these systems truly useful will take a while longer. (But this is actually my favorite candidate for the next big jolt; more on that below.)
My perspective comes from looking at patterns of acceleration and deceleration in technology development over the last 75 years, and from thinking about periodicity in other fields. One of my favorite professors in college was the evolutionary biologist Stephen Jay Gould. In the early 1970s, Gould and his colleague Niles Eldredge proposed the idea of “punctuated equilibrium” to explain big gaps or jumps in the fossil record that, under a more traditional, gradualist view of Darwinian evolution, shouldn’t have been there.
The story Gould and Eldredge came up with—and it’s accepted today by most paleontologists—is that evolution proceeds in fits and starts. There are rare, sudden moments when thousands of new species can emerge. In between, there are long periods of stasis when there’s still genetic variation from generation to generation, but species mainly wobble around a “phenotypic mean.” The changes don’t accumulate; a finch stays recognizable as a finch.
Information technology doesn’t evolve in the same sense that species do. The forces and time scales involved are completely different. But if you follow the history of computing hardware, you still see patterns that look a lot like punctuated equilibrium. The periods of major “speciation” and branching are brief and relatively rare, and they seem to come every 15 to 20 years or so. Here are a few of computing’s big breakthrough moments:
1946: The construction of the first electronic general-purpose computers like ENIAC, powered by vacuum tubes.
1964-65: The advent of mainframes like the IBM 360 and minicomputers like the DEC PDP-8, powered by transistors and integrated circuits.
1981-84: The PC revolution, exemplified by the IBM PC, the Commodore 64, and the Macintosh. (This was a long-lasting change: our laptops today are just portable PCs.)
1993-96: The first stirrings of handheld computing, with the Apple Newton and the Palm Pilot. These devices were commercially inconsequential, but they laid the groundwork for…
2007-08: The smartphone revolution, led by the Apple iPhone, quickly followed by the first Android phones.
That’s just on the hardware side, of course. In networking and communications, there’s been a parallel set of advances, like the creation of the Arpanet in 1969 and the full commercialization of the Internet in 1995. But the key events in networking were the invention of the World Wide Web in 1989 and the development of the Mosaic browser in 1993. Mosaic popularized the Web and led quickly to the founding of Netscape and the whole dot-com and social media phenomenon.
Note just how much time went by between Mosaic and the iPhone, the two signature events of the modern computing era: 14 years. The first seven years, from 1993 to 2000, saw the rise of e-commerce, a huge wave of venture capital investment in free-spending Internet companies, enormous hype, and a traumatic crash, with …