Hardware vs. Software: The Defining Technology Battle of This Decade


History repeats itself, it seems, because the defining technology battle of this decade is going to come straight from the ’80s: it’s hardware versus software. Every decade brings substantial advancements to both software and hardware, but in certain decades the strategic importance of one versus the other shifts dramatically in many segments. I’m using the term hardware loosely to include software wrapped in metal, which is still what companies such as Cisco and EMC live off of. Here is an extremely brief recent history of computing:

  • 1950s: the decade of mainframes (go IBM!)
  • 1960s: the decade of minis (go DEC!)
  • 1970s: the decade of change (DEC ships the VAX, Intel ships microprocessors)
  • 1980s: the decade of the PC with the clone wars and the commoditization of hardware, assisted by a then little-known company run by a Harvard dropout by the name of Bill Gates
  • 1990s: the decade of telecom/network hardware (Cisco goes public in 1990) and Internet software
  • 2000s: the decade of storage appliances and smartphones on the hardware side and large-scale Internet software

The period between 1950 and 1980, and the business models of the dominant players, were about hardware. In the ’80s, for the first time, software stood on its own and started taking a significant portion of spending at the expense of hardware. In the ’90s there was more of everything: servers, routers, storage, and during Bubble 1.0 large enterprises wanted at least one of every type of Web-related software. During the millennium decade, hardware made big advances through smartphones and in the fast-growing storage business, while companies were able to spend less on software thanks to broader adoption of open-source technology. This decade will be defined by a reversal of this trend, one that will mimic the ’80s in terms of hardware commoditization.

Most hardware doesn’t matter because some hardware matters a lot. Apple owns the top of the PC pyramid through its brilliance in hardware design and through the software leverage of OS X and iTunes. This forces all other PC manufacturers into a deadly, low-margin competition in the low and mid tiers.

Netbooks accelerate the race to the bottom. In a short period of time, netbooks have become a big part of portable shipments. Pushed by subsidies from mobile operators wishing to lock users into multi-year plans, netbooks will become “smartphones with larger screens.” Netbooks are great for browser-based applications, which makes the netbook OS and hardware even less important. That’s good, because there isn’t much margin in a $300 netbook.

Virtual appliances replace physical appliances. For many years, appliance vendors have extracted additional margin by slapping their logo on a commodity appliance. CIOs want none of this. Virtualization and advancements in distributed systems make it possible to run all kinds of enterprise applications and infrastructure services such as storage, networking, and security on commodity hardware. Commodity again means lower margins for hardware manufacturers, including companies such as EMC and Cisco, who have reacted by shifting their focus to service businesses and pure software packaging.

Cloud computing makes hardware less relevant. This decade will be defined by a migration to cloud-based computing for everyone from consumers to the largest of enterprises. On the enterprise side, the move is driven by the desire to lower costs and add flexibility. On the consumer side, it’s driven by the need to manage data and applications across several devices (laptop, netbook, e-reader, mobile phone, etc.). Cloud-based architectures buck a multi-decade trend and emphasize service level agreements (SLAs) that come from software as opposed to hardware. Instead of powerful, expensive servers, high performance and availability come through horizontal scaling of unreliable, cheap servers combined with new distributed software architectures. On top of this, the very large cloud vendors will operate vast server farms which, increasingly, as Google does today, will deploy commodity custom servers. Even less margin for the major hardware players.
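The arithmetic behind “availability from software, not hardware” is worth making explicit. Here is an illustrative back-of-the-envelope sketch (the numbers are my own, not from the article) showing why a few cheap, unreliable servers behind a load balancer can beat one expensive, reliable one, assuming failures are independent:

```python
# Back-of-the-envelope availability math for horizontal scaling.
# If any one of n independent replicas can serve a request, the
# service is down only when ALL replicas are down simultaneously.

def service_availability(per_server: float, replicas: int) -> float:
    """Probability that at least one of `replicas` servers is up,
    assuming independent failures (a simplifying assumption)."""
    return 1.0 - (1.0 - per_server) ** replicas

# One "unreliable, cheap" server that is up 99% of the time...
single = service_availability(0.99, 1)

# ...versus three of them behind a load balancer.
tripled = service_availability(0.99, 3)

print(f"1 server:  {single:.6f}")   # 0.990000
print(f"3 servers: {tripled:.6f}")  # 0.999999
```

Three commodity boxes at 99% each yield roughly six nines of combined availability, which is why the SLA can live in the distributed software layer rather than in gold-plated hardware. Real systems fall short of this ideal because failures correlate (shared racks, power, software bugs), but the direction of the argument holds.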

Software begins to dominate as the driver for mobile device purchases. Historically, back in the now distant days when mobile phones were primarily used for calling, consumers chose the phone first and then went along with whatever software came with the device. RIM was the first to change this with the BlackBerry. Then came Apple’s iPhone. In both cases, these were systems: hardware and software came together, supported by additional desktop and server software, namely the BlackBerry desktop client and BlackBerry Enterprise Server, and iTunes and the iTunes Store. What is more important: the design of the BlackBerry device, or the fact that it’s the best mobile e-mail machine on the planet? Would you have bought the iPhone if it ran Motorola’s clamshell software? Google’s Android mobile OS pushes the divide further. I’m not suggesting that there won’t be really successful mobile hardware innovations. There will be a continuous stream of delightful innovation in devices. I’m simply arguing that, in this decade, the relative importance of mobile software and the third-party ecosystem of software products and services will dominate.

Tablets are the obvious dark horse on the hardware side. If the human I/O problem is solved, we could see a radical shift in form factor that should exceed that of netbooks. I guess we won’t know for a few years. Even if Apple “does an iPhone” with its iSlate, it will be a long time before their volume meaningfully affects the landscape. Steve doesn’t like to sell things cheap.

There is one less obvious dark horse that hasn’t been named yet, so let’s call it Rackware. The “commodity custom servers” in data centers I mentioned above will look quite different from the typical servers that go on racks today. Google’s already do. In fact, they may combine CPU, memory, storage, and I/O in very different and more variable ways than current servers for heat density, workload optimization, and I/O virtualization reasons. Also, they may come not in server units but in rack units or other types of units (such as Google’s containers) that make deployment and management, including power and cooling, much cheaper and easier. It is foreseeable that a company could create sufficient new intellectual property in this area, both on the system side and the supply chain management side, to command premium margins for a period of time. Dell is a good example of this: one of the key differentiators it had in the early days was a supply chain patent that covered the just-in-time manufacturing of customized PCs.

The large scale value shift from hardware to software will have significant ramifications for innovation, venture capital, and investing. It will be an exciting decade.

Simeon (Sim) Simeonov is a serial entrepreneur and investor. Currently, he is founder and CTO of Shopximity, where he works to make shopping better for everyone. You can read his blog at http://blog.simeonov.com and follow him on Twitter at @simeons.


13 responses to “Hardware vs. Software: The Defining Technology Battle of This Decade”

  1. When IBM entered the market in the early ’80s with its open-architecture 8088 box, an application software explosion occurred across business, education, and entertainment. This same creative application cycle is taking place on mobile devices…and I expect we will see the same sort of consolidation we saw during the PC boom. After all, how many useful applications are there in the 100K available at the iTunes Store? If Apple’s tablet is as useful as people expect it to be, we will see all those iPhone apps migrated over, and I would expect the software survivors (from the ’80s/’90s) to migrate their well-known applications to the platform as well. I think we are entering the golden age of mobile application software, and it will be interesting to see what the vendor landscape looks like five years from now.

  2. Mark, that’s very true. Both devices and networks are opening up and I expect Android to force Apple to relax some of its control over the iPhone developer ecosystem.

  3. This is an awfully high-level look at the IT landscape as a whole, but I am not sure what the message is, Sim, or if there is even intended to be one (maybe the article was more of an observation piece). If that is the case, then may I add our two cents when it comes to the h/w vs. s/w battle in the storage industry.

    Intelligent software will redefine the current way of doing business in IT dramatically over the next 2-3 years (the transition will happen quickly once the paradigm shift takes hold). Today’s data center is built from stacks and stacks of hardware (disk proliferation = massive investments in BU infrastructure), a process that has been 15 years in the making. It is this way because Big Iron isn’t doing anything dramatic to curb the current thinking (which is to say, “throw more disk at it”).

    How long has it been since EMC promised to be more software-centric? Yet their software management platform remains the “brown-headed step child” (I am a red head) of the company. At the end of the day, the reality is this: too much hardware has caused billions of dollars in mismanaged data. In the next decade, the global IT department needs to think radically about its approach to data management and focus on software solutions that will help reduce mismanagement, minimize storage obsolescence, and increase accessibility.

    This approach will undoubtedly reduce the amount of hardware purchased, impacting revenue growth and forcing Big Iron into build vs. buy decisions that will eventually change the landscape of the storage industry forever.

    One thing your article left out was the consolidation each generation went through that ultimately determined s/w or h/w’s dominance in the next decade.

    This is where it will get interesting in 2010, 2011 and 2012!


    Bobby Moulton

  4. Bobby, thanks for your comment. It’s been a long time.

    The point of the piece is simple: the turbulence will create a ton of opportunities. This type of shift is part of my core thesis in looking for companies to work with through FastIgnite.

    Thanks for adding the deeper commentary on storage.

  5. Paola says:

    Hardware will win this decade is my guess.

  6. John says:

    I’ve seen disk consolidation into storage devices to maximize the use of disk space, server consolidation into virtual machines to maximize server resources and reduce footprint, power, and cooling.

    While some predict the decline of the physical server, what I see coming is increased demand for virtual servers pushing the demand for more reliable physical servers. Virtual servers can take less than an hour to provision versus 3-4 weeks for a physical server. This lends itself to virtual server sprawl.

    Managing this virtual server environment (also considered to be part of the ‘cloud’) is an area of opportunity for investigation.

    When 15-30 virtual servers are running on a single piece of hardware, even if it’s in a server farm, reliability and availability become increasingly important.

    And as critical applications start to move into the virtual world the requirement for highly available servers will increase as well. High availability is a term that gets thrown around much too lightly, but that’s another story.

    Full disclosure: I work for Stratus Technologies, http://www.startus.com. We produce the world’s most reliable servers.

    Regarding ‘the cloud’, it’s made up of internal and external clouds; private and public clouds; and hybrid clouds. There’s SaaS, PaaS, IaaS, and outsourcing; I believe a company’s strategic applications will remain on internal clouds, while commodity applications are considered for external clouds.

  7. John, VM sprawl is a big issue. It affects storage utilization and server load in a “fake” way: utilization goes up, but not necessarily for good reasons.

    The need for better VM sprawl management tools is clear. A good company to look at in this space is vKernel.

  8. Agreed, Simeon and John. VM sprawl is an issue that will dominate technology trends in the next decade as server virtualization continues to proliferate and environments grow.

    The shift from hardware to software will be exciting and presents cost savings for companies, providing they provision and plan correctly.

    Removing sprawl manually will only provide temporary relief; it will come right back if you don’t deal with the root cause, which is a lack of automation and accessible information in server virtualization environments.

    Dealing with the root cause is the only effective way of maximizing the ROI associated with server virtualization.

    Recent research on the topic can be found on our website: http://www.embotics.com/knowledge-center/white-papers

  9. Mike Werner says:

    Sim, interesting article, although I’m not sure about the prediction that this is an either/or scenario…. I think we are much beyond that. Software is more integrated into people’s business and personal lives than at any other point in time. Hardware advances today (in the many forms you point out) are as rapid now as at any other point in history.

    I believe that we’ll see a continued harmony between software and the multitude of really great hardware platforms in the coming decade, not a decade dominated by one or the other. We have reached a time where software and hardware advances allow for very rich integration – the cloud being an enabling factor for building persistent data and user experiences – people will have interactions with applications across three screens (pc/mobile device/TV) and the cloud as we continue to build smart, managed apps.

    BTW, the point about Apple ‘owning the top of the laptop pyramid’ really reflects the fact that they have 90% market share in the 7% of high-end ($1,000+) laptops… which overall will be a declining segment. No arguments about their great design…

  10. Yes, there is some healthy tension between electrical engineers and computer scientists.