Real Men Don’t Need Fabs: Part 2 of Our Interview with Marvell CEO Sehat Sutardja

Yesterday we published the first part of an in-depth Q&A with Sehat Sutardja, the notoriously hands-on CEO of Marvell Technology Group. The Santa Clara, CA, company (NASDAQ: MRVL) has spent the last 16 years building power-efficient “mixed-signal” analog and digital chips for devices such as disk drives, cell phones, and portable media players. The first section of the conversation covered themes like Sutardja’s management style, his plans for growing Marvell, and the company’s commitment to supporting the One Laptop Per Child project.

But I also had time during my hour-long meeting with Sutardja back in December to ask about his background. We talked about how he started tinkering with electronics as a child in Indonesia, how he made his way to the United States for a formal education in electrical engineering, why he decided to launch Marvell as a “fabless” chipmaker at a time when that was quite unusual, and why the company embraced the low-power chip architecture known as ARM. Sutardja doesn’t give many interviews, so I didn’t want to let this material go to waste. At the same time, it’s a lot of inside stuff, of interest mainly to shareholders or analysts who follow Marvell closely. So I’ve bundled it up below—think of this as a supplement to yesterday’s main Q&A.

Xconomy: What were the early years at Marvell like? How do you get a new semiconductor company off the ground?

Sehat Sutardja: In order to get a better picture of the business, we have to go back even further in time. I started studying electronics when I was 12 years old, playing with radios, amplifiers, car electronics, ignitions, power supplies, CB radios, almost anything that I could get my hands on in Indonesia. I was really curious, and really hooked on electronics. I never thought that electronics could be a job or a business or something to make money on. My parents were pretty worried. I got my radio repair technician’s certificate when I was 13 years old, and they were thinking, “Oh my goodness, this kid wants to be a technician, a bum. He doesn’t want to become a doctor.” The successful neighbors were all doctors. In Indonesia, [being a radio repair technician] guarantees a miserable and poor life. But I never cared about that, because I was really intrigued by how and why these things worked, starting with transistors and moving on to more complicated circuits.

Six years later, when I graduated from high school, the path was already set in stone. I said, “I need to continue studying electronics. I need to go to the United States.” This was the place to go. This was where those chips were built—by Texas Instruments, by Fairchild, RCA, National Semiconductor, Signetics, Raytheon. I needed to go to college to really learn this. So I got my degree and then worked, and later got my PhD [at Berkeley], finished school, and went into industry.

I realized as soon as I got to industry that the industry had not matured yet. There were still a lot of things that could be done. When we decided to start a company, we realized that process technology was reaching small enough geometries that we could integrate more functions into a single chip. If we had started 10 years earlier, it wouldn’t have been possible. For complicated functions, you would have needed five to 10 chips on a circuit board.

On top of this, the [chip] foundries were starting to mature. TSMC, the largest chip foundry in the world, was starting to get into 8-inch manufacturing [a reference to the 8-inch diameter of the wafers then used in semiconductor lithography], when the standard had been 6 inches. This was at a time when AMD’s CEO, Jerry Sanders, used to say “Real men have fabs. These fabless guys are nobodies, just boys.” But we thought the time was right, and that we didn’t have to build our own fabs to have access to advanced process technology. All we needed was ideas and knowledge and hard work. We knew what worked and what did not work; we knew what had failed. We focused on developing a chip that required complex mixed-signal design—analog circuits, digital logic, and sophisticated digital signal processing on a single chip. It happened to be a disk drive controller.
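For context on the wafer-size shift (my arithmetic, not Sutardja’s): usable wafer area grows with the square of the diameter, so the move from 6-inch to 8-inch wafers gave foundries like TSMC nearly 80 percent more area, and correspondingly more chips, per wafer processed:

$$\left(\frac{8\ \text{in}}{6\ \text{in}}\right)^{2} = \frac{64}{36} \approx 1.78$$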

X: In retrospect, do you think it was a good idea to go the fabless route?

SS: In hindsight, we were naive. If there had been just one more challenge in front of us, it would have been a disaster. It was hard enough to build a new chip at the time, and all of our competitors, the “real men,” had their own fabs. A lot of customers, especially in the drive industry, had never done business with fabless companies. We said that our volume was going to be 50 million units—at the time, 50 million was a huge number. People said, “If you don’t have a fab, how can you guarantee that? You must be crazy.”

Not to mention all the big competitors, who had deep pockets and dozens if not hundreds of sales and marketing guys, while we didn’t have anybody. We were just engineers talking to customers. Also, the design cycle was long: it took two years to port customers’ software to our chip. All the competitors had to do was match our performance—they didn’t have to be better, they could even be slightly worse—and we would not have had that business.

X: But you wound up having huge success in the disk drive controller business. Was it a big leap from doing that to making networking chips?

SS: It was easier than disk drives. The digital signal processing in Ethernet chips has to run at 125 megahertz. On the first channel [of our disk drive controllers] we were already running at 270 megahertz. So we saw we could just borrow many of the things we’d built and run them at slower frequencies. We borrowed a lot of building blocks from the disk drive chip. So even though our gigabit Ethernet chip was maybe 10 times bigger than the disk drive chip, we finished it in less than a year.

X: How did you differentiate your Ethernet chip from the competition?

SS: Our first gigabit Ethernet chip was introduced in early 2000, and at the time, Broadcom’s chip consumed 7.5 watts to 8 watts per chip. So we built our first chip to consume 1.8 watts, about one quarter of the power. We knew we couldn’t make it faster, because it has to run at 125 megahertz, no more, no less. So we took our expertise in building mixed-signal digital signal processors and used it to make the chip run at much lower power. Nobody believed that this thing worked at one-quarter the power; they said there must be something wrong with it. There was a lot of FUD [industry lingo for “fear, uncertainty, and doubt”]. We had to go through every single step to prove it. It took us a year to convince people. But eventually we convinced them, and for the first few years, Cisco’s gigabit Ethernet products used our silicon.
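A note on those numbers (standard 1000BASE-T facts and my arithmetic, not claims from the interview): gigabit Ethernet over copper transmits on four wire pairs simultaneously, with each pair carrying two information bits per PAM-5 symbol, which is why the symbol clock is fixed at exactly 125 megahertz:

$$4\ \text{pairs} \times 2\ \tfrac{\text{bits}}{\text{symbol}} \times 125\ \text{MHz} = 1000\ \text{Mbps}$$

And the power claim checks out: $1.8\ \text{W} / 7.5\ \text{W} \approx 0.24$, or about one quarter.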

X: Marvell committed very early to the ARM (Advanced RISC Machine) architecture. Why?

SS: I got exposed to RISC architecture when I was at Berkeley, doing my analog studies. We used machines from Sun Microsystems, which later switched to its RISC-based SPARC architecture. Then IBM introduced PowerPC, and there were various MIPS processors in the market, and of course there was the 800-pound gorilla, x86, which I was exposed to when I was working on building an x86 clone for a video digital signal processing company. So I knew about the strengths and weaknesses of all the different architectures. Years later, when I was starting Marvell, we were building peripherals and interface circuits and mixed-signal DSP chips, and we knew that over time these were just going to be parts of bigger chips. That’s what Moore’s Law implies: the things that fill a chip now will, in a few years, take up only a quarter of the chip. So what is the something else that you put on there? It’s a processor. A processor is what you need to build a system on a chip, so that you can personalize the chip and write code and applications and software for it.
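To make the “quarter of a chip” arithmetic explicit (my gloss, assuming the classic two-year doubling period for transistor density): a fixed-function block’s share of a die shrinks by half every doubling period, so after four years it occupies roughly a quarter of its original area:

$$A(t) = A_0 \, 2^{-t/T}, \qquad A(4\ \text{yr}) = A_0 \, 2^{-4/2} = \frac{A_0}{4}$$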

So, when you looked at the processors we needed, outside of x86 there was no consensus on what would be the next common architecture. NEC had its own architecture; so did Toshiba, Mitsubishi, and Hitachi. It was a big mess. The whole industry was fragmented. The only things that were not fragmented were MIPS and ARM. Everything else was proprietary, and there was no way those architectures were going to be successful in the long run. It would be harder and harder to find engineers familiar with those processors. So we said, if the industry can standardize on one architecture, there will be more engineers who know it.

So I started looking into this, and I said, “Well, it’s either MIPS or ARM.” At the time, MIPS was focusing on very high-performance, workstation-class products, and ARM was focusing on dirt-cheap phones with tiny cores. We looked at the giant core on MIPS processors and said, we cannot afford this. We need this tiny core, but we want performance like the big core. Not equal to it, but maybe 70 percent of the performance. So we said, let’s just pick the ARM architecture. Fundamentally, there was no reason why an ARM processor could not run faster. We just needed an architecture license to build a faster one, if we had to. That’s how we started. We could have gone the other way around, and taken a big MIPS core and made it smaller, but ARM made more sense.

Wade Roush is a freelance science and technology journalist and the producer and host of the podcast Soonish. Follow @soonishpodcast