IT History Society Blog

The Race for Microprocessor Leadership in Silicon Valley: Jan 7, 2013 IEEE Life Member Meeting in Mt View, CA

January 16th, 2013 by Alan Weissberger

Abstract

The microprocessor changed what is now known as Silicon Valley from a mostly agricultural and defense electronics region into a center of innovation for many new technologies. How did that happen and what challenges were faced along the way?

This IEEE Life Member panel will discuss and debate the development of microprocessor technologies in the 70’s, 80’s and 90’s. We’ll examine the evolution of CISC (complex instruction set computing) and RISC (reduced instruction set computing) architectures and the battle for dominance in the commercial marketplace. The technological developments that led to the creation of RISC architectures, and the reaction of CISC suppliers to that competitive threat, will be covered. The role of architecture in today’s industrial and consumer markets will be discussed. The panel members will also share their views on the factors that led up to the microprocessor architectural wars and the impact of microprocessor companies on Silicon Valley.

This panel session will be moderated by CHM CEO and President John Hollar, who will provide a brief introduction to the mission and accomplishments of the CHM.

The panel members and their former company affiliations:

*Anant Agrawal – SPARC chip designer at Sun Micro
*John Mashey – Software architect at MIPS
*Dave House – Marketing Director & later GM of Intel’s Microprocessor Division

Note:  Uday Kapoor of Oracle helped organize and rehearse the panel session.  He also made a video of the event for future playback on a website TBD.

----------------------------------------

Event Summary

Through John Hollar’s skilled moderation, the panelists revealed a lot of hitherto undisclosed information about microprocessor activities at Intel, Sun Micro and MIPS. The audience was thrilled to hear that information from primary sources who were there in the mid 1980s, when the race between general purpose (CISC*) and reduced instruction set (RISC) microprocessors was heating up. John’s opening remarks about CHM progress have been previously published as a blog entry on this website.

* General purpose processor architecture is often referred to as Complex Instruction Set Computing, even though such architectures contain very few truly complex instructions; the term CISC is therefore somewhat of a misnomer. It was used to differentiate classical processor architecture from Reduced Instruction Set Computing (RISC), where a smaller set of simpler instructions is available to be executed by the CPU.

Introductions

John Mashey worked on memory management and exception handling – both to make MIPS chips run UNIX well and to meet embedded control requirements and applications. MIPS first shipped its RISC microprocessor chips and boards in Dec 1985. The company struggled to figure out whether they should sell chips, boards or packaged systems. In the end, they did all of them, plus software licensing. MIPS convinced three semiconductor companies to use their chips and got several systems companies to buy them as well.

Sun Micro was a systems company that chose to make their own processor chip to give them a performance edge in the workstation (and later server) markets, according to Anant Agrawal. Processor performance from Motorola and other merchant microprocessor companies wasn’t sufficient for network computing, so Sun decided to design their own microprocessors. Faced with a very short time to market, Anant designed the first SPARC processor chip using a gate array architecture. The initial SPARC design was a joint effort amongst system designers, logic designers, software engineers, and semiconductor process experts. The resulting SPARC performance gave Sun Micro a distinct price-performance advantage in the workstation market.

Dave House surprised few when he said, “Intel never really had the best microprocessor architecture.” Therefore, they needed a very holistic strategy to dominate the microprocessor chip business. The complete LSI solution Intel offered included I/O and peripheral chips such as clock, interrupt and Direct Memory Access (DMA) controllers, as well as a serial communications controller (the 8251 USART). They also provided a popular development system, an In-Circuit Emulator (ICE), and a PL/M compiler (to facilitate programming in a high-level language rather than just assembly language). A very important early decision was to hire microprocessor-savvy Field Application Engineers (FAEs) to offer superior technical support for Intel’s customers.

Once Intel microprocessors were designed into IBM PCs, Microsoft controlled Intel’s environment through their Operating System (DOS, Windows, etc.), optimized compilers and other Microsoft software that ran on Intel chips. The two were partners, with Intel Inside (the PC) and Microsoft providing the basic PC OS and support software (e.g. Microsoft Office). This was in sharp contrast to Sun Micro and MIPS, which controlled their entire software stacks.

As most folks know, Intel has always had the best semiconductor process in the world.  That’s what enabled them to compete effectively, even when they didn’t have the best microprocessor chip architecture.  Dave revealed that in the late 1970s and early 1980s, Intel’s highly profitable EPROM business helped to fund the advances in their semiconductor process development. 

Former Intel CEO Andy Grove (“only the paranoid survive”) kept the company strongly focused on problem solving and the competition. That enabled Intel to maintain their microprocessor leadership position after they got the IBM PC design win. Starting with the introduction of Windows 95 in 1995, the Intel and Microsoft alliance was known as “Wintel” for such PCs and servers.

RISC Processors from Sun Micro and MIPS

Anant stated that Sun Micro became the leader in the workstation market in the mid 1980s.  By the mid 1990s, Sun took the lead in servers as well. By the late 1990s, the SPARC microprocessor was able to manage and process large amounts of data.  Sun was always thinking about how to best serve tomorrow’s customers, not just today’s.  For example, they conceived and developed multi-core and multi-threaded processors by anticipating the need for higher performance CPUs.  Despite Sun’s great systems solution built into their SPARC chips, the company struggled with semiconductor manufacturing (in sharp contrast to Intel).

MIPS was the smallest player in the microprocessor race, always worrying that Intel would come out with a competitive RISC chip. John Mashey ran competitive intelligence at MIPS, a unit that was especially important because, as the smallest microprocessor maker, MIPS had to be especially alert to the competition.

John Mashey opined that in the late 1980s and early 1990s, “MIPS had a very good story.”  That might have been due to the widely held belief that RISC-based systems would maintain a price/performance advantage over the ad-hoc Wintel system.  In April of 1991, the Advanced Computing Environment (ACE) standardized on the MIPS architecture rather than Intel’s. 

The ACE consortium was started by Microsoft in an effort to create an alternative to Intel for Windows NT, and it caused substantial concern at Intel, especially once Compaq joined. When changes at Compaq caused it to drop out, that, combined with the usual difficulties of large consortia, meant ACE failed to gain market traction.

----------------------------------------

Based on today’s mobile computing requirements, moderator John Hollar asked, “Were low power processors around in the 1980s?”

John Mashey was quick to reply that RISC chips had to be “lean,” with relatively low power consumption, even though low power was not the early emphasis at MIPS. The company’s processors were used for applications that ran on UNIX, but also for real-time communications tasks such as data networking. Major networking vendors, including Cisco Systems, adopted the 64-bit MIPS processors (the industry’s first) in the early 1990s. As a result, many of the roughly 700 million MIPS cores shipped in 2011 were in networking applications and consumer devices.

[After the session, Dave and I agreed that Intel’s CMOS technology played a very important role in the commercial success of their microprocessors in the late 1980s and onward. Two important characteristics of CMOS devices are high noise immunity and low static power consumption. Throughout the 1970s and early to mid 1980s, Intersil was the leading CMOS semiconductor maker. In 1987 Intel produced a CMOS version of the 80186 microprocessor, which raised the maximum clock speed to 25 MHz from the 10 MHz of the NMOS original.]

----------------------------------------

John Hollar then asked, “When was the microprocessor race resolved?” Reference was made to the Intel 8085 vs the Zilog Z80, then the 8086/8088 vs the Motorola 68000. But the more focused, implied question was how Intel responded to the competitive threat from RISC microprocessor makers.

Dave said that the Aug 1981 announcement of the IBM PC, with the Intel 8088 microprocessor inside, was a watershed event. It created the “PC generation.”

Later, RISC architectures threatened Intel’s microprocessor dominance, with Wall Street and the ACE consortium thinking that RISC would win the race. 

But what performance advantage did RISC actually have over general purpose microprocessors? 

Intel microprocessor performance was doubling about every 18 months, so any RISC performance advantage was relatively short-lived. Intel was able to increase the performance of their microprocessors by continuing to refine and develop their world-class semiconductor process, especially silicon-gate CMOS. That IC process was the key to Intel maintaining their lead over RISC processor architectures and chips.

“Intel’s controlled silicon technology made them the winner in the PC generation,” Dave asserted. “Intel made 85% to 90% (profit) margins in their microprocessor business, so they could afford to spend $2B on a new wafer fabrication plant,” he added.

Summing up, Dave stated that “Intel’s domination of the microprocessor business lasted a good 25 years (coinciding with the PC era). We are now in the mobile computing era, where ARM chips are the leader.”

Anant said that SPARC was a success in making Sun Micro workstations and servers perform better than the competition, and hence made the company a leader in those markets. In response to a question from this author regarding Sun selling SPARC chips on the merchant market, Anant candidly remarked, “Sun succeeded in the microprocessor board business with SPARC, but not in the (microprocessor) chip business.”

John Mashey felt that MIPS succeeded in making software easier to run on their processor chips. MIPS had a strong heritage of input from compiler writers (dating back to the word-addressed Stanford MIPS design), and improvements in chip architecture helped MIPS processors run a wide variety of operating systems very efficiently.

----------------------------------------

John Hollar’s last question was,  “How did the microprocessor environment contribute to the entrepreneurial spirit and innovation that Silicon Valley is noted for?”

Dave replied that Intel gave stock options to 100% of its employees, giving each of them a financial stake in the company. That served to better align the interests of Intel employees with those of the company.

[Presumably, the stock price would rise if the employees worked hard to make Intel successful.  More profits for Intel would lead to a higher stock price and financial rewards for stock option holders].

Anant said that the environment at Sun Micro was similar, with options granted to all employees. Anant closed by saying, “Sun was the place to be if you were an engineer, marketer or a sales person. Sun attracted a lot of talent from all over the USA, and actually from all over the world. They demonstrated leadership in workstations, then in servers, created Java and then multi-core, multi-threaded technologies for microprocessors. Sun grew from a very small company to more than 44,000 employees. This had a significant positive financial impact on Silicon Valley and enticed many talented IT professionals to move here.”

John Mashey said he thinks that stock options were important as well, but in many cases, people were attracted to MIPS by the “once-in-a-lifetime” chance to design a computer architecture and software that could be really important–perhaps a game changer in the IT industry.

----------------------------------------

With that, John Hollar wrapped up the panel session by thanking the participants and the audience.  A great time was had by all!

----------------------------------------

Acknowledgements:

The author thanks the panelists for their diligent review of this article and their clarifying comments.  We especially thank CHM CEO John Hollar for agreeing to moderate this illuminating and important IEEE Life Member program.  We hope there will be continuing collaboration between the CHM and IEEE.

16 Responses to “The Race for Microprocessor Leadership in Silicon Valley: Jan 7, 2013 IEEE Life Member Meeting in Mt View, CA”

  1. Sherise Reisig Says:

    Very useful summary of the microprocessor performance wars between RISC and the Intel 80x86 in the mid to late 1980s. Thank you very much for an excellent article!

  2. aweissberger Says:

    Dave House was quite modest when responding to the resolution of the RISC vs Intel uP wars in the mid to late 1980s. He said that it was Intel’s superb semiconductor process and excellent customer support via FAEs that enabled Intel to prevail. Another reason was the superb “marketing machine” that Dave put together. That unit was a PR powerhouse, producing app notes, articles, conference papers & panel sessions, seminars & workshops for design engineers, etc.

    Dave was head of Intel’s uP Marketing from 1976 to ….? Later he became GM of Intel’s uP Division for several years. He gets a lot of credit in my book. He successfully made the transition from system design engineer to semiconductor marketing guru. Dave was my classmate at Northeastern University MSEE program (Burlington, MA campus) from 1968-69. Although he doesn’t remember it, we were in many of the same classes and talked frequently during breaks.

  3. John Jenks Says:

    Thanks for a very illuminating event summary that clears up a lot of confusion regarding RISC microprocessors from Sun Micro and MIPS.

    There was a similar threat in the late 1990s from “network processors” with a lot of start ups funded. Intel and IBM developed such network processors, but they didn’t seem to get any market traction. Does anyone have a clue why not?

  4. Tom Sacks Says:

    While I missed this IEEE event, reading your summary report made me feel like I was right there. Thanks for an excellent writeup, including your added parenthetical remarks for clarity and perspective.

    Indeed, I’d like to see more co-operation between CHM and IEEE (which hardly ever has historical talks).

  5. aweissberger Says:

    Thanks for the compliments on this piece.
    Regarding network processors, there are very few companies shipping them now as ARM cores are being used in many telecom & networking infrastructure applications.
    http://www.forbes.com/sites/ericsavitz/2013/01/10/ces-arm-holdings-to-expand-push-to-tvs-servers-networks/

    Market research firm TechNavio says the vendors now dominating the NP market are Broadcom Corp., EZchip Semiconductor Ltd., Intel Corp., LSI Corp., and Renesas Electronics Corp. Other vendors mentioned in their recent report are Applied Micro Circuits Corp., Cavium Inc., Freescale Semiconductor Inc., PMC-Sierra Inc., Marvell Technology Group Ltd., IBM Corp., and Fujitsu Ltd. See: http://www.prnewswire.com/news-releases/global-network-processor-and-co-processor-market-2011-2015-185648242.html

    Indeed, a lot of start-ups pursuing NPs went bust after the telecom bubble burst in 2001-2002: http://en.wikipedia.org/wiki/List_of_defunct_network_processor_companies

    NPs seem to be a fad that has fizzled due to the shrinking of the telecom infrastructure market and the wide popularity of ARM cores.

  6. Basant Khaitan Says:

    My compliments to Alan for the excellent event-summary. Network processing, of course, is nearly universal today. However, these chips are rarely standalone in high volume systems. Generally the dedicated network functions get absorbed in media processors (TI, Broadcom) and innumerable ASIC implementations with built-in ARM or MIPS cores.

  7. John Lattyak Says:

    John Mashey’s comment about “making software easier” is interesting. For MIPS it was both an asset and a drawback at the same time. Compare the x86, which needed talented people to write a compiler optimizer: at first glance that seems like a deficit, but in the end it produced a much better optimizer, and a bunch of programmers who know x86 extremely well. It was serendipitous for x86 to be moderately difficult. In contrast, the i860 never got traction in the market because its compilers were too complex, even though it was RISC.

    RISC vs CISC: depending on your pov, both won, or both lost. Today’s x86 processors are an x86 front-end decoder layered on top of a RISC core. Are they both RISC & CISC, or neither, and does it matter? It is somewhat ironic that x86, the most successful instruction set, was thrown together in 3 weeks because the i432 slipped and Intel quickly needed to ship the 8086. It raises the question: is the instruction set really that important?

    In my opinion, x86 success is a combination of 3 things. (1) A wide range of systems, ranging from low-end laptops to high-end servers; s/w dev can be done on a laptop/desktop and deployed on a server (this is the point most h/w engineers miss). (2) Multiple h/w vendors, both at the chip level, like AMD, Cyrix, VIA, etc., and at the system level. (3) Multiple O/S’s: Windows, Linux, MacOS. In the end, CISC/RISC was irrelevant; long-term slow evolution is the winner.

    Another successful trend is making software independent of the instruction set. SPARC and IBM Power systems are popular running Java, a virtual instruction set. And now there is Scala, which fixes a bunch of problems with the Java language but generates Java byte-codes to run in a JVM. Also Android with the Dalvik VM. All of these allow s/w development on a laptop and deployment on different server configurations, or on a different mobile device.

    In the Network Processor space, the typical design is a custom ASIC with an ARM core to control it, plus a separate ARM core running the host O/S, usually Linux. Unfortunately, vendors are still struggling with developing s/w. Without a virtualization layer, everything gets stuffed into the device driver code, which ends up as an enormous number of lines of extremely complex code. The solution is to run the host O/S as a guest O/S on the ASIC-specific core.

    Often, simulators (or emulators) are used to try to solve ASIC s/w problems, but instead they make things worse. If you were to ask a s/w engineer to make a library to handle a protocol, but did not tell them what h/w it is to be deployed on, they would make a very configurable library that had hooks for h/w acceleration, and optimized typical use code-paths. There would be a generic unoptimized code-path that handles all use cases, which could be verified and used as a reference to test against more optimized code-paths.
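
    A minimal sketch in C of the shape such a library might take (the checksum example and every name in it are hypothetical, purely illustrative):

        #include <stddef.h>
        #include <stdint.h>

        /* Optional hardware-acceleration hook; NULL means "not available". */
        typedef uint32_t (*checksum_hw_fn)(const uint8_t *buf, size_t len);

        /* Generic, unoptimized reference path: handles every case and
           serves as the oracle when validating accelerated paths. */
        static uint32_t checksum_generic(const uint8_t *buf, size_t len)
        {
            uint32_t sum = 0;
            for (size_t i = 0; i < len; i++)
                sum += buf[i];
            return sum;
        }

        /* Public entry point: use the hardware hook when present,
           fall back to the verified generic path otherwise. */
        uint32_t checksum(const uint8_t *buf, size_t len, checksum_hw_fn hw)
        {
            return hw ? hw(buf, len) : checksum_generic(buf, len);
        }

    The generic path doubles as the test oracle: any hardware-accelerated implementation can be checked against checksum_generic on the same inputs.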

    In my experience, the best designed code (and most reusable code) was written for multiple (>=3) different target machines at the same time. And RISC/CISC doesn’t have much influence.

  8. Jerry Clarke Says:

    Many thanks to Alan W for a great event summary and to John Lattyak for his illuminating and comprehensive comment!

  9. Uday Kapoor Says:

    Alan, thanks for being the co-organizer of this memorable evening. I was privileged to have managed a microprocessor team at Intel before joining Cypress Semiconductor to manage a joint Cypress/Sun team to design the first full custom SPARC Integer Unit. So I was one of the early design managers that took the ‘riscy’ decision to switch from CISC to RISC architecture. The conversation in the panel brought back many fond memories. And the innovation continues unabated.

  10. Lance Leventhal Says:

    The article is excellent. I’ve had John Mashey at several conferences to promote the Computer History Museum.

    Our next conference is the Ethernet Technology Summit, April 2-4, 2013 in Santa Clara, CA. We will have a 40 year Ethernet anniversary session & awards ceremony at that conference.

    http://www.ethernetsummit.com/

  11. aweissberger Says:

    It certainly was a fascinating event! I was not even aware that RISC posed a significant threat to Intel’s dominance of the microprocessor business. I thought it was just used within high performance workstations and servers from Sun Micro and MIPS. The comments are very interesting too!

  12. John Mashey Says:

    (Last try got lost, but in answer to John Lattyak):
    Some relevant references:
    [1] A perspective on the 801/Reduced Instruction Set Computer, Marty Hopkins, IBM, 1987.
    http://www.ece.ucdavis.edu/~vojin/CLASSES/EEC272/S2005/Papers/801-Hopkins_87.pdf

    [2] A VLSI RISC, Patterson & Sequin, UCB, 1982
    http://www.cs.nmsu.edu/~pfeiffer/classes/573/sem/s05/presentations/Paper00.pdf

    [3] MIPS: A VLSI Processor Architecture, Hennessy, Jouppi, Baskett, Gill, Tech Report #223, June 1983, Stanford.

    [4] MIPS: A microprocessor architecture, Hennessy, Jouppi, Przybylski, Rowen, Gross, Baskett, Gill, 1982. http://dl.acm.org/citation.cfm?id=800930

    [5] Register Allocation by Priority-based Coloring, Chow & Hennessy, 1984 (i.e., work at Stanford; backends included 68K and PDP-10); or The Priority-based Coloring Approach to Register Allocation, 1990 (MIPSco):
    http://www.info.uni-karlsruhe.de/lehre/2003SS/Seminar-SSA-Codeerzeugung/TOPLAS-Hennessy-Chow-1990.pdf
    These are highly cited articles.

    [6] http://yarchive.net/comp/risc_definition.html
    That’s a 1995 (slight) update to the old USENET post.

    0) My personal history of machines where I did compiler/assembler or at least looked at a lot of low-level code is: IBM S/360, PDP-11, VAX, MC68K, MIPS.

    1) In the 1980s, most C compilers were descended from the Portable C Compiler, which was OK to retarget but had relatively simple code generation and back-end optimization. C was designed to produce relatively predictable code, for which programmers would insert explicit “register” declarations as hints, and for which programmers would not expect a lot of code motion (see the later discussion of “volatile”). Early C compilers were, in some ways, like WATFIV.
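
    For readers who never wrote that style of C, here is a minimal sketch of hint-driven code from the era (the function is hypothetical, purely illustrative):

        /* Old-style C: the programmer, not the compiler, decides which
           variables deserve registers, via explicit hints. */
        int sum(int *a, int n)
        {
            register int i;     /* hint: keep the loop index in a register */
            register int s;     /* hint: keep the running total in a register */

            s = 0;
            for (i = 0; i < n; i++)
                s += a[i];
            return s;           /* a modern global optimizer ignores such hints */
        }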

    2) Multi-language compiler systems tended to have:
    2a) multiple frontends that generated a common internal format.

    2b) (maybe) a global optimizer that did sophisticated register allocation across blocks, rearranged code, etc.
    The first I ever used was IBM FORTRAN H in the late 1960s.

    2c) back-end code generators tuned to specific machines, doing peephole optimization and code selection (sometimes complex, if it took a lot of work just to figure out the relative timing of code sequences; once upon a time, I often consulted the 360/50 and 360/67 timing charts. I also once broke the MC68K C compiler at Convergent by “improving” the peephole optimizer).
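
    As a toy illustration of the kind of pattern a peephole pass matches, here is a minimal sketch in C over a hypothetical three-address IR (not any real compiler’s):

        #include <string.h>

        /* One instruction of a toy IR: "load" reads memory slot src into
           register dst; "store" writes register src into memory slot dst. */
        typedef struct { char op[8]; int dst; int src; } Insn;

        /* Classic peephole rule: a store of register R to slot M followed
           immediately by a load of slot M back into R is redundant, so the
           load is dropped.  Returns the new instruction count. */
        static int peephole(Insn *code, int n)
        {
            int out = 0;
            for (int i = 0; i < n; i++) {
                if (out > 0 &&
                    strcmp(code[i].op, "load") == 0 &&
                    strcmp(code[out - 1].op, "store") == 0 &&
                    code[i].src == code[out - 1].dst &&  /* same memory slot */
                    code[i].dst == code[out - 1].src) {  /* same register */
                    continue;  /* value is already in the register */
                }
                code[out++] = code[i];
            }
            return out;
        }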

    3) RISC designs were usually driven by:
    a) Set of languages considered important.
    b) Benchmarks in those languages.
    c) Compiler technology.

    Considering some of the early RISCs:
    a) HP, IBM, Sun, and MIPS all cared about C and FORTRAN, and HP (and maybe IBM) cared about COBOL. MIPS cared about C, FORTRAN and Pascal, with some nods to others.

    b) Everybody drove designs from analysis of sets of benchmarks, as modified by knowledge. (I.e. Stanford MIPS was word-addressed, but MIPSco MIPS was byte-addressed with the byte, halfword, word instructions needed in the real world, as DEC later discovered after leaving some out of Alpha.)

    c) When this all got going, HP and IBM had some history with global optimizers, and so did MIPS, via Stanford & Fred Chow and some other folks, so we all designed CPUs with ~32 integer registers, and not register windows, stack caches, etc. [We were of course familiar with them; before I left Bell Labs in 1983, I’d been asked to evaluate Dave Ditzel’s CRISP proposal and had recommended it.]
    Although Sun’s compilers improved over time, I think they started with more traditional C compilers, whose statistics naturally led to different choices, as they did for others. (It’s been 25 years, but designers might well sit around comparing notes as to why the same methodologies led to different results. I had a long session like that at Cupertino one time.)

    4) Knowing we were starting with state-of-the-art global optimization let us avoid adding extra hardware, and we never could have had as good a set of compilers with the relatively small MIPS staff if we’d had to dedicate a lot of effort to complex back-end code selection and special-casing. In addition, one finds that the number and symmetry of registers matter.

    If one has only a few registers, or there are major asymmetries (as in the A and D registers of the 68K), there’s not much payoff for doing sophisticated global optimization or even thinking about interprocedural optimization. One is better off working hard on peephole and well-tuned code selection.

    Of course, visible registers are expensive, and at some point adding more doesn’t really help much. Some help can be gotten in more complex O-O-O (out-of-order) implementations, which need lots of invisible registers anyway.
    From [6], note that I thought the x86, despite the messiness, actually had a better chance of cost-effective high-performance implementations than “cleaner” ISAs like the VAX or 68K (especially the 68020 onward).

    5) In 1984/1985, production C compilers that did serious global optimization were rare, and some of the same techniques that worked in FORTRAN or Pascal caused trouble in C, even if there were no optimizer bugs. (We once had to do a binary search on UNIX kernel functions to find one missing load or store.) But the crucial issue was “volatile”, which was only then coming into C, and it took a while to understand what it really meant. C originated on machines with simple pipelines and compilers, and was not used to having code rearranged. In particular, OS code, especially drivers, often was written to read/write device registers in ways that had side effects. I.e.,

        while (something) { a = p->device.x; /* computation */ z = a; }

    worked just fine, until a global optimizer decided that this could be:

        if (something) { z = p->device.x; /* computation */ }

    There were also cases where people expected to load and test some memory location that could be changed by another process.

    It eventually turned out that “volatile” had to be defined, not just to avoid long-term register allocation, but to mimic the loads/stores you got from a simpler compiler, in the same number and order. We were doing this 1Q86.
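
    To make that concrete, here is a minimal sketch in C of the device-register idiom described above (the structure and field names are hypothetical):

        /* Memory-mapped device registers: every read of 'x' has a hardware
           side effect (it consumes one datum), so each load must happen. */
        struct device_regs {
            volatile unsigned status;  /* nonzero while data is pending */
            volatile unsigned x;       /* reading consumes one datum */
        };

        void drain(struct device_regs *p, unsigned *buf)
        {
            int i = 0;
            /* 'volatile' forces the compiler to re-read status on every
               iteration and to emit one real load of x per iteration --
               the same number and order of loads a simple compiler would
               emit.  Without it, a global optimizer may hoist or drop
               those loads, exactly the miscompilation described above. */
            while (p->status != 0)
                buf[i++] = p->x;
        }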

    6) At one point I wrote MIPS assembler for some of the string routines, as we always did that for a new CPU … but then threw it all away, because the optimizer was good enough.

    MIPS compiler technology was a strong asset and the ISA design also, as we *never* could have done MIPS with the relatively tiny team otherwise.

  13. aweissberger Says:

    Mash, Thanks a lot for your comprehensive comment and references related to John Lattyak’s comment above. And even more thanks for your participation in the January 7th IEEE panel session! Judging by all the comments (and private emails) received, it was a sMASHing success!

  14. Jack Simpson Says:

    Excellent article with very informative comments. Hope IEEE will have similar IT history panel sessions in the future. Two topics of interest are storage systems and networking/data communications. Thanks!

  15. Brad Alford Says:

    Upcoming CHM events are listed at: http://www.computerhistory.org/events/upcoming/

    Hope to see more collaboration between CHM and IEEE. Thanks to Alan Weissberger for organizing the Jan 7th IEEE meeting and writing the excellent summary of that event.

  16. Alan Weissberger Says:

    CHM has posted a transcript of this panel session here:
    http://archive.computerhistory.org/resources/access/text/2013/03/102746592-05-01-acc.pdf

