
Ted Hoff: Errors & Corrections in Intel Trinity book by Michael Malone

Editor's NOTE: This article was written by Ted Hoff, PhD EE, and edited by Alan J. Weissberger, Chairman of the IEEE SV History Committee. From Ted Hoff: The errors listed below are in approximately the same order as they appear in Malone's book. To aid the reviewer, chapters are identified in brackets. [CH 14] As of 1969, “CPU on a chip” was discussed in the electronics literature, but was generally thought to be some time away–most CPUs were just too complex for the state of the semiconductor art at that time. Therefore, at the beginning of 1969, the microprocessor did not cross from “theory to possibility”–it was still not seen as feasible due to the limitations of the LSI processes. The state of the art was such that Intel's 1101, a 256-bit MOS static RAM, was still on the drawing board. [Editor's Note: In addition to Intel, at least two systems companies, Fairchild Systems Technology and Four Phase Systems, were designing "MOS LSI microprocessor" chip sets for internal use in late 1969. Fairchild's was for use as a micro-controller in its Sentry semiconductor tester systems. Four Phase Systems designed the AL1, an 8-bit bit-slice CPU chip containing eight registers and an ALU, for use in its data terminals.] Malone claims Busicom came to Intel to seek a CPU on a chip, but that claim is belied by the fact that Busicom engineers rejected every suggestion that might have moved them in that direction. Their only interest was in a calculator chip set, and a calculator set is not a CPU. Malone is also wrong in claiming I was thinking about a CPU on a chip at the time of the Busicom project. He is also wrong in claiming I had used minicomputers to design ICs before joining Intel. I had not used any computer smaller than the IBM 1130, which was typically a room-full installation. My only experience with IC design was participating in a trial course at Stanford, where a partial layout of a few transistors was done with no usage of computers of any kind.
Representing the PDP-10 as a minicomputer is wrong–the PDP-10 was DEC's top-of-the-line mainframe. DEC's PDP-8 was a minicomputer, but its architecture was not appropriate for the Busicom project, and implying I used it is incorrect. The PDP-8 was a 12-bit-word machine, and was not suitable for programs in ROM because of the way it processed subroutines. I was not looking to build a general-purpose processor. I did not “volunteer” to “manage” Busicom–I was asked to act as liaison, to help the Busicom team achieve their technology transfer. I was willing to take it on, although refusing the request probably would have been unwise at that early point in my employment. Malone's implication that I expected Busicom to request a computer-on-a-chip is a fabrication. I never expected that. Stating that Busicom's design had “morphed” into a “monster” from a more straightforward design is incorrect. Prior to the arrival of the Busicom team we had not seen details of their design–and upon seeing the details, it seemed more difficult than we had been led to believe at the April meetings. Referring to me as “project director” is incorrect. Citing the terms of the agreement between Intel and Busicom after the arrival of the Busicom engineers is misleading. The agreement, under which Intel would sell 60,000 chip sets (to be specified by Busicom at a later time) at a price not to exceed $50.00 per set, was signed in April, some two months before the Busicom engineering team arrived. The agreement was not revisited after the Busicom engineers arrived at Intel. Stating that the Intel/Busicom design would fail “catastrophically” and that Intel would be left “high and dry” misrepresents my conclusions. Rather, I was concerned that it would be difficult to meet the cost targets because of package requirements and chip complexity, and that the number and complexity of chips needed would burden Intel's limited design staff such that it could impact our work on memory.
I never considered that Busicom was "blowing" a product opportunity, nor did I know how to fix the problems at that time. These statements are just more of Malone's fabrications. After I took my concerns to my immediate supervisor, Bob Noyce, he suggested I try to see if there might be a way to simplify the set, and authorized me to work with the Busicom engineers. At that point in time there were no discussions of applications other than the Busicom calculators. My work amounted to making a few suggestions as to how some simplifications might be accomplished. The Busicom team listened, but preferred to do their own simplification. They did take time to critique some of my ideas, pointing out where problems might arise. Malone insists that my work was a secret, but if it were, how could the Busicom engineers cite what it lacked? Malone states I was burning up time and working on a small-chip concept. That is not what was happening at all. Busicom had its own approach, and I was suggesting some modifications that I felt would make the job easier. Part of my job assignment at this time was to work with the Busicom engineers, and with Bob Noyce's assent I felt justified in learning more of Busicom's design to see where some reduction in complexity might be found. Stating that I was telling Bob Noyce I wanted to emulate a computer is another fabrication. I did make use of my knowledge of computers, and of how complex problems are solved using programs, in the effort to simplify the chip set. The set already had read-only memory (ROM), and the most obvious way to simplify the set seemed to be to move some functions from hardware to ROM. That is not the same as starting from scratch with a general-purpose CPU. When Shima et al. objected that some function was missing, I would show how code in ROM could implement that function when using a simplified structure. The Busicom chip set already needed firmware, so my suggestions only involved some modest additional coding.
My discussions with Noyce were about chip set simplification and the Busicom engineers' reluctance to consider my suggestions. I was not trying to emulate a computer, but rather to use computer-like techniques to simplify the set. Noyce encouraged me to continue even if the Busicom team was not ready to accept my proposals, as a possible back-up to what the Busicom team was developing. Noyce's questions about operating systems etc. are taken out of context. Bob would come around quite frequently and talk about many issues, including computer technology. I believe he wanted to become more comfortable when talking to Intel memory customers, i.e. computer manufacturers. The discussion of operating system concepts had nothing to do with microprocessors at this point in time. Malone's claim that I reported to Andy Grove is false! As noted above, I reported to Bob Noyce, and Bob was the only Intel signatory on the April agreement. In my 14 years at Intel, I never reported directly to Grove. One more Malone fabrication: there was no “skunk-works operation.” I did not give up working with the Busicom team, nor cease trying to suggest simplifications to their design. I was not needed to get the 1101 256-bit static RAM/semiconductor memory “out the door.” Malone goes to great lengths to criticize Bob Noyce's actions, saying he had signed off on an unproven product that had a “tiny” chance of success. Noyce had just encouraged me to continue trying to get some simplification of the Busicom chip set to which we had already committed–any simplification would have improved the chance of success. Malone says the simplified chips were for a “market that might never exist.” He apparently forgot about those 60,000 chip sets we had agreed to deliver–my work represented only a possible simplification of their set requirements, and was a possible backup should Busicom's engineers be unsuccessful in solving their complexity problem. I was not at this point pursuing a processor design.
However, every attempt to simplify the Busicom set tended to move me in the direction of a more general-purpose, more programmable architecture. Malone claims Bob had gone “renegade” on his own company–just because he authorized work toward making a customer's requirements more compatible with his own company's capabilities? I believe most would consider his decision a prudent way to try to keep the Busicom project something that might benefit Intel. I did not work mostly alone as Malone states–I continued discussions with the Busicom team, and was in communication with the MOS designers to ensure any suggestion I might make was consistent with Intel's MOS design capability. I had other tasks as well, but they generally did not involve putting out “fires.” The closest event to a "fire" was just before the first 1101 wafers with a chance of working were to come out of fab. Andy told me that functional testers weren't ready and asked if I could help. I threw together a very crude tester over the weekend, and used it to test two wafers. We found 13 good devices on one, two on the other, and celebrated with champagne. My work on the Busicom project took more like two months rather than three, i.e. July and August, 1969. By the end of that period, the suggestions I had been making pretty much added up to the architectural structure of what would become the 4004 CPU. Malone continues to insist the project was secret, and then claims that working with a customer to match its requirements to Intel's capabilities was a potential scandal. If so, perhaps all engineering should be outlawed? Malone incorrectly states I hired Stan Mazor because I felt deficient in software. In fact, Stan had been working in computer architecture, and at last I would have someone I could collaborate with. Most of the architecture, etc. was done by the time Stan arrived, but he could help put the whole package together.
Unlike Malone's implication, I had been in frequent communication with Bob Graham, and again the project was not secret. I did not have much to do with the 1102 (Intel's first 1K DRAM), other than hearing about its problems from Les Vadasz. Malone falsely states the 1103 1K DRAM used the 1102 “core,” in spite of earlier statements to the effect that Gordon Moore wanted the 1103 to be independent of the 1102. The “new project” passage gets twisted by previous errors. Andy Grove may have been upset with a new burden, but this task just represented the Busicom obligation coming due, not really a “new” project. [CH 15] Malone argues that Faggin was the only one who could have designed the 4004 chip set, but consider that it was, if anything, less complex than some of the chips of the original Busicom calculator set. One of the reasons Intel was chosen by Busicom was that it was perhaps the only semiconductor company not yet doing calculator chips for Busicom competitors. How did those semiconductor companies get their calculator chips designed? For example, Mostek designed a single-chip calculator for Busicom, as reported in the February 1971 issue of Electronics magazine. That chip had 2100 transistors, which was very close to the 4004 transistor count. Bootstrap circuits were well known in metal-gate MOS, where gate overlap could be controlled. Initially there were questions about silicon gate's applicability to shift registers, because shift registers used such bootstrap techniques. Intel found it could make shift registers using silicon-gate MOS, and offering shift registers played a role in forging the connection to CTC. The set of four chips was officially known as the MCS-4 family. The 4004 was a CPU on a chip, because the other chips in the set were memory and I/O. The interface logic on the 4001 and 4002 chips helped to eliminate the “glue logic” that subsequent microprocessors needed–thus making the 4004 more of a single-chip CPU than subsequent microprocessors.
Later the 4008 and 4009 chips were provided to perform the glue-logic function and allow non-family memory chips to be used with the 4004 as well. Malone's explanation of bytes and digits is incorrect. The proper terms are: a bit (only two states), a nibble (4 bits), a digit (4 bits in Binary Coded Decimal (BCD), limited to the range 0 to 9), a byte (8 bits), and a word (which can be many different bit lengths). The 4004 was a 4-bit processor, not a 4-digit processor. Modern microprocessors are typically 32 or 64 bits, not digits. Also, one must separate data quantum size from data path width. For example, the 8088, used in the original IBM PC, processed most data in 16-bit sizes, but used a data path/bus only 8 bits wide. Stan Mazor was more than a computer programmer. At the time of the dual presentation meeting (pg 154) the CPU of the Intel proposal consisted of two chips, designated “arithmetic” and “timing.” Later Stan suggested combining the two–leading to a true CPU on a chip. Again Malone mislabels the MCS-4 work as secret. The only resources involved until MOS design began were the modest efforts of Stan and myself. Had Intel been required to design and manufacture the original Busicom chip set, Faggin and Shima would have needed considerable extra design staff–or would have taken several more years to complete the set. Consider that the MCS-4 family was one complex logic chip, two memory chips, and one fairly simple I/O expander. Even the reduced Busicom set would have been 8 complex logic chips and two memory chips. On pages 252-253, Malone notes that 1975 had been the most miserable year to date for Intel, and that the "company's factory in Peking, a key part of the manufacturing, burned to the ground." A U.S. company having a factory in Peking (now known as Beijing) in 1975 would have been quite unusual--China was still closed to the rest of the world, and the U.S. didn't have diplomatic relations with China until January 1, 1979.
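The data-size terms discussed above (bit, nibble, BCD digit, byte, word) and the 8088's 16-bit-data/8-bit-bus split can be sketched in a few lines of bit arithmetic. This is purely an illustrative sketch in modern Python; the helper name `to_bcd` is my own, not anything from the MCS-4 or 8088 documentation:

```python
def to_bcd(n: int) -> int:
    """Pack a non-negative decimal number into BCD: one 4-bit digit
    (a nibble restricted to values 0..9) per decimal digit."""
    bcd, shift = 0, 0
    while True:
        bcd |= (n % 10) << shift   # each BCD digit occupies one nibble
        n //= 10
        shift += 4
        if n == 0:
            return bcd

# Two BCD digits fit in one byte: decimal 59 packs to hex 0x59.
assert to_bcd(59) == 0x59

# An 8088-style transfer: a 16-bit quantity moved over an 8-bit bus
# takes two bus cycles, one byte at a time.
word = 0x1234
low, high = word & 0xFF, (word >> 8) & 0xFF
assert (high << 8) | low == word
```

The 4004's 4-bit data path meant it operated on one such nibble at a time, which is why "4-bit processor" is the correct description rather than "4-digit."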
That Intel factory was actually in Penang, Malaysia (Source: Intel 1975 Annual Report). The 1975 Intel report goes on to note that a massive effort allowed them to recover with minimum problems for their customers. It also noted that insurance claims were being filed. [CH 16] There is no indication Intel would have gone into the microprocessor business without the Busicom project. It is more likely that companies making logic sets, e.g. TTL, would have ultimately made the first CPU on a chip. They would have made increasingly complex slice chips, added program sequence controllers, then finally combined it all. [CH 17] Again, Malone misstates the nature of the 1103 1K DRAM. It was not built on the Honeywell core. Malone wrongly asserts that Bob Noyce had taken on a long-shot project that was secret, etc.--all fabrication. [CH 18] Malone is wrong in stating I lobbied against announcing; I only cautioned against overselling. I urged that we identify new uses for computers, for example those functions that had been done by relays, SSI/MSI logic, etc. The argument about customers needing to learn programming was an urge to offer certain types of support, not opposition to announcing the product. It was Stan Mazor who was the primary contact with Computer Terminal Corp. (CTC). It was Stan who made the proposals to them, not me. Hal Feeney reported to Les Vadasz; he was not one of my “subordinates.” Malone took a comment about computers as big, expensive pieces of equipment way out of context. There were some skeptics who had a difficult time grasping the concept that a computer could be inexpensive. Fortunately, they were in a minority. Malone talks of throwing away a $400 chip set. Why would one throw away a whole set if only one chip is bad? If the CPU chip was tossed, it would represent $30.00 (100 quantity) as of Sept. 1972. I never lobbied against marketing the microprocessors, only against claiming them to be minicomputer replacements.
Again Malone is wrong in these assertions. Minicomputers of that day were about the size of a portable TV set, or the traditional “breadbox,” not a couple of refrigerators as Malone states. They cost about $10,000, not a couple of hundred thousand dollars as Malone claims. A 4004 microprocessor chip set, consisting of the four chips, cost $136 in single quantity in Sept. 1972, not $400. In 100 quantity, that set cost $63. We were not offering the set(s) to replace mainframes–it was a new market, extending computing power into areas unthinkable a few years prior to that time. For many years, the industry has referred to that market as "embedded control." It should be noted that minicomputers were not replacements for mainframe computers either. Minicomputers created new markets for computers in many industries, including process control, laboratory instrumentation, test systems, etc. Malone reports Bob Graham's comments in regard to minicomputers and market share out of context. Those comments came much earlier, in 1971, regarding the issue of whether Intel should offer the chips as part of its product line, not at the time that marketing plans were being developed. Inside Intel, the engineering side (e.g. Faggin and myself) saw tremendous potential–not for replacing big mainframe computers, but for what the industry refers to as embedded control. For example, Faggin needed to build LSI testers, and I needed to build an EPROM programmer. Microprocessor control made those jobs an order of magnitude easier than if we had used random logic and/or discrete components. We strongly believed there were other engineers (outside of Intel) who felt the same way we did about using microprocessors for many different types of embedded control applications. Mainframe computers were solid state, but most were of a significantly higher level of performance than the first generation of microprocessors.
Most mainframes in the 1970's and 80's used ultra-high-speed Emitter Coupled Logic (ECL) for the CPU, which provided orders of magnitude higher performance than MOS microprocessors did in those years. Mainframes were never our target market for the MCS-4, 8008, 8080 or other MOS LSI microprocessors. When I travelled on business with Bob Noyce, it was usually about memory. There was one major microprocessor promotional event, in 1972. Stan and I took three one-week trips over a period of five weeks, presenting the microprocessor concept. Attendance was well above original expectations, and our message was very well received. [CH 19] Malone is wrong in stating the 8008 microprocessor was a “turbocharged” version of the 4004--in many applications the 8008 was slower than the 4004. Malone defends his statement with many erroneous numbers. The 4004 clock speed was 740 kHz, not 108 kHz. The correct number was noted earlier in Malone's book. The 8008 clock speed was 500 kHz, not 800 kHz. The 800 kHz speed was that of a later speed-selected version of the 8008, known as the 8008-1, not yet available as of September 1972. Further, the 8008 took two clock steps to do what the 4004 did in one. Thus it was typically slower than the 4004, even at doing 8-bit arithmetic, and it needed a lot more glue logic than the 4004. We had not been unsuccessful with the 4004. The 4004 announcement in Electronic News and at the 1971 Fall Joint Computer Conference generated a greater response than just about any other Intel advertisement. Again, Malone talks of replacing mainframes. He seems clueless about the area that was emerging, today called embedded control. At one meeting of microprocessor pioneers held in the 1990s, Shima reported microcontroller usage for embedded control was at least an order of magnitude greater than the use of microprocessors for PCs. At Intel, designers sometimes called to see if we could help with a problem, and many of those calls were routed to me.
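The clock-speed argument above can be put in rough numbers. The following is only a back-of-the-envelope model, not a real timing analysis: the one-step-versus-two-steps ratio comes from the text, while actual instruction timings on both chips varied by instruction:

```python
def relative_throughput(clock_khz: float, steps_per_op: int) -> float:
    """Crude figure of merit: clock rate divided by clock steps
    needed per comparable operation (illustrative only)."""
    return clock_khz / steps_per_op

# 4004: 740 kHz clock, one step per operation (per the text's framing).
r4004 = relative_throughput(740, 1)
# 8008: 500 kHz clock, two steps per comparable operation.
r8008 = relative_throughput(500, 2)

# On this simple model the 8008 comes out well behind the 4004,
# consistent with the claim that it was often the slower chip.
assert r4004 > r8008
```

By the same model, Malone's erroneous numbers (108 kHz for the 4004, 800 kHz for the 8008) would invert the comparison, which is presumably how the "turbocharged" characterization arose.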
One customer needed to get data from the bottom of an oil well. I asked him if he had heard about our microprocessors; he had not, so I had Intel marketing send him data, and soon we had another customer. Malone is wrong in stating the 8008 was four chips; it was one. And the 8080 had moved some 8008 on-chip features off-chip, so it had somewhat greater memory needs than the 8008. On-board ROM memory for the 8080 is incorrect–the 8080 still required external ROM for programs. While the 8080 was introduced officially by Intel in April 1974, there had been pre-announcement activity and pre-announcement sales–an exception to Intel's earlier policy. I've heard that at least 2,000 8080 devices were presold at a price of $360 each, such that development was fully paid for before the device was announced. NOTE: This editor also heard the same thing in March 1974. He presented the keynote speech and was on a microprocessor panel with Intel engineer Phil Tai at an early March 1974 IEEE Conference in Milwaukee, WI. Mr. Tai described the 8080 and planned support chips BEFORE the parts were formally announced by Intel several months later. “$360 was the single quantity price for the 8080,” he said at the time. ............................................................................... I did not consider my discussion with Bob Noyce as “almost shouting”–I just quietly pointed out that a postponement was actually a decision not to proceed at that time. It is also misleading to put this item here in the narration, as it occurred in the summer of 1971. Malone's statement about “endless manuals” is very misleading. For the MCS-4 there was a data sheet of 12 pages, covering all 4 chips: CPU, ROM, RAM and I/O expander. The data sheet for the 3101A, a single chip in its second generation, ran 8 pages. We offered an MCS-4 user guide, some 122 pages, which taught how to use the family in many applications. We also offered memory design handbooks to help customers with those products.
The August 1973 version ran to some 132 pages. We did produce a 6-page, index-card-sized quick reference that could fit in a shirt pocket. In visiting customers, it was always gratifying to hear them praise Intel's level of support. Malone claims the EPROM was unusual in that it was a non-volatile ROM. By design, all EPROMs/ROMs were non-volatile, so they could retain stored information with no power applied. They were intended to serve as instruction/program memory (AKA "firmware") for micro-controller applications. It is hard to imagine a more useless component than a volatile ROM. Malone totally misrepresents the advantages offered by the EPROM. Before it, a customer using a MOS ROM had to send code to the semiconductor factory, where a mask would be made, wafers processed, then sorted, separated, packaged and tested again. The procedure could take weeks. With the EPROM, a customer could load his firmware himself, rather than order parts from the factory with the code already installed. The customer could debug his code, make corrections, and erase and reuse his EPROM, all in about an hour, not weeks. Malone states that for 1972 Intel had $18 million in revenue and was unprofitable. According to Intel's 1972 annual report, the company had over $23 million in revenue and over $1.9 million in profit. Malone states Intel's 1972 staff could fit in a large conference room. Intel's 1972 annual report noted 1002 employees, so it would have been quite a large conference room. Malone describes me coming back from speeches noting a “sea change.” After the tour of 1972 I gave relatively few speeches, and never saw a “sea change.” The 8080 was not the first single-chip microprocessor–it required more glue-logic chips than the 4004. The 4004 was the first commercially available single-chip microprocessor. Malone claims Intel gave no credit to Faggin for 30 years. Again wrong!
Bob Noyce and I co-authored an article for the initial issue of IEEE Micro magazine (February 1981) in which we gave credit to Faggin and Shima, and included a photo of Shima. A November/December 1981 issue of Solutions (a publication of Intel Corporation) states that the microprocessor chip design proceeded in 1970 under the guidance of Dr. Federico Faggin, and that Dr. Faggin would later found one of the most innovative microprocessor firms, Zilog, Inc. ………………………………… Summary & Conclusions: From my read of the Intel Trinity book, it seems that Malone gets some idea or notion in his head, then does not let reality interfere with his fantasies. He appears to assume my motive was to do a CPU on a chip, regardless of its consequences to Intel, and that for some unknown reason Bob Noyce acquiesced. In reality, I was just trying to simplify a chip set we had already agreed to manufacture and sell. It just happened that, to achieve the most simplification, a simple processor turned out to be the best solution for the Busicom calculator project. Malone evidently can't comprehend embedded control, which in terms of numbers accounts for much more usage of microprocessors than PCs. He cannot seem to grasp that there are uses for computers of any type–minicomputer, microprocessor, microcontroller–other than as mainframe replacements, when even minicomputers did not perform that function. Minicomputers were mostly used for control of real-time systems, like supervisory and process control, in the late 1960s and early 1970s–not as mainframe replacements for heavy-duty number crunching. Note: the Editor worked on a minicomputer-controlled integrated circuit test system at Raytheon Digital System Lab from 1968-69, and from 1970-73 on a minicomputer command & control/telemetry system for the Rinconada (Santa Clara County) water treatment plant.
Later, many process control, machine and device controller functions used one or more microprocessors, which replaced minicomputers and lots of random logic. [Weissberger's paper on Microprocessor Control in the Processing Plant was published in an IEEE Journal in 1974-75 and is available for download on IEEE Xplore.] By being unable to comprehend any usage for microprocessors other than in PCs, Malone misses the major use of microprocessors, especially embedded control in the early to late 1970s. The amazing fact is that microprocessors became commercially available in 1971 (the MCS-4 chip set), but PCs didn't come out until the late 1970s to early 1980s (the IBM PC was introduced in the summer of 1981). So how could early microprocessors be directed at PCs when none existed until 1977, and the industry really didn't gain traction until the IBM PC in 1981? .................................................................................................... Note: The next article in this series will be on the glaring omissions/credit not given in Malone's Intel Trinity book. While the author of this article (Ted Hoff) is most noted for his co-invention of the microprocessor, his work at Intel on semiconductor memories and LSI codec/filters was at least as important, if not more so. That can be verified by the IEEE CNSV Oct 2013 panel session on Intel's transition to success. Here are some links related to this event: http://www.californiaconsultants.org/events.cfm/item/200 (full event video, program slides, five photos from the event, event summary, and the National Geographic 1982 story "The Chip"). References: Author Michael Malone at the Commonwealth Club: The Story Behind Intel; Inventor Ted Hoff's Keynote @ World IP Day, April 26, 2013 in San Jose, CA
