IT History Society Blog

Ted Hoff: Errors & Corrections in Intel Trinity book by Michael Malone

September 12th, 2014 by Alan Weissberger
  • As of 1969, a “CPU on a chip” was being discussed in the electronics literature, but it was generally thought to be some time away; most CPUs were simply too complex for the state of the semiconductor art at that time. Malone is confused about this.
  • At the beginning of 1969, the microprocessor did not cross from “theory to possibility”; it was still not seen as feasible due to limitations in the MOS LSI process. The state-of-the-art semiconductor memory then shipping was a 256-bit RAM, e.g., the Intel 1101 256-bit static RAM.

[Editor's Note: In addition to Intel, at least two systems companies, Fairchild Systems Technology and Four Phase Systems, were designing "MOS LSI microprocessor" chip sets for internal use in late 1969. Fairchild's was for use as a micro-controller in its Sentry semiconductor testers. Four Phase Systems designed the AL1, an 8-bit bit-slice CPU chip containing eight registers and an ALU, for use in its data terminals.]

  • Busicom showed no evidence of seeking a CPU on a chip, contrary to Malone's claim; their only interest was in a calculator chip set. Malone is proved wrong by the Busicom engineers' rejection of any move toward a CPU on a chip. Implying that Busicom was seeking one is false.
  •  Busicom’s chip set was not a CPU; it was a calculator set.  They are not the same thing.
  • Neither Intel nor I was thinking about a CPU on a chip at that time.
  • Malone claims I had experience with minicomputers before joining Intel. He's wrong; I did not.
  • Nor had I “used minicomputers to help design ICs,” either before or after joining Intel. I had done only one partial IC design, as part of a grad EE course at Stanford, and it did not involve any computer usage.
  • Representing the PDP-10 as a minicomputer is wrong; the PDP-10 was DEC's top-of-the-line mainframe. DEC's PDP-8 was a minicomputer.
  •  The PDP-8 architecture was not appropriate for the Busicom project, and implying I used it is incorrect.  The PDP-8 was a 12-bit word machine, and was not suitable for programs in ROM.  I was not looking to build a general purpose processor.
  • I did not “volunteer” to “manage” Busicom; I was asked to act as liaison, to help the Busicom team achieve their technology transfer. I was willing to take it on, although refusing the request probably would have been unwise at that early point in my employment.
  • Malone’s implication I expected Busicom to request a computer-on-a-chip is a fabrication.  I never expected such.
  • Stating that Busicom's design had “morphed” into a “monster” from a more straightforward design is incorrect. Prior to the arrival of the Busicom team we had not seen details of their design, and upon seeing the details, the job seemed more difficult than we had been led to believe at the April meetings.
  • Referring to me as “project director” is incorrect.
  • Citing the agreement terms (60,000 chip sets at a price not to exceed $50.00) at this point in the story, i.e., after the arrival of the Busicom engineering team, is misleading. Those terms were part of the April 1969 agreement; the Busicom engineers did not arrive until late June of 1969.
  • Stating that the design would fail “catastrophically” and  Intel would be left “high and dry” misrepresents my conclusions.  Rather I was concerned that it would be difficult to meet the cost targets because of package requirements and chip complexity, and the number and complexity of chips needed would burden Intel’s limited design staff such that it could impact our work on memory.
  • I never considered that Busicom was blowing a product opportunity, and at this time I did not know how to fix the problems, nor how to “save” Intel. Stating so is a fabrication.
  • At this point there were no discussions of other applications, only a few suggestions as to how the Busicom set might be simplified.  The Busicom team preferred to do that themselves, but did take time to critique some of my ideas.
  • Malone keeps asserting that the project was a secret, but if it were, how would the Busicom engineers know what it was or what it lacked?
  • Malone states I was burning up time working on a small-chip concept. That is not what was happening at all. Busicom had its approach, and I was suggesting some modifications that I felt could make the job easier. Part of my job assignment at this time was to work with the Busicom engineers, and, with Bob Noyce's assent, I felt justified in studying Busicom's design to see where some reduction in complexity might be found.
  • Stating that I was telling Bob Noyce I wanted to emulate a computer misrepresents the situation at this time. I made use of my knowledge of computers and of how complex problems are solved using programs, and I looked for ways to move parts of the Busicom hardware into its ROM. That is different from starting from scratch with a general-purpose CPU. When Shima et al. objected that some function was missing, I would show how code in ROM could implement that function using a simplified structure. The Busicom set already needed firmware, so these suggestions only involved some modest additional coding.
  • My discussions with Noyce were about chip set simplification and the Busicom engineers' reluctance to consider my suggestions. I was not trying to emulate a computer, but rather to use computer-like techniques to simplify the set. Noyce encouraged me to continue even if the Busicom team was not ready to accept my proposals, as a possible back-up to what the Busicom team was developing.
  •  Noyce’s questions are misrepresented here and/or taken out of context.  Bob would come around quite frequently and talk about many issues, including computer technology.  I believe he wanted to become more comfortable when talking to Intel memory customers, i.e. computer manufacturers.  The discussion of operating system concepts had nothing to do with microprocessors at this point in time.
  •  Malone’s claim I reported to Andy Grove is false!  At that time I reported to Bob Noyce, and Bob was the only Intel signatory on the April agreement.  I never reported to Grove. Malone has fabricated a story that has no truth to it whatsoever.
  •  There was no “skunk-works operation.” I just did not give up working with the Busicom team nor cease trying to suggest simplifications to their design.
  • I was not needed to get the 1101 256 bit static RAM/semiconductor memory “out the door.”
  • Why does Malone state that Noyce had signed off on an unproven product that had a “tiny” chance of success? Noyce had just encouraged me to continue trying to get some simplification of the Busicom chip set to which we had already committed; any simplification would have been beneficial to Intel.
  • Malone says the simplified chips were for a “market that might never exist.” The market was those 60,000 chip sets we had agreed to deliver, and my work was toward a modification of the requirements, or a backup in case Busicom's engineers could not be persuaded to accept the simpler approach or could not solve their complexity problem.
  • I was not at this point pursuing a processor design. However, every attempt to simplify the Busicom set tended to move me in the direction of a more general-purpose, more programmable architecture.
  • In no way had Bob gone “renegade” on his own company, and at this point it was not a microprocessor decision. He was making a prudent decision to try to keep the Busicom project something that might benefit Intel.
  • I did not work mostly alone as Malone states; I continued discussions with the Busicom team and was in communication with the MOS designers. I had other tasks as well, but they generally did not involve putting out “fires.”
  • Rather than three months, it was two, July and August of 1969, during which I derived the architectural structure for what would become the 4004 CPU.
  • The project was not secret. And why should working with a customer to match its requirements to Intel's capabilities be considered a scandal? If it is, perhaps all engineering should be disallowed.
  • Malone incorrectly states that I hired Stan Mazor because I felt deficient in software. In fact, Stan had been working in computer architecture, and at last I would have someone I could collaborate with. Most of the architecture was done by the time Stan arrived, but he could help put the whole package together.
  • Unlike Malone’s implication, I had been in frequent communication with Bob Graham, and the project was not secret.
  • I did not have much to do with the 1102 (Intel’s first 1K DRAM), other than hearing about its problems from Les Vadasz.
  • Malone states the 1103 1K DRAM used the 1102 “core” which is not true.
  •  The “new project” passage gets twisted by previous errors.  Andy Grove may have been upset with a new burden, but this task just represented the Busicom obligation coming due, not really a “new” project.
  • If Faggin was the only one who could design chips of the 4000 family's complexity, then how did all those other semiconductor companies get their calculator chips designed? (Remember, Intel was the only one not doing calculator chips for some other company.)
  • Bootstrap circuits were well known in metal gate CMOS, where gate overlap could be controlled, but there were questions about the technique's applicability in silicon gate. The technique applied to shift registers, and there was some concern that silicon gate might not be suitable for shift registers. Intel found it could make shift registers using its silicon gate process technology, and its shift register offerings played a role in forging the connection to CTC.
  • The set of four chips was officially known as the MCS-4 family. The 4004 was a CPU on a chip because the other chips in the set were memory and I/O. The availability of the 4001 and 4002 helped eliminate the “glue logic” that subsequent microprocessors needed, thus making the 4004 more of a single-chip CPU than subsequent microprocessors. Later, the 4008 and 4009 chips were added to perform the glue logic function and allow non-family memory chips to be used as well.
  • Malone's explanation of bytes and digits is incorrect. The proper terms are:
    bit (has only two states), nibble (4 bits), digit (any decimal value from 0 to 9), byte (8 bits), and word (can be many different bit sizes)
  • The 4004 was a 4-bit processor, not a 4-digit processor. Modern microprocessors are typically 32 or 64 bits, not digits. One must also separate data quantum size from data path width: for example, the 8088 processed most data in 16-bit sizes, but used a data path/bus only 8 bits wide (see the sketch after this list).
  • Stan Mazor was more than a computer programmer.  At the time of the dual presentation meeting (pg 154) the CPU consisted of two chips, designated “arithmetic” and “timing.”  Stan proposed combining the two–leading to a true CPU on a chip.
  • Again Malone mislabels the MCS-4 work as secret. The only resources involved until MOS design began were the modest efforts of Stan and me. Had Intel been required to design and manufacture the original Busicom chip set, Faggin and Shima would have needed considerable extra design staff, or would have taken several more years to complete the set. Consider that the MCS-4 family was one complex logic chip, two memory chips, and one fairly simple I/O expander; even the reduced Busicom set would have been 8 complex logic chips and two memory chips.
  • There is no indication Intel would have gone into the microprocessor business without the Busicom project.  It is more likely that companies making logic sets would have ultimately made the first CPU on a chip.
  •  The 1103 1K DRAM was not built on the Honeywell core.
  •  Again the assertion that Bob Noyce had taken on a long-shot project that was secret, etc. is a fabrication.
  • I did not lobby against announcing, only against overselling. I urged promoting new uses for computers, in applications that had previously been handled by relays, SSI/MSI logic, etc.
  • The argument about customers needing to learn programming was a plea for a certain type of support, not an argument against announcing the product.
  •  It was Stan Mazor who was the major contact with Computer Terminal Corp., and Stan who made the proposals, not myself.
  •  Hal Feeney reported to Les Vadasz, he was not one of my “subordinates.”
  • The comment about computers as big expensive pieces of equipment is way out of context. There were some skeptics who had a difficult time grasping the concept that a computer could be inexpensive. Fortunately, they were in a minority.
  • It would not have been a $400 chip set that was tossed away; most likely it would have been only a CPU chip, which cost $30.00 in 100 quantity as of Sept. 1972.
  • I never lobbied against marketing the microprocessor, only against claiming microprocessors were minicomputer replacements.
  • Minicomputers were about the size of a portable TV set, or the traditional “breadbox,” not a couple of refrigerators as Malone states. They cost about $10,000, not a couple of hundred thousand dollars as Malone claims.
  • A 4004 microprocessor chip set, consisting of the four chips, cost $136 in single quantity in Sept. 1972, not $400. In 100 quantity, that set cost $63.
  • We were not offering the set(s) to replace mainframes. It was a new market, extending computing power into areas unthinkable a few years prior: microprocessor control replacing random (TTL) logic. Today we call that market embedded control.
  •  Bob Graham’s comments came much earlier in 1971, regarding the issue of whether Intel should offer the chips as part of its product line, not at the time that marketing plans were being developed.
  • Inside Intel, on the engineering side, Faggin and I both saw tremendous potential, not for replacing big mainframe computers, but for what we now call embedded control. Faggin needed to build testers, and I needed to build an EPROM programmer, and the microprocessor made those jobs an order of magnitude easier.
  •  Computers were solid state, but many were of a much higher level of performance than the first generation of microprocessors–but that was not our target market.
  • When I toured with Bob Noyce, it was usually about memory. There was one major microprocessor promotional event, in 1972: Stan and I took three one-week trips over a period of five weeks, presenting the microprocessor concept. Attendance was well above original expectations.
  •  The 8008 microprocessor was not a “turbocharged” version of the 4004.  In many applications the 8008 was slower than the 4004 chip set.
  • The 4004 clock speed was 740 kHz, not 108, and the 8008 clock speed was 500 kHz, not 800. The 800 kHz speed was that of a later, speed-selected version of the 8008, known as the 8008-1. Further, the 8008 took two clock steps to do what the 4004 did in one; thus it was sometimes slower, sometimes a trifle faster than the 4004 (see the back-of-the-envelope check after this list). It did need a lot more glue logic than the 4004.
  • We had not been unsuccessful with the 4004. The announcement ad in Electronic News, timed to the 1971 Fall Joint Computer Conference, had generated more response than just about any other Intel advertisement.
  • We were not replacing mainframes; our market was emerging. I got many calls about the microprocessor, or in some cases asking for help with a design problem. Once a customer needed to get data from the bottom of an oil well and asked if I could help. I asked him if he had heard about our microprocessors. He had not, so I had our marketing people send him data, and soon we had another customer.
  •  The 8008 was one chip, not four.  And the 8080 had moved some on-chip features off-chip, so had somewhat greater memory needs than the 8008.
  • On-board ROM memory for the 8080 is incorrect–the 8080 still required external ROM for programs.
  • While the 8080 was introduced officially by Intel in April 1974, there had been pre-announcement activity and pre-announcement sales–an exception to Intel’s earlier policy. I’ve heard that at least 2,000 8080 devices were presold at a price of $360 each, such that development was fully paid for before the device was announced.

NOTE: This editor also heard the same thing in 1974. He presented the keynote speech and was on a microprocessor panel with Intel engineer Phil Tai at an early March 1974 IEEE conference in Milwaukee, WI. Tai described the 8080 and planned support chips BEFORE the parts were formally announced by Intel several months later. “$360 was the single quantity price for the 8080,” he said at the time.

  •  I did not consider my discussion with Bob as “almost shouting”–it was just pointing out quietly that a postponement was actually a decision not to proceed at that time.  It is also misleading to put this item here, as it occurred in the summer of 1971.
  • “Endless manuals” is misleading. For the 4004 there was a data sheet of 12 pages that covered all 4 chips: CPU, ROM, RAM, and I/O expander. The data sheet for the 3101A, a single chip then in its second generation, was 8 pages. We did offer a user guide, some 122 pages, which taught how to use the MCS-4 family in many applications. We also offered memory design handbooks to help users with those products; the August 1973 version ran to about 132 pages. And we produced a quick reference card of 6 index-card-sized pages that could fit in a shirt pocket.
  • Malone mentions a volatile ROM (which would lose its storage contents on a power failure). All ROMs were non-volatile by definition; that was the reason they were used to store instruction memory for microprocessor applications.
  •  The advantage of the EPROM was that the customer could program the EPROMs by himself, and did not have to order them from the factory with the code already installed (a separate list of omissions will be published here soon).
  • Malone states that for 1972 Intel had $18 million in revenue and was unprofitable. In fact, in 1972 Intel did over $23 million in revenue and posted a profit of over $1.9 million (Intel 1972 annual report).
  • Malone states Intel's 1972 staff could fit in a large conference room. Intel's 1972 annual report counted 1,002 employees, so it would have had to be quite a large conference room.
  • Malone describes me coming back from speeches noting a “sea change.”  After the tour of 1972 I gave relatively few speeches, and never saw a “sea change.”
  • The 8080 was not the first single-chip microprocessor; in fact, it required more glue logic chips than a 4004. The 4004 was the first commercially available single-chip microprocessor.
  • Malone claims Intel gave no credit to Faggin for 30 years. Wrong! Bob Noyce and I co-authored an article for the initial issue of IEEE Micro magazine (February 1981) in which we gave credit to Faggin and Shima, and included a photo of Shima. A November/December 1981 issue of Solutions (a publication of Intel Corporation) states that the microprocessor chip design proceeded in 1970 under the guidance of Dr. Federico Faggin, and that Dr. Faggin would later found one of the most innovative microprocessor firms, Zilog, Inc.
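To make the bit/nibble/byte terminology and the data-size-versus-bus-width distinction above concrete, here is a minimal Python sketch (ours, purely illustrative, not Intel code). The only assumptions beyond the bullets' own figures are the example word value and the 8088's little-endian, low-byte-first bus order:

    # Minimal sketch of data size vs. data path (bus) width -- illustrative only.

    def bus_transfers(value, bus_bits=8, word_bits=16):
        """Chunks an 8-bit-wide bus carries for one 16-bit word, low byte first
        (the 8088 family is little-endian)."""
        mask = (1 << bus_bits) - 1
        return [(value >> shift) & mask for shift in range(0, word_bits, bus_bits)]

    word = 0xBEEF                                 # one 16-bit word
    print(hex(word & 0xF))                        # 0xf  -> a nibble is 4 bits
    print(hex(word & 0xFF))                       # 0xef -> a byte is 8 bits
    print([hex(c) for c in bus_transfers(word)])  # ['0xef', '0xbe']: an 8088-style
                                                  # part needs two bus cycles per word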
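Likewise, a quick back-of-the-envelope check of the 4004/8008 clock-speed bullet, using only the figures cited there (a sketch, not a real timing analysis; actual instruction times depended on the instruction mix):

    # Effective step rates from the figures cited above. The 8008's clock was only
    # modestly slower, but it took two clock steps where the 4004 took one, so its
    # equivalent step rate was far lower -- hence "not a turbocharged 4004." Its
    # 8-bit (vs. 4-bit) data width is what let it still be "a trifle faster" on
    # some byte-oriented work.
    f_4004 = 740_000                      # 4004 clock, Hz
    f_8008 = 500_000                      # 8008 clock, Hz
    steps_per_4004_step = 2               # 8008 clock steps per comparable 4004 step
    print(f_4004)                         # 740,000 steps/s
    print(f_8008 // steps_per_4004_step)  # 250,000 equivalent steps/s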

……………………………………………………………………………………………………………

Summary & Conclusions:

From my read of the Intel Trinity book, it seems that Malone gets some idea in his head and does not let reality interfere with his fantasies. He assumes my motive was to do a CPU on a chip and to hell with any consequences. He evidently can't comprehend embedded control, which is what microprocessors were mainly used for in the 1970s (i.e., as a replacement for random-logic control of a process, machine, device, etc.).

About two decades ago the Microprocessor Forum held an anniversary celebration and brought many microprocessor pioneers together; it was probably 1991 or 1996 (the 20th or 25th anniversary). Shima reported the use of microcontrollers in Japan as 600 million per year; my guess is that PC sales in Japan were in the 10 to 20 million range. I added the material about minicomputers because they weren't replacements for mainframe computers, but rather tended to generate a new market for computers: real-time systems and process control.

[Note: the Editor worked on a minicomputer-controlled integrated circuit test system at Raytheon from 1968-69, and on minicomputer command & control of a water treatment plant from 1970-73. His paper on Microprocessor Control in the Processing Plant was published in an IEEE journal in 1974-75 and is available for download on IEEE Xplore.]

By being unable to comprehend any usage for microprocessors other than PCs, Malone misses their major use in the early to late 1970s: embedded control. The amazing fact is that microprocessors became commercially available in 1971 (the MCS-4 chip set), but PCs didn't come out till the late 1970s to early 1980s (the IBM PC was introduced in the summer of 1981). So how could early microprocessors have been directed at PCs when none existed till 1977, and the industry really didn't gain traction till the IBM PC in 1981?

Note: The next article in this series will cover the glaring omissions/credit not given in Malone's Intel Trinity book. While the author of this article (Ted Hoff) is most noted for his co-invention of the microprocessor, his work at Intel on semiconductor memories and LSI codec/filters was at least as important, if not more so. That can be verified by the IEEE CNSV Oct 2013 panel session on Intel's transition to success. Here are some links related to this event: http://www.californiaconsultants.org/events.cfm/item/200

Full event video
Program Slides
Five photos from the event
Event Summary
National Geographic 1982 Story “The Chip”

 

References:

Author Michael Malone at the Commonwealth Club: The Story Behind Intel

Inventor Ted Hoff’s Keynote @ World IP Day- April 26, 2013 in San Jose, CA


Author Michael Malone at the Commonwealth Club: The Story Behind Intel

September 9th, 2014 by Alan Weissberger

On August 6, 2014, Michael Malone, author of The Intel Trinity, spoke at the Commonwealth Club of Silicon Valley. The program was held in the upper galleries of the Tech Museum in San Jose, CA.

Similar to his earlier speech at the Computer History Museum, Mr. Malone emphasized the evolution, leaders, and current direction of Silicon Valley technology.   The history of Intel and its three great leaders- Bob Noyce, Gordon Moore, and Andy Grove – was discussed only in metaphors and general terms.  The few examples he provided seemed to be historically inaccurate, based on this author’s recollection.

…………………………………………………………….

The Commonwealth Club event abstract  states: “From his unprecedented access through the corporate archives, Malone has chronicled the company’s history and will offer his thoughts on some of the formidable challenges Intel faces in the future.”

…………………………………………………………….

In my opinion, the most important thing Malone said during his talk (including the Q&A session) was that Intel was the “keeper of Moore's law,” which has been responsible for almost all the advances in electronics for several decades. That's due to Intel being able to continually advance the state of the art in semiconductor processing and manufacturing, which enables it to pack more transistors on a given die size, increase speed, and reduce power consumption.

Another important point Malone made is that the willingness to take risks and “good failure” are important aspects of Silicon Valley's innovation process. The right kind of failure can be a career booster: for example, the leadership, vision, and high confidence of a CEO/CTO of a failed start-up are valued more than a lucky success. Malone said that ~95% of Silicon Valley companies fail, and that few companies maintain their lead for more than a few years. He's certainly right about that!

Next came what appeared to be a contradiction. “Our acceptance of failure and even good failure is overrated,” he said. That's because a failure cannot be equated with success. “When it occurs, failure is what it is. But if you learn from your failure deeply enough and apply those lessons to your next job/start-up, it is a good failure” (and, therefore, a very good thing).

How has Silicon Valley gained worldwide respect for innovation and tech leadership? “People here in Silicon Valley have learned to learn and change for the better as a result of their good failures.” So how can a “good failure” be “overrated” if it's a key ingredient of the success story of Silicon Valley?

“Intel has made more mistakes/failed more than any company I’ve ever studied,”  Malone opined.  He then qualified that statement saying “Intel failed in a positive way.  Intel has taken more risks over the last half century than probably any company.”  To continue to progress Moore’s law, “Intel is required to take four or more existentialist risks per decade,” Malone added.  We can’t disagree with that, as continuing to invest in wafer fabs and new semiconductor processes is risky and expensive.

“Intel is that rarest of companies: one that has learned how to learn, and to turn a failure into a good failure and a success.”

[That was certainly true till 2007, when Apple introduced the iPhone and the mobile computing boom started. Intel has not succeeded in mobile computing, as it invested in and was the cheerleader for WiMAX, a failed "4G" wireless technology. The company didn't invest in LTE, which almost all wireless telcos were committed to for "4G." Also, Intel was not able to reduce the power consumption of its Atom processor, so it was unable to compete with ARM Ltd's CPU cores (used in over 90% of all mobile devices). Despite several acquisitions (especially Infineon's telecom chip group), there are still no LTE chips or SoCs from Intel. Nor has the company captured significant market share of microprocessors used as "the brains" of mobile devices.]

Malone then goes on to tell the story of Intel’s first microprocessor (the 4004), as he does in his book.  [According to Intel insiders I know, that story is highly inaccurate. We will explain why in a follow up article.]

Malone makes it seem like the invention of the Intel 4004 was a mistake: Intel was an upstart semiconductor memory company and took on the Busicom calculator/custom chip-set project because it needed the money to survive. According to Malone, Intel turned that mistake around and created the microprocessor chip business, even though no one at Intel really knew what that business was about or would evolve into. Malone claims that after a few years (date not specified) the entire Intel management team was behind the decision to ditch memories and become a microprocessor company, with only two EXCEPTIONS (who presumably were not aware of that decision): Intel's CEO (Bob Noyce) and Chairman of the Board (Arthur Rock). Really? A totally different account of Intel's transition from a memory to a microprocessor company is detailed here (Oct 2013 IEEE program video segments and slides available).

It’s beyond the scope of this article to analyze and debate Malone’s account of Intel entering and committing big bucks to the microprocessor business.  What’s surprising is Malone didn’t even mention the 8008 or 8080 microprocessors during his talk.  Or the competition Intel faced in the mid 1970s from National Semiconductor, Motorola, and Zilog.

Next was the tale of “Operation Crush”: Intel was threatened by Motorola's new microprocessor, the 68000, around 1979-1980. So the company “locked up its management team for four days to come up with a response,” which was reportedly a statement that “we will offer a systems solution,” e.g., a development system, in-circuit emulator, peripheral chips, etc. Really? Intel had been providing those tools and support LSI chips since the 8080 microprocessor came out in 1975.

The true story of “Operation Crush” is chronicled in an article on the Intel website. Its goal was to get 2,000 “design wins*” for the 8086/88 microprocessors within a year after the campaign's launch in 1980. It did better than that, with 2,500 design wins, including IBM's selection of the 8088 for its first PC.

Dave House (a classmate of this author in Northeastern University's MSEE program, 1968-69) was a leader in that process: he proposed the 8088, with compatible 8-bit bus peripheral chips, after IBM had rejected the 8086. House is also quoted on why Operation Crush was a success in the aforementioned article on Intel's website. Yet Mr. House was not mentioned in Malone's speech and gets no credit whatsoever in his book.

*  A “design win” is a new customer selecting and ordering a given component/module for its systems design.

………………………………………………………

Another very interesting point Malone made was that Silicon Valley lacks a voice/ role model/ tech business leader it once relied on. He began by chronicling the leaders/icons/spokesmen for the Valley over time.

The first “Mayor of Silicon Valley,” Malone said, was Stanford’s Fred Terman, who fostered University-Industry cooperation via the Stanford Research Park and paved the way for the valley’s tech future. The second was Hewlett-Packard founder David Packard; and the third was Intel co-founder Bob Noyce, whose death at age 62 in 1990 created the regional leadership vacuum we still have.

“With Noyce’s death, who was going to take his place?” Malone wondered. “The next guys in line were Steve Jobs and Larry Ellison. You weren’t going to put those guys in charge of a community.”

“This valley needs some sort of strong leadership and a well recognized spokesman,” he said. “Until we get that, this valley’s going to speak in a lot of different voices. We really need to speak with a single voice here,” he added.

“Perhaps that voice (the Mayor of Silicon Valley) might be (Stanford President) John Hennessy,” Malone said. But that's not likely, he added, because Malone believes Hennessy wants to retire soon and move to a beach home or equivalent retirement paradise.

[A 1.5 hour interview this author did with Professor Hennessy can be viewed here along with comments on the event from the Professor and attendees.  The individual captioned video segments are here]

Related excerpt from WSJ OP ED on August 22, 2014:

Why Silicon Valley Will Continue to Rule the Tech Economy  (on-line subscription required)

Human talent and research and design labs are arriving to dominate the new era of devices.
This shift is already under way. The epicenter of Silicon Valley has always migrated. With the return to hardware, it is now preparing to leap back to where it began 75 years ago—to Mountain View……

Finally, Silicon Valley needs a de facto “mayor,” the person who represents its broad interests, and not those of a particular company, industry or advocacy groups. The Valley began with such individuals—Stanford’s Fred Terman, Dave Packard and then Intel founder Robert Noyce. But that ended with Noyce’s premature death in 1990. Now, poised to reinvent itself one more time and lead the global economy again, Silicon Valley needs another leader to address the great changes to come.

Closing Question:  Why did Malone continue as a journalist despite being so close to the leaders of Silicon Valley?

Malone said he grew up in Mt. View from 1963 and then moved to Sunnyvale later in the decade.  In the late 1960s,  he knew Steve Jobs from elementary school and his buddies were on the swim team with Steve Wozniak.  But it gets a whole lot more cozy than that!

“On a given afternoon in the 1960s, Ted Hoff, Bob Noyce, and Wozniak were all crossing each other on a corner very near my home (in Sunnyvale, CA).” He implies he knew all of them very well, along with David Packard (who wrote his grad school recommendation letter) and other Silicon Valley celebrities.

[NOTE: Go to 1:07 of the event audio to hear it yourself!]

“Longitudinally, I've seen all of Silicon Valley,” he said. “It was all right there in my backyard.”

Closing Comment:

There’s at least one problem with the assertion that Hoff, Noyce, and Wozniak were buzzing around Malone’s corner street in the late 1960s:  Ted Hoff, PhD, did not know Malone in the 1960s and he didn’t live in Sunnyvale during that entire decade!

We will be back with Mr. Hoff’s rebuttal to Malone’s Intel Trinity book in a future blog post.    Here is the first one:

Ted Hoff: Errors & Corrections in Intel Trinity book by Michael Malone

…………………………………………………………….

 

 


The Evolution of the Desk

September 8th, 2014 by Katie Miller

A group of students at the Harvard Innovation Lab have created a time-lapse visualization of the impact of computers, IT, and technology on our lives. The video provides a historical review of the office desk, from the 1980s all the way to the present day. The opening scene introduces a desk cluttered with what are now seemingly archaic items: a fax machine, a rolodex, a globe, a radio/alarm clock, a corded phone, an encyclopedia, Yellowpages, glue, tape, scissors, a Polaroid camera, and even an Oxford American Dictionary.

evolution1

The computer is a Macintosh 128K, the original Macintosh personal computer, released simply as the Apple Macintosh. It originally sold for $2,495 ($5,595 in inflation-adjusted terms) and was the first personal computer with a graphical user interface, something that was previously available only on hardware that cost more than $100,000. The Macintosh had 128 KB of memory and an 8 MHz CPU made by Motorola, along with a 400 KB, single-sided 3.5-inch floppy disk drive.

As the 1980s progress, we see the Macintosh making way for a more modern laptop, an IBM Thinkpad. We also see the calculator getting sucked into the computer screen by a Microsoft Excel logo, signaling the emergence of the first spreadsheet programs available with a graphical interface in 1985. Shortly after, the glue, tape, and scissors are replaced by Powerpoint, which Microsoft acquired in 1987 and which quickly became the leading slideshow presentation program in the world.

evolution2

By the mid 1990s, software and Internet applications begin to disrupt many of the items on the desk. We see an Amazon icon replacing the catalog on the bottom left corner of the desk, a Dictionary.com logo usurping the Oxford American Dictionary, and the classifieds making way for the emergence of Craigslist.

We also see a radical shift in the world of publishing, as the notepad gets replaced by Blogger and the fax machine disappears in favor of Adobe Acrobat and the PDF standard.

evolution3

2004 is when the pace of innovation really begins to accelerate, with Google leading the charge. Google Maps replaces the globe, Gmail wipes away the envelopes, and the calendar on the wall makes way for Google Calendar. Facebook also makes a huge dent, replacing our contact and address books, while Skype and Pandora disrupt the phone and radio, respectively.

In 2006, we see a refresh of the laptop to the MacBook Pro, and from there, the pace of innovation really begins to take off.  YouTube, Yelp, LinkedIn, and Wikipedia all make their entrance in place of the photograph, the Yellowpages, the rolodex, and the encyclopedia. But it’s Google News that probably makes the most radical of disruptions, all but ending the relevance of traditional print newspapers.

evolution4

By 2008, we have a nearly empty desk, and this is where things really begin to take off.  Disruptive applications like Box and Dropbox introduce the concept of cloud file storage, while services like Square and PayPal optimize online payments and e-commerce. The last few years also see an emergence of the shared-service economy, with startups like Lyft, Uber, and Airbnb making a dent in traditional sectors like the hotel and taxi industries.

All in all, the visualization depicts the radical impact of technology over the last 35 years. Advancements in computer infrastructure, software, and IT have managed to declutter a desk full of dozens of physical items into a simple, empty surface holding just a laptop and a phone.

Watch the full video here – Evolution of the Desk.


Bloodless Beige Boxes | The Story of an Artist and a Thinking Machine

September 2nd, 2014 by Michael Baylor

When was the last time you walked into a data center and were stopped dead in your tracks by the beauty of a computer?  Right, probably never. That is why you will most likely never see a computer in any art history books…but there is one that may well change that.

Even though there is amazing beauty in the intricate mesh of microelectronic circuits inside, most people never really get to see that. Instead, we have come to view a computer as nothing more than a box that we plug things into. That is most likely because there is more thought given to the exterior design of a toaster than most computers.

As a result, computers have become so utilitarian that we do whatever we can to avoid looking at them. We hide them under desks, in computer rooms & data centers and now apparently, we are so disgusted by the sight of them that we hide them in the Clouds (pun intended). By the way, have you ever seen a containerized Cloud data center? Now that is an abomination of computer design if there ever was one.

A Box is Just a Box

The fact is, most computer enclosures are relatively utilitarian. It is just a box after all and what’s inside is all that is important, right? At least that’s the way most computer manufacturers view it. Packaging is normally the last consideration that goes into the design of a computer. It’s not really even “design” at least in the artistic sense, other than the consideration of where to slice some holes for cables and ventilation.

But that's not the way it was for Thinking Machines, the company that in the 1980s was years ahead of its competition and, without question, put the “sexy” into supercomputing.

Most technology companies adhere to the age old design philosophy that “form follows function” where form is reduced to the utilitarian minimum necessary to fulfill structural and functional requirements.

Thinking Machines however, recognized that few people would appreciate the extreme differences between the Connection Machine and any other computer in the world by simply looking at a boring beige enclosure. The Company wanted the outside to tell a story about the machine and convey the unique architecture that would be hidden from the human eye.

© Thinking Machines Corporation, 1987. Photo: Steve Grohe

The design for the Connection Machine CM-1 and the subsequent enhanced version, the CM-2, was conceived by world renowned artist Tamiko Thiel. But just as the Connection Machine broke the mold on computer design, Ms. Thiel is not your average “artist”. Beyond the global accolades for her artistic work, she has a B.S. from Stanford in general engineering/product design and also an M.S. from MIT in mechanical engineering. She followed that with a stint at Akademie der Bildenden Kuenste (Academy of Fine Arts) in Munich and has broken new ground in the field of augmented reality with installations globally.

Tamiko wrote a beautiful essay entitled “The Design of the Connection Machine” where she describes in eloquent detail, the process of designing the “wardrobe” for the most technologically advanced and revolutionary machine of its time. She begins with a concise problem statement that unless you continue reading, understates the complexity of the design challenge she faced.

“Despite our ambitious goals for the appearance of the machine, Thinking Machines’ concern was based on a pragmatic need: to communicate to people that this was the first of a new generation of computers, unlike any machine they had seen before.”

Unlike Any Design They Had Seen Before

Tamiko's quest to find a form began with the design of the magnificent machine itself. The physical design of the Connection Machine was one of computing's great engineering accomplishments: 65,536 processors, grouped 16 to a chip, for a total of 4,096 chips.

She began the visioning process through a working session with none other than Nobel laureate Richard Feynman, who also worked at Thinking Machines during his time off from Caltech. Dr. Feynman helped her visualize the hypercube architecture so that she could begin the process of translating the internal design into an enclosure that would conjure up images of mad scientists working deep inside the Cheyenne mountain range.

In her essay, Tamiko wrote “The search for a form had to start with bare practicalities: how do you physically organize a machine with 65,536 processors? Is it physically possible to build it like a ‘normal’ machine, or would we have to wallpaper a room with boards, and weave a rat’s nest of cables between them? The processors were grouped 16 to a chip, making a total of 4,096 chips. These chips were to be wired together in a network having the shape of a 12-dimensional hypercube. The term ‘12-D,’ far from having to do with warp drives and extraterrestrials, had the practical but complicated meaning that each computer chip would be directly wired to 12 other chips in such a way that any two chips, and thereby the 16 processors contained in each chip,  could communicate with each other in 12 or less steps. This network would enable the rapid and flexible communication between processors that made the Connection Machine so effective.”
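Her “12 or less steps” figure follows directly from the hypercube addressing: give each of the 4,096 chips a 12-bit address, wire together every pair of chips whose addresses differ in exactly one bit, and the number of hops between any two chips is simply the number of bit positions in which their addresses differ, which can never exceed 12. A minimal Python sketch of the idea (ours, for illustration only; not Thinking Machines routing code):

    # Illustrative sketch of 12-D hypercube routing (not Thinking Machines code).
    # Chips are numbered 0..4095; two chips are directly wired if and only if
    # their 12-bit addresses differ in exactly one bit.

    def hops(a, b):
        """Steps needed between two chips = Hamming distance of their addresses."""
        return bin(a ^ b).count("1")

    def route(src, dst):
        """One greedy path: flip one differing address bit per step."""
        path = [src]
        while src != dst:
            src ^= (src ^ dst) & -(src ^ dst)  # flip the lowest differing bit
            path.append(src)
        return path

    assert max(hops(0, b) for b in range(4096)) == 12  # worst case: all 12 bits differ
    print(hops(0b000000000001, 0b100000000001))        # 1: directly wired neighbors
    print(len(route(0, 4095)) - 1)                     # 12: the farthest any chip is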

Let the Drama Begin

Tamiko's design evoked emotion, something that is not normally associated with computers. The magnificent machine commanded attention, and it was clear to anyone who saw it that this was no ordinary computer. It informed the viewer with authority: “I am a new breed of supercomputer, a majestic machine that is capable of something very special.”

The final design, used for both the CM-1 and its faster successor, the CM-2, was a massive cube, 5 feet tall, formed in turn of smaller cubes, representing the 12-dimensional hypercube structure of the network that connected the processors together.

“This hard geometric object, black, the non-color of sheer, static mass, was transparent, filled with a soft, constantly changing cloud of lights from the processor chips, red, the color of life and energy. It was the archetype of an electronic brain, a living, thinking machine.”

In fact, Steve Jobs was so impressed with her design for the Connection Machine that he wanted her to design the NeXT computer; Joanna Hoffman, who was working at NeXT at that time, told Tamiko this years later in a private communication. Unfortunately, Thiel had already moved to Europe to study fine art, and was not to be found.

Richard Feynman in Apple's “Think Different” campaign

Tamiko also played a part in the design of the last iteration of the Connection Machine, the CM-5, which was designed by Maya Lin, the architect best known for the Vietnam Veterans Memorial in Washington DC. According to Thiel, “I did play a small part in it: right at the end of the design I turned up in Boston, and Danny Hillis told me that the design, while beautiful, seemed to him to lack “life.” I spent a day looking at the machine and talking to Maya and Danny about it. I realized that what was missing was the sense that the machine was alive, which I had accomplished by making the doors of the CM-1/CM-2 transparent, so that the status lights inside could be seen as they flashed on and off with the processor activity. In the CM-5 these lights were moved to a separate, additional panel on the edge of each machine segment. I suggested that the sides of the CM-5 be made transparent. The reply was that this had been thought of, but the machine needed to be in a Faraday cage. Yes, a cage, I replied – the sides didn't have to be solid, a mesh would suffice. And if the mesh occluded some lights some of the times, then when you walked around the machine the lights would appear and disappear, which would again give you the sense of a life inside of the machine. Danny loved the idea, but I never saw the CM-5 in person and don't know if this was implemented or not.”

Lobbyists Outsmart the Thinking Machine
In spite of building the most advanced computer of its time, and hands down the most beautiful, Thinking Machines was ultimately out-lobbied by its competitors and closed its doors in 1994. A few of these machines still survive: a CM-2 in the collection of the Smithsonian Institution's National Museum of American History, and one in the Computer History Museum in Mountain View, California.

The blood, sweat, intellect, and artistry that went into the design of this magnificent machine created an archaeological artifact that will continue to astound future generations, and the Tamiko Thiel design could stand side by side with any of the world's great art treasures.

Bloodless beige boxes? I think not!

If you would like to know more about the artist, you can view samples of her work at http://www.tamikothiel.com/ where you can also order apparel with the original concept drawings that Tamiko created for her design of the Connection Machine like the one worn by Dr. Feynman in Apple’s “Think Different” campaign, shown above.

I would like to thank Ms. Thiel for her generous giving of time, insights and guidance in researching and creating this story.

 


History Session @ Flash Memory Summit, Aug 7th, Santa Clara, CA

July 28th, 2014 by Alan Weissberger

Session 302-C: An Interview with Simon Sze, Co-Inventor of the Floating Gate (History Track)
Organizer: Brian A. Berg, President, Berg Software Design

Thursday, August 7, 9:45am-10:50am, Santa Clara Convention Center

Speaker

Simon Sze, Professor, National Chiao Tung University (Taiwan)

Session Description:

What was the origin of the “floating gate” transistor, the foundation for all of today’s nonvolatile memory?  A small group at Bell Labs thought of replacing core memory with non volatile semiconductor memory that didn’t exist at the time. A lunchtime conversation about layered chocolate or cheesecake spawned the concept of a “floating gate” layer for a MOSFET.

Come hear Simon Sze, co-inventor of the floating gate, share details of this and many other interesting stories about how storage technology has progressed, including work by Intel, Toshiba, and many now-forgotten companies.

Intended Audience:
Marketing and sales managers and executives, marketing engineers, product managers, product marketing specialists, hardware and software designers, software engineers, technology managers, systems analysts and integrators, engineering managers, consultants, design specialists, design service providers, marcom specialists, product marketing engineers, financial managers and executives, system engineers, test engineers, venture capitalists, financial analysts, media representatives, sales representatives, distributors, and solution providers.

Session Organizer: Brian A. Berg is Technical Chair of Flash Memory Summit.  He is President of Berg Software Design, a consultancy that has specialized in storage and storage interface technology for 30 years.  Brian has been a conference speaker, session chair and conference chair at over 70 industry events.  He is active in IEEE, including as a Section officer, an officer in the Consultants’ Network and Women in Engineering affinity groups, and Region 6 Milestone Coordinator.  He has a particular interest in flash firmware architecture, including patents and intellectual property.

……………………………………………………………….

About the Interviewee:
Professor Simon Sze, PhD is the co-inventor of floating gate non-volatile semiconductor memory, which provided the basis for today's flash devices. His invention led to such hugely popular consumer electronics as smartphones, GPS devices, ultrabooks, and tablets. Dr. Sze has also made significant technical contributions in other areas such as metal-semiconductor contacts, microwave devices, and submicron MOSFET technology. He has written over 200 technical papers and has written or edited 16 books. He is currently a National Endowed Chair Professor of Electrical Engineering at National Chiao Tung University (Taiwan). He is also an academician of the Academia Sinica, a foreign member of the Chinese Academy of Engineering, and a member of the US National Academy of Engineering. Simon spends half his time in Taiwan, where he teaches and looks after his 99-year-old uncle.

 

About the Chairperson / Interviewer:
Alan J. Weissberger, ScD EE is the Chair of the IEEE Silicon Valley Technology History Committee, Content Manager for the global IEEE ComSoc Community website, North America Correspondent for the IEEE Global Communications Newsletter, Chair Emeritus of the IEEE Santa Clara Valley ComSoc, and an IEEE Senior Life Member.  He is a former Adjunct Professor in the Santa Clara Univ. EE Department where he established the grad EE Telecom curriculum.  As a volunteer for the Computer History Museum, SIGCIS.org and ITHistory.org, he writes technical summaries of lectures and exhibits.

FMS History Session description

Register for FMS here

……………………………………………………………………………………………

Related Session:

Following this history session (at 11am), Prof. Sze will receive the FMS Lifetime Achievement award as co-inventor of the floating gate transistor. More information here.

………………………………………………………………………………………………………………………………

Questions & Issues for Simon to Discuss:

1.   How did the concept of using non-volatile semiconductor memory to replace core memory evolve at Bell Labs in early 1967? Note that there were no commercially available semiconductor memories at that time and Intel didn’t even exist.

2.   Please describe your floating gate transistor project, which was started in March 1967 and completed in May of that same year.  What did layer cake have to do with it?  What type of experiments did you do and what were the results?  What did your AT&T Bell Labs boss say about the paper you wrote on floating gate and its potential use in Non volatile semiconductor memories?

3. Why didn’t Bell Labs attempt to commercialize floating gate or other research related to MOSFETs? After all, they were the #1 captive semiconductor company in the U.S. supplying components to Western Electric and later AT&T Network Systems for decades.

4. Why was the floating gate transistor so vital to NVMs like EPROMs and (later) Flash? History shows that Intel, SanDisk and Toshiba made NVM components based on that technology, but many years after it was invented. How did that happen?

5. You have said 1967 was your best year, even better than the years when you saw others commercialize your floating gate invention. Please (briefly) tell us why.

6. Describe your relationship with floating gate co-inventor Dawon Kahng who was of Korean descent. How did you two get along- at work and personally? Were there any other Bell Labs co-workers or bosses that impacted your career or life?

7. On a broader scale, what was the work environment like at Bell Labs in the 1960s and how did it change during your 27 years there?

8. You left Bell Labs in 1990 to become a full-time professor in Taiwan, where you had graduated from National Taiwan University before pursuing your advanced degrees in the U.S. 24 years later, you are still a professor there, as well as at Stanford University, where you got your PhD. You've also given numerous guest lectures and courses in other countries such as England, Israel, and mainland China. Please tell us about your academic career, including why you decided to study semiconductor physics at Stanford in the early 1960s and your experience as a professor and guest lecturer.

9. You’ve been very successful as a prolific author of books, chapters, papers, etc. Your textbook on the Physics of Semiconductors is a classic. Tell us about the methodology you used to publish research and textbooks and a few other books/chapters/papers you are especially proud of.

10. You've said that Moore's law hit a wall in 2000, but moved ahead due to advances in making Flash memories. Could you please elaborate on that for us, and tell us how long you think Moore's law can keep going? NOTE: Moore's law only applies to MOS LSIs, not bipolar or analog components. Up till 2000, Moore's law was driven by advances in DRAMs.

11. You have a “long wave” theory on the pervasiveness of electronics, called the “Cluster Effect,” which looks far out into the future. What's in store for us there, in particular when Moore's law ends?

12. What advice would you give to aspiring technology researchers, engineers, authors & educators?

……………………………………………………………

Closing Remarks

Q & A