IT History Society Blog

Archive for the ‘Uncategorized’ Category

Ted Hoff: Errors & Corrections in Intel Trinity book by Michael Malone

Friday, September 12th, 2014

Editor’s NOTE:  This article was written by Ted Hoff, PhD EE and edited by Alan J. Weissberger, Chairman of the IEEE SV History Committee.

From Ted Hoff:

The errors listed below are in approximately the same order as they appear in Malone’s book. To aid the reviewer, chapters are identified in brackets.

[CH 14]

As of 1969, “CPU on a chip” was discussed in the electronics literature, but generally thought to be some time away–most CPUs were just too complex for the state of the semiconductor art at that time. Therefore, at the beginning of 1969, the microprocessor did not cross from “theory to possibility”–it was still not seen as feasible due to the limitations in the LSI processes. The state of the art was such that Intel’s 1101, a 256 bit MOS static RAM, was on the drawing board.

[Editor's Note: In addition to Intel, at least two systems companies, Fairchild Systems Technology and Four Phase Systems, were designing "MOS LSI microprocessor" chip sets for internal use in late 1969. Fairchild's was for use as a micro-controller in its Sentry semiconductor tester systems. Four Phase Systems designed the AL1—an 8-bit bit-slice CPU chip, containing eight registers and an ALU, for use in their data terminals.]

Malone claims Busicom came to Intel to seek a CPU on a chip, but that claim is proved false by the fact that Busicom engineers rejected every suggestion that might have moved them in that direction. Their only interest was in a calculator chip set, and a calculator set is not a CPU.

Malone is also wrong in claiming I was thinking about a CPU on a chip at the time of the Busicom project. He is also wrong in claiming I had used minicomputers to design ICs before joining Intel. I had not used any computer smaller than the IBM 1130, which was typically a room-filling installation. My only experience with IC design was participating in a trial course at Stanford, where a partial layout of a few transistors was done without the use of computers of any kind.

Representing the PDP-10 as a minicomputer is wrong–the PDP-10 was DEC's top-of-the-line mainframe. DEC's PDP-8 was a minicomputer, but its architecture was not appropriate for the Busicom project, and implying I used it is incorrect. The PDP-8 was a 12-bit word machine, and was not suitable for programs in ROM because of the way it processed subroutines.

I was not looking to build a general purpose processor. I did not “volunteer” to “manage” Busicom–I was asked to act as liaison, to assist the Busicom team in achieving their technology transfer. I was willing to take it on, although refusing the request probably would have been unwise at this early time in my employment.

Malone’s implication I expected Busicom to request a computer-on-a-chip is a fabrication. I never expected that.

Stating that Busicom’s design had “morphed” into a “monster” from a more straightforward design is incorrect. Prior to the arrival of the Busicom team we had not seen details of their design–and upon seeing the details it seemed more difficult than we had been led to believe at the April meetings. Referring to me as “project director” is incorrect.

Citing the terms of the agreement between Intel and Busicom after the arrival of the Busicom engineers is misleading. The agreement, under which 60,000 chip sets (to be specified by Busicom at a later time) were to be sold by Intel at a price not to exceed $50.00, was signed in April, some two months before the Busicom engineering team arrived. The agreement was not revisited after the Busicom engineers arrived at Intel.

Stating that the Intel/Busicom design would fail “catastrophically” and Intel would be left “high and dry” misrepresents my conclusions. Rather I was concerned that it would be difficult to meet the cost targets because of package requirements and chip complexity, and that the number and complexity of chips needed would burden Intel’s limited design staff such that it could impact our work on memory.

I never considered that Busicom was “blowing” a product opportunity, nor did I know how to fix the problems at this time. These statements are just more of Malone’s fabrication. After I took my concerns to my immediate supervisor, Bob Noyce, he suggested I try to see if there might be a way to simplify the set and authorized me to work with the Busicom engineers. At that point in time there were no discussions of applications other than the Busicom calculators. My work amounted to making a few suggestions as to how some simplifications might be accomplished. The Busicom team listened, but preferred to do their own simplification. They did take time to critique some of my ideas, pointing out where problems might arise.

Malone insists that my work was a secret, but if it were, how could the Busicom engineers cite what it lacked?

Malone states I was burning up time, and working on a small-chip concept. That is not what was happening at all. Busicom had its own approach, and I was suggesting some modifications that I felt would make the job easier. Part of my job assignment at this time was to work with the Busicom engineers, and with Bob Noyce’s assent I felt justified in learning more of Busicom’s design to see where some reduction in complexity might be found.

Stating that I was telling Bob Noyce I wanted to emulate a computer is another fabrication. I did make use of my knowledge of computers and how complex problems are solved using programs in the effort to simplify the chip set. The set already had read-only memory (ROM), and the most obvious way to simplify the set seemed to be to move some functions from hardware to ROM. That is not the same as starting from scratch with a general-purpose CPU. When Shima, et al., objected that some function was missing, I would show how code in ROM could implement that function when using a simplified structure. The Busicom chip set already needed firmware, so my suggestions only involved some modest additional coding.

My discussions with Noyce were about chip set simplification and the Busicom engineers’ reluctance to consider my suggestions. I was not trying to emulate a computer, but rather to use computer-like techniques to simplify the set. Noyce encouraged me to continue even if the Busicom team was not ready to accept my proposals, as a possible back-up to what the Busicom team was developing.

Noyce’s questions about operating systems etc. are taken out of  context. Bob would come around quite frequently and talk about many issues, including computer technology. I believe he wanted to become more comfortable when talking to Intel memory customers, i.e. computer manufacturers. The discussion of operating system concepts had nothing to do with microprocessors at this point in time.

Malone’s claim I reported to Andy Grove is false! As noted above, I reported to Bob Noyce, and Bob was the only Intel signatory on the  April agreement. In my 14 years at Intel, I never reported directly to Grove. One more Malone fabrication.

There was no “skunk-works operation.” I did not give up working with the Busicom team nor cease trying to suggest simplifications to their design.

I was not needed to get the 1101 256 bit static RAM/semiconductor memory “out the door.”

Malone goes to great lengths to criticize Bob Noyce’s actions, saying he had signed off on an unproven product that had a “tiny” chance of success. Noyce had just encouraged me to continue trying to get some simplification of the Busicom chip set to which we had already committed–any simplification would have improved the chance of success.

Malone says the simplified chips were for a “market that might never exist.” He apparently forgot about those 60,000 chip sets we had  agreed to deliver–and my work represented only a possible simplification of their set requirements, and was a possible backup should Busicom’s engineers be unsuccessful in solving their complexity problem.

I was not at this point pursuing a processor design. However, every attempt to simplify the Busicom set tended to move me in the direction of a more general-purpose, more programmable architecture.

Malone claims Bob had gone “renegade” on his own company–just because he authorized work toward making a customer’s requirements more compatible with his own company’s capabilities? I believe most would consider his decision a prudent way to try to keep the Busicom project something that might benefit Intel.

I did not work mostly alone as Malone states–I continued discussions with the Busicom team, and was in communication with the MOS designers to ensure any suggestion I might make was consistent with Intel’s MOS design capability. I had other tasks as well, but in general they did not involve putting out “fires.” The closest event to a “fire” was just before the first 1101 wafers with a chance of working were to come out of fab. Andy told me that functional testers weren’t ready and asked if I could help. I threw together a very crude tester over the weekend, and used it to test two wafers. We found 13 good devices on one, two on the other, and celebrated with champagne.

My work on the Busicom project took more like two rather than three months, i.e. July and August, 1969. By the end of that period, the suggestions I had been making pretty much added up to the architectural structure for what would be the 4004 CPU.

Malone continues to insist the project was secret, and then claims that working with a customer to match its requirements to Intel’s capabilities was a potential scandal. If so, perhaps all engineering should be outlawed?

Malone incorrectly states that I hired Stan Mazor because I felt deficient in software. In fact, Stan had been working in computer architecture, and at last I would have someone I could collaborate with. Most of the architecture, etc. was done by the time Stan arrived, but he could help put the whole package together.

Unlike Malone’s implication, I had been in frequent communication with Bob Graham, and again the project was not secret.

I did not have much to do with the 1102 (Intel’s first 1K DRAM), other than hearing about its problems from Les Vadasz. Malone falsely states the 1103 1K DRAM used the 1102 “core” in spite of earlier statements to the effect Gordon Moore wanted the 1103 to be independent of the 1102.

The “new project” passage gets twisted by previous errors. Andy Grove may have been upset with a new burden, but this task just represented the Busicom obligation coming due, not really a “new” project.

[CH 15]

Malone argues that Faggin was the only one who could have designed the 4004 chip set, but consider that it was if anything less complex than some of the chips of the original Busicom calculator set. One of the reasons Intel was chosen by Busicom was that it was perhaps the only semiconductor company not yet doing calculator chips for Busicom competitors. How did those semiconductor companies get their calculator chips designed?  For example, Mostek designed a single-chip calculator for Busicom, as reported in the February, 1971 issue of Electronics magazine. That chip had 2100 transistors, which was very close to the 4004 transistor count.

Bootstrap circuits were well known in metal gate MOS, where gate overlap could be controlled. Initially there were questions of silicon gate applicability to shift registers because they used such techniques. Intel found it could make shift registers using silicon gate MOS, and offering shift registers played a role in forging the connection to CTC.

The set of four chips was officially known as the MCS-4 family. The 4004 was a CPU on a chip, because the other chips in the set were memory and I/O. The interface logic on the 4001 and 4002 chips helped to eliminate the “glue logic” that subsequent microprocessors needed–thus making the 4004 more of a single-chip CPU than subsequent microprocessors. Later the 4008 and 4009 chips were provided to perform the glue logic function and allow non-family memory chips to be used with the 4004 as well.

Malone’s explanation of bytes and digits is incorrect. Proper terms are: a bit (which has only two states), a nibble (4 bits), a digit (4 bits in Binary Coded Decimal (BCD), limited to the range 0 to 9), a byte (8 bits), and a word (which can be many different bit lengths). The 4004 was a 4-bit processor, not a 4-digit processor. Modern microprocessors are typically 32 or 64 bits, not digits. Also, one must separate data quantum size from data path width. For example, the 8088, used in the original IBM PC, processed most data in 16-bit sizes, but used a data path/bus only 8 bits wide.
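[Illustration: a minimal C sketch of the terminology above. One byte holds two 4-bit nibbles, and in BCD each nibble holds one decimal digit (0-9).]

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t byte = 0x42;              /* a byte: 8 bits */
    uint8_t hi_nibble = byte >> 4;    /* upper nibble: 4 bits -> 0x4 */
    uint8_t lo_nibble = byte & 0x0F;  /* lower nibble: 4 bits -> 0x2 */

    /* In BCD, each nibble stores one decimal digit (0-9),
       so 0x42 encodes the decimal value 42. */
    int bcd = hi_nibble * 10 + lo_nibble;

    printf("byte=0x%02X  high nibble=%d  low nibble=%d  BCD value=%d\n",
           (unsigned)byte, hi_nibble, lo_nibble, bcd);
    return 0;
}
```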

Stan Mazor was more than a computer programmer. At the time of the dual presentation meeting (pg 154) the CPU of the Intel proposal consisted of two chips, designated “arithmetic” and “timing.” Later Stan suggested combining the two–leading to a true CPU on a chip.

Again Malone mis-labels the MCS-4 work as secret. The only resources involved until MOS design began were the modest efforts of Stan and myself. Had Intel been required to design and manufacture the original Busicom chip set, Faggin and Shima would have needed considerable extra design staff–or would have taken several more years to complete the set. Consider that the MCS-4 family was one complex logic chip, two memory chips, and one fairly simple I/O expander. Even the reduced Busicom set would have been 8 complex logic chips and two memory chips.

On pages 252-253, Malone notes that 1975 had been the most miserable year to date for Intel, and that the “company’s factory in Peking, a key part of the manufacturing, burned to the ground.” A U.S. company having a factory in Peking (now known as Beijing) in 1975 would have been quite unusual–China was still closed to the rest of the world, and the U.S. didn’t have diplomatic relations with China until January 1, 1979. That Intel factory was actually in Penang, Malaysia (Source: Intel 1975 Annual Report). The 1975 Intel report goes on to note that a massive effort allowed them to recover with minimum problems for their customers. It also noted that insurance claims were being filed.

[CH 16]

There is no indication Intel would have gone into the microprocessor business without the Busicom project. It is more likely that companies making logic sets, e.g. TTL, would have ultimately made the first CPU on a chip. They would have made increasingly complex slice chips, and added program sequence controllers, then finally combined all.

[CH 17]

Again, Malone misstates the nature of the 1103 1K DRAM. It was not built on the Honeywell core.

Malone wrongly asserts that Bob Noyce had taken on a long-shot project that was secret, etc.–all fabrication.

[CH 18]

Malone is wrong in stating I lobbied against announcing; I only cautioned against overselling. I urged that we identify new uses for computers, for example those that had been done by relays, SSI/MSI logic, etc.

The argument about customers needing to learn programming was a case for offering certain types of support, not opposition to announcing the product.

Stan Mazor was the primary contact with Computer Terminal Corp. (CTC). It was Stan who made the proposals to them, not I.

Hal Feeney reported to Les Vadasz; he was not one of my “subordinates.”

Malone took a comment about computers as big expensive pieces of equipment way out of context. There were some skeptics who had a difficult time grasping the concept that a computer could be inexpensive. Fortunately, they were in a minority.

Malone talks of throwing away a $400 chip set. Why would one throw away a whole set if only one chip is bad? If the CPU chip was tossed, it would represent $30.00 (100 quantity) as of Sept. 1972.

I never lobbied against marketing the microprocessors, only against claiming them to be minicomputer replacements.  Again Malone is wrong in these assertions.

Minicomputers of that day were about the size of a portable TV set, or the traditional “breadbox,” not a couple of refrigerators as Malone states. They cost about $10,000, not a couple of hundred thousand dollars as Malone claims.

A 4004 microprocessor chip set, consisting of the four chips, cost $136 in single quantity in Sept. 1972, not $400. In 100 quantity, that set cost $63.

We were not offering the set(s) to replace mainframes–it was a new market, extending computing power into areas unthinkable a few years prior to that time. For many years, the industry has referred to that market as “embedded control.” It should be noted that minicomputers were not replacements for mainframe computers either. Minicomputers created new markets for computers in many industries, including process control, laboratory instrumentation, test systems, etc.

Malone reports Bob Graham’s comments in regard to minicomputers and market share out of context. Those comments came much earlier in 1971, regarding the issue of whether Intel should offer the chips as part of its product line, not at the time that marketing plans were being  developed.

Inside of Intel, the engineering side (e.g. Faggin and myself) saw tremendous potential – not for replacing big mainframe computers–but for what the industry refers to as embedded control.  For example, Faggin needed to build LSI testers, and I needed to build an EPROM programmer.  Microprocessor control made those jobs an order of magnitude easier than if we had used random logic and/or discrete components. We strongly believed there were other engineers (outside of Intel) that felt the same way we did about using microprocessors for (many) different types of embedded control applications.

Mainframe computers were solid state, but most were of a significantly higher level of performance than the first generation of microprocessors.   Most mainframes in the 1970’s and 80’s used ultra high speed Emitter Coupled Logic (ECL) for the CPU, which provided orders of magnitude higher performance than MOS microprocessors did in those years.

Mainframes were never our target market for the MCS4, 8008, 8080 or other MOS LSI microprocessors.

When I travelled on business with Bob Noyce, it was usually about memory. There was one major microprocessor promotional event, in 1972. Stan and I took three one-week trips over a period of five weeks, presenting the microprocessor concept. Attendance was well above original expectations and our message was very well received.

[CH 19]

Malone is wrong in stating the 8008 microprocessor was a “turbocharged” version of the 4004–in many applications the 8008 was slower than the 4004. Malone defends his statement with many erroneous numbers.

The 4004 clock speed was 740 kHz, not 108. The correct number was noted earlier in Malone’s book. The 8008 clock speed was 500 kHz, not 800. The 800 kHz speed was that of a later speed-selected version of the 8008, known as the 8008-1, not yet available as of September, 1972. Further, the 8008 took two clock steps to do what the 4004 did in one. Thus it was typically slower than the 4004, even at doing 8-bit arithmetic, and it needed a lot more glue logic than the 4004.
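[Illustrative arithmetic based on the figures above: at 740 kHz with one step per clock, the 4004 ran roughly 740,000 steps per second, while at 500 kHz with two clocks per step, the 8008 managed roughly 250,000 equivalent steps per second, about one third the rate.]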

We had not been unsuccessful with the 4004. The 4004 announcement in Electronic News and at the 1971 Fall Joint Computer Conference generated a greater response than just about any other Intel advertisement.

Again, Malone talks of replacing mainframes. He seems clueless about the area that was emerging, today called embedded control. At one meeting of microprocessor pioneers held in the 1990s, Shima reported microcontroller usage for embedded control was at least an order of magnitude greater than the use of microprocessors for PCs.

At Intel, designers sometimes called to see if we could help with a problem, and many of those calls were routed to me. One customer needed to get data from the bottom of an oil well–I asked him if he had heard about our microprocessors. He had not, so I had Intel marketing send him data, and soon we had another customer.

Malone is wrong in stating the 8008 was four chips; it was one. And the 8080 had moved some 8008 on-chip features off-chip, so it had somewhat greater memory needs than the 8008.

Malone’s claim of on-board ROM memory for the 8080 is incorrect–the 8080 still required external ROM for programs.

While the 8080 was introduced officially by Intel in April 1974, there had been pre-announcement activity and pre-announcement sales–an exception to Intel’s earlier policy. I’ve heard that at least 2,000 8080 devices were presold at a price of $360 each, such that development was fully paid for before the device was announced.

NOTE: This editor also heard the same thing in March 1974. He presented the keynote speech and was on a microprocessor panel with Intel engineer Phil Tai at an early March 1974 IEEE Conference in Milwaukee, WI. Mr. Tai described the 8080 and planned support chips BEFORE the parts were formally announced by Intel several months later. “$360 was the single quantity price for the 8080,” he said at the time.

…………………………………………………………………….

I did not consider my discussion with Bob Noyce as “almost shouting”–I just quietly pointed out that a postponement was actually a decision not to proceed at that time. It is also misleading to put this item here in the narration, as it occurred in the summer of 1971.

Malone’s statement about “endless manuals” is very misleading. For the MCS-4 there was a data sheet of 12 pages, covering all 4 chips: CPU, ROM, RAM and I/O expander. The data sheet for the 3101A, a single chip in its second generation, ran 8 pages. We offered an MCS-4 user guide, some 122 pages, which taught how to use the family in many applications. But we also offered memory design handbooks to help customers with those products. The August, 1973 version ran to some 132 pages. We did produce a 6-page index-card-sized quick reference that could fit in a shirt pocket. In visiting customers, it was always gratifying to hear them praise Intel’s level of support.

Malone claims the EPROM was unusual in that it was a non-volatile ROM. By design, all EPROMs/ROMs were non-volatile so they could retain stored information with no power applied. They were intended to serve as instruction/program memory (AKA “firmware”) for micro-controller applications. It is hard to imagine a more useless component than a volatile ROM.

Malone totally misrepresents the advantages offered by the EPROM. Before it, a customer using a MOS ROM had to send code to the semiconductor factory, where a mask would be made, wafers processed, then sorted, separated, packaged and tested again. The procedure could take weeks. With the EPROM, a customer could load his firmware into it himself, rather than having to order parts from the factory with the code already installed. The customer could debug his code, make corrections, and erase and reuse his EPROM, all in about an hour, not weeks.

Malone states for 1972 Intel had $18 million in revenue and was unprofitable. According to Intel’s 1972 annual report, the company had over $23 million in revenue and over $1.9 million in profit.

Malone states Intel’s 1972 staff could fit in a large conference  room. Intel’s 1972 annual report noted 1002 employees, so it would have been quite a large conference room.

Malone describes me coming back from speeches noting a “sea change.” After the tour of 1972 I gave relatively few speeches, and never saw a “sea change.”

The 8080 was not the first single chip microprocessor–it required more glue logic chips than the 4004. The 4004 was the first commercially available, single chip microprocessor.

Malone claims Intel gave no credit to Faggin for 30 years. Again wrong! Bob Noyce and I co-authored an article for the initial issue of IEEE Micro magazine (February, 1981) in which we gave credit to Faggin and Shima, and included a photo of Shima. A November/December 1981 issue of Solutions (a publication of Intel Corporation) states that the microprocessor chip design proceeded in 1970 under the guidance of Dr. Federico Faggin, and that Dr. Faggin would later found one of the most innovative microprocessor firms, Zilog, Inc.

.…………………………………

Summary & Conclusions:

From my read of the Intel Trinity book, it seems that Malone gets some idea or notion in his head, then does not let reality interfere with his fantasies.  He appears to assume my motive was to do a CPU on a chip, regardless of its consequences to Intel, and that for some unknown reason Bob Noyce acquiesced.  In reality, I was just trying to simplify a chip set we had already agreed to manufacture and sell.  It just happened that to achieve the most simplification, a simple processor turned out to be the best solution for the Busicom calculator project.

Malone evidently can’t comprehend embedded control, which in terms of numbers accounts for much more usage of microprocessors than PCs. He cannot seem to grasp that there are uses for computers of any type (minicomputer, microprocessor, microcontroller) other than as mainframe replacements, when even minicomputers did not perform that function. Minicomputers were mostly used for control of real-time systems, like supervisory and process control, in the late 1960s and early 1970s, not as mainframe replacements for heavy-duty number crunching.

Note: the Editor worked on a minicomputer-controlled integrated circuit test system at Raytheon Digital System Lab from 1968-69, and from 1970-73 on a minicomputer command & control/telemetry system for the Rinconada (Santa Clara County) water treatment plant. Later, many process control, machine and device controller functions used one or more microprocessors, which replaced minicomputers and lots of random logic.

[Weissberger's paper on Microprocessor Control in the Processing Plant was published in an IEEE Journal in 1974-75 and is available for download on IEEE Xplore.]

By being unable to comprehend any usage for microprocessors other than in PCs, Malone misses the major use of microprocessors, especially embedded control, in the early to late 1970s. The amazing fact is that microprocessors became commercially available in 1971 (the MCS-4 chip set), but PCs didn’t come out until the late 1970s to early 1980s (the IBM PC was introduced in the summer of 1981). So how could early microprocessors be directed at PCs when none existed until 1977, and the industry really didn’t gain traction until the IBM PC in 1981?

……………………………………………………………………………………….

Note: The next article in this series will be on the glaring omissions/credit not given in Malone’s Intel Trinity book. While the author of this article (Ted Hoff) is most noted for his co-invention of the microprocessor, his work at Intel on semiconductor memories and LSI codec/filters was at least as important, if not more so. That can be verified by the IEEE CNSV Oct 2013 panel session on Intel’s transition to success. Here are some links related to this event: http://www.californiaconsultants.org/events.cfm/item/200

Full event video
Program Slides
Five photos from the event
Event Summary
National Geographic 1982 Story “The Chip”

References:

Author Michael Malone at the Commonwealth Club: The Story Behind Intel

Inventor Ted Hoff’s Keynote @ World IP Day- April 26, 2013 in San Jose, CA 

 

Author Michael Malone at the Commonwealth Club: The Story Behind Intel

Tuesday, September 9th, 2014

On August 6, 2014, Michael Malone, Author of The Intel Trinity, spoke at the Commonwealth Club of Silicon Valley.  The program was held  in the upper galleries of the Tech Museum in San Jose, CA.

Similar to his earlier speech at the Computer History Museum, Mr. Malone emphasized the evolution, leaders, and current direction of Silicon Valley technology.   The history of Intel and its three great leaders- Bob Noyce, Gordon Moore, and Andy Grove – was discussed only in metaphors and general terms.  The few examples he provided seemed to be historically inaccurate, based on this author’s recollection.

…………………………………………………………….

The Commonwealth Club event abstract  states: “From his unprecedented access through the corporate archives, Malone has chronicled the company’s history and will offer his thoughts on some of the formidable challenges Intel faces in the future.”

…………………………………………………………….

In my opinion, the most important thing Malone said during his talk (including the Q&A session) was that Intel was the “keeper of Moore’s law,” which has been responsible for almost all the advances in electronics for several decades.  That’s due to Intel being able to continue to  advance the state of the art in semiconductor processing and manufacturing which enables them to pack more transistors on a given die size, increase speed, and reduce power consumption.

Another important point Malone made is that the willingness to take risks and “good failure” are important aspects of Silicon Valley’s innovation process. The right kind of failure can be a career booster. For example, the leadership, vision, and high confidence of a CEO/CTO of a failed start-up are valued more than a lucky success. Malone said that ~95% of Silicon Valley companies fail, and that few companies maintain their lead for more than a few years. He’s certainly right about that!

Next came what appeared to be a contradiction. “Our acceptance of failure and even good failure is overrated,” he said. That’s because a failure cannot be equated with success. “When it occurs, failure is what it is. But if you learn from your failure deeply enough and apply those lessons to your next job/start-up, it is a good failure” (and, therefore, a very good thing).

How has Silicon Valley gained worldwide respect for innovation and tech leadership? “People here in Silicon Valley have learned to learn and change for the better as a result of their good failures.” So how then can a “good failure” be “overrated” if it’s a key ingredient of the success story of Silicon Valley?

“Intel has made more mistakes/failed more than any company I’ve ever studied,”  Malone opined.  He then qualified that statement saying “Intel failed in a positive way.  Intel has taken more risks over the last half century than probably any company.”  To continue to progress Moore’s law, “Intel is required to take four or more existentialist risks per decade,” Malone added.  We can’t disagree with that, as continuing to invest in wafer fabs and new semiconductor processes is risky and expensive.

“Intel is that rarest of companies- one that has learned how to learn; turn a failure into a good failure and a success.”

[That was certainly true till 2007, when Apple introduced the iPhone and the mobile computing boom started. Intel has not succeeded in mobile computing, as they invested in and were the cheerleader for WiMAX - a failed "4G" wireless technology. The company didn't invest in LTE, which almost all wireless telcos were committed to for "4G." Also, Intel was not able to reduce power consumption of its Atom processor, so it was unable to compete with ARM Ltd's CPU core (used in over 90% of all mobile devices). Despite several acquisitions (especially Infineon's telecom chip group) there are still no LTE chips or SoCs from Intel. Nor have they captured significant market share of microprocessors used as "the brains" of mobile devices.]

Malone then goes on to tell the story of Intel’s first microprocessor (the 4004), as he does in his book.  [According to Intel insiders I know, that story is highly inaccurate. We will explain why in a follow up article.]

Malone makes it seem like the invention of the Intel 4004 was a mistake, because Intel was an upstart semiconductor memory company and took on the Busicom calculator/custom chip-set project because they needed the money to survive. According to Malone, Intel turned that mistake around and created the microprocessor chip business, even though no one at Intel really knew what that business was about or would evolve into. Malone claims that after a few years (date not specified) the entire Intel management team was behind the decision to ditch memories and become a microprocessor company with only two EXCEPTIONS (who presumably were not aware of that decision) — Intel’s CEO (Bob Noyce) and Chairman of the Board (Arthur Rock). Really? A totally different account of Intel’s transition from a memory to microprocessor company is detailed here (Oct 2013 IEEE program video segments and slides available).

It’s beyond the scope of this article to analyze and debate Malone’s account of Intel entering and committing big bucks to the microprocessor business.  What’s surprising is Malone didn’t even mention the 8008 or 8080 microprocessors during his talk.  Or the competition Intel faced in the mid 1970s from National Semiconductor, Motorola, and Zilog.

Next was the tale of “Operation Crush” – Intel was threatened by Motorola’s new microprocessor, the 68000, around 1979-1980. So the company “locked up its management team for four days to come up with a response,” which was reportedly a statement that “we will offer a systems solution,” e.g. development system, in-circuit emulator, peripheral chips, etc. Really? Intel had been providing those tools and support LSI chips since the 8080 microprocessor came out in 1975.

The true story of “Operation Crush” is chronicled by an article on the Intel website. Its goal was to get 2,000 “design wins*” for the 8086/88 microprocessors within a year after its launch in 1980. It did better than that with 2,500 design wins, including IBM’s selection of the 8088 for their first PC.

Dave House (a classmate of this author in the Northeastern University MSEE program, 1968-69) was a leader in that process; he proposed the 8088 with compatible 8-bit bus peripheral chips after IBM had rejected the 8086. House is also quoted on why Operation Crush was a success in the aforementioned article on Intel’s website. Yet Mr. House was not mentioned in Malone’s speech and gets no credit whatsoever in his book.

*  A “design win” is a new customer selecting and ordering a given component/module for its systems design.

………………………………………………………

Another very interesting point Malone made was that Silicon Valley lacks a voice/ role model/ tech business leader it once relied on. He began by chronicling the leaders/icons/spokesmen for the Valley over time.

The first “Mayor of Silicon Valley,” Malone said, was Stanford’s Fred Terman, who fostered University-Industry cooperation via the Stanford Research Park and paved the way for the valley’s tech future. The second was Hewlett-Packard founder David Packard; and the third was Intel co-founder Bob Noyce, whose death at age 62 in 1990 created the regional leadership vacuum we still have.

“With Noyce’s death, who was going to take his place?” Malone wondered. “The next guys in line were Steve Jobs and Larry Ellison. You weren’t going to put those guys in charge of a community.”

“This valley needs some sort of strong leadership and a well recognized spokesman,” he said. “Until we get that, this valley’s going to speak in a lot of different voices. We really need to speak with a single voice here,” he added.

“Perhaps that voice (the Mayor of Silicon Valley) might be (Stanford President) John Hennessy,” Malone said.  But that’s not likely, he added, because Malone believes Hennessy wants to retire soon and move to a beach home or equivalent retirement paradise.

[A 1.5 hour interview this author did with Professor Hennessy can be viewed here, along with comments on the event from the Professor and attendees. The individual captioned video segments are here.]

Related excerpt from WSJ OP ED on August 22, 2014:

Why Silicon Valley Will Continue to Rule the Tech Economy  (on-line subscription required)

Human talent and research and design labs are arriving to dominate the new era of devices.
This shift is already under way. The epicenter of Silicon Valley has always migrated. With the return to hardware, it is now preparing to leap back to where it began 75 years ago—to Mountain View……

Finally, Silicon Valley needs a de facto “mayor,” the person who represents its broad interests, and not those of a particular company, industry or advocacy groups. The Valley began with such individuals—Stanford’s Fred Terman, Dave Packard and then Intel founder Robert Noyce. But that ended with Noyce’s premature death in 1990. Now, poised to reinvent itself one more time and lead the global economy again, Silicon Valley needs another leader to address the great changes to come.

Closing Question:  Why did Malone continue as a journalist despite being so close to the leaders of Silicon Valley?

Malone said he grew up in Mt. View from 1963 and then moved to Sunnyvale later in the decade.  In the late 1960s,  he knew Steve Jobs from elementary school and his buddies were on the swim team with Steve Wozniak.  But it gets a whole lot more cozy than that!

“On a given afternoon in the 1960s, Ted Hoff, Bob Noyce, and Wozniak were all crossing each other on a corner very near my home (in Sunnyvale, CA).”  He implies he knew all of them very well, along with David Packard (who wrote his grad school recommendation letter) and other Silicon Valley celebrities.

[NOTE: Go to 1:07 of the event audio to hear it yourself!]

“Longitudinally, I’ve seen all of Silicon Valley,” he said.  “It was all right there in my backyard.”

Closing Comment:

There’s at least one problem with the assertion that Hoff, Noyce, and Wozniak were buzzing around Malone’s corner street in the late 1960s:  Ted Hoff, PhD, did not know Malone in the 1960s and he didn’t live in Sunnyvale during that entire decade!

We will be back with Mr. Hoff’s rebuttal to Malone’s Intel Trinity book in a future blog post.    Here is the first one:

Ted Hoff: Errors & Corrections in Intel Trinity book by Michael Malone

…………………………………………………………….

 

 

The Evolution of the Desk

Monday, September 8th, 2014

A group of students at the Harvard Innovation Lab have created a time-lapsed visualization of the impact of computers, IT, and technology on our lives. The video provides a historical review of the office desk, beginning from the 1980s all the way to present day. The opening scene introduces a desk cluttered with what are now seemingly archaic items – a fax machine, a rolodex, a globe, a radio/alarm clock, a corded phone, an encyclopedia, Yellowpages, glue, tape, scissors, a Polaroid camera, and even an Oxford American Dictionary.

evolution1

The computer is a Macintosh 128K, the original Macintosh personal computer, released simply as the Apple Macintosh.  It originally sold for $2,495 ($5,595 in inflation-adjusted terms) and was one of the first personal computers with a graphical user interface – something that was previously only available on hardware that cost more than $100,000.  The Macintosh had 128 KB of memory and an 8 MHz CPU made by Motorola, along with a 400 KB, single-sided 3.5-inch floppy disk drive.

As the 1980s progress, we see the Macintosh making way for a more modern laptop, an IBM ThinkPad.  We also see the calculator getting sucked into the computer screen by a Microsoft Excel logo, signaling the emergence of the first spreadsheet programs that were available with a graphical interface in 1985. Shortly after, the glue, tape, and scissors get replaced by PowerPoint, which was launched by Microsoft in 1992 and quickly became the leading slideshow presentation program in the world.

evolution2

By the mid 1990s, software and Internet applications begin to disrupt many of the items on the desk. We see an Amazon icon replacing the catalog on the bottom left corner of the desk, a Dictionary.com logo usurping the Oxford American Dictionary, and the classifieds making way for the emergence of Craigslist.

We also see a radical shift in the world of publishing, as the notepad gets replaced by Blogger and the fax machine disappears in favor of Adobe Acrobat and the PDF standard.

evolution3

2004 is when the pace of innovation really begins to accelerate, with Google leading the charge.  Google Maps replaces the globe, Gmail wipes away the envelopes, and the calendar on the wall makes way for Google Calendar. Facebook also makes a huge dent, replacing our contact and address books, while Skype and Pandora disrupt the phone and radio, respectively.

In 2006, we see a refresh of the laptop to the MacBook Pro, and from there, the pace of innovation really begins to take off.  YouTube, Yelp, LinkedIn, and Wikipedia all make their entrance in place of the photograph, the Yellowpages, the rolodex, and the encyclopedia. But it’s Google News that probably makes the most radical of disruptions, all but ending the relevance of traditional print newspapers.

evolution4

By 2008, we have a nearly empty desk, and this is where things really begin to take off.  Disruptive applications like Box and Dropbox introduce the concept of cloud file storage, while services like Square and PayPal optimize online payments and e-commerce. The last few years also see an emergence of the shared-service economy, with startups like Lyft, Uber, and Airbnb making a dent in traditional sectors like the hotel and taxi industries.

All in all, the visualization depicts the radical impact of technology over the last 35 years. Advancements in computer infrastructure, software, and IT have managed to declutter a desk full of dozens of physical items into a simple, empty surface consisting of just a laptop and a phone.

Watch the full video here – Evolution of the Desk.

History Session @ Flash Memory Summit, Aug 7th, Santa Clara, CA

Monday, July 28th, 2014

Session 302-C: An Interview with Simon Sze, Co-Inventor of the Floating Gate (History Track)
Organizer: Brian A. Berg, President, Berg Software Design

Thursday, August 7 9:45am-10:50am Santa Clara Convention Center

Speaker

Simon Sze, Professor, National Chiao Tung University (Taiwan)

Session Description:

What was the origin of the “floating gate” transistor, the foundation for all of today’s nonvolatile memory?  A small group at Bell Labs thought of replacing core memory with non volatile semiconductor memory that didn’t exist at the time. A lunchtime conversation about layered chocolate or cheesecake spawned the concept of a “floating gate” layer for a MOSFET.

Come hear Simon Sze, co-inventor of the floating gate, share details of this and many other interesting stories about how storage technology has progressed, including work by Intel, Toshiba, and many now-forgotten companies.

Intended Audience:
Marketing and sales managers and executives, marketing engineers, product managers, product marketing specialists, hardware and software designers, software engineers, technology managers, systems analysts and integrators, engineering managers, consultants, design specialists, design service providers, marcom specialists, product marketing engineers, financial managers and executives, system engineers, test engineers, venture capitalists, financial analysts, media representatives, sales representatives, distributors, and solution providers.

Session Organizer: Brian A. Berg is Technical Chair of Flash Memory Summit.  He is President of Berg Software Design, a consultancy that has specialized in storage and storage interface technology for 30 years.  Brian has been a conference speaker, session chair and conference chair at over 70 industry events.  He is active in IEEE, including as a Section officer, an officer in the Consultants’ Network and Women in Engineering affinity groups, and Region 6 Milestone Coordinator.  He has a particular interest in flash firmware architecture, including patents and intellectual property.

……………………………………………………………….

About the Interviewee:
Professor Simon Sze, PhD is the co-inventor of floating gate non-volatile semiconductor memory, which provided the basis for today’s flash devices.  His invention led to such hugely popular consumer electronics as smartphones, GPS devices, ultrabooks, and tablets.  Dr. Sze has also made significant technical contributions in other areas such as metal-semiconductor contacts, microwave devices, and submicron MOSFET technology.  He has written over 200 technical papers and has written or edited 16 books.  He is currently a National Endowed Chair Professor of Electrical Engineering at National Chiao Tung University (Taiwan).  He is also an academician of the Academia Sinica, a foreign member of the Chinese Academy of Engineering, and a member of the US National Academy of Engineering.  Simon spends half his time in Taiwan, where he teaches and looks after his 99-year-old uncle.

 

About the Chairperson / Interviewer:
Alan J. Weissberger, ScD EE is the Chair of the IEEE Silicon Valley Technology History Committee, Content Manager for the global IEEE ComSoc Community website, North America Correspondent for the IEEE Global Communications Newsletter, Chair Emeritus of the IEEE Santa Clara Valley ComSoc, and an IEEE Senior Life Member.  He is a former Adjunct Professor in the Santa Clara Univ. EE Department where he established the grad EE Telecom curriculum.  As a volunteer for the Computer History Museum, SIGCIS.org and ITHistory.org, he writes technical summaries of lectures and exhibits.

FMS History Session description

Register for FMS here

……………………………………………………………………………………………

Related Session:

Following this history session (at 11am), Prof. Sze will receive the FMS Lifetime Achievement award as co-inventor of the floating gate transistor. More information here.

………………………………………………………………………………………………………………………………

Questions & Issues for Simon to Discuss:

1.   How did the concept of using non-volatile semiconductor memory to replace core memory evolve at Bell Labs in early 1967? Note that there were no commercially available semiconductor memories at that time and Intel didn’t even exist.

2.   Please describe your floating gate transistor project, which was started in March 1967 and completed in May of that same year.  What did layer cake have to do with it?  What type of experiments did you do and what were the results?  What did your AT&T Bell Labs boss say about the paper you wrote on floating gate and its potential use in Non volatile semiconductor memories?

3. Why didn’t Bell Labs attempt to commercialize floating gate or other research related to MOSFETs? After all, they were the #1 captive semiconductor company in the U.S. supplying components to Western Electric and later AT&T Network Systems for decades.

4. Why was the floating gate transistor so vital to NVMs like EPROMs and (later) Flash? History shows that Intel, SanDisk and Toshiba made NVM components based on that technology, but many years after it was invented. How did that happen?

5. 1967 was your best year – even better than years you saw others commercialize your floating gate invention. Please (briefly) tell us why.

6. Describe your relationship with floating gate co-inventor Dawon Kahng who was of Korean descent. How did you two get along- at work and personally? Were there any other Bell Labs co-workers or bosses that impacted your career or life?

7. On a broader scale, what was the work environment like at Bell Labs in the 1960s and how did it change during your 27 years there?

8. You left Bell Labs in 1990 to become a full-time professor in Taiwan, where you had graduated from National Taipei University before pursuing your advanced degrees in the U.S.  Twenty-four years later, you are still a Professor there, as well as at Stanford University, where you got your PhD.  You’ve also given numerous guest lectures and courses in other countries such as England, Israel, and mainland China. Please tell us about your academic career, including why you decided to study semiconductor physics at Stanford in the early 1960s and your experience as a Professor and Guest Lecturer.

9. You’ve been very successful as a prolific author of books, chapters, papers, etc. Your textbook on the Physics of Semiconductors is a classic. Tell us about the methodology you used to publish research and textbooks and a few other books/chapters/papers you are especially proud of.

10. You’ve said that Moore’s law hit a wall in 2000, but moved ahead due to advances in making Flash memories. Could you please elaborate on that for us and tell us how long you think Moore’s law can keep going. NOTE: Moore’s law only applies to MOS LSIs- not bipolar or analog components. Up till 2000, Moore’s law was driven by advances in DRAMs.

11. You have a “long wave” theory on the pervasiveness of electronics called the “Cluster Effect” which looks far out into the future. What’s in store for us there- in particular, when Moore’s law ends.

12. What advice would you give to aspiring technology researchers, engineers, authors & educators?

……………………………………………………………

Closing Remarks

Q & A

DARPA Director Arati Prabhakar in Conversation with John Markoff @CHM June 11, 2014

Monday, June 30th, 2014

 

Introduction:

This CHM conversation (with NY Times moderator John Markoff asking the questions) was more about the challenges faced by Ms Arati Prabhakar, PhD than it was about DARPA.  It would’ve been very appropriate for a Women in Engineering meeting.  However, there were several important topics related to Ms Prabhakar’s two terms of employment at DARPA, which we’ve attempted to capture in this event summary article.  Note the addendum on Silicon Valley looking to recreate its past via companies establishing innovation labs.

Brief Backgrounder of Arati Prabhakar:

Growing up as an Indo-American in Lubbock, TX was quite challenging, but not nearly as difficult as being a woman in the EE curriculum at Texas Tech.  After obtaining a BSEE from Texas Tech and an MSEE and PhD in Applied Physics from Cal Tech, Arati Prabhakar started working on gallium arsenide (GaAs) projects at DARPA in 1986.  Ms Prabhakar later started the Microelectronics Office there during her first tour of employment.  It’s now called the Microsystems Technology Office (MTO).  After leaving DARPA in 1993, she became the Director of NIST and then worked for a number of Silicon Valley firms, before returning to DARPA in July 2012 to become its 12th Director.

GaAs Projects at DARPA- then and now:

  • GaAs was of interest to DARPA in the mid-1990s, because of its “radiation hardened” properties.
  • Bell Labs tried to build a 16K bit memory out of GaAs material.
  • GaAs was said to have higher electron mobility than traditional semiconductors, but that didn’t turn into a competitive advantage.
  • GaAs did blossom in RF devices used in the microwave world.  It is also used in advanced radar systems (see below).

Author’s Note: Advances in traditional MOS VLSI/ULSI technology precluded GaAs being used for many promising potential applications (e.g. high speed communications).  That’s because it was more expensive than MOS, used more circuit board space, and consumed more power.

  • GaAs arrays are used today to build advanced radar systems for military aircraft and ships.
  • The GaAs power amp in cell phones (which communicates with cell towers) traces back to GaAs research done decades ago at DARPA.
  • GaAs will also be used in the electronics within cell towers (Arati did not say how).
  • Gallium Nitride technology has come into full fruition and will be used in the next generation of military radar.

What has DARPA Done and What Are They Doing Now?

Arati Prabhakar noted the following points during her conversation with NY Times John Markoff and selected questions from the audience that followed:

  • We are at the end of Moore’s law and semiconductor companies are well aware of that.
  • The semiconductor industry has become totally globalized, especially manufacturing (Note: outside of Intel, most ICs are made in China, Taiwan or South Korea).
  • Geopolitical threats have refocused a lot of U.S. government research to counter-terrorism.
  • Most powerful technologies developed at DARPA disrupt and change the way the military works.  That may cause intransigence in accepting new technologies progressed by DARPA. “Here comes DARPA again…” some military people might say.
  • Over several decades,  DARPA has built a compelling track record which has earned them more respect and credibility from the U.S. military.
  • It’s not easy for DARPA to shift from stealth technology development or to move to precision guided weapons or infra-red night vision military systems.
  • DARPA’s Robotics Challenge is a contest that had its first trials last December. The winner will be decided in the finals which will take place “in about another year (2015).”
  • Such a “DARPA Challenge” is a great way to interact and build technology for a greater community.
  • Ground robotics is an incredibly difficult challenge.  Bandwidth was being squeezed on and off to simulate an environment where there was no communications.  As a result, the robot couldn’t count on consistent communications (with the host computer).
  • DARPA invested $$$$ in Boston Dynamics which Google has now bought.  DARPA was very pleased with that.  “It’s a very promising sign,” Arati said.
  • There’s a long history of DARPA making investment in technologies to show what’s possible.  As the technology matures, private capital gets involved in the next stage.
  • Robotics will take massive private investments and real markets to become commercially viable.  It has tremendous potential to help the military on unmanned missions.
  • A robot developed by a Japanese start-up company named Schaft  (now owned by Google) won the DARPA Robot Rescue challenge last year.  Along with seven of the other top-scorers, Schaft can now apply for more DARPA funds to compete in next year’s finals.
  • DARPA’s budget is only 2% or 3% of all U.S. government R&D.  [That's amazingly little, far less than one might expect from the organization that created the ARPANET, the precursor to the Internet.]
  • What is R&D?  At DARPA, it’s building proto-types and basic research into new areas that will open up technology opportunities.
  • How fragile are research technologies and industrial policies?  It all depends on the follow on use in the military and commercial worlds.
  • “The brain is a new (research) area for DARPA,” said moderator John Markoff. (Arati said below that DARPA has been investing in brain technologies for some time- so not really “a new area”).
  • “Biology is intersecting with physical science and information technology,” said Arati.  It has enormous potential to be the foundation for a whole new set of powerful technologies.  DARPA’s interest is in turning biology into technologies.
  • DARPA has been investing in neural technologies and brain function research for some time.  One application is to get prosthetics to U.S. veterans that have lost their limbs.
  • The human brain controls how our limbs move.  Understanding neural signaling that leads to motor control is a DARPA goal.  Three videos were shown to illustrate several DARPA neuroscience initiatives.  The first showed a prosthetic arm controlled by a human’s brain.  The videos can be seen along with the entire program on the CHM YouTube channel.  See link below.
  • DARPA is trying to figure out how the U.S. can be the most productive user of new technologies and what areas of technology the U.S. should be thinking about developing.
  • DARPA has invested in biological technologies for the last 20 years, starting with biological defense systems.
  • DARPA has created a Biological Technologies Office (BTO) to foster, demonstrate, and transition breakthrough fundamental research, discoveries, and computer science for national security.
  • Amazing things are happening in neuroscience and neuro-technology as well as in synthetic biology.  DARPA wants to turn those into scalable engineering practices (both in terms of time and production cost) so that they can be commercialized.
  • The ability to sequence and synthesize DNA is on an aggressive cost curve (although not a DARPA story) and that’s an ingredient in the synthetic biology world.
  • Arati is inspired by the progress she’s seen in lots of engineering disciplines that together have potential to unleash important new trends in biology.
  • But she’s chastened by how little we understand of the complexity involved in biology, especially when compared to IT and semiconductors.
  • Biology technologies are highly adaptable, but intractable when compared to transistors or lines of code.
  • Biology-related research examples at the BTO include synthetic biology, brain work, and fighting infectious diseases.
  • Arati said that DARPA’s two main jobs were:

1]  Core mission is to pursue advanced technologies for use in the U.S.

2] Obligation to engage and raise technology issues so they get into the public eye, and  that a broader community can then decide how society uses the new technologies.

  • DARPA has a very substantial cyber-security portfolio.  DARPA is not responsible for operational security in the U.S.
  • “Patch and pray” is what we have (for operational national security in the U.S.) and we’re trying to do that ever faster.  DARPA wants to figure out technologies that give us a future with respect to cyber security.
  • DARPA’s focus is on the security technologies society needs to live securely and to have the foundations for security both at home and away from home.
  • There are many attack vectors because of the vastness of our information environment, so there can be no single silver-bullet security solution.  Rather, a layered set of many different security technologies is needed, which can be combined to thwart various threats.
  • DARPA is working to scale formal methods to build “meaningful size Operating Systems (OS’s) that can be proven to be correct for specified security properties.”  (A toy sketch of what such a proof looks like appears after this list.)
  • DARPA recently flew a drone in the courtyard of the Pentagon with an OS that has some properties that are, for now, unhackable. “This could be the beginnings of something that’s a big dream,” Arati said.
  • Current state of GPS:  it’s cheap and easy to gauge your position once the satellites are up in the sky, and the military is addicted to it.  But new position, navigation, and timing systems are also needed.
  • Future geo-location, timing, and navigation systems will be based on layered systems and “atomic physics.”  The latter involves cooling atoms by shining lasers at them so that their atomic properties can be tapped.
  • The challenge is how to get these new geo-positioning and timing technologies from a room-sized research setup to a shoebox-sized unit in a submarine.  A key objective is to minimize or eliminate drift in timing or position.
  • Quantum effects in the sense of smell have been researched at DARPA, but quantum computing “is not an active area” (contrary to what many thought).  Arati was not sure whether anything is going on in quantum communications at DARPA.
  • Breaking the complexity of massive military platform systems is a DARPA goal: “They are ungodly expensive, complex, and tightly coupled.”  The results might be visible in next-generation fighter planes, in drones, and in what a soldier carries on his body.
  • DARPA has created the Mining and Understanding Software Enclaves (MUSE) program to reduce software complexity.
  • MUSE seeks to make significant advances in the way software is built, debugged, verified, maintained, and understood. Central to its approach is the creation of a community infrastructure built around a large, diverse, and evolving corpus of software drawn from the hundreds of billions of lines of open-source code available today.  Automating the assembly of code to perform higher-level functions is a goal.
  • Today, two-thirds of U.S. R&D investment comes from private enterprise, rather than the roughly 50/50 private/public split of several years ago.
  • Private R&D investment is growing faster than GDP. [Note: that doesn't take much, with GDP growing at less than 2% in the five years since the recession “ended” in June 2009.]
  • The role of government is shifting when it comes to R&D investment.  The federal government is no longer the prime source of research, as it was for decades.  We now have incredibly innovative industries, Arati said (this author strongly disagrees).
  • Our ecosystem (the U.S. government, universities, and industry) is healthy, Arati said.  It has adapted to change after change, and she is optimistic it will continue to do so.
  • How universities pursue research and what will drive the educational system are key questions the U.S. must address.
  • “Universities have always been a way that enables DARPA to get great things done,” Arati said to wrap up the program.
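To make the formal-methods bullet above more concrete, here is a minimal, purely illustrative sketch in the Lean theorem prover of what “proven correct for a specified security property” means: a toy access-control rule plus a machine-checked proof of one property about it.  The model and names (Request, grantAccess, anonymous_never_granted) are hypothetical examples of my own, not DARPA’s; real verified systems (for example, the seL4 microkernel, whose proofs were done in Isabelle/HOL) involve models and proof bases many orders of magnitude larger.

    -- Toy model only: a two-case request type and an access-control rule.
    inductive Request where
      | authenticated : Request
      | anonymous     : Request

    -- The rule under verification: grant access only to authenticated requests.
    def grantAccess : Request → Bool
      | Request.authenticated => true
      | Request.anonymous     => false

    -- The specified security property: an anonymous request is never granted.
    -- `rfl` closes the proof because the claim holds by direct computation.
    theorem anonymous_never_granted :
        grantAccess Request.anonymous = false := rfl

Scaling this style of machine-checked proof from a ten-line toy to a meaningful-size operating system is exactly the hard problem the DARPA work describes.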

CHM YouTube Channel Video: The video of this CHM event can be viewed here.

…………………………………………………………………………………………….

Comments from Richard Weiss, DARPA Director of Public Affairs (received July 4th via email):

I am surprised at the number of errors in your blog, especially since much of the relevant information is on DARPA’s website.

I don’t believe Arati would have said that one of DARPA’s two primary missions is “Obligation to engage and raise technology issues so they get into the public eye, e.g. how U.S. society uses the new technologies.”

DARPA is not a policy shop – it is a tech projects shop. So while DARPA does take seriously so-called ELSI obligations when a tech development raises societal questions, that is not a “primary mission” of DARPA.

It is incorrect (perhaps just sloppy sentence construction) to say that “Substantial cyber-security portfolio at DARPA is subject to “patch and pray.”” Certainly our portfolio is not “subject to” patch and pray. Our portfolio aims to improve upon the nation’s current reliance on “patch and pray” when it comes to cyber security.

………………………………………………………………………………………………

Author’s Response:  The post has been updated to reflect some of Mr. Weiss’ critiques (after carefully listening to the archived event webcast to verify the accuracy of what was reported).  Clarifications were also added (e.g., who said what) to avoid misunderstandings.

It’s my strong opinion that a reporter’s job is to accurately “report” what was said at an event.  An event summary is not a research paper, in which the organization’s website would be checked for accuracy or completeness, or used to validate/confirm what was said during the program.

Finally, you can check the archived event webcast to verify that Arati did say: “(It’s DARPA’s) Obligation to engage and raise technology issues so they get into the public eye…..”

………………………………………………………………………………………………….

Addendum: Excerpts from a NY Times article:

Silicon Valley Tries to Remake the Idea Machine (Bell Labs)

This superb article chronicles the decline of research among established Silicon Valley companies, which have bought start-ups rather than investing in their own research labs (like the long-gone AT&T Bell Labs or Xerox PARC).  But a resurgence is underway, starting with Google X.

A few excerpts from the article:

“The federal government now spends $126 billion a year on R. and D., according to the National Science Foundation. (It’s pocket change compared with the $267 billion that the private sector spends.) Asian economies now account for 34 percent of global spending; America’s share is 30 percent.”
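[A quick consistency check, my arithmetic rather than the article’s: $267 billion ÷ ($126 billion + $267 billion) ≈ 68%, i.e., roughly the two-thirds private share of U.S. R&D cited during the CHM program above; the same figures put total U.S. R&D spending at about $393 billion a year.]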

“Most of the insurgent tech companies, with their razor focus on advancing the Internet, were too preoccupied to set up their own innovation labs. They didn’t have much of an incentive either. Start-ups became so cheap to create — founders can just rent space in the cloud from Amazon instead of buying servers and buildings to house them — that it became easier and more efficient for big companies to simply buy new ideas rather than coming up with the framework for inventing them. Some of Google’s largest businesses, like Android and Maps, were acquired. “M. and A. is the new R. and D.” became a popular catchphrase.”

“But in the past few years, the thinking has changed, and tech companies have begun looking to the past for answers. In 2010, Google opened Google X, where it is building driverless cars, Internet-connected glasses, balloons that deliver the Internet and other things straight out of science fiction. Microsoft Research just announced the opening of a skunk-works group called Special Projects. Even Bell Labs announced this month that it is trying to return to its original mission by finding far-out ways to solve real-world problems.”

“Instead of focusing on basic science research, “we’re tackling projects that advance science and solve significant problems,” says Regina Dugan, the former director of the Defense Advanced Research Projects Agency (Darpa), who now runs a small group inside Google called Advanced Technology and Projects. “What this means is you’re not compromising this idea of doing really important and interesting science and this sense of it really mattering.” To put a finer point on it, Astro Teller, who oversees Google X, told me: “We are not a research center. We think of ourselves as a moonshot factory, and the reasons for using that phrase is the word ‘moonshot’ reminds us to be audacious, and the word ‘factory’ reminds us we have to industrialize it in the end.””