IT History Society Blog

Archive for the ‘Uncategorized’ Category

Happy 50th Birthday S/360!

Monday, April 7th, 2014

I consider this set of 150 products announced on April 7, 1964, to be the most important introduced by an American company in the 20th century. And I am not alone in that view. These machines and their software directly shaped how the world used computers, right down to your cell phone. From IBM's perspective, the firm doubled its revenues within 36 months and went on to be the dominant computer company for a generation. The computer industry as a whole grew at compound rates of 19-20% over the next five years. The architecture of this technology proved so pervasive that (a) computer scientists are still struggling with how to break through to a new paradigm or way of doing computing, (b) mainframes still run software written by IBM in the 1960s, and (c) it made possible the hiring of perhaps the most unlikely IBMer of the 1970s – ME!! Only with the massive diversification in the kinds of employees made possible by the success of this system did it make sense for IBM to bring in the workforce that it did, including two people with Ph.D.s in history – ironically, both trained in Spanish history. The other fellow, also in sales, was an expert on the 1920s and 1930s, graduated from the University of New Mexico, and worked in Detroit. He was hired a year before me.

 And that is the rest of the story.  Happy Birthday S/360!

The New Digital Age: Authors Eric Schmidt and Jared Cohen in Conversation with Facebook’s Sheryl Sandberg at CHM

Monday, March 10th, 2014

Introduction:

On March 3, 2014, Eric Schmidt and Jared Cohen (co-authors of The New Digital Age) engaged in a stimulating conversation with Facebook's COO Sheryl Sandberg. The event took place at the Computer History Museum (CHM) as part of the museum's Revolutionaries series (see description below). This very interesting and wide-ranging discussion mostly concerned the promise and perils of the digital revolution, especially the Internet's impact on the developing world.

Eric Schmidt is the Executive Chairman and former CEO of Google. Jared Cohen is the Director of Google Ideas and a former adviser to Secretaries of State Condoleezza Rice and Hillary Clinton.  During the opening remarks, Schmidt jokingly said to Sheryl Sandberg:  “You’ve done all right for yourself since leaving Google.”

The New Digital Age's release in paperback provided the impetus for this CHM program. The event was part of the CHM's acclaimed Revolutionaries speaker series, which features renowned innovators, business and technology leaders, and authors in enthralling conversations, often with leading journalists.

Discussion:

There is good and bad in the many ways online technology is changing life around the globe, according to Schmidt. The Internet empowers people in developing countries, who mostly get access via smartphones over wireless broadband, providing new opportunities for education, business, entertainment, news, etc. But there are very real problems with regulating its use. "With no central leaders in the world today, the Internet causes a huge control problem for free speech, morality and copyright protection," he said. "When does the Internet stop (i.e. get turned off) and bad stuff happen in countries like Syria and Ukraine/Crimea that are engaged in civil war?"

Cohen said that the military regime in Somalia confiscated smartphones, and that a friend of someone he met there was shot and killed for having pictures unfavorable to the regime on his phone. More details appear in a bullet point below.

Here are a few key points made by the speakers:

  • The online-privacy talk may come before the sex talk for future generations of children.
  • Digital data permanence may spark the need for identity insurance, and that's somewhat scary.
  • Cyber-safety is a responsibility we bear globally for all, especially non-tech-savvy users. Cyber-security requires agility and should not be compromised.
  • Schmidt: WiFi towers (with broadband backhaul) should be built in third-world countries to provide ubiquitous Internet access and empower the individuals living there. That would have a positive impact on economic growth in the developing world.
  • Schmidt: There is a race between technology and humans, but humans are still debating last decade's problems.
  • Schmidt: Technology has reduced the number of jobs, contributing to higher youth unemployment.
  • Schmidt: Displacement of manufacturing jobs by technology is an enduring trend and a threat to current workers who have repetitive jobs (which might be done by a machine/robot).
  • Schmidt: We need some "social safety net" for displaced workers. Today's solutions are not good enough!
  • In the future, a totally automated home, with many artificially intelligent devices, is likely.
  • Sandberg: Women are 25% less likely than men to be online, which puts them at an economic disadvantage. (It wasn't clear if that figure was for the U.S., developing countries, or the entire world.)
  • Sandberg: In micro-lending in the developing world, the money is managed by women. They need smartphones to do that, as wireless may be the only Internet access available.
  • Sandberg: Giving inexpensive smartphones to women would solve many family problems. Hundreds of millions of women would be empowered online, and men would not then be able to block women's economic advancement.
  • Cohen: In Syria, a wave of online videos couldn’t immediately stop repeated chemical weapons attacks on civilians. He also said troops have operated armed checkpoints where they forced people to turn over their cellphones for review.
  • The Google execs said that Internet technology made it easier for people like Julian Assange of Wikileaks and former National Security Agency contractor Edward Snowden to turn government secrets into public controversies. That’s not always desirable, the authors say, arguing that governments need to keep some secrets for national security.
  • Schmidt: There are near-monopoly Internet service providers in the developing world, and competition is needed to get cost-effective Internet access.
  • Schmidt: Getting wireless networks upgraded for faster access, lower cost per user and capability to carry more traffic is a big challenge for developing countries (especially when there is no competition to drive that transition).
  • Schmidt: Maybe there are limits to what the Internet can do or be.

Backgrounder: The New Digital Age:

In research for their book, Schmidt and Cohen traveled to more than 35 countries, including some of the world’s most volatile and repressive societies.  There they met with political leaders, entrepreneurs, and activists to learn firsthand about the challenges they face. They tackle some of the most interesting questions about our future: how will technology change privacy and security, war and intervention, diplomacy, revolution and terrorism? How will technology improve our lives? What new disruptions can we expect?

For the new paperback edition, Schmidt and Cohen added a new “afterword” to address a number of events, including the wave of revelations about government spying on Internet users, that played out after the book was first published in April 2013. The new section also responds to criticism from Wikileaks founder Julian Assange, who accused the co-authors of uncritically embracing U.S. foreign policy and of glossing over the threat that vast centralized databases pose to individual privacy and freedom.

Addendum: Most highlighted passages from “The New Digital Age” 

Personalization

“On the world stage, the most significant impact of the spread of communication technologies will be the way they help reallocate the concentration of power away from states and institutions and transfer it to individuals.”

Civilizational advance

“By 2025, the majority of the world’s population will, in one generation, have gone from having virtually no access to unfiltered information to accessing all of the world’s information through a device that fits in the palm of the hand.”

Identity

“Identity will be the most valuable commodity for citizens in the future, and it will exist primarily online.”

Anarchy

“The Internet is the largest experiment involving anarchy in history. Hundreds of millions of people are, each minute, creating and consuming an untold amount of digital content in an online world that is not truly bound by terrestrial laws.”

Loss of privacy

“The impact of this data revolution will be to strip citizens of much of their control over their personal information in virtual space, and that will have significant consequences in the physical world.”

Dissonance between technology and geopolitics

“In the months following our [Schmidt and Cohen's] trip [to Iraq], it became clear to us that there is a canyon dividing people who understand technology and people charged with addressing the world’s toughest geopolitical issues, and no one has built a bridge.”

Virtual reality

“In this book we aim to demonstrate ways in which the virtual world can make the physical world better, worse or just different.”

Personalization and customization

“The key advance ahead is personalization. You’ll be able to customize your devices—indeed, much of the technology around you—to fit your needs, so that your environment reflects your preferences.”

Humans, not machines control our destiny

“This is a book about technology, but even more, it’s a book about humans, and how humans interact with, implement, adapt to and exploit technologies in their environment, now and in the future, throughout the world. Most of all, this is a book about the importance of a guiding human hand in the new digital age. For all the possibilities that communication technologies represent, their use for good or ill depends solely on people. Forget all the talk about machines taking over. What happens in the future is up to us.”

Don’t say (or type, or “like”) anything you don’t want on the front of the NY Times

“Since information wants to be free, don’t write anything down you don’t want read back to you in court or printed on the front page of a newspaper, as the saying goes. In the future this adage will broaden to include not just what you say and write, but the websites you visit, who you include in your online network, what you “like,” and what others who are connected to you do, say and share.”

 

Read more at:

http://www.theblaze.com/blog/2014/03/04/10-revealing-passages-from-googles-eric-schmidt-and-jared-cohens-the-new-digital-age/

 http://www.newdigitalage.com/

 

 

A Billion Programmers

Monday, February 24th, 2014

When I first wrote programs in 1953, there was no software and there were few programmers. I entered programs in the computer's binary language (octal notation) directly into the machine's registers. And the machine was all mine: there was no operating system to allocate its resources among multiple programs or operate the input-output devices.

Then programming aids and compilers evolved, along with operating systems to allocate the computer's resources. Programmers no longer dealt with the machine directly, but with an easier-to-use (maybe) interface. In some respects, programming got easier (at any rate, it got a lot quicker) and many people learned to program.

Then the minicomputer and then the personal computer evolved, and three pivotal inventions occurred. Xerox PARC and Apple developed the desktop-icon interface, the spreadsheet was invented, and word processing was introduced. I feel that each made many more people into programmers.

The significance of the desktop, icon-based interface is obvious: as it has evolved over time, it has enabled generations of users to manage desktop and portable devices with minimal training.

I think the spreadsheet was equally important. An ordinary businessperson could populate the cells of a spreadsheet with limitless logical and mathematical processes, all invoked by the entry of a datum. Taken as a whole, a spreadsheet is a very complex and sophisticated program.
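To make that concrete, here is a minimal sketch in C of what a spreadsheet amounts to under the hood: a few entered values, a few formula cells defined in terms of them, and a recalculation pass that re-runs the whole thing whenever a datum changes. The cells and formulas are made up for illustration; real spreadsheets also track dependencies so they only recompute what changed.

```c
/*
 * Illustrative sketch only: a "spreadsheet" reduced to entered values,
 * formula cells, and a recalculation pass.  All cells and formulas
 * here are hypothetical.
 */
#include <stdio.h>

struct sheet {
    double units;     /* A1: entered value        */
    double price;     /* A2: entered value        */
    double revenue;   /* A3: formula = A1 * A2    */
    double tax;       /* A4: formula = A3 * 0.08  */
    double total;     /* A5: formula = A3 + A4    */
};

/* Re-evaluate every formula cell, as a spreadsheet does on each entry. */
static void recalculate(struct sheet *s)
{
    s->revenue = s->units * s->price;
    s->tax     = s->revenue * 0.08;
    s->total   = s->revenue + s->tax;
}

int main(void)
{
    struct sheet s = { .units = 120, .price = 9.95 };
    recalculate(&s);
    printf("total: %.2f\n", s.total);   /* 1289.52 */

    s.units = 200;                      /* the user enters one new datum ... */
    recalculate(&s);                    /* ... and the whole program re-runs */
    printf("total: %.2f\n", s.total);   /* 2149.20 */
    return 0;
}
```

The businessperson who lays out those formula cells has, in effect, written the program; the spreadsheet merely hides the recalculation machinery.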

The significance of the word processor as a programming tool is less obvious, because in its basic form a word processor requires little programming by its user. But think of how it has grown. The word processor now enables its user to create and manipulate graphics of unlimited complexity, to format both virtual and hardcopy documents with three-dimensional structures, and to provide documents with interactive features, as in computer games. Surely when a user has created a word processing application with capabilities in all these areas, he/she has written a sophisticated program.

The World Wide Web has created still more opportunities for programmers. The designers and builders of Web sites form a new, lucrative specialty. Even the people who create complex YouTube and Facebook posts are, in a sense, developing software.

What has happened is that human and machine languages have converged. The computer interface has become more intuitive, more symbolic, and more flexible. At the same time, much, if not most, of the human race is learning about computer language as part of growing up. Booting up, managing, and interconnecting computing devices; operating keyboards; selecting among icons; and interacting with the Internet and Web sites are activities performed by small children in all the more developed countries. All these children are being invited by the riches of the Web to try more experiments, to learn more, and to become more proficient programmers. Seamless Internet connections enable them to form teams, to cooperate, and to work together to exploit their best ideas. For instance, what will cloud programmers be able to do with shared 3-D printers?

I envy the trained programmers of the next generation, who will be able to search for inspiration among the contributions of billions of their less-trained fellows and create the software products of the future. We’ve seen nothing yet!

Approved IEEE Milestone: Birth of the 1st PC Operating System (CP/M)

Wednesday, January 22nd, 2014
Introduction:
Gary A. Kildall, PhD (1942 – 1994),  developed and then demonstrated the first working prototype of CP/M (Control Program for Microcomputers) in Pacific Grove in 1974. Together with his invention of the BIOS (Basic Input Output System), Kildall’s operating system allowed a microprocessor-based computer to communicate with a disk drive storage unit and provided an important foundation for the personal computer revolution.   This article reviews the history of CP/M and why it was so important.
A plaque to commemorate this IEEE Milestone will be installed in Pacific Grove, CA on April 25, 2014, with a brief ceremony scheduled to start at 2pm. The intended site for the plaque, a two-story Victorian residence at the corner of Lighthouse Avenue and Willow Street in Pacific Grove, served as the DRI headquarters building from 1978 to 1991. The leader of this milestone initiative was Dick Ahrons. Brian Berg, Tom Rolander, and David Laws are working on the April 25th dedication program for this epic event.
Details on the  April 25th CP/M Milestone program will be posted as a comment when that event is formally announced.
History of CP/M:
In 1976, Gary Kildall incorporated Digital Research, Inc. (DRI) to commercialize the program and released version 1.3 of CP/M and BIOS.
CP/M was the first commercial operating system to allow a microprocessor-based computer to interface to a disk drive storage unit. CP/M played an important role in stimulating the hobbyist personal computer movement of the 1970s.  Its ability to support software programs on a wide variety of hardware configurations enabled early use of microcomputer systems from many different manufacturers in business and scientific applications. Microsoft DOS, as licensed to IBM for the original PC, was written to emulate the “look and feel” of CP/M. Thus CP/M was the forerunner of the operating systems that now power the majority of the world’s computers and led to the personal computing revolution.
The major challenge that Kildall had to overcome in the development of CP/M was the design and debugging of the “complex electronics … to make the diskette drive find certain locations and transfer data back and forth.”
BIOS, which is included in this IEEE milestone, was the first software run by a microprocessor-based PC when it powered on. The BIOS software was "built into" the PC (stored in a Read Only Memory). Its fundamental purposes were to initialize and test the system hardware components and to load an operating system from a disk drive (or other mass storage) into the primary memory (core or semiconductor) accessed by the processor. The BIOS provided an abstraction layer for the hardware, i.e. a consistent way for application programs and operating systems to interact with the keyboard, display, and other input/output devices. Variations in the system hardware were hidden by the BIOS from programs that used BIOS services.
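The abstraction-layer idea is easier to see in code. The sketch below is only a conceptual analogue in C (the real CP/M BIOS was 8080 assembly reached through a fixed jump table); the structure, function names, and stub "hardware" routines are all made up for illustration.

```c
/*
 * Conceptual sketch of the BIOS idea: the operating system calls a
 * fixed set of entry points, and each machine supplies its own
 * implementation of them.  Not the actual CP/M code.
 */
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE 128   /* CP/M used 128-byte logical sectors */

/* The fixed interface the OS relies on, whatever the hardware. */
struct bios {
    void (*console_out)(char ch);
    int  (*read_sector)(int track, int sector, unsigned char *buf);
};

/* One hypothetical machine's implementation of those entry points. */
static void demo_console_out(char ch) { putchar(ch); }
static int  demo_read_sector(int track, int sector, unsigned char *buf)
{
    memset(buf, 0, SECTOR_SIZE);      /* stands in for a real floppy read */
    return 0;
}
static const struct bios demo_bios = { demo_console_out, demo_read_sector };

/* "OS" code: portable, because it only calls through the BIOS. */
static void boot(const struct bios *b)
{
    unsigned char sector[SECTOR_SIZE];
    for (const char *p = "Loading system from disk...\n"; *p; p++)
        b->console_out(*p);
    b->read_sector(0, 1, sector);     /* fetch the first system sector */
}

int main(void)
{
    boot(&demo_bios);
    return 0;
}
```

Because boot() never touches the hardware directly, the same "OS" code runs on any machine that supplies its own console_out and read_sector, which is exactly the portability CP/M's BIOS provided.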

………………………………………………………………………….

The following recollections are abstracted from pages 53-55 of "Computer Connections", an unpublished autobiography that Kildall wrote and distributed to friends and family in 1994. "Memorex … had come up with the new "floppy disk" to replace IBM punched cards. I stared at that damn diskette drive for hours on end … trying to figure a way to make it fly. I tried to build a diskette controller … but I, being mainly hardware inept … couldn't get my controller to work. So I built an operting (sic) system program … I called it CP/M [but] I just couldn't figure out how to make that damn disk drive work. Out of frustration, I called my good friend from the University of Washington, John Torode. He designed a neat little microcontroller and after a few months of testing that microcontroller started to work. We loaded my CP/M program from paper tape to the diskette and "booted" CP/M from the diskette, and up came the prompt *. This may have been one of the most exciting days of my life."
Before Kildall’s development of CP/M, micro-computer manufacturers provided proprietary applications software that worked only on their own hardware. All programs had to be written from scratch to operate on each unique machine configuration. CP/M was initially designed to work on the Intel 8080 microprocessor and allowed computer systems built by any manufacturer who used that chip to run applications programs written by third-party suppliers. CP/M introduced a new element of competition into the computer marketplace that stimulated rapid growth in the use of low-cost systems in business, industry and academia and eventually in the home. According to Kildall, “CP/M was an instant success. By 1980, DRI had sold millions of copies of CP/M to manufacturers and end-users.”
Kildall's own public account of the history of CP/M was published in 1980: "The Evolution of an Industry: One Person's Viewpoint," Dr. Dobb's Journal of Computer Calisthenics & Orthodontia, Vol. 5, No. 1 (January 1980) (number 41), pp. 6-7.
Numerous popular accounts of the history of CP/M have been published in newspaper and magazine articles and in books, as well as online. Most of them focus on the fictitious story that DRI lost out to Microsoft on the IBM PC operating system decision in the summer of 1980, because Kildall had taken the day off to go flying. Kildall refutes this story in “Computer Connections” but it is probably most eloquently recounted in Harold Evans’ book on U.S. pioneers and innovators  “They Made America: Two Centuries of Innovators from the Steam Engine to the Search Engine” (2004) ISBN 0-316-27766-5.
According to that chapter (paraphrased here):
After Kildall's wife Dorothy refused to sign IBM's "ludicrously far-reaching non-disclosure agreement (NDA)" on the morning of IBM's visit to his home, Kildall and Tom Rolander met with IBM that same afternoon. Once the NDA was agreed to and signed by Kildall, IBM revealed its plan. Rolander demonstrated DRI's MP/M-86, the new multi-tasking operating system for Intel's 8086 microprocessor. Rolander and Kildall wanted that OS to be the new de facto standard for microprocessor-based operating systems, as they believed multi-tasking was the wave of the future.
Negotiations began on how much IBM would pay DRI. Kildall believed they could strike a deal.  Kildall wrote, “We broke from the discussions, but nevertheless handshaking in general agreement of making a deal.”
That night, Kildall and his family left for a vacation in the Caribbean. When they returned home one week later, Gary called IBM in Boca Raton several times, but "they had gone off the air." IBM had gone back to Microsoft to make a deal for the latter's PC operating system (PC-DOS, which Microsoft later marketed as MS-DOS). Bill Gates allegedly told IBM that Kildall had not yet finished designing CP/M to run on a 16-bit microprocessor and that Microsoft could by itself meet IBM's requirements.
Kildall and others claimed that PC-DOS had copied (and was therefore a clone of) CP/M. DRI threatened to sue IBM for copyright infringement if it proceeded to use PC-DOS in the first IBM PC, which was scheduled to be announced in four months (August 1981).
IBM offered to market CP/M-86 alongside Microsoft's PC-DOS (for the first IBM PC) on the condition that Kildall (DRI) would not sue IBM for infringement of CP/M copyrights. IBM agreed to pay DRI a standard royalty rate (presumably for each copy of CP/M-86 sold).
But Kildall did not know that IBM would screw DRI by pricing CP/M-86 prohibitively higher than PC-DOS: a 6-to-1 difference, $240 vs. $40. IBM evidently had no intention of selling CP/M-86. Kildall called IBM to request that they reduce the price of CP/M-86, but no one called back.
Kildall wrote, “The pricing difference set by IBM killed CP/M-86.  I believe to this day that the entire (pricing) scenario was contrived by IBM to garner the existing standard at almost no cost.  Fundamental to this conspiracy was the plan to obtain the waiver for their own  PC-DOS produced by Microsoft.”
CP/M rapidly lost market share as the microcomputing market moved to the PC platform, and it never regained its former popularity. Byte magazine, at the time one of the leading industry magazines for microcomputers, essentially ceased covering CP/M products within a few years of the introduction of the IBM PC.
…………………………………………….
Before CP/M there was PL/M:
Prior to the work on CP/M, Kildall consulted for Intel and National Semiconductor to help them adapt his PL/M+ compiler to their microprocessor development systems. Unlike other contemporary languages such as Pascal, C or BASIC, PL/M had no standard input or output routines. It included features targeted at the low-level (microprocessor-based) hardware, and could support direct access to any location in memory, I/O ports and microprocessor interrupt flags in a very efficient manner. PL/M was the first high-level programming language for microprocessor-based computers and the original implementation language for the CP/M operating system.
+ PL/M, created by Kildall in 1972, was the first high-level programming language for microprocessors.
From the Gary Kildall chapter of the earlier referenced They Made America book:
“Intel was abuzz in 1973 with the triumph of the Intel 8008 chip, which doubled the power of its first microprocessor, and Kildall was drawn to spend more and more time there. After his “eyeballs gave way”, he would spend the night sleeping in his Volkswagen van in the parking lot. He became a trader in an electronic bazaar, swapping his software skills for Intel’s development hardware. One morning, he knocked on the door of Hank Smith, the manager of Intel’s little software group, and told him he could make a compiler for the Intel 8008 microprocessor, so that his customers would not need to go through the drag of low-level assembly language. Smith did not know what Kildall meant. Kildall showed how a compiler would enable an 8008 user to write the simple equation x = y + z instead of several lines of low-level assembly language. The manager called a customer he was courting, put the phone down and, with a big smile, uttered three words of great significance for the development of the Personal Computer: “Go for it!”
The new programming language, which Kildall called PL/M, or Programming Language for Microcomputers, was immensely fruitful. Intel adopted it, and Kildall used it to write his own microprocessor applications, such as Operating Systems and utility programs. It was the instrument for developing the PL/I-80 compiler that he worked on with Dan Davis for three years. “Gary was very visual”, Davis told me. “He would design things more or less graphically, and then transfer his design into code. He even had an aesthetic about his drawings. He was very thorough, patient and persistent in ensuring his solutions were not only correct, but elegant.” Kildall’s reward was Intel’s small new computer system, the Intellec-8.”
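The "x = y + z" demonstration above is the whole case for a compiler in miniature. As a hedged illustration (the 8080 mnemonics are from the period's instruction set, but the data layout is invented), here is roughly what that single high-level statement spares the programmer:

```c
#include <stdio.h>

/* 16-bit quantities, as an 8080 programmer would typically handle them. */
unsigned short x, y = 1000, z = 234;

int main(void)
{
    /* One line in a high-level language ... */
    x = y + z;

    /* ... versus the hand-written 8080 assembly it replaces
     * (illustrative only; real code also needs the variables laid
     *  out at known addresses):
     *
     *     LHLD  Y      ; load 16-bit Y into HL
     *     XCHG         ; move it to DE
     *     LHLD  Z      ; load 16-bit Z into HL
     *     DAD   D      ; HL = HL + DE
     *     SHLD  X      ; store the 16-bit result into X
     */
    printf("%u\n", x);   /* 1234 */
    return 0;
}
```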
 ……………………………………………………………………………………..
Closing Comment:
This author knew Gary Kildall from 1973 to 1976 through his consulting work at National Semiconductor and the 1975 and 1976 Asilomar Microcomputer Workshops. Contrary to accounts of his being arrogant or aloof, he was just the opposite: gracious, polite, considerate and well behaved. He did not come across as a hippie, but rather as a trusting academic. At the 1975 Asilomar workshop, Kildall acknowledged and praised my work developing and demonstrating real-time microprocessor applications at National Semiconductor. He said I was doing great work there. I felt very proud of myself that day and thanked Gary for his kind words. IMHO, Kildall was a gentleman and a scholar, but evidently not a shrewd, cut-throat businessman.
………………………………………………..
Primary References:

The IEEE Global History Network webpage for this milestone has a complete summary with links and videos: Milestones: The CP/M Microcomputer Operating System, 1974

http://www.ieeeghn.org/wiki/index.php/Milestones:The_CP/M_Microcomputer_Operating_System,_1974

IEEE Milestone Plaque citation summarizing the achievement and its significance:

“Dr. Gary A. Kildall demonstrated the first working prototype of CP/M (Control Program for Microcomputers) in Pacific Grove in 1974. Together with his invention of the BIOS (Basic Input Output System), Kildall’s operating system allowed a microprocessor-based computer to communicate with a disk drive storage unit and provided an important foundation for the personal computer revolution.”

http://www.ieeeghn.org/wiki/index.php/Milestone-Proposal:BIRTH_OF_THE_PC_OPERATING_SYSTEM_1974

Additional References:
  • CP/M and Digital Research Inc. (DRI) Web pages
  • Gary Kildall Special (Video)
  • A Short History of CP/M
  • Gordon Eubanks Oral History (Computerworld 2000)
Supporting Materials:
Numerous original documents, images, personal reminiscences, and videos contributed by employees are posted on the Digital Research Inc. page of the IT Corporate Histories Collection website hosted by the Computer History Museum at:
http://corphist.computerhistory.org/corphist/view.php?s=show&item=documents

 

Why Care Who Invented the First Computer?

Tuesday, January 21st, 2014

During January some of you might have noticed a running dialogue among historians and other interested parties about who invented the “first” computer. There was no agreement reached on the correct answer to that question. Discussions about “firsts” pop up about every five years, almost like short-lived brush fires on the side of the road as historians travel on to do their serious work. Over the past forty years I have seen articles and books, even one lawsuit on patents, over the question of “firsts.”

The interest in "firsts" is not limited to computers; it seems to be all over the place regarding any modern technology, and it is not limited to historians, but includes those interested in patent protections, copyrights, or just plain old curiosity. It seems that for many reasons "firsts" are interesting and important. In the movies it is the "aha" moment we see when Madame Curie discovered radioactivity or, earlier, when Alexander Graham Bell made his first phone call. In recent years there have been discussions about the first PC, with suggestions that it may have been made in Eastern Europe, or that some guy did it in California in the early 1960s and didn't tell anyone. Failure to exploit technological innovations is another source of conversation, with Xerox PARC the perennial favorite for letting too many cool widgets slip by.

Many of these discussions, indeed most of them, are only interesting, not terribly relevant. While it would be great if we could identify the person who made the first toothbrush and nail down the date of that glorious event, it is never going to happen. We cannot get agreement on when the first computer was built, although this year Colossus is ahead in the race; past contenders for that recognition have included MIT's differential analyzer, ENIAC, EDVAC, and UNIVAC, each a champion in its own time.

The reason we cannot agree on which was the first computer, or even the first e-mail sent, or the first PC ever built, is that computing and all other inventions normally emerge in an evolutionary manner. People take an existing device and modify it ever so slightly; then someone else does the same thing to that modified device, and so forth, until (a) someone gives it a name and it takes off as the "first" of something, or (b) when connected to something else it becomes so useful that it comes out of the shadows as if it were a brand-new item. There were PCs in the late 1960s and early 1970s, but not until Apple and IBM were able to sell desktop computers in quantity did the notion of the PC finally take hold, with Apple doing its magic in the late 1970s and IBM in the early 1980s. Those are examples of the first type of innovation. That may be going on today with the humble light bulb as it evolves into something other than an incandescent bulb.

As for (b), linking things together: connecting computers to telephone lines made possible online services and telecommunications. Did that happen in 1940? In 1969? Or later, with the development of the World Wide Web? The answer is yes for each time. So, what are we to make of all this?

The takeaway is simple. All IT is really quite complex to invent, to make, and to use, so much so that when you study specific examples in any detail, you realize no one individual could create all the elements involved. It was and remains an iterative process. Historians who have examined IT's history, such as Paul Ceruzzi, Tom Haigh, and Jeff Yost, to mention a few, tell that story continuously. They keep finding that "firsts" turn out to be "combinations," "iterations," "evolutions," not one-time spectacular events. And that is a core lesson about how computers came about and why the technology continues to evolve. Literally millions of people in over 100 countries are tinkering with hardware, software, components, computer science, uses, and ideas about information management. Bottom line: it is nearly impossible to be the "first" to invent anything as complex as a computer. Even developers of cell phone "apps" are building on each other's prior work.

But it is still fun to argue about who invented the first computer or the first toothbrush.