IT History Society Blog
July 28th, 2014 by Alan Weissberger
Session 302-C: An Interview with Simon Sze, Co-Inventor of the Floating Gate (History Track)
Organizer: Brian A. Berg, President, Berg Software Design
Thursday, August 7 9:45am-10:50am Santa Clara Convention Center
Simon Sze, Professor, National Chiao Tung University (Taiwan)
What was the origin of the “floating gate” transistor, the foundation for all of today’s nonvolatile memory? A small group at Bell Labs thought of replacing core memory with nonvolatile semiconductor memory, which did not exist at the time. A lunchtime conversation about layered chocolate cake or cheesecake spawned the concept of a “floating gate” layer for a MOSFET.
Come hear Simon Sze, co-inventor of the floating gate, share details of this and many other interesting stories about how storage technology has progressed, including work by Intel, Toshiba, and many now-forgotten companies.
Intended Audience: Marketing and sales managers and executives, marketing engineers, product managers, product marketing specialists, hardware and software designers, software engineers, technology managers, systems analysts and integrators, engineering managers, consultants, design specialists, design service providers, marcom specialists, product marketing engineers, financial managers and executives, system engineers, test engineers, venture capitalists, financial analysts, media representatives, sales representatives, distributors, and solution providers.
Session Organizer: Brian A. Berg is Technical Chair of Flash Memory Summit. He is President of Berg Software Design, a consultancy that has specialized in storage and storage interface technology for 30 years. Brian has been a conference speaker, session chair and conference chair at over 70 industry events. He is active in IEEE, including as a Section officer, an officer in the Consultants’ Network and Women in Engineering affinity groups, and Region 6 Milestone Coordinator. He has a particular interest in flash firmware architecture, including patents and intellectual property.
About the Interviewee:
Professor Simon Sze, PhD is the co-inventor of floating gate non-volatile semiconductor memory, which provided the basis for today’s flash devices. His invention led to such hugely popular consumer electronics as smartphones, GPS devices, ultrabooks, and tablets. Dr. Sze has also made significant technical contributions in other areas such as metal-semiconductor contacts, microwave devices, and submicron MOSFET technology. He has written over 200 technical papers and has written or edited 16 books. He is currently a National Endowed Chair Professor of Electrical Engineering at National Chiao Tung University (Taiwan). He is also an academician of the Academia Sinica, a foreign member of the Chinese Academy of Engineering, and a member of the US National Academy of Engineering. Simon spends half his time in Taiwan, where he teaches and looks after his 99-year-old uncle.
About the Chairperson / Interviewer:
Alan J. Weissberger, ScD EE is the Chair of the IEEE Silicon Valley Technology History Committee, Content Manager for the global IEEE ComSoc Community website, North America Correspondent for the IEEE Global Communications Newsletter, Chair Emeritus of the IEEE Santa Clara Valley ComSoc, and an IEEE Senior Life Member. He is a former Adjunct Professor in the Santa Clara Univ. EE Department where he established the grad EE Telecom curriculum. As a volunteer for the Computer History Museum, SIGCIS.org and ITHistory.org, he writes technical summaries of lectures and exhibits.
FMS History Session description
Register for FMS here
Following this history session (at 11am), Prof. Sze will receive the FMS Lifetime Achievement award as co-inventor of the floating gate transistor. More information here.
Questions & Issues for Simon to Discuss:
1. How did the concept of using non-volatile semiconductor memory to replace core memory evolve at Bell Labs in early 1967? Note that there were no commercially available semiconductor memories at that time and Intel didn’t even exist.
2. Please describe your floating gate transistor project, which was started in March 1967 and completed in May of that same year. What did layer cake have to do with it? What type of experiments did you do, and what were the results? What did your AT&T Bell Labs boss say about the paper you wrote on the floating gate and its potential use in non-volatile semiconductor memories?
3. Why didn’t Bell Labs attempt to commercialize floating gate or other research related to MOSFETs? After all, they were the #1 captive semiconductor company in the U.S. supplying components to Western Electric and later AT&T Network Systems for decades.
4. Why was the floating gate transistor so vital to NVMs like EPROMs and (later) Flash? History shows that Intel, SanDisk and Toshiba made NVM components based on that technology, but many years after it was invented. How did that happen?
5. 1967 was your best year – even better than years you saw others commercialize your floating gate invention. Please (briefly) tell us why.
6. Describe your relationship with floating gate co-inventor Dawon Kahng, who was of Korean descent. How did you two get along, at work and personally? Were there any other Bell Labs co-workers or bosses who impacted your career or life?
7. On a broader scale, what was the work environment like at Bell Labs in the 1960s and how did it change during your 27 years there?
8. You left Bell Labs in 1990 to become a full-time professor in Taiwan, where you graduated from National Taiwan University before pursuing your advanced degrees in the U.S. Twenty-four years later, you are still a Professor there, as well as at Stanford University, where you got your PhD. You’ve also given numerous guest lectures and courses in other countries such as England, Israel, and mainland China. Please tell us about your academic career, including why you decided to study semiconductor physics at Stanford in the early 1960s and your experience as a Professor and Guest Lecturer.
9. You’ve been very successful as a prolific author of books, chapters, papers, etc. Your textbook on the Physics of Semiconductors is a classic. Tell us about the methodology you used to publish research and textbooks and a few other books/chapters/papers you are especially proud of.
10. You’ve said that Moore’s law hit a wall in 2000, but moved ahead due to advances in making Flash memories. Could you please elaborate on that for us and tell us how long you think Moore’s law can keep going? NOTE: Moore’s law only applies to MOS LSIs, not bipolar or analog components. Up until 2000, Moore’s law was driven by advances in DRAMs.
11. You have a “long wave” theory on the pervasiveness of electronics called the “Cluster Effect” which looks far out into the future. What’s in store for us there, in particular when Moore’s law ends?
12. What advice would you give to aspiring technology researchers, engineers, authors & educators?
Q & A
June 30th, 2014 by Alan Weissberger
This CHM conversation (with NY Times moderator John Markoff asking the questions) was more about the challenges faced by Ms Arati Prabhakar, PhD than it was about DARPA. It would’ve been very appropriate for a Women in Engineering meeting. However, there were several important topics related to Ms Prabhakar’s two terms of employment at DARPA, which we’ve attempted to capture in this event summary article. Note the addendum on Silicon Valley looking to recreate its past via companies establishing innovation labs.
Brief Backgrounder of Arati Prabhakar:
Growing up as an Indo-American in Lubbock, TX was quite challenging, but not nearly as difficult as being a woman in the EE curriculum at Texas Tech. After obtaining a BSEE from Texas Tech and an MSEE and PhD in Applied Physics from Caltech, Arati Prabhakar started working on Gallium Arsenide (GaAs) projects at DARPA in 1986. Ms Prabhakar later started the Microelectronics Office there during her first tour of employment; it’s now called the Micro Systems Technology Office (MTO). After leaving DARPA in 1993, she became the Director of NIST and then worked for a number of Silicon Valley firms before returning to DARPA in July 2012 to become its 12th Director.
GaAs Projects at DARPA- then and now:
- GaAs was of interest to DARPA in the mid-1990s, because of its “radiation hardened” properties.
- Bell Labs tried to build a 16K bit memory out of GaAs material.
- GaAs was said to have higher electron mobility than traditional semiconductors, but that didn’t turn into a competitive advantage.
- GaAs did blossom in RF devices used in the microwave world. It is also used in advanced radar systems (see below).
Author’s Note: Advances in traditional MOS VLSI/ULSI technology precluded GaAs being used for many promising potential applications (e.g. high speed communications). That’s because it was more expensive than MOS, used more circuit board space, and consumed more power.
- GaAs arrays are used today to build advanced radar systems for military aircraft and ships.
- The GaAs power amp in cell phones (which communicates with cell towers) traces back to GaAs research done decades ago at DARPA.
- GaAs will also be used in the electronics within cell towers (Arati did not say how).
- Gallium Nitride technology has come to full fruition and will be used in the next generation of military radar.
What has DARPA Done and What Are They Doing Now?
Arati Prabhakar noted the following points during her conversation with the NY Times’ John Markoff and in the selected audience questions that followed:
- We are at the end of Moore’s law and semiconductor companies are well aware of that.
- The semiconductor industry has become totally globalized, especially manufacturing (Note: outside of Intel, most ICs are made in China, Taiwan or South Korea).
- Geopolitical threats have refocused a lot of U.S. government research to counter-terrorism.
- Most powerful technologies developed at DARPA disrupt and change the way the military works. That may cause intransigence in accepting new technologies advanced by DARPA. “Here comes DARPA again…” some military people might say.
- Over several decades, DARPA has built a compelling track record which has earned them more respect and credibility from the U.S. military.
- It’s not easy for DARPA to make shifts like moving from stealth technology development to precision-guided weapons or infra-red night vision military systems.
- DARPA’s Robotics Challenge is a contest that had its first trials last December. The winner will be decided in the finals which will take place “in about another year (2015).”
- Such a “DARPA Challenge” is a great way to interact and build technology for a greater community.
- Ground robotics is an incredibly difficult challenge. Bandwidth was being squeezed on and off to simulate an environment with no communications. As a result, the robot couldn’t count on consistent communications (with the host computer).
- DARPA invested $$$$ in Boston Dynamics which Google has now bought. DARPA was very pleased with that. “It’s a very promising sign,” Arati said.
- There’s a long history of DARPA making investment in technologies to show what’s possible. As the technology matures, private capital gets involved in the next stage.
- Robotics will take massive private investments and real markets to become commercially viable. It has tremendous potential to help the military on unmanned missions.
- A robot developed by a Japanese start-up company named Schaft (now owned by Google) won the DARPA Robot Rescue challenge last year. Along with seven of the other top-scorers, Schaft can now apply for more DARPA funds to compete in next year’s finals.
- DARPA’s budget is only 2% or 3% of all U.S. government R&D. [That’s amazingly little for the organization that created the ARPANET, the precursor to the Internet.]
- What is R&D? At DARPA, it’s building proto-types and basic research into new areas that will open up technology opportunities.
- How fragile are research technologies and industrial policies? It all depends on the follow-on use in the military and commercial worlds.
- “The brain is a new (research) area for DARPA,” said moderator John Markoff. (Arati said below that DARPA has been investing in brain technologies for some time- so not really “a new area”).
- “Biology is intersecting with physical science and information technology,” said Arati. It has enormous potential to be the foundation for a whole new set of powerful technologies. DARPA’s interest is in turning biology into technologies.
- DARPA has been investing in neural technologies and brain function research for some time. One application is to get prosthetics to U.S. veterans that have lost their limbs.
- The human brain controls how our limbs move. Understanding neural signaling that leads to motor control is a DARPA goal. Three videos were shown to illustrate several DARPA neuroscience initiatives. The first showed a prosthetic arm controlled by a human’s brain. The videos can be seen along with the entire program on the CHM YouTube channel. See link below.
- DARPA is trying to figure out how the U.S. can be the most productive user of new technologies and what areas of technology the U.S. should be thinking about developing.
- DARPA has invested in biological technologies for the last 20 years, starting with biological defense systems.
- DARPA has created a Biological Technologies Office (BTO) to foster, demonstrate, and transition breakthrough fundamental research, discoveries, and computer science for national security.
- Amazing things are happening in neuroscience and neuro-technology as well as in synthetic biology. DARPA wants to turn those into scalable engineering practices (both in terms of time and production cost) so that they can be commercialized.
- The ability to sequence and synthesize DNA is on an aggressive cost curve (although not a DARPA story) and that’s an ingredient in the synthetic biology world.
- Arati is inspired by the progress she’s seen in lots of engineering disciplines that together have potential to unleash important new trends in biology.
- But she’s chastened by how little we understand of the complexity involved in biology, especially when compared to IT and semiconductors.
- Biological technologies are highly adaptable, but intractable when compared to transistors or lines of code.
- Biology-related research examples at the BTO include synthetic biology, brain research, and fighting infectious diseases.
- Arati said that DARPA’s two main jobs were:
1] Core mission is to pursue advanced technologies for use in the U.S.
2] Obligation to engage and raise technology issues so they get into the public eye, and that a broader community can then decide how society uses the new technologies.
- DARPA has a very substantial cyber-security portfolio. DARPA is not responsible for operational security in the U.S.
- “Patch and pray” is what we have (for operational national security in the U.S.) and we’re trying to do that ever faster. DARPA wants to figure out technologies that give us a future with respect to cyber security.
- DARPA’s focus is on the related security technologies that can be used for society to have a secure life and have the foundations for security in and away from home.
- There are many attack vectors, because of the vastness of our information environment. Therefore, there can’t be a single silver bullet security solution. Rather, a layered set of many different security technologies are needed, which could be combined to thwart various security threats.
- DARPA is working to scale formal methods to build “meaningful size Operating Systems (OS’s) that can be proven to be correct for specified security properties.”
- DARPA recently flew a drone in the courtyard of the Pentagon with an OS that has some properties that are unhackable now. “This could be the beginnings of something that’s a big dream,” Arati said.
- Current state of GPS: It’s cheap and easy to gauge your position once the satellite is up in the sky. The military is addicted to it. But new position, navigation and timing systems are also needed.
- Future geo-location timing and navigation systems will be based on layered systems and “atom physics.” The latter involves cooling atoms by shining lasers at them so that their atomic properties can be tapped.
- The challenge is how to get these new geo-positioning and timing technologies from a huge room with researchers, to a shoe box size unit in a submarine. A key objective is to minimize or eliminate drift in timing or position.
- Quantum components in the way we smell have been researched at DARPA, but quantum computing “is not an active area” (as many thought). Arati was not sure if anything is going on in quantum communications at DARPA.
- Breaking down the complexity of massive military platform systems is a DARPA goal. “They are un-Godly expensive, complex, and tightly coupled.” The result might be visible in next-generation fighter planes, in drones, and in what a soldier carries on his body.
- DARPA has created the Mining and Understanding Software Enclaves (MUSE) program to reduce software complexity.
- MUSE seeks to make significant advances in the way software is built, debugged, verified, maintained and understood. Central to its approach is the creation of a community infrastructure built around a large, diverse and evolving corpus of software drawn from the hundreds of billions of lines of open source code available today. Automating assembly of code to do higher level functions is a goal.
- Today, 2/3 of R&D investment is from private enterprises, rather than the 50/50 private/public sector split of several years ago.
- Private R&D investment is growing faster than GDP. [Note: that doesn’t take much, with GDP growing at <2% for the last 5 years since the recession “ended” in June 2009.]
- Role of government is shifting when it comes to R&D investments. The federal government is no longer the prime source of research, as it was for years and decades. We now have incredibly innovative industries, Arati said (this author strongly disagrees).
- Our ecosystem (the U.S. government, universities, and industry) is healthy. We have adapted to change after change and she is optimistic that will continue in the future.
- How universities pursue research and what’s going to drive the educational system are key questions U.S. must address.
- “Universities have always been a way that enables DARPA to get great things done,” Arati said to wrap up the program.
CHM YouTube Channel Video: The video of this CHM event can be viewed here.
Comments from Richard Weiss, DARPA Director of Public Affairs (received July 4th via email):
I am surprised at the number of errors in your blog, especially since much of the relevant information is on DARPA’s website.
I don’t believe Arati would have said that one of DARPA’s two primary missions is “Obligation to engage and raise technology issues so they get into the public eye, e.g. how U.S. society uses the new technologies.”
DARPA is not a policy shop – it is a tech projects shop. So while DARPA does take seriously so-called ELSI obligations when a tech development raises societal questions, that is not a “primary mission” of DARPA.
It is incorrect (perhaps just sloppy sentence construction) to say that “Substantial cyber-security portfolio at DARPA is subject to “patch and pray.”” Certainly our portfolio is not “subject to” patch and pray. Our portfolio aims to improve upon the nation’s current reliance on “patch and pray” when it comes to cyber security.
Author’s Response: The post has been updated to reflect some of Mr. Weiss’ critiques (after carefully listening to the archived event webcast to verify accuracy of same). Clarifications were also added (e.g. who said what) to avoid misunderstandings.
It’s my strong opinion that a reporter’s job is to accurately “report” what was said at an event. An event summary is not a research paper where the company’s website is to be checked for accuracy or completeness or to validate/confirm what was said during the program.
Finally, you can check the archived event webcast to verify that Arati did say: “(It’s DARPA’s) Obligation to engage and raise technology issues so they get into the public eye…..”
Addendum: Excerpts from NY Times article-
Silicon Valley Tries to Remake the Idea Machine (Bell Labs)
This superb article chronicles the decline of research amongst established Silicon Valley companies, which have bought start-ups rather than invest in their own research labs (like the long-gone AT&T Bell Labs or Xerox PARC). But a resurgence is underway, starting with Google X.
A few excerpts from the article:
“The federal government now spends $126 billion a year on R. and D., according to the National Science Foundation. (It’s pocket change compared with the $267 billion that the private sector spends.) Asian economies now account for 34 percent of global spending; America’s share is 30 percent.”
“Most of the insurgent tech companies, with their razor focus on advancing the Internet, were too preoccupied to set up their own innovation labs. They didn’t have much of an incentive either. Start-ups became so cheap to create — founders can just rent space in the cloud from Amazon instead of buying servers and buildings to house them — that it became easier and more efficient for big companies to simply buy new ideas rather than coming up with the framework for inventing them. Some of Google’s largest businesses, like Android and Maps, were acquired. “M. and A. is the new R. and D.” became a popular catchphrase.”
“But in the past few years, the thinking has changed, and tech companies have begun looking to the past for answers. In 2010, Google opened Google X, where it is building driverless cars, Internet-connected glasses, balloons that deliver the Internet and other things straight out of science fiction. Microsoft Research just announced the opening of a skunk-works group called Special Projects. Even Bell Labs announced this month that it is trying to return to its original mission by finding far-out ways to solve real-world problems.”
“Instead of focusing on basic science research, “we’re tackling projects that advance science and solve significant problems,” says Regina Dugan, the former director of the Defense Advanced Research Projects Agency (Darpa), who now runs a small group inside Google called Advanced Technology and Projects. “What this means is you’re not compromising this idea of doing really important and interesting science and this sense of it really mattering.” To put a finer point on it, Astro Teller, who oversees Google X, told me: “We are not a research center. We think of ourselves as a moonshot factory, and the reasons for using that phrase is the word ‘moonshot’ reminds us to be audacious, and the word ‘factory’ reminds us we have to industrialize it in the end.””
April 7th, 2014 by James Cortada
I consider this set of 150 products announced on April 7, 1964, to be the most important introduced by an American company in the 20th century. And I am not alone in that view. How we used computers around the world was shaped directly by these machines and software, including your cell phone. From IBM’s perspective, the firm doubled its revenues in 36 months and went on to be the dominant computer company for a generation. For the computer industry, it grew at 19-20% compound rates over the next five years. The architecture of this technology proved so pervasive that (a) computer scientists are still struggling with how to break through to a new paradigm or way of doing computing, (b) all mainframes still use software written by IBM in the 1960s, and (c) it made possible the hiring of perhaps the most unique IBMer in the 1970s – ME!! Only with the massive diversification in the kinds of employees made possible by the success of this system did it make sense for IBM to bring in the diversified workforce that it did, including two people with Ph.D.s in history – ironically both trained in Spanish history. The other fellow, also into sales, was an expert on the 1920s and 1930s, graduated out of the U of New Mexico, and worked in Detroit. He was hired a year before me.
And that is the rest of the story. Happy Birthday S/360!
March 10th, 2014 by Alan Weissberger
On March 3, 2014, Eric Schmidt and Jared Cohen (co-authors of The New Digital Age) engaged in a stimulating conversation with Facebook’s COO Sheryl Sandberg. The event took place at the Computer History Museum (CHM) as part of the museum’s Revolutionaries series (see description below). This very interesting and wide-ranging discussion was mostly related to the promise and perils of the digital revolution, especially the Internet as it impacts the developing world.
Eric Schmidt is the Executive Chairman and former CEO of Google. Jared Cohen is the Director of Google Ideas and a former adviser to Secretaries of State Condoleezza Rice and Hillary Clinton. During the opening remarks, Schmidt jokingly said to Sheryl Sandberg: “You’ve done all right for yourself since leaving Google.”
The New Digital Age‘s release in paperback provided the impetus for this CHM program. This event was part of the CHM’s acclaimed Revolutionaries speaker series, featuring renowned innovators, business and technology leaders, and authors in enthralling conversations often with leading journalists.
There is good and bad in the many ways online technology is changing life around the globe, according to Schmidt. The Internet empowers people in developing countries that mostly use smart phones for (wireless broadband) access, providing new opportunities for education, business, entertainment, news, etc. But there are very real problems with regulating its use. “With no central leaders in the world today, the Internet causes a huge control problem for free speech, morality and copyright protection,” he said. “When does the Internet stop (i.e. get turned off) and bad stuff happen in countries like Syria and Ukraine/Crimea that are engaged in civil war?”
Cohen said that smart phones were confiscated by the military regime in Somalia and a friend of someone he met there was shot and killed for having pictures unfavorable to the regime on his smart phone. More details in a bullet point below.
Here are a few key points made by the speakers:
- On-line privacy talk may come before the sex talk for future generations of children.
- Digital data permanence may spark the need for identity insurance and that’s somewhat scary.
- Cyber-safety is a responsibility we bear globally for all, especially the non-tech savvy users. Cyber-security requires agility and should not be compromised.
- Schmidt: Suggests WiFi towers (with broadband backhaul) be built in 3rd world countries to provide ubiquitous Internet access and empower individuals living there. That would have a positive impact on economic growth in the developing world.
- Schmidt: There is a race between technology and humans, but humans are still debating last decade’s problems.
- Schmidt: Technology has reduced the number of jobs, contributing to higher youth unemployment.
- Schmidt: Displacement of manufacturing jobs by technology is an enduring trend which is a threat to current workers that have repetitive jobs (which might be done by a machine/robot).
- Schmidt: We need some “social safety net” for displaced workers. Today’s solutions are not good enough!
- In the future, a totally automated home, with many artificially intelligent devices, will likely happen.
- Sandberg: Women are 25% less likely than men to be online, which puts them at an economic disadvantage. (It wasn’t clear if that was for the U.S., developing countries, or the entire world.)
- Sandberg: In micro-lending in the developing world, the money is managed by women. They need smart phones to do that, as there may only be wireless Internet access.
- Sandberg: Giving inexpensive smart phones to women would solve many family problems. Hundreds of millions of women would be empowered on-line. Men would not be able to block women’s economic advancement in that case.
- Cohen: In Syria, a wave of online videos couldn’t immediately stop repeated chemical weapons attacks on civilians. He also said troops have operated armed checkpoints where they forced people to turn over their cellphones for review.
- The Google execs said that Internet technology made it easier for people like Julian Assange of Wikileaks and former National Security Agency contractor Edward Snowden to turn government secrets into public controversies. That’s not always desirable, the authors say, arguing that governments need to keep some secrets for national security.
- Schmidt: There are near monopoly Internet service providers in the developing world and competition is needed to get cost effective Internet access.
- Schmidt: Getting wireless networks upgraded for faster access, lower cost per user and capability to carry more traffic is a big challenge for developing countries (especially when there is no competition to drive that transition).
- Schmidt: Maybe there are limits to what the Internet can do or be.
Backgrounder: The New Digital Age:
In research for their book, Schmidt and Cohen traveled to more than 35 countries, including some of the world’s most volatile and repressive societies. There they met with political leaders, entrepreneurs, and activists to learn firsthand about the challenges they face. They tackle some of the most interesting questions about our future: how will technology change privacy and security, war and intervention, diplomacy, revolution and terrorism? How will technology improve our lives? What new disruptions can we expect?
For the new paperback edition, Schmidt and Cohen added a new “afterword” to address a number of events, including the wave of revelations about government spying on Internet users, that played out after the book was first published in April 2013. The new section also responds to criticism from Wikileaks founder Julian Assange, who accused the co-authors of uncritically embracing U.S. foreign policy and of glossing over the threat that vast centralized databases pose to individual privacy and freedom.
Addendum: Most highlighted passages from “The New Digital Age”
“On the world stage, the most significant impact of the spread of communication technologies will be the way they help reallocate the concentration of power away from states and institutions and transfer it to individuals.”
“By 2025, the majority of the world’s population will, in one generation, have gone from having virtually no access to unfiltered information to accessing all of the world’s information through a device that fits in the palm of the hand.”
“Identity will be the most valuable commodity for citizens in the future, and it will exist primarily online.”
“The Internet is the largest experiment involving anarchy in history. Hundreds of millions of people are, each minute, creating and consuming an untold amount of digital content in an online world that is not truly bound by terrestrial laws.”
Loss of privacy
“The impact of this data revolution will be to strip citizens of much of their control over their personal information in virtual space, and that will have significant consequences in the physical world.”
Dissonance between technology and geopolitics
“In the months following our [Schmidt and Cohen's] trip [to Iraq], it became clear to us that there is a canyon dividing people who understand technology and people charged with addressing the world’s toughest geopolitical issues, and no one has built a bridge.”
“In this book we aim to demonstrate ways in which the virtual world can make the physical world better, worse or just different.”
Personalization and customization
“The key advance ahead is personalization. You’ll be able to customize your devices—indeed, much of the technology around you—to fit your needs, so that your environment reflects your preferences.”
Humans, not machines, control our destiny
“This is a book about technology, but even more, it’s a book about humans, and how humans interact with, implement, adapt to and exploit technologies in their environment, now and in the future, throughout the world. Most of all, this is a book about the importance of a guiding human hand in the new digital age. For all the possibilities that communication technologies represent, their use for good or ill depends solely on people. Forget all the talk about machines taking over. What happens in the future is up to us.”
Don’t say (or type, or “like”) anything you don’t want on the front of the NY Times
“Since information wants to be free, don’t write anything down you don’t want read back to you in court or printed on the front page of a newspaper, as the saying goes. In the future this adage will broaden to include not just what you say and write, but the websites you visit, who you include in your online network, what you “like,” and what others who are connected to you do, say and share.”
Read more at:
February 24th, 2014 by Frederick Withington
When I first wrote programs in 1953, there was no software and there were few programmers. I entered programs in the computer’s binary language, keyed in as octal notation, directly into the machine’s registers. And the machine was all mine: there was no operating system to allocate its resources among multiple programs or to operate the input-output devices.
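Entering a program this way meant translating each instruction word into octal digits, each of which encodes three bits. A minimal sketch in Python (the 12-bit word width and bit pattern here are invented purely for illustration):

```python
# A hypothetical 12-bit instruction word, written out in binary.
# Underscores group the bits in threes, matching the octal digits.
word = 0b101_011_110_001

# Each octal digit stands for exactly three bits, so a programmer
# could key in "5361" instead of twelve individual switch settings.
octal = format(word, '04o')
print(octal)  # -> 5361
```

This three-bits-per-digit correspondence is why octal, rather than decimal, was the natural shorthand for front-panel and register entry on early machines.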
Then programming aids and compilers evolved, along with operating systems to allocate the computer’s resources. Programmers no longer dealt with the machine directly, but with an easier-to-use (maybe) interface. In some respects, programming got easier (at any rate, it was a lot quicker), and many people learned to program.
Then the minicomputer, and later the personal computer, evolved, and three pivotal inventions occurred: Xerox PARC and Apple developed the desktop-icon interface, the spreadsheet was invented, and word processing was introduced. I feel that each made many more people into programmers.
The significance of the desktop, icon-based interface is obvious: as it has evolved over time, it has enabled generations of users to manage desktop and portable devices with minimal training.
I think the spreadsheet was equally important. An ordinary businessperson could populate the cells of a spreadsheet with limitless logical and mathematical processes, all invoked by the entry of a datum. Taken as a whole, a spreadsheet is a very complex and sophisticated program.
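The programming-like character of a spreadsheet comes from its dependency structure: formulas read other cells, so entering one datum re-evaluates everything downstream. A minimal sketch in Python (the cell names and formulas are invented for illustration):

```python
# Minimal model of spreadsheet recalculation: cells hold either a
# literal value or a formula, and formulas are functions of the sheet.
class Sheet:
    def __init__(self):
        self.values = {}     # cell name -> literal value
        self.formulas = {}   # cell name -> function of the sheet

    def set_value(self, cell, value):
        self.values[cell] = value

    def set_formula(self, cell, fn):
        self.formulas[cell] = fn

    def get(self, cell):
        # Formulas are evaluated on demand, so a new datum in A1
        # immediately changes every cell that (transitively) reads it.
        if cell in self.formulas:
            return self.formulas[cell](self)
        return self.values[cell]

sheet = Sheet()
sheet.set_value("A1", 100)                               # the datum
sheet.set_formula("B1", lambda s: s.get("A1") * 1.05)    # 5% markup
sheet.set_formula("C1", lambda s: s.get("B1") + 20)      # flat fee
print(sheet.get("C1"))   # -> 125.0
sheet.set_value("A1", 200)                               # enter one datum...
print(sheet.get("C1"))   # -> 230.0  ...downstream cells follow
```

Real spreadsheets track the dependency graph and recalculate incrementally rather than on demand, but the essence is the same: the businessperson filling in formulas is, in effect, writing a dataflow program.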
The significance of the word processor as a programming tool is less obvious, because in its basic form a word processor requires little programming by its user. But think of how it has grown. The word processor now enables its user to create and manipulate graphics of unlimited complexity, to format both virtual and hardcopy documents with three-dimensional structures, and to provide documents with interactive features, as in computer games. Surely when a user has created a word processing application with capabilities in all these areas, he/she has written a sophisticated program.
The World Wide Web has created still more opportunities for programmers. The designers and builders of Web sites form a new, lucrative specialty. Even the people who create complex YouTube and Facebook posts are, in a sense, developing software.
What has happened is that human and machine languages have converged. The computer interface has become more intuitive, more symbolic and more flexible. At the same time much, if not most of the human race is learning about computer language as part of growing up. Booting up, managing and interconnecting computing devices; operating keyboards; selecting among icons; and interacting with the Internet and Web sites are activities performed by small children in all the more developed countries. All these children are being invited by the riches of the Web to try more experiments, to learn more, and to become more proficient programmers. Seamless Internet connections enable them to form teams, to cooperate, and to work together to exploit their best ideas. For instance, what will cloud programmers be able to do with shared 3-D printers?
I envy the trained programmers of the next generation, who will be able to search for inspiration among the contributions of billions of their less-trained fellows and create the software products of the future. We’ve seen nothing yet!