Last February I had the privilege of attending a conference on “Imagining Outer Space,” held in Bielefeld, Germany. I have been to many conferences on the history of rocketry and space travel, and on the social and cultural implications of the Space Age, but none was as stimulating as this one. A theme present in almost every paper was that, at least for space travel, imagination precedes reality. It was not surprising that the writings of the late Arthur C. Clarke, whose imagination ranged through every corner of the cosmos, inspired many of the participants. (Clarke died shortly after the conference ended.) Clarke imagined a space age that scientists and engineers have been building, piece by piece, for the last 60 years or so. But I am not writing merely to join those who praise Clarke’s vision and who see themselves as turning his wildest predictions into reality. I want to point out that, in at least one instance, either he got it wrong, or we haven’t fully understood what it means to imagine a future.

One of the most famous fictional computers in the cinema is “HAL,” the computer (and inadvertent star) of Stanley Kubrick’s movie 2001: A Space Odyssey, based on a Clarke short story. But need I remind everyone that computers have not evolved that way? Instead of one (or a few) super-intelligent computers, we have a swarm of cell phones, PDAs, laptops, GPS receivers, satellite radios, mainframes, iPods, supercomputers, and workstations scattered in every corner of the world, linked to one another (some of the links are not quite there yet, but we are close). The combination of these devices and their operators yields an intelligence that far surpasses HAL’s, albeit without that creepy voice. As Richard Gabriel pointed out a couple of years ago, Google already passes the Turing Test: ask it a question in plain English, and you’ll get a very good answer most of the time.
Is that answer from a computer or a human being? Does it matter?

Clarke is well known for his seminal 1945 paper, in which he suggested that a set of three communications satellites placed in geosynchronous orbit (now called the “Clarke orbit”) could cover the world with instantaneous radio communications. But look a little closer at his suggestion: he thought there would be a need for only three, and that the system would transmit at most a few channels of basic radio. Well, that orbit is now packed cheek by jowl with satellites, and there are communications satellites in medium and low Earth orbits as well (Iridium, for example, operates in low Earth orbit). The satellites handle telephone, television, radio, navigation, timing, weather, signals intelligence, reconnaissance, Internet traffic; the list goes on. At some level this swarm of spacecraft merges with the swarm of Internet devices on Earth described above.

What has been happening in computing is a set of three interrelated phenomena: (1) computers are getting smaller; (2) computers are getting more powerful; and (3) computers are not really computers unless they are interconnected with one another. With a few exceptions, these trends were not predicted by “mainstream” science-fiction writers, and many of us today still do not quite grasp their implications.

Back to the theme of the Imagining Outer Space conference: many of the participants discussed travel to the outer solar system and to other stars, even other galaxies. Their imaginations ran free. But the technical challenges of getting human beings even to Mars, never mind Jupiter or an extrasolar planet, are daunting. Toward the end of the conference I suggested a possible solution: download the contents of our brains onto a swarm of small, lightweight, interconnected workstations, and send them out in all directions through our solar system and beyond. It is too hard to send people; why not send our consciousness instead?
I thought I was being provocative; then someone in the audience pointed out that Arthur C. Clarke had already described this possibility, too, in one of his lesser-known short stories. I would be happy to give him credit, even to name the enterprise after him. I only hope that we can take the first steps in my lifetime.