WHAT’S NEXT FOR TECHNOLOGY, BUSINESS, SOCIETY, AND YOUR CAREER.
In September 2001 PC MAGAZINE issued a special edition to celebrate the first twenty years of the PC and to speculate on what the next twenty years might bring.
John Heilemann interviewed some of the industry’s key leaders and presents a thoughtful, disturbing, and chillingly accurate view of the future.
If your life touches or is touched by technology, risk management, or ethics, you will find this article of interest.
Key thoughts:
- The next technology wave will come from the convergence of computing, biotech, and nanotech at the atomic and subatomic levels.
- “Intelligence amplification” will achieve IQ levels of 300 within 30 years for a select group of people. That will have enormous class implications that may strain democratic societies.
- Regenerative medicine will become a major medical specialty.
- The biggest political challenge in this new century is the conflict between the secular and the sacred: between secular societies and religious societies. As technology advances, this conflict becomes more severe.
“In the next few decades, people will kill each other in large numbers as a direct result of the advancement of science.”
What are YOUR reactions?
Send reactions to lstybel@stybelpeabody.com and we will publish them at the end of the piece.
STYBEL PEABODY LINCOLNSHIRE
“Helping Companies Manage The Senior Executive Assignment Cycle”: retained search, coaching, and helping key executives find new chapters in their professional lives.
www.boardoptions.com
Tel. 617 371 2990
September 4, 2001
PC MAGAZINE
Second Coming
By John Heilemann
It happened quickly; few saw it coming. Over the past 20 years, the PC, the Internet, and the protean enterprises that arose around them propelled an economic transformation as sweeping as any since the Industrial Revolution. Together these changes were responsible for the lion's share of industrial and service-sector growth. They allayed the fear that U.S. competitiveness was in steep decline. They sparked a wave of corporate restructuring, spurred improvements in productivity, and ushered in what even traditionalists like Fed chairman Alan Greenspan acknowledged as a new economy. And by collapsing distance and traversing national borders, they helped turn Marshall McLuhan's notion of the world becoming a global village from theory into practice.
Along the way, computing crept from the fringes to the center of popular culture. By the peak of the Internet boom, it often seemed as if geeks and gadgets had become our national pastime—the focus of an enthusiasm so intense it bordered on abject fetishism.
Then came the crash. In the wake of the past year's litany of horrors, the public's infatuation with all things digital has faded. On Wall Street and on Main Street, in the press and even in Silicon Valley, a feeling has set in that the information revolution has played itself out, or at least has entered a period of prolonged abeyance.
Summing up the new conventional wisdom in a recent article in The New York Times, David Brooks, author of the book Bobos in Paradise, wrote, "Suddenly, it doesn't really matter much if the speed of microprocessors doubles with the square root of every lunar eclipse (or whatever Moore's Law was)." He concluded, "Of course, people are still using computers.... What's gone is the sense that the people using the stuff are on the cutting edge of history, and everyone else is road kill."
However extreme Brooks's assessment may seem, it contains an element of truth. An epoch has ended, but the era ahead promises something quite different. For the past 20 years, the information revolution has been about computing itself; for the next 20, the main event will be the breakthroughs that computing makes possible. The escalating power of computers will be felt on the desktop and in the cloud. Yet its effects in the office and at home will be dwarfed by what happens in the lab, where the intermingling of computing with next-wave technologies (genomics, biotech, nanotech, robotics) is destined to unleash advances so profound that they will amount to what the futurist Peter Schwartz calls "a second scientific revolution."
The scale of these advances is nearly unfathomable. In the words of Bill Joy, Sun Microsystems' chief scientist, they will "open up the opportunity to completely redesign the world—for better or worse." As the digital revolution unfolded, its benefits so manifestly outweighed its costs that the costs seemed barely worth worrying about. In the decades ahead, that will no longer be true. The dilemmas posed by the second-order revolutions won't be simply economic but ethical, moral, even spiritual; they will cut to the core of what it means to be human. And they will compel us not only to look beyond the lessons learned thus far in our entanglements with technology but also to reexamine some of our most deeply held convictions.
As Stewart Brand, author and founder of The Well, puts it, "If people think that coping with the changes caused by computing has been hard, just wait'll they see what's coming next."
To catch a glimpse of what might be coming, the only place to go is Silicon Valley. True, the carnage wrought by the past year's tech wreck has been severe; the landscape is littered with fallen start-ups, and even the industry's stalwarts lie wounded. Yet it may not be all that grim. After raking in unprecedented profits in the 1990s, the venture capitalists are perched on an ungodly pile of cash—totaling nearly $100 billion. But now they are turning their attention back to start-ups that build actual technology rather than paper-thin "business models." As sanity slowly returns to the Valley, the prospect of a revival (not soon, but soon enough) is surprisingly solid.
John Doerr, for one, remains optimistic. A partner at Kleiner Perkins Caufield & Byers, Doerr is the Valley's most prominent venture capitalist. Having established his reputation by bankrolling Internet darlings like Amazon.com and Netscape, he is now betting millions more that the Net's adolescence will be a bigger deal than its splashy infancy. He calls Wall Street's dismissal of the Web grossly premature, protesting, "We're just a few milliseconds after the Big Bang." In the next five to ten years, he believes, the Internet will metamorphose into the Evernet—"the always-on, high-speed, ubiquitous, multiformat Web."
While Doerr's faith in the Net is undimmed, his focus is no longer monomaniacal. He now thinks there are two new revolutions afoot, both built squarely on the back of the microchip. "The Human Genome Project is the warm-up act for a broader set of technologies—proteomics, cloning, developmental biology—that we must and will develop," he says. "The other area is distributed clean power, transportation, and water for the tens of millions of people globally who live on less than a dollar a day." Each of these sets of technologies, Doerr contends, could be as important as the Net and maybe more profitable.
Around the time I spoke with Doerr, I also called on Intel's chairman, Andy Grove. Where Doerr is a cheerleader and a salesman, Grove is a skeptic, a cynic, a hard ass. Yet like Doerr, Grove remains bullish about the Internet, arguing that it is likely to change the world more in the next five years than it has in the past five, as international adoption catches up to America. Also like Doerr, Grove expects an even more explosive change to occur at the intersection of the life sciences and the information sciences—a change "in the area of genetics, molecular biology, and the like, which would be inconceivable without very powerful, highly connected, and available computers. The impact on drug development, health care, and human life is difficult to imagine."
Grove continues, "If this change happens as extensively as some people think it will, the consequences could be far more important than the enabler. Compared with directly altering life and death, computers and the Internet don't seem like such a big deal. But without them, this genetics/molecular-biology stuff would never get off the ground."
Until recently, these sorts of predictions—optimistic but not wild-eyed, stretching out not much more than a decade ahead—were as far as most people in Silicon Valley were willing to go. With good reason: It was widely assumed that Moore's Law would break down around 2010. But thanks to rapid leaps in molecular electronics (in which individual molecules and atoms serve as circuit elements in place of lithographically drawn transistors), the emerging consensus is that chips will continue their march up the Moore's Law curve for another 30 years.
"By 2030," says Joy, his voice a queasy mixture of awe and dread, "we'll be building machines that are a million times more powerful than today's PCs." A pause. "A million is a very big number."
Molecular electronics is a new subfield of nanotechnology: the science of manipulating matter at the atomic level. Since nanotech was first introduced to the mainstream in the mid-1980s by scientist Eric Drexler, it has held out the potential of letting us snap together the basic building blocks of nature to create a future of almost magical abundance—one in which the human immune system could be augmented to wipe out most diseases and most products could be manufactured at a cost close to zero. By Drexler's account in his book Engines of Creation, nanotech could ignite "changes as profound as the Industrial Revolution, antibiotics, and nuclear weapons all rolled up in one."
A few years ago, Drexler's claims strained credulity among all but a cadre of borderline mad scientists. But not anymore. "Nanotech is coming along faster than anyone except the most rabid true believers expected," says Brand. "Now you've got nanostudies centers at top universities all over the country. Already they've come up with carbon nanotubes, which have these incredible properties of being, like, a hundred times stronger and a hundred times lighter than steel. Oh, and by the way, depending on how you twist them, they can be used as insulators, conductors, or semiconductors. So basically any material you make out of this stuff can be computational."
Overlapping and mutually reinforcing, nanotech, biotech, and computing share certain characteristics that set them apart from most other technologies that have preceded them. For one thing, they are—literally and physically—smaller. "We are entering the era of the tiny," says Brand, "where all the action takes place at the molecular, the atomic, and eventually the subatomic scale." For another, they all possess an unusual (and unsettling) feature: They are autocatalytic; that is, self-accelerating.
"This is genuinely new," Brand says. "The last century was transformed by three technologies: television, the telephone, and jet airplanes. All three sped up society in various ways, but none had the property of speeding up itself. So along comes what looks like just another transformative technology—computing—but it turns out to have this peculiar trait baked in called Moore's Law. And the reason Moore's Law keeps being true is that the first thing you do with each generation of denser chips is use them to make even denser chips. It's also the reason computer technology is the dominant, pace-setting technology that everything else is always sprinting to keep up with. Then along comes biotech in the context of computing, and you get Craig Ventner cracking the human genome in, like, two weeks. You get biotech self-accelerating just as much as computing. And you get nanotech doing the same thing—along with all the stuff that comes after that."
The mere thought of what comes after that is enough to make most reasonable people feel as though their heads might explode. (Autocatalytic, indeed.) Yet scientists now seem to agree broadly that the biotech and nanotech revolutions, in that order, will be in full swing by the end of this decade. So what in God's name will 2010 to 2020 bring?
To get a bead on that distant horizon, I visited Peter Schwartz, one of the rare professional futurists who doesn't exude the ripe scent of charlatanism. Trained academically as a rocket scientist, Schwartz worked for years in the planning division at Royal Dutch/Shell and then went on to found the Global Business Network, a renowned strategy consulting firm.
Recently, Schwartz has taken on a project with DARPA (Defense Advanced Research Projects Agency) to scour the globe in search of research initiatives that might produce major upheavals in the future. "I've been asked to set my sights 20 or 30 years out," he says with a grin, "to the place where the line between science and science fiction gets blurry."
A good place to start is an article published in 1993 (and well known among futurists) by Vernor Vinge, a mathematician and computer scientist at San Diego State University. Extrapolating from Moore's Law, Vinge reached the conclusion that machines would become sentient around 2030. "Shortly after that," Schwartz elaborates, "machines would become supersentient. And shortly after that—poof! We're obsolete."
Having concocted a scenario in which machine intelligence was guaranteed to exceed human intelligence, Vinge offered a modest proposal: Humans should devise a means of boosting their own brainpower, so they would have a fighting chance.
"Intelligence amplification" may sound far-out, but it's by no means a fantasy. "It turns out that there are several feasible pathways to do it," says Schwartz. "We're talking IQs of 300-plus!" Schwartz describes some of the ways in which technicians 20 or 30 years from now might endow a so-so brain with the quality of genius. He spoke of employing chemicals or electrical stimuli to goose our capacities for sensory perception and memory, of "smart drugs," of "real-time neural ballistic computation," and of the quest for an "interface between silicon and organics."
Schwartz races from one brain-bending idea to the next: from "complex adaptive matter" to "tether propulsion" and from quantum computing to an incipient subfield of biotech known as regenerative medicine, whereby biochemical treatments such as stem-cell injections might induce a damaged body to repair itself. "My guess is that in 30 or 40 years, we'll no longer do surgery, except in trauma cases," Schwartz predicts. "And people will look back on the medical practices of today and say, 'They cut patients open? How barbaric!'"
Finally, Schwartz turns to the breakthroughs that are about to burst forth across all the core sciences—particularly physics. Pointing to recent discoveries on the cosmic scale (that the universe is expanding at an accelerating rate) and at the atomic and subatomic levels (that neutrinos are not what we've long thought they were), which cast doubt on firmly established verities in physics, Schwartz declares, "There is a high likelihood that we are currently in a situation similar to a century ago, just before Einstein and Bohr put forth their ideas about relativity and quantum mechanics, which stood the Newtonian universe on its head."
Despite their diversity, Schwartz's prophecies share two qualities. First, all are made feasible by giant steps in computing. "Maybe the biggest oncoming revolution of all is the convergence of biology, physics, and chemistry at the nano scale," Schwartz says. "The only reason it's even conceivable is that computers let us mess around, with precision, with objects that are far too small for physical manipulation."
The upshot is plain. In Schwartz's view, "All of science is now information science." And all revolutions are now, at bottom, information revolutions.
The second common quality is that none of Schwartz's predictions fill him with unalloyed glee. That might not seem surprising, unless you recall that Schwartz was responsible for "the long boom"—not just the phrase but the book-length scenario behind it. In his book, Schwartz sketched a future where technology would spawn a 25-year run of blessed peace and bountiful prosperity, which would in turn "transform our world into the beginnings of a global civilization—a new civilization of civilizations that will blossom through the coming century."
While Schwartz still professes faith in the long boom, his optimism about the future, once so pristine it verged on Panglossian, is now far more measured. When asked about the challenges posed by the radical changes science is about to thrust upon us, Schwartz's reply is cold:
"The biggest political challenge in this new century is the conflict between the secular and the sacred—between secular societies and religious societies. And it's one that science and technology will only exacerbate. Cloning, life extension, genetic manipulation, superintelligence, sentient robots—this stuff has a way of really freaking people out, because it touches on fundamental issues of human identity. What is a human? Are we God-endowed or just chemicals? If I succeed in growing a cell out of chemicals, what does that say about God? If I can manufacture an iris or something even more beautiful, what does that say about God? These are the sorts of questions we'll confront. The issues will be profound. And the conflicts will be life-and-death."
Schwartz takes a breath.
"In the next few decades, I do believe people will kill each other in large numbers as a direct result of the advancement of science."
Think what you will about Schwartz's chilling conclusion, the gravity of his concerns reflects a core truth: Both the promise and the perils of the emerging 21st-century technologies far exceed those the information revolution has tossed up so far.
"In Washington, D.C., the debates we've had on information technology—on privacy, copyright, antitrust—have often seemed intense," says Tom Kalil, one of the Clinton administration's top technology advisers. "But when you think about whether genetic engineering might lead to a caste system, with classes of genetic haves and have-nots, or to someone creating designer pathogens to infect millions of people with a deadly disease—when you think about that, the issues of whether people should be able to download music for free or companies should be allowed to put cookies in your browser seem kind of—well, small."
No one has done more than Bill Joy to make vivid and haunting the dangers of the 21st-century technologies. In a 15,000-word essay in the April 2000 issue of Wired, Joy examined the risks posed by the unchecked development of genetic engineering, robotics, and nanotech. His fear was fueled by the fact that all three technologies were not only self-accelerating but also self-replicating, and that all were in the process (due to Moore's Law) of being democratized. Joy argued that this makes the new technologies uniquely hazardous, prone to new classes of accident and abuse. In the 20th century, the most fearsome technologies, such as nuclear, biological, or chemical weapons, required rare raw materials, highly classified information, and industrial facilities. In the 21st, all that will be needed is a computer and some brainpower.
For Joy, the potential consequences seem dire indeed: at worst, human extinction. Genetics and nanotech could produce a plague that wipes out either the species or the biosphere around us; farther off in the future, robotics might lead to a race of self-replicating superintelligent machines, which eventually will come, à la Vernor Vinge, to supersede us. Faced with this array of harrowing scenarios, Joy concluded that the only feasible path to salvation is "relinquishment: to limit development of the technologies that are too dangerous by limiting our pursuit of certain kinds of knowledge."
Joy's prescription was bound to be as controversial as his diagnosis, for it seemed that one of our most eminent scientists was calling for an immediate and draconian abandonment of broad areas of science. But that wasn't quite right, as Joy tells me. "What I said and what I mean is that we should stop doing things we deem too dangerous to do. Which is really just a tautology. It's a call for a rational collective dialog and an assessment of the costs and benefits of the new technologies. To say we should not do that which we judge too dangerous to do is really just a definition of sanity, right?"
If Joy's aim was to provoke a dialog, he succeeded magnificently and at the same time failed miserably. Joy's warnings have inspired a voluminous, impassioned, and articulate reaction. In government and academia, among scientists and civilians, in America and all over the world, the appetite for his thoughts has been virtually insatiable—everywhere, that is, except in Silicon Valley. There the reaction to Joy's argument has ranged from knee-jerk rejection ("he's just wrong") to mute silence. This was no surprise to Joy, but dispiriting nonetheless. "It's one thing to be aware that the industry is myopic and short-sighted," he says, "but it's another to have this kind of demo."
Myopia and short-sightedness may be part of the problem, but something deeper is also at work. As Brand explains it, "One of the big changes in the past 20 years is that basically all science and tech have become commercial science and tech." This development has been part and parcel of an era where the triumph of the free market was vindicated on a global scale, and the animal spirits of capitalism have raged as exuberantly as at any time in history. The system has produced unsurpassed innovation and wealth, but it has some glaring weaknesses. "You could say that capitalism is just a couple-hundred-year-old mechanism for speeding up science," Joy says. "But capitalism and the free market are not very good at saying 'pause,' let alone 'stop.'"
Politics, by contrast, is quite proficient at both; to no small extent, our system of government was designed explicitly to slow things down. So perhaps it's not surprising to learn that some of Joy's most receptive readers and most constructive critics have hailed from the realm of politics and public policy. Not long after his piece was published, Joy flew out to Harvard University to talk with professors at the Kennedy School of Government. The sessions were animated, thoughtful, intellectually rigorous. And although some of the faculty and researchers took issue with one or another of Joy's points and some differed with him about the scale of the risks or the timeframe for action, everyone respected the urgency and the quality of his crusade.
"We've been dealing with technologies of mass destruction for decades now," says Graham Allison, the former dean of the Kennedy School and now a professor specializing in national-security matters. "The kinds of threats Bill is talking about are unique in some ways, but in other ways they're not. Biotech may very well turn out to be as great or greater a threat in the first half of the 21st century as nukes were during the second half of the 20th. And the sooner we start thinking about how to deal with that in a rational and careful way, the better chance we'll have of averting the worst."
For some ardent free marketeers, Joy's visions and the audience he has earned among the governing classes have been a cause for alarm. George Gilder, the conservative technopundit, accuses Joy of having "unveiled the 21st century's leading rationale for anticapitalist repression and the revival of statism—a tonic for beleaguered socialists, a program and raison d'être for a new New Left." He predicts that Joy's ideas "will propel the Techno-Left into the vanguard, allowing it to absorb the Greens and become the main adversary of freedom and faith in this century."
Put aside the histrionics and the tone of hysteria and it's clear that Gilder and his allies are on to something. The reaction to Joy's philosophy outside the technology and science establishment—like the public's alarm over the imminent possibility of human cloning or the movement in Europe against genetically modified foods—is a sign that the recent shift in sentiment toward technology runs deeper than mere skepticism. Among educated people in many walks of life (including the tech world), there is a growing sense that, as Joy puts it, "we are being propelled into this new century with no plan, no control, no brakes," and that commercialism and the free market have no solutions to offer.
"The market has won the economic argument, and that's irreversible," says Schwartz. But there are plenty of other essential arguments, and here the market seems to have nothing of interest to say. "This is just one really obvious example," adds Kalil, "but it's nothing but a cop-out to say 'let the market decide' if we're going to do genetic engineering on the germ line."
As such questions increasingly come to the fore and as the inability or the unwillingness of the technology industry to address them becomes increasingly evident, government may step in to fill the void. For his part, Schwartz predicts that "a reassertion of the power of the state," though perhaps not by means of old-style regulation, is "almost inevitable." Brand agrees. "I can't help but think that some set of moral/ethical/political frameworks will become a part of the debate to a much greater degree than they are now, if for no other reason than there is just so much weird shit going on!"
The weirdness, of course, has barely begun. In the next 20 years, the digital revolution will enter its next phase, as computing and the Net help spawn second-order revolutions. These revolutions will offer nearly unimaginable rewards, but they will also raise issues that, as Brand says, "go to the essence of who we are as humans, as societies, as a civilization."
Until now, the questions posed by technology were raised and discussed mainly by technologists, and to a lesser extent by the industrialists who employ them. In the decades ahead, that insularity seems certain to be shattered, as the questions grow so urgent and overwhelming that the public has no choice but to address them head-on. Indeed, the widening and deepening of the national conversation over technology and its consequences is likely to be one of the less obvious but more notable changes held in store by the next 20 years—and maybe the most heartening. For just as the old saw has it that politics is too important to be left to the politicians, there is no doubt that we are entering an age where science is too important to be left to the scientists.