Cassandra will always be with us. I don’t mean whiners pining for a simpler time, halcyon days, community, blah blah blah. No, I mean voices warning of future dangers visible to anyone with the foresight, intelligence and time to follow a thought to its logical conclusion.

Almost two centuries ago, Mary Wollstonecraft Shelley ruminated on man’s hubris and the opportunities afforded by science and produced a humdinger of a novel. It has danced through our collective imagination ever since, offering a ready label for anything that seemed to overstep the proper bounds of human activity. “Frankenstein” yielded a few sleepless nights, some great flicks (and a few dogs), but didn’t much shape human behavior, judging from the stream of animals being cloned these days.

About the last person to take her warnings seriously is doing life-plus in a maximum-security lockup in the United States. Say what you will about Ted Kaczynski, a.k.a. the Unabomber — and make sure “murderer” and “criminally insane” are somewhere in there — but the logic that drove him is, sadly, not the product of a feverish imagination.

Just ask Bill Joy, whose essay in the April issue of Wired, “Why the future doesn’t need us,” has caused an uproar among the techno-literati. (OK, it’s a muffled uproar.) Joy is no Luddite. He is a software engineer and cofounder of Sun Microsystems.

That means he knows well the technology and the processes by which it is developed and deployed. And, as he dryly puts it, “my personal experience suggests we tend to overestimate our design abilities.”

The problem for Joy is that the critical technologies of the 21st century — robotics, genetic engineering and nanotechnology — are fundamentally different from previous “new technologies.” They can self-replicate. “A bomb is blown up only once — but one bot can become many and quickly get out of control.”

A short list of the dangers includes robot super-species deciding that we flesh-and-blood types are a nuisance; genetic engineering that goes awry (think of penicillin-resistant bacteria that lurk in hospitals); or nanobots that spread like pollen and have no natural enemies.

Worse still, awesome new computing powers and market globalization permit individuals to do what was formerly the province of governments. One person now has more computing power in his or her laptop than NASA did when men first went to the moon. That means the possibilities of malfeasance — remember Aum Shinrikyo? — and of pure mistakes are hugely increased.

Joy reaches a simple, chilling conclusion: “We are on the cusp of the further perfection of extreme evil.”

You don’t get too many statements like that from software engineers. In fact, about the only people using such vocabulary are evangelists, contemplating eternal damnation. Which is, unfortunately, where Joy is leading — in a manner of speaking.

“This is the first moment in the history of our planet when any species by its own voluntary actions has become a danger to itself as well as to vast numbers of others.” He has talked to the experts and concludes that the chance of our wiping ourselves off the planet ranges from 30 percent to better than even.

Can we do anything about this bleak future? Joy is not encouraging. For the most part, as a species we aren’t very good at saying no. Joy calls it “our bias toward instant familiarity and unquestioning acceptance.” If you were promised quasi-immortality on the condition that you gave up skin for silicon, what would you do?

Apart from science-fiction writers, who has been willing to wrestle with dystopia? Scientists focus on the task at hand; discouraging them from pressing the boundaries of their discipline — and of knowledge itself — well, it just isn’t human. Yet, writes Joy, “with each of these technologies, a sequence of small, individually sensible advances leads to an accumulation of great power and, concomitantly, great danger.”

Developing safeguards or shields isn’t the answer, at least not when accidents and unintended consequences are part of the problem. The very process of building protection could create problems just as bad as those we were trying to avoid.

Joy figures “the only realistic option is setting limits on the pursuit of certain kinds of knowledge.” That sounds outrageous. It should: It repudiates the foundation of human civilization as well as the drive that is an essential part of our humanity. Joy concedes he’d be morally obligated to abandon his life work if he felt it would make the world a more dangerous place: “I can now imagine such a day may come.”

Unfortunately, I don’t think that is enough. Joy draws inspiration from the unilateral U.S. decision in 1972 to give up its biological weapons. He considers that “a shining act of relinquishment.” Perhaps. But it is worth noting two things. First, recent revelations show the Soviets ignored the U.S. move and only accelerated their biological-weapons programs. (See M.F. Perutz’s recent article in the New York Review of Books for proof that Joy is not paranoid. The scientists never quite grasped the consequences of what they were doing.)

Second, when the U.S. had the option a few years ago of destroying the last batches of the smallpox virus in its federal laboratories — and thereby eradicating the disease from the planet — it hedged. Washington argued that it needed a batch to make vaccines, plainly uncomfortable with the idea of unilateral disarmament. The theory was pure deterrence: Even if the genie was out of the bottle, the bottle stayed on the shelf. No one threw it away.

Joy finds solace and inspiration in the words of Henry David Thoreau: “We will be rich in proportion to the number of things which we can afford to let alone.” It’s a nice touch, but the problem is he’s up against Stewart Brand: “You’re either part of the steamroller or part of the road.”

It may sound glib, but don’t forget two key parts of the Cassandra tale: She was right — and she was doomed to be ignored.

— Brad Glosserman