The end of the world is back again. Hey, I know I've missed it these past few months. Once Y2K turned out not to be a global disaster - thank you, IT people everywhere! - it was back to the same old life-goes-on grind. But now Bill Joy, one of the founders of Sun Microsystems, is afraid that robotics, genetic engineering and nanotechnology could destroy humanity in our lifetime.
Or, more to the point, in Bill Joy's lifetime.
Joy's essay "Why the Future Doesn't Need Us", published in the April issue of Wired magazine, quotes Theodore "Unabomber" Kaczynski, the Dalai Lama, Arthur C. Clarke, Thoreau, Nietzsche and some very smart, very technical guys Joy knows personally. His fear is plain, and his argument is simple: technology isn't safe anymore. Self-replicating robots and gene-spliced organisms pose a new and far greater danger than anything we've seen before.
They're more threatening than nukes or other weapons of mass destruction, Joy insists. And we need a new dialogue, a new ethic, a new brotherhood to keep these threats from destroying us.
You can read the essay yourself on the Web at www.wired.com/wired/archive/8.04/joy.html. It's very earnest. It's very heartfelt. It's very frightened. And it's very, very wrong.
Not that part about needing a serious discussion of the dangers of technology and the importance of ethics and community. We certainly need all that. We've needed it for every world-changing technology in history.

But this isn't the end of the world. Oh, it could be - just as the nuclear arms race or one of the deadly bugs and chemicals we've cooked up for years could end it all. We've seen other species wiped out. We're not immune to extinction. We could do ourselves in. But it hasn't happened yet.
Naturally, some people told Joy exactly that. "Many other people," Joy writes, "who knew about the dangers" told him that these problems aren't new and that his arguments are "already old hat".
And Joy is clearly baffled that they aren't as alarmed about these threats as he is. "I don't know where these people hide their fear," he writes.
How can they not worry that in 30 years, highly productive intelligent machines might idle all human workers? Maybe because people have feared automation for centuries. But today, with more technology and higher productivity than any other nation on earth, we still work longer hours, and more jobs go unfilled. Maybe the risk of everyone being put out of work by technology isn't so large after all.
How can they not fear that genetically engineered plants and animals could wreak ecological havoc? Maybe because we've wreaked plenty of ecological havoc for decades without gene splicing, from kudzu that tears down telephone poles to antibiotic-proof bacteria to toxic wastes working their way up the food chain. And despite the problems, we're still here.