Bill Joy, "Why Does the Future Not Need Us"

long-term - four years ago he started the Long Now Foundation, which is building
a clock designed to last 10,000 years, in an attempt to draw attention to the
pitifully short attention span of our society. (See "Test of Time," Wired 8.03, page
78.)

So I flew to Los Angeles for the express purpose of having dinner with Danny and
his wife, Pati. I went through my now-familiar routine, trotting out the ideas and
passages that I found so disturbing. Danny's answer - directed specifically at
Kurzweil's scenario of humans merging with robots - came swiftly, and quite
surprised me. He said, simply, that the changes would come gradually, and that
we would get used to them.

But I guess I wasn't totally surprised. I had seen a quote from Danny in
Kurzweil's book in which he said, "I'm as fond of my body as anyone, but if I can
be 200 with a body of silicon, I'll take it." It seemed that he was at peace with
this process and its attendant risks, while I was not.

While talking and thinking about Kurzweil, Kaczynski, and Moravec, I suddenly
remembered a novel I had read almost 20 years ago - The White Plague, by Frank
Herbert - in which a molecular biologist is driven insane by the senseless murder
of his family. To seek revenge he constructs and disseminates a new and highly
contagious plague that kills widely but selectively. (We're lucky Kaczynski was a
mathematician, not a molecular biologist.) I was also reminded of the Borg of Star
Trek, a hive of partly biological, partly robotic creatures with a strong destructive
streak. Borg-like disasters are a staple of science fiction, so why hadn't I been
more concerned about such robotic dystopias earlier? Why weren't other people
more concerned about these nightmarish scenarios?

Part of the answer certainly lies in our attitude toward the new - in our bias
toward instant familiarity and unquestioning acceptance. Accustomed to living
with almost routine scientific breakthroughs, we have yet to come to terms with
the fact that the most compelling 21st-century technologies - robotics, genetic
engineering, and nanotechnology - pose a different threat than the technologies
that have come before. Specifically, robots, engineered organisms, and nanobots
share a dangerous amplifying factor: They can self-replicate. A bomb is blown up
only once - but one bot can become many, and quickly get out of control.

Much of my work over the past 25 years has been on computer networking, where
the sending and receiving of messages creates the opportunity for out-of-control
replication. But while replication in a computer or a computer network can be a
nuisance, at worst it disables a machine or takes down a network or network
service. Uncontrolled self-replication in these newer technologies runs a much
greater risk: a risk of substantial damage in the physical world.

Each of these technologies also offers untold promise: The vision of near
immortality that Kurzweil sees in his robot dreams drives us forward; genetic
engineering may soon provide treatments, if not outright cures, for most
diseases; and nanotechnology and nanomedicine can address yet more ills.
Together they could significantly extend our average life span and improve the
quality of our lives. Yet, with each of these technologies, a sequence of small,