"Vernor Vinge. The Coming Technological Singularity: How to Survive in the Post-Human Era" - читать интересную книгу автора

threat to the human status quo. But Drexler argues that we can confine
such transhuman devices so that their results can be examined and
used safely. This is I. J. Good's ultraintelligent machine, with a
dose of caution. I argue that confinement is intrinsically
impractical. For the case of physical confinement: Imagine yourself
locked in your home with only limited data access to the outside,
to your masters. If those masters thought at a rate -- say -- one
million times slower than you, there is little doubt that over a
period of years (your time) you could come up with "helpful advice"
that would incidentally set you free. (I call this "fast thinking"
form of superintelligence "weak superhumanity". Such a "weakly
superhuman" entity would probably burn out in a few weeks of outside
time. "Strong superhumanity" would be more than cranking up the clock
speed on a human-equivalent mind. It's hard to say precisely what
"strong superhumanity" would be like, but the difference appears to be
profound. Imagine running a dog mind at very high speed. Would a
thousand years of doggy living add up to any human insight? (Now if
the dog mind were cleverly rewired and _then_ run at high speed, we
might see something different....) Many speculations about
superintelligence seem to be based on the weakly superhuman model. I
believe that our best guesses about the post-Singularity world can be
obtained by thinking on the nature of strong superhumanity. I will
return to this point later in the paper.)

Another approach to confinement is to build _rules_ into the
mind of the created superhuman entity (for example, Asimov's Laws
[3]). I think that any rules strict enough to be effective would also
produce a device whose ability was clearly inferior to the unfettered
versions (and so human competition would favor the development of
those more dangerous models). Still, the Asimov dream is a wonderful
one: Imagine a willing slave, who has 1000 times your capabilities in
every way. Imagine a creature who could satisfy your every safe wish
(whatever that means) and still have 99.9% of its time free for other
activities. There would be a new universe we never really understood,
but filled with benevolent gods (though one of _my_ wishes might be to
become one of them).

If the Singularity cannot be prevented or confined, just how bad
could the Post-Human era be? Well ... pretty bad. The physical
extinction of the human race is one possibility. (Or as Eric Drexler
put it of nanotechnology: Given all that such technology can do,
perhaps governments would simply decide that they no longer need
citizens!) Yet physical extinction may not be the scariest
possibility. Again, analogies: Think of the different ways we relate
to animals. Some of the crude physical abuses are implausible, yet....
In a Post-Human world there would still be plenty of niches where
human-equivalent automation would be desirable: embedded systems in
autonomous devices, self-aware daemons in the lower functioning of
larger sentients. (A strongly superhuman intelligence would likely be
a Society of Mind [16] with some very competent components.) Some