"Bernstein fast gcd notes" - читать интересную книгу автора

> (in fact one has improvements on euclid due to Aho Hopcroft and Ullman
> which make it O(n^(1 + epsilon))

Actually, the improvement on Euclid's algorithm was published in 1938 by
D. H. Lehmer.

Knuth in 1971 observed that Lehmer's method, in conjunction with fast
multiplication, takes essentially linear time when the parameters are
chosen sensibly.
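To make Lehmer's trick concrete: most quotients in the remainder sequence are determined by the leading bits of the operands alone, so one can run Euclid on single-word approximations, accumulate the quotients in a 2x2 matrix, and only occasionally touch the full-precision numbers. Here is a minimal Python sketch along the lines of Knuth's Algorithm L (the function name and `word_bits` parameter are mine, and this is the classical quadratic version, not the fast-multiplication variant):

```python
import math

def lehmer_gcd(a, b, word_bits=30):
    # Sketch of Lehmer's gcd: do most Euclid steps on the leading
    # word_bits bits of a and b, falling back to full-precision
    # division only when the leading bits stop deciding the quotient.
    a, b = abs(a), abs(b)
    if a < b:
        a, b = b, a
    while b.bit_length() > word_bits:
        shift = a.bit_length() - word_bits
        x, y = a >> shift, b >> shift      # leading bits of a and b
        A, B, C, D = 1, 0, 0, 1            # accumulated quotient matrix
        while True:
            if y + C == 0 or y + D == 0:
                break
            q = (x + A) // (y + C)
            if q != (x + B) // (y + D):
                break                      # leading bits no longer decide q
            A, C = C, A - q * C
            B, D = D, B - q * D
            x, y = y, x - q * y
        if B == 0:
            a, b = b, a % b                # no progress: one full division
        else:
            a, b = A * a + B * b, C * a + D * b
    return math.gcd(a, b)                  # finish on single-word operands
```

The two floor divisions in the inner test bracket the true quotient from below and above; as long as they agree, the single-precision step is guaranteed to match the full-precision Euclid step.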

Schoenhage in 1971 chose parameters very carefully, and showed that one
could compute the continued fraction of a ratio of n-digit numbers in
time O(M(n) log n) where M(n) is the time to multiply n-digit numbers.
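For reference, the continued fraction of a/b is nothing more than the sequence of quotients produced by Euclid's algorithm; here is the plain quadratic loop (not Schoenhage's O(M(n) log n) recursion, which gets the same quotients by working recursively on the top halves of the numbers):

```python
def continued_fraction(a, b):
    # The partial quotients of a/b are exactly the quotients that
    # appear in Euclid's algorithm on (a, b).
    quotients = []
    while b:
        quotients.append(a // b)
        a, b = b, a % b
    return quotients
```

For example, `continued_fraction(355, 113)` yields `[3, 7, 16]`, i.e. 355/113 = 3 + 1/(7 + 1/16).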

The story doesn't end here, for two reasons. First, Euclid's algorithm
is critical for k[x] as well as Z; Schoenhage's method can be simplified
considerably for k[x]. Second, one can reduce the time of Schoenhage's
method by a big constant factor, especially in applications that only
need (e.g.) a selected pair of convergents.
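Euclid's algorithm carries over verbatim to k[x]: repeatedly replace the larger-degree polynomial by its remainder. A minimal sketch over GF(p), with polynomials as coefficient lists (lowest degree first, no trailing zeros; the helper names are mine):

```python
def poly_divmod(a, b, p):
    # Divide a by b over GF(p); b must be nonzero (no trailing zeros).
    a = a[:]
    inv = pow(b[-1], -1, p)                # inverse of leading coefficient
    q = [0] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b):
        d = len(a) - len(b)
        c = a[-1] * inv % p
        q[d] = c
        for i, bc in enumerate(b):         # subtract c * x^d * b from a
            a[i + d] = (a[i + d] - c * bc) % p
        while a and a[-1] == 0:            # strip trailing zeros
            a.pop()
    return q, a

def poly_gcd(a, b, p):
    # Euclid's algorithm in GF(p)[x]; result is unique up to a
    # nonzero scalar factor.
    while b:
        _, r = poly_divmod(a, b, p)
        a, b = b, r
    return a
```

For example, over GF(5) the gcd of (x-1)(x-2) and (x-1)(x-3) comes out as `[4, 1]`, i.e. x + 4 = x - 1. Note that each quotient here costs one coefficient inversion and a block of scalar multiplications; there is no carry propagation, which is one reason the k[x] case simplifies.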

Moenck in 1973 stated a reasonably simple gcd algorithm for k[x] using
Schoenhage's method, and claimed incorrectly that the algorithm would
also work for Z. This is where Aho, Hopcroft, and Ullman show up: they
repeated and oversimplified Moenck's algorithm, producing an algorithm
that doesn't even work for polynomials.

Brent, Gustavson, and Yun in 1980 gave several useful speedups of
Moenck's algorithm. They also pointed out (but neither stated explicitly
nor proved) a generalization of Moenck's algorithm to compute any pair
of convergents. Montgomery in 1992 independently stated and proved a
similar generalization with some of the same speedups.
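For orientation, here is the quadratic baseline that those generalizations speed up: the convergents p_k/q_k of a/b satisfy p_k = a_k p_{k-1} + p_{k-2} (likewise for q_k), where a_k are the Euclid quotients, so any selected pair can be read off by running the recurrence (function name mine):

```python
def convergents(a, b):
    # Run Euclid on a/b, tracking the numerator/denominator recurrence
    # p_k = q * p_{k-1} + p_{k-2} (and likewise for denominators).
    p0, q0, p1, q1 = 0, 1, 1, 0
    out = []
    while b:
        q = a // b
        a, b = b, a - q * b
        p0, q0, p1, q1 = p1, q1, q * p1 + p0, q * q1 + q0
        out.append((p1, q1))
    return out
```

The last convergent is always a/g, b/g with g = gcd(a, b); the subquadratic algorithms discussed above reach a selected convergent without computing all the intermediate ones at full precision.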

---Dan