Since polynomials are among the simplest of functions, polynomial rings are of fundamental importance. In particular, they are the structures underpinning algebraic geometry. The data representation and algorithms for univariate polynomials are quite different from those for multivariate polynomials. Let R[x] be a ring of univariate polynomials over the ring R. Polynomials in R[x] are generally represented by dense vectors of coefficients from R, or by bit packing when R is a very small finite field. The complexity of operations on such polynomials is generally dominated by the degree of the input polynomials.
Let R[x1,…, xn] be a ring of multivariate polynomials over R. Polynomials belonging to the ring R[x1,…, xn] are represented as a sparse list of coefficient-monomial pairs, where a monomial is represented by a list of exponents. The complexity of operations on such polynomials is generally dominated by the number of terms in the polynomials. Most multivariate polynomials which arise in Computer Algebra are sparse; that is, the number of terms in a polynomial is typically very much smaller than the product of the degrees in all variables. The arithmetic algorithms in the multivariate case are thus radically different from those in the univariate case.
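As an illustration of this representation (a sketch only, not Magma's internal data structures), a sparse polynomial can be modelled as a map from exponent tuples to coefficients; the names below are invented for the example:

```python
# A minimal sketch of the sparse representation: a multivariate polynomial
# stored as coefficient-monomial pairs, each monomial being a tuple of
# exponents. Names are illustrative, not Magma's internals.

# f = 3*x^100*y^200 + 5*x*z^7 + 1 in R[x, y, z]
f = {(100, 200, 0): 3, (1, 0, 7): 5, (0, 0, 0): 1}

def num_terms(f):
    """The cost of sparse arithmetic is driven by this, not by the degrees."""
    return len(f)

def degrees(f):
    """Degree of f in each variable: the maximum exponent appearing."""
    return tuple(max(m[i] for m in f) for i in range(len(next(iter(f)))))

print(num_terms(f))   # 3
print(degrees(f))     # (100, 200, 7); dense storage would need ~101*201*8 slots
```

The contrast between the 3 stored terms and the roughly 160000 slots a dense representation would need is exactly the sparsity phenomenon described above.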
This chapter is concerned with basic operations on polynomials such as multiplication, GCD, and factorisation. The tools for computing with ideals may be found in the section on Multivariate Polynomial Rings and Their Ideals.
Univariate polynomial multiplication based on the Fast Fourier Transform (FFT) is the critical component of all asymptotically fast polynomial algebra. The basic idea behind the use of the FFT method for polynomial multiplication is to evaluate the degree-n input polynomials at O(n) distinct roots of unity in a suitable ring, multiply these evaluations pointwise, and finally interpolate the pointwise products to obtain the polynomial product. The evaluation and interpolation steps are done by the FFT operation in time O(n log(n)) instead of the O(n^2) time of the classical method.
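The evaluate/pointwise-multiply/interpolate scheme can be sketched with a number-theoretic transform, i.e. an FFT over Z/p for a prime p chosen so that the needed roots of unity exist. This is an illustrative sketch, not Magma's implementation:

```python
# FFT-style multiplication in (Z/p)[x] via a number-theoretic transform.
# p = 998244353 = 119*2^23 + 1 supplies the required roots of unity,
# and 3 is a primitive root mod p.

P, G = 998244353, 3

def ntt(a, invert=False):
    n = len(a)                      # n must be a power of two
    a = a[:]
    j = 0                           # bit-reversal permutation
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(G, (P - 1) // length, P)    # primitive length-th root of unity
        if invert:
            w = pow(w, P - 2, P)
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * wn % P
                a[k], a[k + length // 2] = (u + v) % P, (u - v) % P
                wn = wn * w % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        a = [x * n_inv % P for x in a]
    return a

def poly_mul(f, g):
    """Multiply coefficient vectors f, g over Z/P: evaluate, multiply, interpolate."""
    n = 1
    while n < len(f) + len(g) - 1:
        n <<= 1
    fa = ntt(f + [0] * (n - len(f)))
    ga = ntt(g + [0] * (n - len(g)))
    return ntt([x * y % P for x, y in zip(fa, ga)], invert=True)[:len(f) + len(g) - 1]

# (1 + x)^2 = 1 + 2x + x^2
print(poly_mul([1, 1], [1, 1]))     # [1, 2, 1]
```

The two `ntt` calls are the O(n log(n)) evaluation step and the final inverse `ntt` is the interpolation step; only the pointwise product in between involves both inputs.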
The Schönhage–Strassen algorithm is a complete algorithm for multiplication based on the FFT method, using the ring of integers modulo 2^(2^k) + 1 for a suitable integer k so that sufficient roots of unity are present. It is particularly efficient for multiplying very large integers or polynomials with large coefficients. For example, using Magma on a 3.2 GHz Xeon processor, one can multiply two million-decimal-digit integers in 0.025 seconds or two integer polynomials, each with degree 1000 and 1000-bit coefficients, in 0.026 seconds.
In characteristic 2, an algorithm of Cantor is used for polynomial multiplication, which follows the same basic approach as the FFT-based algorithms but which uses all elements of a suitable vector space as evaluation points instead of roots of unity. For example, on a 3.2 GHz Xeon processor, one can multiply two degree-million polynomials in GF(2)[x] in 0.040 seconds.
All basic arithmetic operations in R[x] (such as division, GCD, and resultant) are reduced in Magma to multiplication, using algorithms of Borodin and Moenck from the 1970s. These algorithms have complexity O(n log(n) log(log(n))) instead of the classical complexity O(n^2), where n is the degree of the input, and so are asymptotically fast. Furthermore, these algorithms work very well in practice even for moderately small degrees (in the hundreds), so this is not just a theoretical result.
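The flavour of such a reduction to multiplication can be illustrated by fast division with remainder: the inverse of the reversed divisor is computed as a power series by Newton iteration, so the whole division costs only a few multiplications. This is a sketch over a small prime field with invented helper names; a real implementation would plug an FFT-based product in place of the classical `mul`:

```python
# Division with remainder in GF(p)[x] reduced to multiplication:
# invert the reversed divisor as a power series by Newton iteration,
# then recover quotient and remainder. Illustrative only.

p = 97

def mul(f, g):          # classical product; stand-in for an FFT product
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

def inv_series(f, n):
    """Power-series inverse of f mod x^n by Newton iteration g <- g*(2 - f*g)."""
    g = [pow(f[0], p - 2, p)]           # f[0] must be a unit
    k = 1
    while k < n:
        k *= 2
        fg = mul(f[:k], g)[:k]
        t = [(-c) % p for c in fg]
        t[0] = (t[0] + 2) % p           # t = 2 - f*g mod x^k
        g = mul(g, t)[:k]
    return g[:n]

def divmod_poly(f, g):
    """Quotient and remainder of f by g via reversal + series inversion."""
    if len(f) < len(g):
        return [0], f
    n = len(f) - len(g) + 1             # quotient length
    fr, gr = f[::-1], g[::-1]           # reverse coefficient order
    q = mul(fr[:n], inv_series(gr, n))[:n][::-1]
    r = [(a - b) % p for a, b in zip(f, mul(g, q))][:len(g) - 1]
    while r and r[-1] == 0:
        r.pop()
    return q, r

# divide x^3 + 2x + 5 by x + 1: quotient x^2 - x + 3, remainder 2 (over GF(97))
print(divmod_poly([5, 2, 0, 1], [1, 1]))
```

Each Newton step doubles the precision of the inverse, so the total cost is a constant number of full-size multiplications; GCDs and resultants are then built on fast division in a similar spirit.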
Multivariate arithmetic is based on a suite of algorithms which manage the sparse representation. Multiplication is performed by the Monagan–Pearce heap-based algorithm, saving much time and space over traditional methods.
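The heap-based idea can be sketched as an |f|-way merge that produces the terms of the product in decreasing exponent order, so only O(|f|) pending partial products are held at once. This is an illustrative sketch in the spirit of the technique, not the Monagan–Pearce implementation itself:

```python
import heapq

# Heap-based sparse multiplication sketch: row i of the merge walks the
# products of the i-th term of f with successive terms of g; a heap keyed
# on exponent emits the result terms in decreasing order. Illustrative only.

def heap_mul(f, g):
    """f, g: lists of (exponent, coeff) pairs sorted by decreasing exponent."""
    if not f or not g:
        return []
    h = []                              # result terms
    heap = [(-(f[i][0] + g[0][0]), i, 0) for i in range(len(f))]
    heapq.heapify(heap)
    while heap:
        e = -heap[0][0]
        c = 0
        while heap and -heap[0][0] == e:    # combine all terms of exponent e
            _, i, j = heapq.heappop(heap)
            c += f[i][1] * g[j][1]
            if j + 1 < len(g):              # advance this row of the merge
                heapq.heappush(heap, (-(f[i][0] + g[j + 1][0]), i, j + 1))
        if c:                               # drop cancelled terms
            h.append((e, c))
    return h

# (x^5 + 3)(x^4 - 2) = x^9 - 2x^5 + 3x^4 - 6
print(heap_mul([(5, 1), (0, 3)], [(4, 1), (0, -2)]))
```

Because terms come out already sorted and combined, no large intermediate term list ever needs to be built and re-sorted, which is where the time and space savings come from.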
For the calculation of polynomial greatest common divisors (GCDs), Magma has implementations of the generic Euclidean algorithm for elements of K[x] and the generic recursive subresultant GCD algorithm for elements of K[x1,…,xn]. For several important field types, there are specific optimised algorithms for computing GCDs in K[x].
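The generic Euclidean algorithm for K[x] can be sketched over a small prime field as follows (coefficient lists run from low to high degree; the names are illustrative):

```python
# Euclidean GCD in GF(p)[x]: repeated division with remainder,
# with the result normalised to be monic. Illustrative sketch.

p = 13

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def poly_mod(f, g):
    """Remainder of f modulo g over GF(p) by classical long division."""
    f = f[:]
    inv_lead = pow(g[-1], p - 2, p)
    while len(f) >= len(g):
        c = f[-1] * inv_lead % p
        shift = len(f) - len(g)
        for i, b in enumerate(g):
            f[shift + i] = (f[shift + i] - c * b) % p
        trim(f)
        if not f:
            break
    return f

def gcd_poly(f, g):
    """Monic GCD of f and g in GF(p)[x] via repeated remainders."""
    while g:
        f, g = g, poly_mod(f, g)
    inv_lead = pow(f[-1], p - 2, p)
    return [c * inv_lead % p for c in f]

# gcd((x+1)(x+2), (x+1)(x+3)) = x + 1 over GF(13)
print(gcd_poly([2, 3, 1], [3, 4, 1]))   # [1, 1], i.e. x + 1
```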
For computing the GCD of multivariate polynomials over any of the above fields, a fast evaluation/interpolation algorithm is also provided (with both dense and sparse variants) which avoids intermediate coefficient blowup.
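The evaluation/interpolation idea can be sketched in the dense bivariate case: specialise one variable at several points, take monic univariate GCDs of the images, and recover the coefficients of the other variable by Lagrange interpolation. The sketch below assumes the GCD is monic in x and that no evaluation point is unlucky (a production algorithm must detect and discard such points); all names are illustrative:

```python
# Dense evaluation/interpolation GCD over GF(p) for F, G in GF(p)[t][x],
# stored as lists (by x-degree) of t-coefficient lists. Illustrative only.

p = 7

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def umod(f, g):                       # univariate remainder over GF(p)
    f = f[:]
    inv = pow(g[-1], p - 2, p)
    while len(f) >= len(g):
        c = f[-1] * inv % p
        for i, b in enumerate(g):
            f[len(f) - len(g) + i] = (f[len(f) - len(g) + i] - c * b) % p
        trim(f)
    return f

def ugcd(f, g):                       # monic univariate GCD
    while g:
        f, g = g, umod(f, g)
    inv = pow(f[-1], p - 2, p)
    return [c * inv % p for c in f]

def eval_t(F, a):                     # specialise t = a
    return trim([sum(c * pow(a, i, p) for i, c in enumerate(cf)) % p for cf in F])

def lagrange(points):                 # interpolate [(a_i, v_i)] over GF(p)
    poly = [0] * len(points)
    for i, (a, v) in enumerate(points):
        basis, denom = [1], 1
        for j, (b, _) in enumerate(points):
            if j != i:                # basis *= (t - b)
                basis = [((basis[k - 1] if k >= 1 else 0)
                          - b * (basis[k] if k < len(basis) else 0)) % p
                         for k in range(len(basis) + 1)]
                denom = denom * (a - b) % p
        scale = v * pow(denom, p - 2, p) % p
        for k, c in enumerate(basis):
            poly[k] = (poly[k] + scale * c) % p
    return trim(poly)

def bivariate_gcd(F, G, npoints):
    """GCD by evaluation at t = 0..npoints-1 and coefficientwise interpolation."""
    images = [(a, ugcd(eval_t(F, a), eval_t(G, a))) for a in range(npoints)]
    d = len(images[0][1])             # assumes every image has the same x-degree
    return [lagrange([(a, h[k] if k < len(h) else 0) for a, h in images])
            for k in range(d)]

# gcd((x+t)(x+1), (x+t)(x+2)) = x + t over GF(7)
F = [[0, 1], [1, 1], [1]]             # x^2 + (t+1)x + t, listed by x-degree
G = [[0, 2], [2, 1], [1]]
print(bivariate_gcd(F, G, 3))         # [[0, 1], [1]], i.e. x + t
```

Because the univariate GCDs are computed on small images, the large intermediate coefficients that plague a direct subresultant computation never arise, which is the blowup-avoidance property mentioned above.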
Similarly, Magma contains generic algorithms for computing the resultant of both univariate and multivariate polynomials over any field, but there are also special multiplication-based, modular, and interpolation algorithms for the important types of computable fields.
For factorisation in K[x], where K is the finite field GF(q), there are two main algorithms. The first is the Berlekamp algorithm, which uses linear algebra and is applicable to very small K, particularly when the input polynomial is sparse. Otherwise, the general von zur Gathen–Kaltofen–Shoup algorithm is used, which is the state of the art. This is a major extension of the Cantor–Zassenhaus algorithm (which involves taking GCDs of the input polynomial with x^(q^i) - x for successive i) and includes many optimisations directly related to the FFT algorithm, such as multiplying by the inverse of various fixed moduli during the course of the algorithm with the precomputation of the FFT images of such inverses. (Note that these special components are also explicitly used in the elliptic curve point counting algorithm over finite fields.)
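The distinct-degree step built on gcd(f, x^(q^i) - x) can be sketched as follows for q = p prime, assuming a monic squarefree input (all helper names are illustrative, and a real implementation uses fast modular composition and FFT-based products throughout):

```python
# Distinct-degree splitting over GF(p): gcd(f, x^(p^i) - x) extracts the
# product of the irreducible factors of f of degree exactly i.
# Coefficient lists are low-to-high. Illustrative sketch only.

p = 5

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def pmul(f, g):
    h = [0] * (len(f) + len(g) - 1) if f and g else []
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return trim(h)

def pdivmod(f, g):
    f, q = f[:], [0] * max(len(f) - len(g) + 1, 1)
    inv = pow(g[-1], p - 2, p)
    while len(f) >= len(g):
        c = f[-1] * inv % p
        q[len(f) - len(g)] = c
        for i, b in enumerate(g):
            f[len(f) - len(g) + i] = (f[len(f) - len(g) + i] - c * b) % p
        trim(f)
    return trim(q), f

def monic(f):
    inv = pow(f[-1], p - 2, p)
    return [c * inv % p for c in f]

def pgcd(f, g):
    while g:
        f, g = g, pdivmod(f, g)[1]
    return monic(f)

def psub(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0)) % p
                 for i in range(n)])

def powmod(h, e, m):                     # h^e mod m by repeated squaring
    r = [1]
    while e:
        if e & 1:
            r = pdivmod(pmul(r, h), m)[1]
        h = pdivmod(pmul(h, h), m)[1]
        e >>= 1
    return r

def distinct_degree(f):
    """Return [(i, product of the irreducible factors of f of degree i)]."""
    out, h, i = [], [0, 1], 0            # h starts as x
    while len(f) > 1:
        i += 1
        h = powmod(h, p, f)              # now h = x^(p^i) mod f
        g = pgcd(f, psub(h, [0, 1]))     # gcd(f, x^(p^i) - x)
        if len(g) > 1:
            out.append((i, g))
            f = pdivmod(f, g)[0]         # strip the degree-i part
            if len(f) > 1:
                h = pdivmod(h, f)[1]
    return out

# f = (x+1)(x+2)(x^2+2) over GF(5): linear part found at i=1, quadratic at i=2
print(distinct_degree([4, 1, 4, 3, 1]))
```

After this splitting, the equal-degree stage of Cantor–Zassenhaus separates the individual factors within each product by random GCD probes.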
Magma also contains a special optimised test for irreducibility of high-degree sparse polynomials over small finite fields. In 2009, R. Brent and P. Zimmermann discovered a series of record high-degree primitive trinomials over GF(2), and A. Steel used this algorithm in Magma to verify the irreducibility of the polynomials for them. A typical example is x^42643801 + x^3706066 + 1 ∈ GF(2)[x], which was proven primitive in about 18 days by Magma.
Factorisation in ℤ[x] is of critical importance because it underpins all forms of polynomial factorisation in characteristic zero. The basic technique is to factor the input polynomial modulo some prime p, lift the partial factorisation modulo p^k for suitable k via Hensel lifting (which reduces to FFT-based multiplication), and finally combine the lifted factors to obtain the true factors over ℤ. The standard combination phase had exponential worst-case complexity until M. van Hoeij's novel practical algorithm (2001), which uses the LLL basis reduction algorithm to solve a knapsack problem derived from the traces of the lifted factors. Magma has an optimised version of this algorithm, originally developed by A. Steel in discussion with van Hoeij in 2001. As a consequence, the combination phase rarely dominates the computation. Factorisation in ℚ[x] is also trivially reduced to the case of ℤ[x].
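A single linear Hensel step can be sketched as follows: given f = g·h (mod p) with g and h coprime mod p, cofactors s, t satisfying s·g + t·h = 1 (mod p) (from the extended Euclidean algorithm) yield a factorisation mod p^2. Real implementations lift quadratically and normalise degrees; this hedged sketch does neither, and all names are illustrative:

```python
# One linear Hensel lifting step for integer polynomials (low-to-high
# coefficient lists): f = g*h (mod p) is refined to f = g'*h' (mod p^2).

p = 5

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def zmul(f, g):                            # polynomial product over Z
    if not f or not g:
        return []
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

def zsub(f, g):                            # polynomial difference over Z
    n = max(len(f), len(g))
    return trim([(f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0)
                 for i in range(n)])

def modp(f, m):
    return trim([c % m for c in f])

def pdivmod(f, g):                         # division with remainder over GF(p)
    f, g = modp(f, p), modp(g, p)
    q = [0] * max(len(f) - len(g) + 1, 1)
    inv = pow(g[-1], p - 2, p)
    while len(f) >= len(g):
        c = f[-1] * inv % p
        q[len(f) - len(g)] = c
        for i, b in enumerate(g):
            f[len(f) - len(g) + i] = (f[len(f) - len(g) + i] - c * b) % p
        trim(f)
    return trim(q), f

def ext_gcd(g, h):
    """s, t over GF(p) with s*g + t*h = 1 (g, h assumed coprime mod p)."""
    r0, r1 = modp(g, p), modp(h, p)
    s0, s1, t0, t1 = [1], [], [], [1]
    while r1:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, modp(zsub(s0, zmul(q, s1)), p)
        t0, t1 = t1, modp(zsub(t0, zmul(q, t1)), p)
    inv = pow(r0[0], p - 2, p)             # normalise the constant gcd to 1
    return modp([inv * c for c in s0], p), modp([inv * c for c in t0], p)

def hensel_step(f, g, h):
    """Given f = g*h (mod p), return g', h' with f = g'*h' (mod p^2)."""
    s, t = ext_gcd(g, h)
    e = [c // p for c in zsub(f, zmul(g, h))]   # (f - g*h)/p, exact over Z
    u, v = modp(zmul(t, e), p), modp(zmul(s, e), p)
    def lift(a, w):                             # a + p*w, reduced mod p^2
        n = max(len(a), len(w))
        return trim([((a[i] if i < len(a) else 0)
                      + p * (w[i] if i < len(w) else 0)) % p ** 2
                     for i in range(n)])
    return lift(g, u), lift(h, v)

# x^2 + 5x - 1 = (x - 1)(x + 1) mod 5 lifts to (x + 14)(x + 16) mod 25
print(hensel_step([-1, 5, 1], [4, 1], [1, 1]))
```

The correction works because g'·h' = g·h + p(u·h + v·g) mod p^2 and u·h + v·g = (t·h + s·g)·e = e mod p, so the defect (f - g·h)/p is cancelled exactly.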
Factorisation in K[x], where K = ℚ(α) is a number field, is achieved via Trager's algorithm, which relies on computing resultants (to eliminate α) and then an associated factorisation in ℚ[x]. Prior to 2001, this algorithm was impractical in many cases where the combination phase was too difficult, but again van Hoeij's algorithm for ℤ[x] has removed the difficulty; Trager's complete algorithm now runs in polynomial time and is very effective in practice. Within Magma, Trager's algorithm has also been generalised to allow factorisation in K[x] where K is any algebraic field of characteristic zero with an arbitrary number of transcendental and algebraic generators.
A general algorithm for factoring polynomials over non-perfect function fields of small characteristic (with an arbitrary number of algebraic and transcendental generators) was developed within Magma by A. Steel in 2005. Problems of inseparability make computations with such fields (e.g., GF(2)(t)) difficult. The algorithm works by lifting to purely inseparable extension fields over which the relevant polynomials become separable and Trager's algorithm can be applied. Finally, a Gröbner basis elimination technique is used to compute the result over the original field. No other Computer Algebra System has support for factorisation over such general function fields.
Factorisation in the bivariate polynomial ring R = GF(q)[x,y] is performed by an algorithm developed by M. van Hoeij and A. Steel in 2002. It uses ideas similar to the van Hoeij ℤ[x] combination algorithm with relations built on traces, but it uses direct exact linear algebra (no LLL or approximation is needed). The time is dominated by asymptotically fast Hensel lifting over power series, which maps down to multiplication in GF(q^k)[[y]][x]. As an example, let f ∈ GF(5)[x,t] be the following bivariate polynomial (arising in the representation theory of Dickson groups):
The polynomial has very high degrees (78125 and 3750 in x, t respectively), but is extremely sparse (24 terms). Magma factors f in about 14 minutes and there are 6 irreducible factors which are dense (about 6000 terms each) and have x-degrees: 1, 15624, 15750, 15750, 15500, 15500 respectively. A typical large step during the Hensel lifting phase is reduced to multiplication of two polynomials in GF(5^3)[[t]][x], which itself is reduced to multiplication of two integers, each having about 45 million decimal digits and which are multiplied in 2.7 seconds. This shows how Magma can exploit FFT-based multiplication in characteristic zero even when factoring polynomials in very small prime characteristic!
General multivariate factorisation (for an arbitrary number of variables) is performed in Magma by mapping the input to bivariate images and then using sparse ideal-adic Hensel lifting. The final combination phase is in general easy, since the bivariate evaluation factorisation pattern is ‘generic’ and so typically reflects the final pattern closely; thus there is rarely a large number of combinations to consider.