The aim of the algorithm of Lenstra, Lenstra and Lovász (LLL) is to reduce bases of Euclidean lattices. Since its introduction, it has proved central in many areas, ranging from algorithmic number theory to public-key cryptanalysis. Given as input a basis of a Euclidean lattice in ℤ^d with integer vectors of norms smaller than B, the LLL algorithm computes a so-called LLL-reduced basis in time O(d^6 log^3 B), using arithmetic operations on integers of length O(d log B). This running time is too large to tackle lattice bases of high dimension or with large entries. Instead of the textbook LLL, one uses in practice floating-point variants, in which the integer/rational arithmetic within the Gram-Schmidt orthogonalization (central and cost-dominant in LLL) is replaced by floating-point arithmetic with small mantissas.
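To make the textbook algorithm concrete, here is a minimal sketch of LLL reduction in Python, using exact rational arithmetic (`fractions.Fraction`) for the Gram-Schmidt step, which is precisely the part the abstract identifies as cost-dominant. The function name, the naive full recomputation of the orthogonalization after each update, and the default parameter δ = 3/4 are illustrative choices, not part of the original text; a real implementation updates the Gram-Schmidt data incrementally.

```python
from fractions import Fraction

def lll_reduce(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integer basis (list of lists of ints).

    Illustrative sketch: uses exact rational Gram-Schmidt, recomputed
    from scratch after every basis update, so it is correct but slow --
    the very cost that floating-point variants such as L2 avoid.
    """
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gso(b):
        # Gram-Schmidt orthogonalization: b*_i and coefficients mu[i][j].
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        bstar, mu = gso(b)
        # Size-reduce b_k against the earlier basis vectors.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gso(b)  # refresh the mu's (simple, not fast)
        # Lovász condition: keep going forward, or swap and step back.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)

    return [[int(x) for x in v] for v in b]
```

On the classic example basis {(1,1,1), (-1,0,2), (3,5,6)}, the reduced basis contains the short vector (0,1,0); the LLL guarantee ‖b₁‖² ≤ 2^(d-1)·λ₁² then bounds the first output vector's squared norm by 4 here.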
In this talk, after some reminders on the LLL algorithm, I will describe the L2 algorithm. It is a natural floating-point variant of the LLL algorithm that always outputs LLL-reduced bases, in time O(d^5 (d + log B) log B). The code derived from this algorithm is the fastest available so far.