CSE 206A: Lattice Algorithms and Applications Spring 2007

Lecture 2: Lattices and Bases
Lecturer: Daniele Micciancio Scribe: Daniele Micciancio
Motivated by the many applications described in the first lecture, today we start a
systematic study of point lattices and related algorithms.
1 Lattices and Vector spaces
Geometrically, a lattice can be defined as the set of intersection points of an infinite, regular,
but not necessarily orthogonal, n-dimensional grid. For example, the set of integer vectors
Z^n is a lattice. In computer science, lattices are usually represented by a generating basis.
Definition 1 Let B = {b_1, . . . , b_n} be linearly independent vectors in R^m. The lattice
generated by B is the set

    L(B) = { Σ_{i=1}^n x_i b_i : x_i ∈ Z } = { Bx : x ∈ Z^n }

of all the integer linear combinations of the vectors in B, and the set B is called a basis for
L(B).
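To make the definition concrete, here is a minimal Python sketch (an illustration, not part
of the original notes) that enumerates the finite window of a lattice obtained by letting the
integer coefficients range over a small box; the basis is the two-dimensional example used
later in these notes.

    import itertools

    import numpy as np

    # Columns are the basis vectors b_1 = (2,0)^T and b_2 = (1,2)^T.
    B = np.array([[2, 1],
                  [0, 2]])

    # L(B) is infinite; we list the points Bx for all x in {-2,...,2}^2.
    window = sorted({tuple(B @ np.array(x))
                     for x in itertools.product(range(-2, 3), repeat=2)})
    print(window)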
Notice the similarity between the definition of a lattice and the definition of the vector space
generated by B:

    span(B) = { Σ_{i=1}^n x_i b_i : x_i ∈ R } = { Bx : x ∈ R^n }.
The difference is that in a vector space you can combine the columns of B with arbitrary real
coefficients, while in a lattice only integer coefficients are allowed, resulting in a discrete
set of points. Notice that, since the vectors b_1, . . . , b_n are linearly independent, any point
y ∈ span(B) can be written as a linear combination y = x_1 b_1 + · · · + x_n b_n in a unique way.
Therefore y ∈ L(B) if and only if x_1, . . . , x_n ∈ Z.
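This observation yields an immediate membership test, sketched below (an illustration, not
from the notes): solve Bx = y exactly over the rationals and accept exactly when every
coefficient is an integer. Fraction arithmetic avoids floating-point rounding, and the helper
name in_lattice is our own.

    from fractions import Fraction

    def in_lattice(B, y):
        """Decide y in L(B) for a nonsingular square integer basis B (list of
        rows): solve B x = y by exact Gauss-Jordan elimination, then check
        that every coefficient x_i is an integer."""
        n = len(B)
        M = [[Fraction(B[i][j]) for j in range(n)] + [Fraction(y[i])]
             for i in range(n)]                       # augmented matrix [B | y]
        for c in range(n):
            p = next(r for r in range(c, n) if M[r][c] != 0)   # pivot row
            M[c], M[p] = M[p], M[c]
            piv = M[c][c]
            M[c] = [v / piv for v in M[c]]
            for r in range(n):
                if r != c and M[r][c] != 0:
                    M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
        return all(M[i][n].denominator == 1 for i in range(n))

    B = [[2, 1], [0, 2]]            # columns (2,0)^T and (1,2)^T
    print(in_lattice(B, [0, 4]))    # True:  (0,4) = -1*b_1 + 2*b_2
    print(in_lattice(B, [1, 1]))    # False: would need x_2 = 1/2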
Any basis B (i.e., any set of linearly independent vectors) can be compactly represented
as an m × n matrix whose columns are the basis vectors. Notice that m ≥ n because the
columns of B are linearly independent, but in general m can be bigger than n. If n = m
the lattice L(B) is said to be full-dimensional and span(B) = R^n.
Notice that if B is a basis for the lattice L(B), then it is also a basis for the vector space
span(B). However, not every basis for the vector space span(B) is also a lattice basis for
L(B). For example, 2B is a basis for span(B) as a vector space, but it is not a basis for
L(B) as a lattice because the vector b_i ∈ L(B) (for any i) is not an integer linear combination
of the vectors in 2B.
Another difference between lattices and vector spaces is that vector spaces always admit
an orthogonal basis. This is not true for lattices. Consider for example the 2-dimensional
lattice generated by the vectors (2, 0) and (1, 2). This lattice contains pairs of orthogonal
vectors (e.g., (0, 4) is a lattice point and it is orthogonal to (2, 0)), but no such pair is a
basis for the lattice.
2 Gram-Schmidt
Any basis B can be transformed into an orthogonal basis for the same vector space us-
ing the well-known Gram-Schmidt orthogonalization method. Suppose we have vectors
B = [b_1 | . . . | b_n] ∈ R^{m×n} generating a vector space V = span(B). These vectors are not
necessarily orthogonal (or even linearly independent), but we can always find an orthogonal
basis B* = [b*_1 | . . . | b*_n] for V as follows:
    b*_1 = b_1

    b*_2 = b_2 − µ_{2,1} b*_1   where µ_{2,1} = ⟨b_2, b*_1⟩ / ⟨b*_1, b*_1⟩

    b*_i = b_i − Σ_{j<i} µ_{i,j} b*_j   where µ_{i,j} = ⟨b_i, b*_j⟩ / ⟨b*_j, b*_j⟩.
Note that the columns of B* are orthogonal (⟨b*_i, b*_j⟩ = 0 for all i ≠ j). Therefore the
(non-zero) columns of B* are linearly independent and form a basis for the vector space
span(B). However they are generally not a basis for the lattice L(B).
For example, the Gram-Schmidt orthogonalization of the basis B = [(2, 0)^T, (1, 2)^T] is
(2, 0)^T, (0, 2)^T. However this is not a lattice basis because the vector (0, 2)^T does not belong
to the lattice.
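The formulas above translate directly into code. Here is a short Python sketch (illustrative,
not part of the notes) using exact rational arithmetic; on the example basis it recovers
(2, 0), (0, 2).

    from fractions import Fraction

    def gram_schmidt(basis):
        """Orthogonalize b_1, ..., b_n (assumed linearly independent) over the
        rationals. Returns the b*_i and the coefficients mu[i][j] for j < i."""
        bstar, mu = [], []
        for b in basis:
            v = [Fraction(x) for x in b]
            row = []
            for u in bstar:
                # Projecting the residual v against u gives the same mu as
                # projecting b itself, because the earlier b*_j are orthogonal.
                m = sum(p * q for p, q in zip(v, u)) / sum(q * q for q in u)
                row.append(m)
                v = [p - m * q for p, q in zip(v, u)]
            bstar.append(v)
            mu.append(row)
        return bstar, mu

    bstar, mu = gram_schmidt([[2, 0], [1, 2]])
    print([[float(x) for x in v] for v in bstar])  # [[2.0, 0.0], [0.0, 2.0]]
    print(float(mu[1][0]))                         # 0.5, i.e. mu_{2,1} = 1/2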
We see that the set of bases of a lattice has a much more complex structure than the set
of bases of a vector space. Can we characterize the set of bases of a given lattice? The
following theorem shows that different bases of the same lattice are related by unimodular
transformations.
Theorem 2 Let B and B′ be two bases. Then L(B) = L(B′) if and only if there exists a
unimodular matrix U (i.e., a square matrix with integer entries and determinant ±1) such
that B = B′U.
Proof Notice that if U is unimodular, then U^{−1} is also unimodular. To see this, recall
Cramer’s rule to solve a system of linear equations: for any nonsingular A = [a_1, . . . , a_n] ∈
R^{n×n} and b ∈ R^n, the (unique) solution to the system of linear equations Ax = b is given
by

    x_i = det([a_1, . . . , a_{i−1}, b, a_{i+1}, . . . , a_n]) / det(A).
In particular, if A and b are integer, then the unique solution to the system Ax = b has
the property that det(A) · x is an integer vector. Now, let U′ be the inverse of U, i.e.,
UU′ = I. Each column of U′ is the solution to a system of linear equations Ux = e_i.
By Cramer’s rule the solution is integer because det(U) = ±1. This proves that U′ is an
integer matrix. Matrix U′ is unimodular because

    det(U) det(U′) = det(UU′) = det(I) = 1.

So, we also have det(U′) = 1/det(U) = ±1.
Now, let us get to the proof. First assume B = B′U for some unimodular matrix U. In
particular, both U and U^{−1} are integer matrices, and B = B′U and B′ = BU^{−1}. It follows
that L(B) ⊆ L(B′) and L(B′) ⊆ L(B), i.e., the two matrices B and B′ generate the same
lattice.
Next, assume B and B′ are two bases for the same lattice L(B) = L(B′). Then, by
definition of lattice, there exist integer square matrices M and M′ such that B = B′M′
and B′ = BM. Combining these two equations we get B = BMM′, or equivalently,
B(I − MM′) = 0. Since the vectors in B are linearly independent, it must be I − MM′ = 0, i.e.,
MM′ = I. In particular, det(M) det(M′) = det(MM′) = det(I) = 1. Since the matrices M
and M′ have integer entries, det(M), det(M′) ∈ Z, and it must be det(M) = det(M′) = ±1.
A simple way to obtain a basis of a lattice from another is to apply (a sequence of)
elementary column operations, as defined below. It is easy to see that elementary column
operations do not change the lattice generated by the basis because they can be expressed
as right multiplication by a unimodular matrix. Elementary (integer) column operations
are:
1. Swap the order of two columns in B.
2. Multiply a column by −1.
3. Add an integer multiple of a column to another column: b_i ← b_i + a · b_j where i ≠ j
and a ∈ Z.
It is also possible to show that any unimodular matrix can be obtained from the identity
as a sequence of elementary integer column operations. (Hint: show that any unimodular
matrix can be transformed into the identity, and then reverse the sequence of operations.)
It follows that any two equivalent lattice bases can be obtained one from the other by
applying a sequence of elementary integer column operations.
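To illustrate (this sketch is not from the notes), each elementary operation can be written
down explicitly as right multiplication by a unimodular matrix, so any sequence of them
preserves the lattice; the basis and the particular sequence below are arbitrary choices.

    import numpy as np

    def swap_cols(n, i, j):
        """Unimodular matrix swapping columns i and j (determinant -1)."""
        U = np.eye(n, dtype=int)
        U[:, [i, j]] = U[:, [j, i]]
        return U

    def negate_col(n, i):
        """Unimodular matrix negating column i (determinant -1)."""
        U = np.eye(n, dtype=int)
        U[i, i] = -1
        return U

    def add_col_multiple(n, i, j, a):
        """Unimodular matrix realizing b_i <- b_i + a*b_j (determinant 1)."""
        U = np.eye(n, dtype=int)
        U[j, i] = a
        return U

    B = np.array([[2, 1],
                  [0, 2]])                       # columns (2,0)^T and (1,2)^T
    U = swap_cols(2, 0, 1) @ add_col_multiple(2, 0, 1, 3) @ negate_col(2, 1)
    print(round(np.linalg.det(U)))               # +1 or -1: U is unimodular
    print(B @ U)                                 # another basis of the same lattice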
3 The determinant
Definition 3 Given a basis B = [b_1, . . . , b_n] ∈ R^{m×n}, the fundamental parallelepiped
associated to B is the set of points

    P(B) = { Σ_{i=1}^n x_i b_i : 0 ≤ x_i < 1 }.

Note that P(B) is half-open, so that the translates P(B) + v (v ∈ L(B)) form a partition
of the whole space R^m. This statement is formalized in the following observation.
Observation 4 For all x ∈ R^m, there exists a unique lattice point v ∈ L(B) such that
x ∈ (v + P(B)).
We now define the determinant of a lattice.
Definition 5 Let B ∈ R^{m×n} be a basis. The determinant of a lattice, det(L(B)), is defined
as the n-dimensional volume of the fundamental parallelepiped associated to B:

    det(L(B)) = vol(P(B)) = Π_i ∥b*_i∥

where B* is the Gram-Schmidt orthogonalization of B.
Geometrically, the determinant represents the inverse of the density of lattice points in
space (e.g., the number of lattice points in a large and sufficiently regular region of space
A should be approximately equal to the volume of A divided by the determinant). In
particular, the determinant of a lattice does not depend on the choice of the basis. We will
prove this formally later in this lecture.
A simple upper bound on the determinant is given by Hadamard’s inequality:

    det(L(B)) ≤ Π_i ∥b_i∥.

The bound immediately follows from the fact that ∥b*_i∥ ≤ ∥b_i∥.
We prove in the next section that the Gram-Schmidt orthogonalization of a basis can
be computed in polynomial time. So, the determinant of a lattice can be computed in
polynomial time by first computing the orthogonalized vectors B*, and then taking the
product of their lengths. But are there simpler ways to compute the determinant without
having to run Gram-Schmidt? Notice that when B ∈ R^{n×n} is a non-singular square matrix,
then det(L(B)) = |det(B)| can be computed simply as a matrix determinant. (Recall that
the determinant of a matrix can be computed in polynomial time by computing det(B)
modulo many small primes, and combining the results using the Chinese remainder theorem.)
The following formula reduces the computation of the lattice determinant to a matrix
determinant computation even when B is not a square matrix.
Theorem 6 For any lattice basis B ∈ R^{m×n},

    det(L(B)) = √(det(B^T B)).
Proof Remember the Gram-Schmidt orthogonalization procedure. In matrix notation, it
shows that the orthogonalized vectors B* satisfy B = B*T, where T is an upper triangular
matrix with 1’s on the diagonal, and the µ_{i,j} coefficients at position (j, i) for all j < i. So,
our formula for the determinant of a lattice can be written as

    √(det(B^T B)) = √(det(T^T B*^T B* T)) = √(det(T^T) det(B*^T B*) det(T)).
The matrices T, T^T are triangular, and their determinant can be easily computed as the
product of the diagonal elements, which is 1. Now consider B*^T B*. This matrix is diagonal
because the columns of B* are orthogonal. So, its determinant can also be computed as the
product of the diagonal elements, which is

    det(B*^T B*) = Π_i ⟨b*_i, b*_i⟩ = (Π_i ∥b*_i∥)^2 = det(L(B))^2.
Taking the square root we get

    √(det(T^T) det(B*^T B*) det(T)) = det(L(B)).
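As a quick numerical check of Theorem 6 (illustrative, not from the notes), the sketch below
compares √(det(BᵀB)) with the product of the Gram-Schmidt lengths for a non-square
basis; numpy's QR factorization supplies the ∥b*_i∥ as |R_ii|, and the 3×2 basis is an
arbitrary choice.

    import numpy as np

    # An arbitrary rank-2 basis embedded in R^3 (columns are b_1, b_2).
    B = np.array([[2.0, 1.0],
                  [0.0, 2.0],
                  [1.0, 0.0]])

    det_gram = np.sqrt(np.linalg.det(B.T @ B))   # sqrt(det(B^T B))

    # B = Q R with orthonormal columns Q, so |R[i,i]| = ||b*_i||.
    _, R = np.linalg.qr(B)
    det_gs = np.prod(np.abs(np.diag(R)))         # product of Gram-Schmidt lengths

    print(det_gram, det_gs)                      # both approx. 4.5826 = sqrt(21)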
We can now prove that the determinant does not depend on the particular choice of the
basis, i.e., if two bases generate the same lattice then their lattice determinants have the
same value.
Theorem 7 Suppose B, B′ are bases of the same lattice L(B) = L(B′). Then det(L(B)) =
det(L(B′)).
Proof Suppose L(B) = L(B′). Then B = B′U where U ∈ Z^{n×n} is a unimodular matrix.
Then

    det(B^T B) = det((B′U)^T (B′U)) = det(U^T) det((B′)^T B′) det(U) = det((B′)^T B′)

because det(U^T) det(U) = det(U)^2 = 1.
We conclude this section by showing that although not every lattice has an orthogonal
basis, every integer lattice contains an orthogonal sublattice.
Claim 8 For any nonsingular B ∈ Z^{n×n}, let d = |det(B)|. Then d · Z^n ⊆ L(B).
Proof Let v be any vector in d · Z^n. We know v = d · y for some integer vector y ∈ Z^n.
We want to prove that v ∈ L(B), i.e., d · y = B · x for some integer vector x. Since B is
non-singular, we can always find a solution x to the system B · x = d · y over the reals. We
would like to show that x is in fact an integer vector, so that d · y ∈ L(B). We consider the
elements x_i and use Cramer’s rule:

    x_i = det([b_1, . . . , b_{i−1}, d·y, b_{i+1}, . . . , b_n]) / det(B)
        = d · det([b_1, . . . , b_{i−1}, y, b_{i+1}, . . . , b_n]) / det(B)
        = ± det([b_1, . . . , b_{i−1}, y, b_{i+1}, . . . , b_n]) ∈ Z,

where the last equality uses d = |det(B)|, so that d / det(B) = ±1.
We may say that any integer lattice L(B) is periodic modulo the determinant of the
lattice, in the sense that for any two vectors x, y, if x ≡ y (mod det(L(B))), then x ∈
L(B) if and only if y ∈ L(B).
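Claim 8 is easy to test empirically. The sketch below (not from the notes, using sympy for
exact rational linear algebra) draws a random nonsingular integer basis and checks that
each generator d·e_i of d·Z^n has integer coordinates with respect to B.

    import random

    from sympy import Matrix, eye

    random.seed(0)
    n = 3
    while True:                                   # resample until B is nonsingular
        B = Matrix(n, n, lambda i, j: random.randint(-5, 5))
        if B.det() != 0:
            break

    d = abs(B.det())
    for i in range(n):
        x = B.LUsolve(d * eye(n)[:, i])           # exact solution of B x = d*e_i
        assert all(v.q == 1 for v in x), "non-integer coefficient"
    print(f"d*Z^{n} is contained in L(B) for d = {d}")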
4 Running time of Gram-Schmidt
In this section we analyze the running time of the Gram-Schmidt algorithm. The number
of arithmetic operations performed by the Gram-Schmidt procedure is O(n^3). Can we
conclude that it runs in polynomial time? Not yet. In order to prove polynomial time
termination, we also need to show that all the numbers involved do not grow too big. This
will also be useful later on to prove the polynomial time termination of other algorithms
that use Gram-Schmidt orthogonalized bases. Notice that even if the input matrix B is
integer, the orthogonalized matrix B* and the coefficients µ_{i,j} will in general not be integers.
However, if B is integer (as we will assume for the rest of this section), then the µ_{i,j} and
B* are rational.
The Gram-Schmidt algorithm uses rational numbers, so we need to bound both the
precision required by these numbers and their magnitude. From the Gram-Schmidt
orthogonalization formulas we know that

    b*_i = b_i + Σ_{j<i} ν_{i,j} b_j

for some reals ν_{i,j}. Since b*_i is orthogonal to b_t for all t < i, we have

    ⟨b_t, b_i⟩ + Σ_{j<i} ν_{i,j} ⟨b_t, b_j⟩ = 0.
In matrix notation, if we let B_{i−1} = [b_1, . . . , b_{i−1}] and (ν_i)_j = ν_{i,j}, this can be written as:

    (B^T_{i−1} B_{i−1}) · ν_i = −B^T_{i−1} b_i,

which is an integer vector. Solving the above system of linear equations in the variables ν_i
using Cramer’s rule we get

    ν_{i,j} ∈ Z / det(B^T_{i−1} B_{i−1}) = Z / det(L(B_{i−1}))^2.
We use this property to bound the denominators that can occur in the coefficients µ_{i,j}
and in the orthogonalized vectors b*_i. Let D_i = det(L(B_i))^2 and notice that

    D_{i−1} · b*_i = D_{i−1} · b_i + Σ_{j<i} (D_{i−1} ν_{i,j}) b_j

is an integer combination of integer vectors. So, all denominators that occur in the vector b*_i
are factors of D_{i−1}.
Let’s now compute the coefficients:

    µ_{i,j} = ⟨b_i, b*_j⟩ / ⟨b*_j, b*_j⟩
            = D_{j−1} ⟨b_i, b*_j⟩ / (D_{j−1} ⟨b*_j, b*_j⟩)
            = ⟨b_i, D_{j−1} b*_j⟩ / D_j ∈ Z / D_j

(using D_{j−1} ⟨b*_j, b*_j⟩ = D_j), and the denominators in the µ_{i,j} must divide D_j.
This proves that all the numbers involved in the µ_{i,j} and b*_i have denominators at most
max_k D_k ≤ Π_k ∥b_k∥^2. Finally, the magnitude of the numbers is also polynomial because
∥b*_i∥ ≤ ∥b_i∥, so all entries of b*_i are at most ∥b_i∥ in absolute value. This proves that
the Gram-Schmidt orthogonalization procedure runs in polynomial time.
Theorem 9 There exists a polynomial time algorithm that on input a matrix B, computes
the Gram-Schmidt orthogonalization B*.

However they are generally not a basis for the lattice L(B). . 0) and (1.1 b∗ 1 2 b∗ = bi − i ∗ j<i µi. then the unique solution to the system Ax = b has the property that det(A) · x is an integer vector. .lattice generated by the vectors (2. However this is not a lattice basis because the vector (0. 4) is a lattice point and it is orthogonal to (2. Theorem 2 Let B and B′ be two bases..b∗ 1 1 bi .b∗ 1 b∗ .. . if A and b are integer. i. recall Cramer’s rule to solve a system of linear equations: for any nonsingular A = [a1 . . (0. These vectors are not necessarily orthogonal (or even linearly independent).. 2)T . 0)) but no such a pair is a basis for the lattice. 2). By Cramer rule’s the solution is integer because det(U ) = ±1. |b∗ ] for V as follows: n 1 b∗ = b1 1 b∗ = b2 − µ2..j = b2 . . |bn ] ∈ Rm×n generating a vector space V = span(B). We see that the set of bases of a lattice has a much more complex structure of the set of bases of a vector space.. (1. 2)T does not belong to the lattice.b∗ j b∗ . .1 = where µi.. 2 Gram-Schmidt Any basis B can be transformed into an orthogonal basis for the same vector space using the well-known Gram-Schmidt orthogonalization method. an ]) . ai+1 .. Matrix U′ is unimodular because det(U) det(U′ ) = det(UU′ ) = det(I) = 1. and b ∈ Rn . Then L(B) = L(B′ ) if and only if there exists a unimodular matrix U (i. det(A) In particular. For example. . let U′ be the inverse of U.j bj where µ2....an ] ∈ Rn×n .e. Now. Each column of U′ is the solution to a system of linear equations Ux = ei . 0)T . the Gram-Schmidt orthogonalization of the basis B = [(2.g. Can we characterise the set of bases of a given lattice? The following theorem shows that different bases of the same lattices are related by unimodular transformations. . (0. 2)T ]] is (2. a square matrix with integer entries and determinant ±1) such that B = B′ U. but we can always find an orthogonal basis B∗ = [b∗ | . the (unique) solution to the system of linear equations Ax = b is given by xi = det([a1 .e. 0)T . This proves that U′ is an integer matrix. Suppose we have vectors B = [b1 | . b∗ for all i = j). then U−1 is also unimodular. This lattice contains pairs of orthogonal vectors (e. ai−1 .b∗ j j . Therefore the i j (non-zero) columns of B∗ are linearly independent and form a basis for the vector space span(B).. Note that the columns of B∗ are orthogonal ( b∗ . b. Proof Notice that if U is unimodular. To see this. UU′ = I. .

the fundamental parallelepiped associated to B is the set of points P(B) = {Σn xi · bi : 0 ≤ xi < 1} .e. there exist integer square matrices M and M′ such that B = B′ M′ and B′ = BM. it must be I − MM′ = 0. Since vectors B are linearly independent. or equivalently. It is also possible to show that any unimodular matrix can be obtained from the identity as a sequence of elementary integer column operations.. and it must be det(M) = det(M′ ) = ±1 A simple way to obtain a basis of a lattice from another is to apply (a sequence of) elementary column operations. bn ] ∈ Rm×n . i. there exists a unique lattice point v ∈ L(B). (Hint: show that any unimodular matrix can be transformed into the identity. In particular.. i. MM′ = I. 3 The determinant Definition 3 Given a basis B = [b1 . we also have det(U′ ) = 1/ det(U) = det(U′ ) = ±1. Elementary (integer) column operations are: 1. . Combining these two equations we get B = BMM′ . In particular. It is easy to see that elementary column operations do not change the lattice generated by the basis because they can be expressed as right multiplication by a unimodular matrix. Then.. both U and U−1 are integer matrices... B(I − MM′ ) = 0. Swap the order of two columns in B. so that the translates P(B)+v (v ∈ L(B)) form a partition of the whole space Rm . 3. and B = B′ U and B′ = BU−1 . such that x ∈ (v + P(B)). by definition of lattice. the two matrices B and B′ generate the same lattice. . Multiply a column by −1. Add an integer multiple of a column to another column: bi ← bi + a · bj where i = j and a ∈ Z. and then reverse the sequence of operations. We now define the determinant of a lattice. as defined below. It follows that L(B) ⊆ L(B′ ) and L(B′ ) ⊆ L(B). This statement is formalized in the following observation. det(M) · det(M′ ) = det(M · M′ ) = det(I) = 1. det(M′ ) ∈ Z. assume B and B′ are two bases for the same lattice L(B) = L(B′ ). 2. that any two equivalent lattice bases can be obtained one from the other applying a sequence of elementary integer column operations.So.e.) It follows. First assume B = B′ U for some unimodular matrix U. i=1 Note that P(B) is half-open. Since matrices M and M′ have integer entries. let us get to the proof. Observation 4 For all x ∈ Rm . Next. Now. det(M).

. Proof Remember the Gram-Schmidt orthogonalization procedure.Definition 5 Let B ∈ Rm×n be a basis.) The following formula gives a way to compute the lattice determinant to matrix determinant computation even when B is not a square matrix. The matrices T. The determinant of a lattice det(L(B)) is defined as the n-dimensional volume of the fundamental parallelepiped associated to B: det(L(B)) = vol(P(B)) = i b∗ i where B∗ is the Gram-Schmidt orthogonalization of B. and combining the results using the Chinese reminder theorem. So. it shows that the orhogonalized vectors B∗ satisfy B = B∗ T. So. But are there simpler ways to compute the determinant without having to run Gram-Schmidt? Notice that when B ∈ Rn×n is a non-singular square matrix then det(L(B)) = | det(B)| can be computed simply as a matrix determinant. Theorem 6 For any lattice basis B ∈ Rn×m det(L(B)) = det(BT B). TT are triangular. The bound immediately follows from the fact that b∗ ≤ bi . and then taking the product of their lengths.g. our formula for the determinant of a lattice can be written as det(BT B) = det(TT B∗T B∗ T) = det(TT ) det(B∗T B∗ ) det(T). i) for all j < i. where T is an upper triangular matrix with 1’s on the diagonal. the determinant of a lattice does not depent on the choice of the basis. In matrix notation. So. the determinant represent the inverse of the density of lattice points in space (e. i We prove in the next section that the Gram-Schmidt orthogonalization of a basis can be computed in polynomial time. and the µi. and their determinant can be easily computed as the product of the diagonal elements. the determinant of a lattice can be computed in polynomial time by first computing the orthogonalized vectors B∗ .j coefficients at position (j. its determinant can also be computed as the product of the diagonal elements which is det(B∗T B∗ ) = i b∗ . b∗ = ( i i i b∗ )2 = det(L(B))2 . We will prove this formally later in this lecture. This matrix is diagonal because the columns of B∗ are orthogonal. Geometrically. Now consider B∗T B∗ . A simple upper bound to the determinant is given by Hadamard inequality: det(L(B)) ≤ bi . which is 1. i .) In particular. the number of lattice points in a large and sufficiently regular region of space A should be approximately equal to the volume of A divided by the determinant. (Recall that the determinant of a matrix can be computed in polynomial time by computing det(B) modulo many small primes.

. .. We consider the elements xi and use Cramer’s rule: xi = det ([b1 . The number of arithmetic operations performed by the Gram-Schmidt procedure is O(n3 ). We want to prove that v ∈ L(B)... in the sense that for any two vectors x. det(L(B)) = det(L(B′ )). then x ∈ L(B) if and only if y ∈ L(B).. if x ≡ y (mod det(L(B)))... bi−1 .. i.. B′ are bases of the same lattice L(B) = L(B′ ). bi+1 . In order to prove polynomial time . dy.e. Then.Taking the square root we get det(TT ) det(B∗T B∗ ) det(T) = det(L(B)). Theorem 7 Suppose B. Can we conclude that it runs in polynomial time? Not yet. Since B is non-singular. y. we can always find a solution x to the system B · x = d · y over the reals. bi−1 . . Then det(BT B) = det((B′ U)T (B′ U)) = det(UT ) det((B′ )T B) det(U) = det((B′ )T B) because det(U) = 1.e. We know v = d · y for some integer vector y ∈ Zn . d · y = B · x for some integer vector x. . .. y. i.. bn ]) ∈ Z We may say that any integer lattice L(B) is periodic modulo the determinant of the lattice..... if two bases generate the same lattice then their lattice determinants have the same value. Proof Suppose L(B) = L(B′ ). so that dy ∈ L(B).. bi+1 . bi−1 ... Proof Let v be any vector in d · Zn . Then d · Zn ⊆ L(B). We conclude this section showing that although not every lattice has an orthogonal basis. let d = | det(B)|. We can now prove that the determinant does not depend on the particular choice of the basis. y.. Then B = B′ · U where U ∈ Zn×n is a unimodular matrix. 4 Running time of Gram-Schmidt Is this section we analyze the running time of the Gram-Schmidt algorithm. We would like to show that x is in fact an integer vector. bn ]) det(B) d · det ([b1 . . . every integer lattice contains an orthogonal sublattice. bi+1 .. bn ]) = det(B) = det ([b1 . Claim 8 For any nonsingular B ∈ Zn×n .

In matrix notation.j ∈ T ·B det(L(Bi−1 ))2 det(Bi−1 i−1 ) We use this property to bound the denominators that can occur in the coefficients µi. This proves that the Gram-Schmidt orthogonalizai tion procedure runs in polynomial time. b∗ bj j Dj−1 bi . if B is integer (as we will assume for the rest of this section). bi−1 ] and (νi )j = νi. This proves that all numbers involved in µi. So.j this can be written as: (BT · Bi−1 ) · νi = −BT · bi i−1 i−1 which is an integer vector. bi + j<i νi. Solving the above system of linear equations in variables νi using Cramer’s rule we get Z Z . we also need to show that all the numbers involved do not grow too big. if we let Bi−1 = [b1 . The Gram-Schmidt algorithm uses rational numbers.j and B∗ are rational. Let’s now compute the coefficients: µi. Theorem 9 There exists a polynomial time algorithm that on input a matrix B.j . Finally. b∗ j j bi . From the Gram-Schmidts orthogonalization formulas we know that ∗ bi = bi + j<i νi.j bt . so we need to bound both the precision required by these numbers and their magnitude. Dj−1 b∗ Z j ∈ Dj Dj and the denominators in the µi. = νi. the orthogonalized matrix B∗ and the coefficients µi.j = = = bi . Since b∗ i is orthogonal to bt for all t < i we have bt . computes the Gram-Schmidt orthogonalization B∗ . Notice that even if the input matrix B is integer. so all entries in b∗ are at most bi . Let Di = det(Bi−1 )2 and notice that i Di−1 · b∗ = Di−1 · bi + i j<i (Di−1 νi. the magnitude of the numbers is also polynomial because bi ≤ bi . .j will in general not be integers.j bj for some reals νi.j and orthogonalized vectors b∗ .termination. then the µi. bj = 0. However.j )bj is an integer combination of integer vectors. all denominators that occur in vector b∗ i are factors of Di−1 . . . This will also be useful later on to prove the polynomial time termination of other algorithms that use Gram-Schmidt orthogonalized bases. b∗ j ∗ . b∗ j Dj−1 b∗ .j must divide Dj . .j and b∗ have denominators at most maxk Dk ≤ i 2 ∗ k bk .
