This topic should not be confused with the unrelated definition of a lattice as a partially ordered set with a meet and join. Mathematicians sometimes use the same word for different things.

By default, the lattice points in R3 are the points with integer coordinates. These form a regular grid throughout 3 space. The base cell is a cube, which is repeated again and again to fill the entire volume of space.

Stretch the grid in the direction of the x axis, and the base cell becomes a rectangular box. In fact each cell becomes a rectangular box, i.e. a copy of the base cell. Next, tilt the grid to the right, as though the y axis was pivoting about the origin. The floor of each cell is now a parallelogram. Finally tilt the z axis, and each cell of the lattice is a parallelotope. It's basically a pushed over box.

After stretching and tilting, the lattice comprises the integer multiples of three vectors in 3 space. If the three vectors lie in a plane, i.e. the box has been squashed flat, then the lattice doesn't fill the entire space. This is not a true lattice. The three vectors must be linearly independent.

Generalize this to n dimensions for the definition of a lattice in n space. The lattice is the integer multiples of n independent vectors. Notice that a lattice is defined relative to a containing space. The vector [1,0], and the integer multiples thereof, is a perfectly good lattice for the x axis, but not for the xy plane, since it does not cover the plane.

I like to think of a lattice as scaffolding that supports the surrounding space. Each region is anchored by a lattice point.

Let 1 and the square root of 2 generate a free Z module G inside the reals. Free, because no nonzero integer multiple of the square root of 2 can equal an integer multiple of 1. However, the generators are not linearly independent over R, hence this is not a lattice for the real line. And it doesn't look like one either, because the free group G is dense in the reals. Integer combinations of 1 and sqrt(2) can be found everywhere. Instead of a discrete set of regularly spaced points, we have a smear.

In contrast, embed G in the two dimensional field extension produced by adjoining the square root of 2 to the rationals. If sqrt(2) is plotted on the y axis, G becomes a grid once again. The lattice consists of integer multiples of 1 cross integer multiples of sqrt(2), defining a pattern of rectangles in the plane. When Q is brought in, the entire space Q[sqrt(2)] appears. The trick is graphing the adjoined element in another dimension, which we naturally do when the new element is complex, such as the square root of -2, but the square root of 2 can also be plotted on the y axis, whence G actually looks like a lattice. This trick does not lead us astray, because the elements really are independent in Q. No rational multiple of sqrt(2) is equal to a rational multiple of 1. Since they are independent over the field Q, there is no harm in graphing them on independent axes.

A lattice can be based on n linearly independent complex vectors in n dimensional complex space. The vectors are multiplied by complex (gaussian) integers, and added together. Picture the same points in real space with twice as many dimensions. Each vector in Cn becomes a vector in R2n when multiplied by 1, and another vector in R2n when multiplied by i. If these vectors are linearly dependent over the reals, put the coefficients together, using i where necessary, and the original vectors were linearly dependent over the complex numbers. Therefore the new base vectors in real space are independent. Furthermore, the span of these real vectors, in real space, using integer coefficients, equals the span of the original complex vectors in complex space, using complex integers as coefficients. The lattice is exactly the same; we have simply found a representation using real vectors in real space with integer coefficients. This is sometimes more convenient.

A sublattice of a given lattice is a submodule that is also a lattice. The multiples of 7 form a sublattice of the integers within the real line. Recall that we analyzed the structure of the ring Z adjoin sqrt(-3) by viewing it as a sublattice of the eisenstein integers. You may want to review these rings, and others in chapter 5; these are hands-on examples of lattices in the plane, and even in 4 space.

The covolume of a lattice is the determinant of its generators. Since the vectors are linearly independent, this is nonzero. Remember that the determinant gives the volume of the cell spanned by these vectors, also known as the base cell. Naturally every other cell in the lattice has the same volume. Why then is this not the volume of the lattice? Because the lattice goes on forever. It is in some sense infinite, just like space itself. The base cell is a quotient space, perhaps space mod the lattice, thus the volume of the base cell is the covolume of the lattice. That's the reasoning behind the terminology.
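
To make the covolume concrete, here is a quick numerical check in Python, using numpy; the basis below is made up for illustration. The rows of the matrix are the generators, and the absolute value of the determinant is the volume of the base cell.

import numpy as np

# rows are the generators of a lattice in 3 space (illustrative values)
S = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.5, 0.5, 3.0]])

covolume = np.linalg.det(S)   # signed volume of the base cell
print(abs(covolume))          # 6.0, the volume of every cell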

Let T be a sublattice of S. Each generator of T is a unique linear combination of generators of S, using elements of R. (R is usually the integers, though R could be the gaussian integers, as described in the previous section, or even other rings.) Think of t1, the first vector of T, as the sum of aisi, where ai are coefficients drawn from R. Let a1 through an be the first row of a matrix P. This row, times the matrix S, gives the top row of T, the first vector of the lattice T. Let the second row of P build t2, and so on through tn. Thus P*S = T, using matrix multiplication.

Given a point x in T, described as a linear combination of its generators, perhaps the sum of biti, replace T with P*S, whence bT = bPS, and bP gives the coefficients of x within the larger lattice S. P is a transformation matrix wherein the points of T are represented in S. As a sanity check, consider the first point t1 in the basis of T, with b1 = 1 and the remaining coefficients 0. Multiply by P, and bP yields the top row of P, which is the representation of t1 through S.

Since T = P*S, the covolume of T is det(P) times the covolume of S. By definition, the index of a sublattice within a larger lattice is the quotient of the two covolumes, which is, in our example, the determinant of the transforming matrix P. This is an element of R. In real space, it represents the number of cells of S that fit into a cell of T, although the cells may not pack perfectly. Consider the following example. Let S be the unit lattice in the plane, and let T be generated by [2,0] and [1,1]. This is a parallelogram of area 2. You can't place two whole squares inside this parallelogram, but you can if you cut them up and rearrange the pieces. More on this later.
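
Here is the example above as a small Python sketch, again with numpy. Since S is the unit lattice, the transformation matrix P is just T, and its determinant is the index.

import numpy as np

S = np.array([[1, 0],        # the unit lattice in the plane
              [0, 1]])
T = np.array([[2, 0],        # the sublattice generated by [2,0] and [1,1]
              [1, 1]])

P = T @ np.linalg.inv(S)     # T = P*S, so P = T*S^-1; here P equals T
index = np.linalg.det(P)     # quotient of the two covolumes
print(round(index))          # 2: a cell of T has twice the area of a cell of S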

What happens when S = T? Assume S and T contain the same lattice points in space. They are the same set. A transformation P represents the basis of T in terms of the basis of S, that is, T = P*S. At the same time, there is a matrix Q such that S = Q*T. Combine these to get T = PQT. PQ is the identity matrix. Therefore P and Q are inverses, and their determinants are units in R.

With this result in hand, the covolume of a given lattice, as a set in space, and without regard to a particular basis, is well defined up to associates. A change of basis might multiply the covolume by a unit, but that is all. For instance, if v is one of the vectors in the basis, use -v instead of v, and the covolume is multiplied by -1, which is a unit. This is why we take the absolute value of the covolume of a lattice in real space to get the volume of each cell.

Conversely, assume the index of T in S is a unit. The matrix P that implements the change of basis has an invertible determinant. Use cofactors to find Q, the inverse of P. This is a matrix with elements of R, divided by the determinant of P. This determinant is a unit, hence the matrix lies in R. The vectors of T can be used to span the vectors of S, and the lattices are the same. In summary, a lattice is equal to its sublattice iff the index is a unit, iff the two covolumes are associates in R. The sublattice is merely a change of basis.

After a change of basis, the cell could look quite different, but if it defines the same lattice, it has the same volume. Returning to our squares in the plane, let an alternate representation use the vectors [1,0] and [100,1]. This is a really long and skinny parallelogram, yet it has the same area as the unit square, and defines the same lattice.
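
The long skinny parallelogram can be checked numerically; a sketch in Python with numpy. The determinant is 1, a unit, and the inverse matrix has integer entries, so the new basis spans the unit vectors and generates the same lattice.

import numpy as np

A = np.array([[1, 0],             # alternate basis for the standard lattice
              [100, 1]])

print(np.linalg.det(A))           # 1.0 (up to rounding), same covolume as the unit square
print(np.round(np.linalg.inv(A))) # [[1, 0], [-100, 1]], integer entries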

For any sublattice T of a lattice S, viewing both as sets, the index, which is the quotient of the covolumes, is well defined up to a unit. This is because each covolume is well defined up to a unit. It is, as we saw above, the determinant of the transforming matrix P.

Let T be a sublattice of S, perhaps a proper sublattice, and let P be the transforming matrix. Thus the elements of P lie in the ring R. Let d be the determinant of P. We saw that d is a unit iff S and T are the same lattice. Let Q be the inverse of P, thus Q comprises elements of R with a denominator of d. We usually describe T in terms of the vectors of S, but one can describe S in terms of T. Start with S = QT, building the base cell of S. Each row of S, each vector in the base cell of S, is a linear combination of the vectors of T, using coefficients in R/d. These can be recombined, using coefficients from R, to give every point in the lattice S, therefore each point of S is a linear combination of the vectors of T, using coefficients in R/d. The next section examines the points of S that live in the base cell of T.

Let T be a sublattice of S. The cell count of T, relative to S, is the number of lattice points of S that lie in the base cell of T. The preposition "in" must be carefully defined. The base cell does not include its roof or far walls. Specifically, x ∈ S is in the base cell of T if the coefficients of x, using the vectors of T, are all in the half open range [0,1). If x is the sum of aiti, then each ai is ≥ 0 and < 1. This assumes the lattices live in real space, so the coefficients are real numbers, and we can meaningfully assert 0 ≤ ai < 1.

Each ai is a multiple of 1/d, as shown above, and is bounded below 1, so there are only d possibilities. This holds for each ai, hence there are at most d^n points of S in the base cell of T. The cell count is a well defined finite number.

Since T is a sublattice of S, the top row of T belongs to S. This is extracted using coefficients a1 = 1, and ai = 0 for i > 1. Similarly, the second vector of T is attained by setting a2 = 1, and so on. This allows any point of S to be translated into the base cell of T. To illustrate, assume d = 10, and x has the coordinates 1.3, 2.7, and -5.1, relative to the basis T. Subtract the first vector of T from x to get another point of S. Now the first coefficient is 0.3, which is between 0 and 1. In the same way, subtract twice the second row of T, and add 6 times the third row. This effectively reduces the coefficients mod 1. Our original point x corresponds to 0.3, 0.7, 0.9 in the base cell.
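
Here is the same reduction as a Python sketch. The basis of T is made up, and only serves to show the arithmetic; the coefficients are the ones from the example. Subtracting the floor of each coefficient translates x into the base cell.

import numpy as np

T = np.array([[2.0, 0.0, 0.0],     # an illustrative basis for the sublattice T
              [1.0, 2.0, 0.0],
              [0.0, 1.0, 2.0]])

a = np.array([1.3, 2.7, -5.1])     # coefficients of x relative to the basis of T
x = a @ T                          # the point x itself

a0 = a - np.floor(a)               # reduce each coefficient mod 1
x0 = a0 @ T                        # the corresponding point in the base cell
print(a0)                          # [0.3 0.7 0.9]
print(x - x0)                      # an integer combination of the rows of T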

Two distinct points x and y in the same cell of T correspond to two distinct points x0 and y0 in the base cell of T. This practically follows from the definition of "same cell". The integer components of the coefficients that build x and y are the same, that is what we mean when we say they are in the same cell of T. Reduce these coefficients mod 1, leaving the fractional components. This carries x to x0 and y to y0. The points of S, in any given cell of T, are in one to one correspondence with the points of S in the base cell of T, in a canonical manner, as determined by translation.

It is sometimes convenient to map the entire space, and the lattice S, into another space Rn, through an invertible linear transformation e(), implemented by a matrix E. Once this is done, the new generators of S remain linearly independent, creating a new lattice S′.

S′ = S * E

The covolume of S′ is the covolume of S times the determinant of E. This is just the product rule for determinants.

If T is a sublattice of S, carry T along as well, creating T′. Show that T′ is a sublattice of S′, and the index, the ratio of covolumes, is unchanged. Linear maps preserve index.

The representation of a point x in S, as a linear combination of t1 through tn, is unchanged in the image. In other words, x′ = a1t1′ + a2t2′ + … + antn′. We are merely multiplying the original linear equation by E on the right. Therefore a linear transformation preserves the cell count.

x = a * T

x′ = x*E = (a*T)*E = a * (T*E) = a * T′

We can now prove index equals cell count. Transform space so that S becomes the standard lattice, the unit cubes in n space. In other words, S is the set of points whose coordinates are integers, and the vectors of S are the unit vectors. The covolume of S is 1, the volume of the unit cube, and the covolume of T is the volume of its base cell, which is the determinant of T. This is also the index of T in S.

The algebra in the following paragraphs is inspired by an idea that is fairly intuitive. Step back and look at a very large volume of space. If each cell has volume d, and if each lattice point anchors a volume of 1, and if each cell has the same number of lattice points, then each cell contains d lattice points. Any other cell count would lead to a volume inequality.

Each cell of T is closed and bounded, with a maximum diameter of l. Round l up to the nearest integer.

Let the volume of the base cell be d, which is the determinant of T.

Consider two large cubes, one inside another, centered at the origin. The outer cube has coordinates running from -p to p, where p is a half integer. This has volume h, which is equal to the number of lattice points inside. Move the walls inward by l to find an inner cube, with volume g, and g lattice points inside.

There can be at most h/d complete cells inside the outer cube. There may be fewer, if some cells poke out the sides. This is an upper bound on the complete cells of T within the outer cube.

Remove any partial cells from the outer cube, i.e. any cells that poke out of the walls, and the remaining cells completely cover the inner cube. The volume of these cells adds up to more than g. There are at least g/d cells in the outer cube.

The cells entirely in the outer cube are bounded between g/d and h/d. The number of lattice points consumed by these cells is between g and h. Take quotients, and the lattice points per cell are bounded between g/(h/d) and h/(g/d), or d(g/h) and d(h/g).

For large p, the ratios g/h and h/g approach 1 from either side. The number of lattice points in a cell is an integer, as is d. The ratio is eventually held between d-½ and d+½, and it has to equal d. Thus the number of lattice points in any cell is equal to d, which is the determinant of T, or the covolume of T, or the volume of the base cell. The cell count equals the index.

The same is true in our original space, prior to the transformation e(). The number of points of S in each cell of T is the index of T in S.

Here is a different proof that can be generalized to more situations (besides real space). Again, S is the standard lattice, so we don't have to worry about that. T is a sublattice, with an index equal to its covolume, and a collection of points of S that live in the base cell of T. Start with T rectangular, where ti is an integer multiple of the ith unit vector. The cell count inside this box is the product of those integer lengths, which is the determinant of the diagonal matrix, which is the covolume of T. So far so good.

Given any matrix of vectors, apply elementary row operations, until the matrix becomes diagonal. At each step, the determinant and the cell count are multiplied by the same factor. At the end, when the vectors define a rectangular box, the cell count equals the determinant. Retrace your steps and the same holds for our original lattice T. Now let's have a closer look at those steps.

Swapping two rows, i.e. two coordinates, is an elementary row operation that negates the determinant. Thus the determinant does not change except for a unit. The base cell doesn't change either; it is spanned by the same vectors. We are merely changing the representation of a point x in the base cell, swapping the coefficients along with the vectors. Thus the cell count has not changed.

Scale a vector by an integer j. This places j copies of the original base cell next to each other in a row to make a new, longer base cell. Remember that lattice points in every cell correspond to lattice points in the base cell. Thus there are j times as many lattice points in this new base cell as there were in the original. The determinant, and the cell count, are both multiplied by j.

The third elementary row operation subtracts a multiple of one row from another, but I'm going to place some restrictions on this operation. Suppose two rows have, in the jth column, entries of a and b, and we want to remove b from the second row. Multiply the first by b, and the second by a, then subtract the first from the second. The entry has disappeared, and we're on our way to an upper triangular matrix, and then a diagonal matrix. We only ever need to subtract one vector from another.

This operation does not change the determinant. It takes some geometric intuition to see why the cell count is unchanged. Picture two vectors on the floor, spanning a parallelogram. For convenience, let one of the vectors run along the x axis. A third vector rises up at an angle, building a parallelotope. Subtract the first vector, along the x axis, from the third. This slides the roof over to the left, taking the walls along for the ride. Suppose a point x was inside the cell, but after the roof moves, x is now outside the cell. When the right wall slides across x, leaving it out in the cold, the left wall touches the point x-v1, a translate of x. Points leave and enter the moving cell in lockstep. When the shift is complete, the cell count is the same.

This is easy to visualize, but it relies on continuity, and that doesn't generalize to other rings and fields. To get around this, build a 100% algebraic proof. The roof suddenly snaps into its new position with no continuity of motion. Let c1 be the original cell and let c2 be the new base cell. Some points are in both c1 and c2, and we don't have to worry about those. Some points are in c1-c2, and some points are in c2-c1. Let x be one of these points, and represent x using the basis of c1, and the basis of c2. Call these representations r1 and r2. The entries in r1 and r2 agree, except for the first component. This will differ, depending on the location of x in c1. If x is on the floor, then the first component doesn't change. Thus x is in both c1 and c2. If x is on the ceiling, 1 has been added to the first component. It used to lie in [0,1), and now it lies in [1,2). Thus x is in c1, and not in c2; but its translate, y = x-v1, whose first component starts out in [-1,0), lies in c2, and not in c1. (I realize that the ceiling is not technically part of the base cell, not part of c1 or c2, but I'm describing it for illustrative purposes.)

What if x is somewhere between the floor and the ceiling? Halfway up the parallelotope, halfway along v3, ½ is added to the first component. If the first component of r1 lies in [0,½), then x is in both cells. If this entry lies in [½,1), then x is in c1-c2. No problem, because y is in c2-c1. Similar reasoning holds for any x between the floor and ceiling. ε is added to the first component, and one lattice point leaves iff another point enters. x is in [1-ε,1) iff y is in [-ε,0). Lattice points correspond, and the cell count, like the determinant, is unchanged.

Each row operation does the same thing to the determinant and the cell count, therefore the cell count equals the determinant, which is the covolume of the lattice, or the index of one lattice inside another.
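
For integer lattices this conclusion can be verified by brute force; a minimal sketch in Python, with an arbitrary integer basis. It counts the points of the standard lattice S whose coefficients relative to T all lie in [0,1), and compares the count with |det T|.

import itertools
import numpy as np

T = np.array([[2, 1, 0],             # an illustrative integer basis; S is the standard lattice
              [0, 3, 1],
              [1, 0, 2]])

d = round(abs(np.linalg.det(T)))     # covolume of T, hence the index of T in S
T_inv = np.linalg.inv(T)

count = 0
for p in itertools.product(range(-6, 7), repeat=3):   # a window of S large enough for this T
    a = np.array(p) @ T_inv                           # coefficients of p relative to T
    if np.all(a >= -1e-9) and np.all(a < 1 - 1e-9):   # p lies in the base cell of T
        count += 1

print(d, count)                      # both are 13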

The discriminant of a lattice is the square of its covolume. This is well defined up to squared units. In real space, the discriminant is a positive number.

A shifted lattice is a lattice that has been shifted by a fixed vector in space. For instance, start with the standard lattice in the plane and add [½,½] to each point to shift the grid.

The linear transformation of a shifted lattice is another shifted lattice.

Consider a lattice S in Rn, using integers as coefficients. Let x be a point in S, and suppose other points of S approach x arbitrarily closely. In other words, x is a cluster point of S. Subtract x from everything, and 0 becomes a cluster point of S.

Let b1 through bn be a basis for S. Normally we multiply these basis elements by integers to create S, but if real numbers are used, the basis defines a linear transformation from Rn onto itself. Let f implement this linear transformation, and let g be the inverse. Both functions are continuous.

The unit ball in Rn is closed and bounded, hence compact. Apply g, and the result is compact, hence closed and bounded. Therefore the preimage of the unit ball, under f, is bounded. It meets only finitely many unit cubes, and contains a finite number of points with integer coordinates. These map to a finite number of points of S contained in the unit ball. This holds for a ball of any radius. Therefore any bounded region (which can always be placed inside a larger ball) intersects S in a finite set. If 0 is a cluster point of S, the unit ball intersects S in an infinite set, and that is a contradiction. Therefore S has no cluster points.

Consider an open ball about the origin, large enough to contain some nonzero point of S. It contains finitely many points of S, and one of the nonzero points is closest to the origin - call it y. Place a similar ball around any point x in S, and the points of S in that ball are translates, by x, of the points in the ball about the origin. In other words, x+y is the lattice point closest to x. The minimum distance from 0 to y applies throughout the lattice. No two points are closer than |y|, and every point x has at least two neighbors |y| away, namely x±y.

Since the points of S can be isolated inside open balls of radius ½|y|, the topology of S is discrete. Furthermore, the points of S do not approach some other point x in space, where x is not in S, for two such points would be within |y| of each other. S has no cluster point anywhere in space.

Conversely, assume a set S of points closed under addition and subtraction admits no cluster points. It is enough to say 0 is not a cluster point. Some ball about 0 contains no point of S other than 0, and so every point of S is surrounded by a ball of fixed radius containing no other point of S. Each point consumes a fixed volume, so a bounded region can only contain a finite number of points from S. This means S does not approach any other point x in space.

Let b1 through bn be a maximal set of linearly independent vectors drawn from S. If this basis spans S, using integer coefficients, then we have a lattice. If some point x of S is not spanned, translate x, by subtracting integer multiples of the bi, so that it lies inside the unit cell, the parallelotope determined by these vectors.

Write x uniquely as the sum of aibi, where the coefficients are real numbers in [0,1).

Suppose one of the coefficients, say a1, is irrational. Multiples of x translate back to distinct points in the unit cell, having distinct values for a1. This is an infinite number of points in a bounded set, which is a contradiction. Therefore all the coefficients of x are rational.

Express the coefficients as a1 through an over a common denominator d. In the following, all coefficients are integers, the coefficients do not have a common factor, and each ai is in the range 0 to d-1.

dx = a1b1 + a2b2 + a3b3 + … + anbn

Suppose a prime p divides d and a1. This means there is some other coefficient, say a2, that is not divisible by p, else all coefficients could be reduced. Let g be the gcd of d and a1, and let h = d/g. Divide through by g and find a formula for hx. The first coefficient remains an integer, while some of the other coefficients, such as a2/g, are rational numbers (not integers). Translate hx to the base cell, and the first term drops to 0. In other words, hx lies on the floor of the base cell. In 3 space, x might have been in the middle of the parallelotope, but hx is on the floor, somewhere in the parallelogram.

Repeat the above procedure in a lower dimension. A point not spanned implies another point not spanned in a sublattice of the original. Eventually we run out of dimensions, and at that point, d is relatively prime to all the coefficients on the right. Let's make this assumption and move on.

dx = a1b1 + a2b2 + a3b3 + … + anbn (d and ai coprime)

Since d and a1 are coprime, a1 is invertible mod d. If ca1 = 1 mod d, let y = cx. Translate y back to the base cell, and the first coefficient is 1/d. Thus y is an unspanned point in our base cell.

Adjust the set of linearly independent vectors by replacing b1 with y. Evaluate dy and find b1 plus a linear combination of b2 through bn. In other words, y and b2 through bn span b1. The original lattice is spanned, plus y.
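
One round of this procedure can be run numerically; a minimal sketch in Python with exact fractions. The denominator d and the coefficients are chosen arbitrarily, subject to gcd(a1,d) = 1.

from fractions import Fraction

d = 21
a = [8, 5, 14]                       # dx = 8 b1 + 5 b2 + 14 b3, and gcd(8, 21) = 1

x = [Fraction(ai, d) for ai in a]    # x in the base cell, relative to b1 through bn
c = pow(a[0], -1, d)                 # the inverse of a1 mod d (here c = 8; Python 3.8+)

y = [c * xi % 1 for xi in x]         # y = cx, translated back to the base cell
print(y)                             # 1/21, 19/21, 7/21: the first coefficient is 1/d
print([d * yi for yi in y])          # 1, 19, 7: dy = b1 + 19 b2 + 7 b3, so y and b2, b3 span b1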

Try to picture this with 3 vectors in 3 space. Let b2 and b3 define a rectangle on the floor, with b1 perpendicular to both. This is a simple rectangular lattice. If d = 21, a new unspanned point y lies inside this box, very close to the floor. In fact, 21y lies on the ceiling of this box, or a nearby box in the first layer. The projection of y onto the floor is not known; it could be near any of the four corners, or close to the middle of the rectangle. When b1 is replaced with y, a new layer of cells appears. The floor of each cell is unchanged, but the new cells are shorter, and they may be pushed over relative to the original cells. We have squashed them down and pushed them around. But here is the key: if a new point z is present in the base cell spanned by y and b2 through bn, then z, or one of its horizontal translates, is in the original cell spanned by b1 through bn. It happens to be near the floor, but that's not important, as long as the new point z implies an unspanned point in our original cell. It does, because the new lattice is a superlattice of the original.

If the process of finding new unspanned points continues forever, the original cell based on b1 through bn contains infinitely many points, and this is a contradiction. The process must terminate, and the basis that results generates all of S. Therefore S is a lattice in real space.

Of course S could be a lattice of a subspace of Rn if the basis has fewer than n vectors.

In summary, S is a lattice of Rn, or a subspace thereof, iff S is closed under addition and subtraction, and the points of S do not approach any point in Rn.

The same result holds for complex space. This is because a lattice in complex space becomes a lattice in real space, and the topology, i.e. the notion of distance, is the same.

It is possible to seed the basis with a vector b1, and retain b1 in the final basis, provided there is no other lattice point between b1 and the origin. For example, b1 might be a minimal nonzero lattice point. If an unspanned point in the above iterative procedure produces the equation dx = a1b1 + a3b3 + a7b7, wherein a1, a3, and a7 are coprime to d, then select either a3 or a7 to build y. The only time you can't do this is when the equation is simply dx = a1b1, and in that case there is a lattice point between b1 and the origin. Therefore b1 persists in the final basis for S.

The intersection of two or more lattices in real space, or complex space, is closed under addition and subtraction, and has no cluster points, and is therefore another lattice. It may however be a lattice in a lower dimension. Let 1 generate a lattice in R1, and let π generate another lattice in R1. Their intersection is 0. This is a lattice in zero dimensions.

Here is a more typical example. The multiples of 2, and of 3, form two lattices in R1. Their intersection is the multiples of 6, another lattice in R1.

Assume two n dimensional lattices consist of rational vectors. Run a linear transform so that one basis becomes the standard orthonormal basis, and its lattice becomes the set of points with integer coordinates. The basis of the second lattice becomes a set of vectors with rational components. These components are integers over a common denominator d. Multiply by d and find n linearly independent vectors from the second lattice that have integer coordinates - thus they also belong to the first. The intersection includes n independent vectors, and is an n dimensional lattice.

Turning to abstract spaces, let the coefficients of a lattice come from a pid R. Each lattice is a free R module, and their intersection is a submodule of a free R module, which is again free. It has a basis, and is a lattice in a possibly lower dimensional space.

Two shifted lattices may not intersect at all. If they do, shift both lattices by the point c in common. Now they are based at 0, and the lattices intersect in another lattice. Shift this back to c and find the intersection of the two parent lattices.

Given a lattice in Rn, there is a nonzero vector x with minimum length, as shown earlier. In this section we will place a bound on |x|, based on the covolume v and the number of dimensions n.

Let c(n) be the function (4/3)^((n-1)/4). This is an exponential function, but the base is close to 1, so it grows slowly. The bound on |x| is now c(n) × v^(1/n).

When n = 1, the bound is v, and yes indeed, |x| ≤ v. In fact |x| = v. So the theorem is true for n = 1.

Look in 2 dimensions, a lattice in the plane. Scale and rotate so that x, the minimal vector, is [1,0]. The other vector u points up to a row of points that runs parallel to the x axis. Add or subtract an integer multiple of x so that u is closest to the y axis within its row. This does not change the lattice at all. With x minimal, |u| is at least 1. Since u is the closest point to the y axis in its row, its horizontal component is at most ½, so u makes an angle of no more than 30 degrees with the y axis. Shrink u down to a length of 1, thus having the same length as x. This decreases the covolume but does not change the length of the shortest nonzero lattice point. The smallest covolume occurs when the angle between u and the y axis is exactly 30 degrees. This is the familiar lattice of sixth roots of 1 in the complex plane. The covolume is sqrt(3)/2. Since there are 2 dimensions, take the square root to get the fourth root of 3 over the square root of 2. c(2) is the fourth root of 4/3. Multiply these two factors together to get 1, which is the length of x. The bound is once again sharp, giving the minimum possible length of the shortest nonzero lattice point in the plane as a function of covolume.
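
A quick numeric check of the 2 dimensional case, in Python. The basis is the hexagonal lattice built from the sixth roots of 1; the bound c(2) × v^(1/2) comes out to 1, the length of the shortest nonzero vector.

import math

b1 = (1.0, 0.0)                          # 1, and a primitive sixth root of 1
b2 = (0.5, math.sqrt(3) / 2)

v = abs(b1[0] * b2[1] - b1[1] * b2[0])   # covolume, sqrt(3)/2
c2 = (4 / 3) ** 0.25                     # c(2), the fourth root of 4/3
print(v, c2 * v ** 0.5)                  # 0.866..., 1.0 (up to rounding)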

Proceed by induction on n. Rotate the lattice in Rn so that x runs along the first coordinate. Project the lattice into the hyperplane perpendicular to x. This subtracts some real multiple of x from each of the other basis vectors, and then throws x away.

Look at the two determinants, and verify that the original covolume and the covolume of the projected n-1 dimensional lattice differ by a factor of |x|.

The projected lattice has covolume v/|x|, so by induction it contains a nonzero vector w with |w| ≤ c(n-1) × (v/|x|)^(1/(n-1)). Now w is the projection of some lattice point y; among all such y, choose the one closest to the hyperplane, either in front or behind. Write y = w+lx, where l is between -½ and +½.

By minimality, |x| ≤ |y|. Use the pythagorean theorem to bound |y| above by the square root of |w|^2 + ¼|x|^2. If |w|^2 were less than ¾|x|^2, then |y| would be smaller than |x|, which is too small. Therefore |w|^2 ≥ ¾|x|^2, i.e. |x| ≤ sqrt(4/3)|w|.

Plug in the bound on |w|, which is assumed by induction.

|x| ≤ sqrt(4/3) × c(n-1) × (v/|x|)^(1/(n-1))

|x| ≤ (4/3)^(1/2) × (4/3)^((n-2)/4) × (v/|x|)^(1/(n-1))

|x| ≤ (4/3)^(n/4) × (v/|x|)^(1/(n-1))

|x|^(n/(n-1)) ≤ (4/3)^(n/4) × v^(1/(n-1))

|x| ≤ (4/3)^((n-1)/4) × v^(1/n)

|x| ≤ c(n) × v^(1/n)

That completes the inductive step; the formula holds for all n.
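
In low dimensions the bound can be tested by brute force; a sketch in Python with numpy, using a made up 3 dimensional integer basis and a small window of coefficients (large enough for this example, though not in general).

import itertools
import numpy as np

B = np.array([[3, 0, 0],             # an illustrative basis with covolume 12
              [1, 2, 0],
              [1, 1, 2]])

v = abs(np.linalg.det(B))
bound = (4 / 3) ** ((3 - 1) / 4) * v ** (1 / 3)      # c(3) × v^(1/3)

shortest = min(np.linalg.norm(np.array(c) @ B)
               for c in itertools.product(range(-3, 4), repeat=3)
               if c != (0, 0, 0))

print(shortest, bound)               # 2.236... ≤ 2.643...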

If you are given an n dimensional lattice, don't expect to find the minimal lattice point any time soon. The problem is np-hard; no efficient solution is known, and the best known algorithms take time exponential in n. More on this topic later.

Given a lattice in Rn and an index h, how many sublattices have index h? This is the same as asking how many sublattices have covolume v, where v is h times the covolume of the original lattice. The answer is finite.

When n, the number of dimensions, is 1, there is only one such lattice, namely the multiples of v along the real line. Proceed by induction on n.

Given n, and an original lattice L, and a supposed sublattice K having covolume v, K has a minimal nonzero lattice point that exists within a sphere about the origin, as shown above. This must also be a point of L. There are finitely many points of L within this sphere, so choose one and call it x. This is the first point in a basis for K. For convenience, let this run along the x axis, as described above.

Project the lattice L, and its sublattice K, into the hyperplane perpendicular to x. For instance, in three space L and K are projected into the yz plane, where they form a lattice L′ and a sublattice K′ in 2 dimensions.

As a consequence of the projection, the covolume of K is divided by |x|. If there are no points of L between 0 and x, then the covolume of L is also divided by |x|. However, if p is the point of L closest to the origin along the x axis, then the covolume of L is divided by |p|. This decreases the index of K′ in L′ by a factor of |x|/|p|. In any case, a sublattice of L′ of a certain index is implied, and by induction there are finitely many such sublattices to choose from.

Select K′ in L′, from finitely many choices, and give it a basis. This basis, along with x, implies a basis for K. Let b be one of the basis vectors. Remember that b is the projection of a point in L, in n space. It comes from a point between b-½x and b+½x. Since there are no cluster points, this segment offers finitely many possibilities. The same holds for all the other basis vectors. This implies finitely many bases for K, as determined by x and K′. Put this all together and there are finitely many sublattices of L with covolume v. That completes the inductive step, and the proof.

Consider a lattice in Rn with covolume v. If a compact set S is convex, and symmetric about 0, with volume ≥ 2^n × v, then S contains a nonzero lattice point. This is the lattice point theorem, due to Minkowski.

It is enough to demonstrate the lattice point theorem for volumes strictly greater than 2^n v. Suppose the theorem holds for strict inequality, but not for equality. Find a compact convex shape, symmetric about 0, with volume equal to 2^n v, that contains no lattice point other than 0. Magnify this shape by 1+ε, expanding it away from 0. For every ε > 0, the magnified shape holds a nonzero lattice point, and all of these points lie inside a bounded region, say the shape magnified by 2, which contains only finitely many lattice points. Some nonzero lattice point therefore lies in the magnification by 1+ε for arbitrarily small ε, and since the shape is closed, that point lies in the original shape. This is a contradiction. Hence we will prove the result for any shape whose volume exceeds 2^n v, and that will suffice.

Start at 0 and select any direction indicated by a unit vector u. If +3u is in S, then so is -3u. That's what we mean by symmetric about 0. Also, the entire segment from -3u to +3u is in S. That's what we mean by convex.

Let r(u) be the radius of S as a function of direction. Contract S towards the origin by cutting r(u) in half. Thus the volume of S is reduced by a factor of 2^n, but it is still larger than v.

Cut S into pieces by using the hyperplanes of the lattice. In other words, build the base cell of the lattice and let it tile space. Each cell that intersects S cuts out a piece of S. Translate this piece back to the base cell at the origin. Thus S may be cut into many pieces, and all those pieces are carried back to their corresponding locations in the base cell.

Although there might be a lot of pieces, the number of pieces is finite. This is because S is bounded. It is contained in a large bounded region in Rn, and the number of cells in such a region is finite.

If S were cut into infinitely many pieces, with volumes approaching 0, it might be possible to put those pieces back together to make something smaller than S. It's counterintuitive, but you can reduce the volume of a shape just by cutting it up and reassembling the pieces. However, nothing like that can happen here, because the number of pieces is finite. The volume of S is indeed the sum of its pieces.

Suppose these pieces do not overlap in the base cell. They may share a border, but they don't overlap. The volume consumed by these pieces exceeds v, even though they live in a cell of size v. This is a contradiction, hence at least two pieces overlap.

There are points x and y in S that map to the same point in the base cell. Thus y-x is w, a vector in our lattice. Since S is symmetric it also contains -x. Let z = -x and write y+z = w. The midpoint of y and z, which is (y+z)/2, equals ½w. By convexity, this midpoint lies in S. Finally, expand S by 2, resurrecting the original shape. Now S contains y+z, or w, which is a nonzero lattice point.

If the volume is any smaller, the theorem fails. Consider the integers within the reals. A closed segment of length less than 2, symmetric about the origin, contains no other lattice points.
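
A brute force illustration in Python, for the standard lattice in the plane; the ellipse below is a made up convex symmetric region with area just over 2^2 × 1 = 4, and the search confirms it holds nonzero lattice points.

import math

a, b = 20.0, 0.07                    # semi axes; area = pi*a*b, about 4.4 > 4
def inside(x, y):                    # convex and symmetric about the origin
    return (x / a) ** 2 + (y / b) ** 2 <= 1

hits = [(x, y) for x in range(-25, 26) for y in range(-5, 6)
        if inside(x, y) and (x, y) != (0, 0)]
print(len(hits), hits[:3])           # 40 points, starting with (-20, 0)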

This section is a bit abstract, but is used in algebraic number theory.

Let F be a field, and let R be F[t], the ring of polynomials in t with coefficients in F, or perhaps F[[t]], the ring of power series in t with coefficients in F. Let K be the fraction field of R, quotients of polynomials in t, or the formal laurent series in t.

Let S be a regular R lattice in Kn. Although the ring is rather unusual, we can still demonstrate uniformity among the cells, and relate the cell count to the covolume.

Let x be any point in Kn. Let's locate the cell that contains x, then pull x back to the base cell. Write x in terms of the basis of S. In other words, x is the sum of cisi, where ci lies in K and si is an n dimensional vector in Kn that comprises S. If ci looks like an improper fraction, a quotient of polynomials perhaps, where the numerator has degree at least as large as the denominator, write it as a mixed fraction, a polynomial plus a proper fraction whose denominator exceeds the numerator. If ci is a laurent series, separate it into a power series and a linear combination of reciprocal powers of t. Do this for each coefficient, and separate the whole parts from the fractions. The polynomials or power series, analogous to integers, can be subtracted away. This moves x back to the base cell.
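
The separation of a coefficient into a whole part and a proper fraction is just polynomial division; a minimal sketch in Python using numpy.polydiv, with a made up quotient of polynomials.

import numpy as np

num = [1, 0, 2, 5]            # t^3 + 2t + 5, coefficients from the highest power down
den = [1, 1]                  # t + 1

q, r = np.polydiv(num, den)   # num/den = q + r/den
print(q)                      # [ 1. -1.  3.]  the whole part, t^2 - t + 3
print(r)                      # [ 2.]          the proper fraction, 2/(t+1)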

If x and y are distinct points in the base cell, represented through coefficients on S, subtract them to find a new vector. Verify that the difference between two proper fractions of polynomials gives another proper fraction of polynomials. That is, a/b - c/d has numerator ad - bc, whose degree is lower than that of the denominator bd. The same holds for the difference between combinations of reciprocal powers of t. The difference has fractional coefficients, not all zero, so it is not an element of S, and y cannot be a translate of x. Each vector of fractional coefficients corresponds uniquely to a point in the base cell, and to a corresponding point in every other cell.

A lattice in real space could consist of real vectors, rational vectors, or integer vectors. In the same way, step back and assume the vectors of S are taken from R. If a point x in a cell corresponds to x0 in the base cell, then one is a lattice point, having all its entries in R, iff the other is a lattice point. In other words, lattice points correspond, and the base cell determines the lattice points in every cell.

If x is in the base cell, multiply by anything in F and the result is in the base cell. The coefficients on S are still fractional. Thus the points in the base cell form an F vector space. Furthermore, a lattice point, whose coordinates come from R, is still a lattice point. Thus the lattice points in the base cell form an F vector space. In this context, let the cell count be the dimension of the lattice points in the base cell, when viewed as an F vector space, rather than the cardinality of these lattice points.

To get us started, let S be a rectangular box, where the ith vector runs along the ith coordinate. For instance, the first vector might equal 3t^2 + 5t + 1 in the first component, and zero elsewhere. When translating x back to the base cell, divide the first coordinate by 3t^2 + 5t + 1 and take the remainder. The lattice points in this box have a first coordinate that is a polynomial of degree less than 2, a 2 dimensional vector space over F. Do this for all n coordinates, and x lies in the base cell, the rectangular box anchored at the origin.

Add up the degrees of the polynomials that define the rectangular box to find the dimension of the lattice vector space inside. This is the cell count. At the same time, arrange the vectors in a diagonal matrix and take the determinant. This is the product of the polynomials, and its degree is the sum of the individual degrees. The degree of the determinant equals the cell count, the dimension of the lattice vector space in the base cell. We want to show this for any lattice S, not just a rectangular box.

Apply the second proof given earlier, the one that uses elementary row operations to move from any given matrix to a diagonal matrix. At each step, the degree of the determinant and the cell count are left unchanged, or adjusted in lockstep. For example, multiply one of the vectors, say the first vector, by a polynomial (or power series) of degree j. The new base cell is a bunch of copies of the original base cell pasted together in a row, along the first vector. The coefficient on the first vector was fractional in the original base cell; now it can extend up to (but not including) t^j as we build the new cell. There is a replicated cell, and a batch of lattice points, for each polynomial of degree less than j: 3, 7, 2t+1, 5t^3+t+6, and so on. Lattice points are replicated accordingly; the original space is crossed with a space of dimension j, so the dimension, and thus the cell count, increases by j. At the same time, multiplying a row by a polynomial of degree j increases the degree of the determinant by j. The lattice dimension and the degree of the determinant change in lockstep.

The last part of the proof subtracts one vector from another. This does not change the determinant, and it must not change the cell count either. The reasoning is similar to that given in the real space proof, but this time only the ceiling gains and loses lattice points as it slides over, and that isn't even part of the base cell. Let x be a point in the original base cell, the sum of aisi, where si is a vector in Rn, and ai is fractional in K, a proper fraction of polynomials or a finite combination of reciprocal powers of t. I want to replace s1 with s1-s2. To compensate for this, replace a2 with a2+a1. That reproduces x. Yet a2+a1 is still fractional, and x is still in the base cell. Thus the lattice points have not changed, and the cell count has not changed.

If F is a finite field, such as Z/p, then dimension and cardinality go hand in hand. The traditional cell count, the number of lattice points in the base cell, is p raised to the degree of the determinant.