The ring S is an extension of the ring R if R is a subring of S, and 1 in R is the same as 1 in S. Thus Z×Z is not a ring extension of Z×0: although Z×0 is closed under addition and multiplication, its identity is (1,0), while the identity of Z×Z is (1,1).
You may recall field extensions; ring extensions are similar. In fact every field extension is a ring extension. As with fields, one may adjoin a set of elements W to R to produce the ring extension denoted R[W]. If W is a finite set of indeterminates then R[W] is the ring of polynomials with variables taken from W and coefficients taken from R.
Throughout this chapter, a ring extension S/R implies R is in the center of S. That is, R commutes with everything in S. S is often commutative as well, but let's not make that blanket assertion at the outset. I want to make u ∈ S the root of a polynomial with coefficients in R, and I don't want to worry about whether those coefficients are on the right or on the left, thus R commutes with all of S.
If the extension S/R contains u, and u is the root of a monic polynomial with coefficients in R, then u is integral. If every u in S is integral then S is an integral extension. This is similar to an algebraic extension of a field, but this time the base could be a ring, and the polynomial must be monic, i.e. having a lead coefficient of 1. If the base is a field, then any polynomial can be scaled by the inverse of its lead coefficient to create a monic polynomial with the same roots. Integral and algebraic are synonymous. We've covered that ground before, so let's return to the world of rings.
If R is Z, a number as simple as ½ is not integral. It is a root of 2x-1, but that is not a monic polynomial. Suppose however that ½ is a root of some other, monic polynomial p. Switch over to rational coefficients, and divide p by 2x-1. The remainder is a constant, and since both p and 2x-1 have root ½, the remainder must be 0. In other words, 2x-1 divides p in Q[x]. Apply Gauss' lemma: p factors in Q[x] iff it factors in Z[x], and since 2x-1 is primitive, the "other" factor already has integer coefficients. So 2x-1 times an integer polynomial yields p, and that means the lead coefficient of p is even, hence p is not monic. Therefore ½ is not integral over Z.
However, sqrt(2) is integral over Z, being a root of the monic polynomial x^2 - 2.
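These two examples can be checked mechanically. The following sketch (my own illustration, not part of the text's development; the brute-force bound of 10 on coefficient size and degree 3 is an arbitrary choice) searches for a small monic integer polynomial with root ½ and finds none, in line with the argument above.

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)

def eval_monic(coeffs, x):
    # coeffs = (a[n-1], ..., a0), the lower-order coefficients of
    # x^n + a[n-1]*x^(n-1) + ... + a0
    n = len(coeffs)
    return x**n + sum(a * x**(n - 1 - i) for i, a in enumerate(coeffs))

# Search every monic integer polynomial of degree 1..3 with
# coefficients in [-10, 10] for one that vanishes at 1/2.
hits = [c for n in (1, 2, 3)
        for c in product(range(-10, 11), repeat=n)
        if eval_monic(c, half) == 0]
print(hits)            # [] -- no monic witness, though 2x - 1 kills 1/2
print(2 * half - 1)    # 0
```

The parity argument above shows why the search must come up empty: clearing denominators in a monic polynomial evaluated at ½ always leaves an odd number equal to an even one.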
If x is integral over R then so is cx, for any c in R. Let p be the monic polynomial of degree n that proves x is integral over R. Leave the lead coefficient alone, multiply the next coefficient by c, the next by c^2, and so on, out to the constant coefficient, which is multiplied by c^n. The new polynomial is still monic; evaluated at cx, each term equals c^n times the corresponding term of p(x), so the new polynomial has root cx, and proves cx is integral over R.
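Here is a quick numerical sanity check of that construction, a sketch rather than anything in the text: take x = sqrt(2) with monic polynomial x^2 - 2, and c = 3; scaling the coefficients as described yields x^2 - 18, which should vanish at 3*sqrt(2).

```python
import math

p = [1, 0, -2]                    # x^2 - 2, lead coefficient first
c = 3
n = len(p) - 1
# multiply the coefficient of x^(n-i) by c^i
q = [a * c**i for i, a in enumerate(p)]
root = c * math.sqrt(2)
value = sum(a * root**(n - i) for i, a in enumerate(q))
print(q)                          # [1, 0, -18]
print(abs(value) < 1e-9)          # True
```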
Something nice happens when R is a ufd. Let u be integral with monic polynomial g, and let h be a polynomial of minimum degree satisfying h(u) = 0. Divide through by the gcd of the coefficients, so that h is a primitive polynomial. Move to the fraction field of R, and let j(x) be the gcd of g(x) and h(x). Note that u is a root of j(x). Since j divides h in the fractions of R, j divides h in R. Yet h has minimum degree, hence h = j times a constant in R. Since h has content 1, this constant is a unit. Thus h and j are essentially the same polynomial.
Remember that h became the gcd of h and g. Since h divides g in the fractions of R, h divides g in R. Since the lead coefficient of g is 1, the lead coefficient of h is a unit. Divide through, and h is monic. The minimum polynomial of u is the defining monic polynomial of u.
If f(x) has root u, where f may or may not be monic, then divide f by h, and the remainder also has root u. The remainder has a lesser degree, which is impossible. Hence the remainder is 0, and h divides f. Every polynomial with root u is divisible by the minimum monic polynomial h. This establishes a principal ideal in the ring of polynomials.
If h = j*k, where j and k have smaller degree, then 0 = h(u) = j(u)*k(u), so either j(u) = 0 or k(u) = 0, at least when S has no zero divisors. Since h has minimum degree, this is impossible. Therefore h is irreducible.
If f is some other irreducible polynomial with root u, then h divides f, as described above, and h = f. The irreducible polynomial associated with u is unique.
Let S be an integral extension of R, and let x be an element of R. x is a unit in R iff it is a unit in S.
One direction is obvious, so assume x is a unit in S, with xy = 1. Since y is integral over R, p(y) = 0 for some monic polynomial p of degree n. Multiply p(y) = 0 through by x^(n-1). Since xy = 1, the lead term collapses to y, and every other term collapses to a coefficient of p times a power of x. Thus y lies in R after all, and x is a unit in R.
If you aren't interested in tensor products, you can skip to the next section. Let S be a ring extension of R, and tensor S with itself as an R module. The result is characterized as two copies of S with R in common. If u and v are units in S, such that uy = 1 and vz = 1, then (u,v) times (y,z) = (1,1), and (u,v) is a unit in S×S. Conversely, assume (uy,vz) is equivalent to (1,1) in the tensor product. Let h ∈ R accomplish the equivalence. Thus uyh = 1, and vz/h = 1. Multiply these together and uyvz = 1. Thus u and v are units in S.
This generalizes to a finite tensor product of S with itself. A tuple is a unit in the tensor product iff each component is a unit in S.
The same result holds for S×T when S and T are integral over R. With uyh = 1, u is a unit in S. Since h is a unit in S, it is also a unit in R, and in T. Now vz = h, which makes v a unit in T.
This generalizes to a finite tensor product of integral extensions of R. To illustrate, consider the tensor product of six rings. After normalizing by the action of R, each product uiyi becomes 1, as i runs from 1 to 6. Assume the first three products are multiplied by h1, h2, and h3 to reach 1. The next three products are divided by h4, h5, and h6. Clearly u1, u2, and u3 are units in their respective rings. Also, h1, h2, and h3 are units, which makes them units in R. Since h1h2h3 = h4h5h6, each of the last three factors is also a unit in R. That makes u4y4 a unit in R, whence u4 is a unit in its ring, and so on for u5 and u6.
Given an algebraic element u over an integral domain R, some multiple of u is integral over R.
Let p(x) be a polynomial of degree n with coefficients in R, whose lead coefficient is not 1; hence p is not monic. Let u, our algebraic element, be a root of p(x). Pick some k in R, and build a new polynomial q(x) by multiplying the coefficient of x^i by k^(n-i). This multiplies the constant term by k^n, and leaves the lead coefficient unchanged. Verify that ku is a root of q(x): each term of q(ku) equals k^n times the corresponding term of p(u), so q(ku) = k^n * p(u) = 0.
Let k be the lead coefficient of p. Every coefficient of q is then divisible by k, so divide q(x) through by k to find a monic polynomial, with coefficients in R, having root ku. We need R to be an integral domain, so that ku does not become 0.
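A numerical illustration of this trick, again just a sketch: p(x) = 3x^2 - 2 has the non-integral root u = sqrt(2/3); multiplying coefficients by powers of the lead coefficient k = 3 and dividing through gives the monic x^2 - 6, whose root is 3u = sqrt(6).

```python
import math

p = [3, 0, -2]                             # 3x^2 - 2, lead coefficient first
k = p[0]
n = len(p) - 1
q = [a * k**i for i, a in enumerate(p)]    # [3, 0, -18], root 3u
monic = [a // k for a in q]                # [1, 0, -6], i.e. x^2 - 6
u = math.sqrt(2.0 / 3.0)
root = k * u                               # should be sqrt(6)
value = sum(a * root**(n - i) for i, a in enumerate(monic))
print(monic)                               # [1, 0, -6]
print(abs(value) < 1e-9)                   # True
```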
A similar result holds if p(x) is monic and its coefficients lie in the fraction ring of R by T, where T is a multiplicatively closed set in R containing no zero divisors. In this case R need not be an integral domain. Let k be the product of the denominators of the coefficients of p(x), and build q(x) as above. Now ku is nonzero, and is a root of the monic polynomial q(x), whose coefficients lie in R.
This is an important theorem in commutative algebra, but it's not very intuitive. It says that the sum or product of integral elements is integral. Now why should that be? It makes sense with algebraic elements. Add or multiply two algebraic elements together and you're still part of a finite extension, not too far off the floor. The result should still be accessible from some polynomial in the base ring. But if u and v are roots of monic polynomials, why should u+v be the root of a monic polynomial? Why not some other polynomial with a lead coefficient of 317? I can't provide an intuitive explanation; I can only offer the proof, and hope you can glean some intuition from that.
Let t be an element in the ring extension S/R. If t is integral then the extension R[t] is generated by the powers of t, from t^0 up to t^(n-1), where the monic polynomial of t has degree n. In other words, the ring R[t] is a finitely generated R module inside S.
If t were merely algebraic, satisfying say 2t^2 - 3 = 0, we would have to replace t^2, wherever it occurs, with 3/2, which is not part of the base ring. So we really need t to be integral over R.
Now for the converse. Let M be a finitely generated R module in S, where M contains R[t]. M could be R[t], as above, or it could be something larger. It could be all of S.
Furthermore, let multiplication by t map M into itself. This is true when M is R[t], or S, and perhaps several modules in between. Let the action of t be the same from either side, that is, t and M commute. This is the case when M = R[t], or when S is commutative.
Since R and t both act on M, M is an R[t] module, as well as an R module.
Since M contains R[t], which contains 1, the only element of R[t] that drives all of M into 0 is 0. Using terminology from the world of modules, the annihilator of M is 0.
M is finitely generated by assumption. Give M a set of generators b1, b2, …, bn. This is almost a basis, in that each member of M is a linear combination of these generators, using coefficients from R, but the representation may not be unique. Still, I use the letter B for this set, to remind us that it spans, like a basis. Since M is an R module, multiplication by any c in R multiplies the coefficients on the generators by c. However, multiplication by t replaces each bj with its image, which is a prescribed linear combination of the n generators. In other words, multiplication by t is described by an n×n matrix, where the jth row defines tbj. Since we've already used the letter M, I'll call this matrix Y.
To recap, you can represent an element from M as a vector v, where v holds the coefficients on the generators B. Then v*Y, using matrix multiplication, produces the coefficients of t*v. There may be other linear combinations of the generators in B that produce tv, but vY is definitely one of them.
Now subtract t from each entry on the main diagonal of Y. Y isn't a matrix over R any more; it has become a matrix over R[t]. And vY isn't tv any more, it is tv-vt, or 0. It doesn't matter what v is; run it through Y and get 0. You don't always get the zero vector, but you do get a vector over R[t] whose entries, when applied to the members of B, yield 0.
Let d be the determinant of Y, which is an element of R[t].
Build a matrix Z as follows. Start with the identity matrix, then replace the first column with the generators: Z(i,1) = bi. In other words, the first column contains our generators.
Consider the product Y*Z, using matrix multiplication. The first row of Y is dotted with the first column of Z. Ignore the -t in the upper left of Y; the rest of the first row of Y contains coefficients from R, which are applied to the generators in the first column of Z. This is, by definition, the image of b1 when multiplied by t. So without the -t, the upper left entry of the product is b1t.
Now bring in -t and subtract b1t, leaving 0. Thus the upper left entry in the product matrix is 0.
Multiply the second row of Y by the first column of Z, and obtain b2t-b2t, or 0. This continues all the way down the column. I don't know what the other columns look like; it doesn't matter. The first column is 0, hence the determinant of the product is 0. Therefore, det(Y)*det(Z) = 0. The determinant of Y is d, and the determinant of Z is b1. Thus db1 = 0.
Leave the first column of Z alone, and rearrange the ones in the remaining columns. This time arrange the ones so that the jth row is missing a one. Last time the first row was missing a one. Now the determinant of Z is ±bj. Here's an example of Z, with n = 3, having determinant b3.

	b1	1	0
	b2	0	1
	b3	0	0
Evaluate the product YZ, and once again the first column is zero. This means dbj = 0 for every j.
Remember that d is an element in R[t], and it maps every generator bj to 0. By linearity it sends the entire module M to 0. Yet only 0 satisfies 0*M = 0, therefore d = 0.
d is the determinant of Y, which is zero. Expand this determinant into a polynomial in t, with coefficients in R. Thanks to the -t entries running down the main diagonal, this polynomial is monic, up to a sign that can be negated away, and it is equal to zero. Therefore t is the root of a monic polynomial over R, and is integral.
You might think R needs to be an integral domain, so that the product of the determinants is the determinant of the product, but this result holds for all commutative rings, thanks to a rather technical proof. Thus there are no restrictions on R, other than the usual restrictions for this chapter, i.e. R is commutative and contains 1.
The corollaries are more important than the theorem itself. They require S to be commutative, so let's just say that rings are commutative for the rest of this chapter.
Let S be a ring extension of R, and a finitely generated R module. Every t in S belongs to a finitely generated R module inside S, namely S. Therefore every t is integral, and S is an integral extension of R.
Let u and v be integral elements in a ring extension S/R, and consider R[u,v]. Cross the powers of u with the powers of v to show this is finitely generated as an R module. Therefore R[u,v] is an integral extension. Use induction to generalize this to a finite set of integral elements.
If u and v are integral as above, then R[u,v] is an integral extension that contains u+v, u-v, and uv. These are all integral over R.
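The determinant argument can be watched in action. In this sketch (my own illustration, not from the text), R is Z, t is sqrt(2)+sqrt(3), and the module is spanned by 1, sqrt(2), sqrt(3), sqrt(6). The matrix Y records multiplication by t, one generator per row, and det(xI - Y), expanded here by the Faddeev-LeVerrier recurrence, is the monic polynomial x^4 - 10x^2 + 1 that t satisfies.

```python
from fractions import Fraction

def charpoly(A):
    """Coefficients of det(xI - A), monic, highest degree first,
    computed by the Faddeev-LeVerrier recurrence."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        AM = [[sum(A[i][l] * M[l][j] for l in range(n)) for j in range(n)]
              for i in range(n)]
        c = -sum(AM[i][i] for i in range(n)) / k
        coeffs.append(c)
        M = [[AM[i][j] + (c if i == j else 0) for j in range(n)]
             for i in range(n)]
    return coeffs

# t = sqrt(2) + sqrt(3) acting on the module spanned by
# 1, sqrt(2), sqrt(3), sqrt(6); each row gives t times a generator.
Y = [[0, 1, 1, 0],   # t*1       = sqrt(2) + sqrt(3)
     [2, 0, 0, 1],   # t*sqrt(2) = 2 + sqrt(6)
     [3, 0, 0, 1],   # t*sqrt(3) = 3 + sqrt(6)
     [0, 3, 2, 0]]   # t*sqrt(6) = 3*sqrt(2) + 2*sqrt(3)
print(charpoly(Y))   # [1, 0, -10, 0, 1], i.e. x^4 - 10x^2 + 1

t = 2**0.5 + 3**0.5
print(abs(t**4 - 10 * t**2 + 1) < 1e-9)   # True
```

So the sum of the two integral elements sqrt(2) and sqrt(3) is again the root of a monic polynomial over Z, exactly as the theorem promises.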
Let W be the set of integral elements in S/R. We just showed W is closed under addition and multiplication, hence W forms a ring. This is the largest integral extension of R inside S.
Let S be an integral extension of R, and let T be an integral extension of S. Let v be an element of T, and let p(x) be the monic polynomial that proves v is integral over S. Let a0, a1, a2, … an be the coefficients of p, taken from S. Each of these is integral over R. Adjoin these coefficients to R to find an integral extension, a finitely generated R module, which I will call G. Now G[v] is a finitely generated G module, which I will call H. Cross the generators of H over G with the generators of G over R to show H is a finitely generated R module. Thus v is integral over R, and since v was arbitrary, T is an integral extension of R. The composition of integral extensions is integral.
If W is the integral closure of R inside S, then W is itself integrally closed in S, for anything that is integral over W is also integral over R. The integral closure is integrally closed.
Let R be an integral domain, let F be the fraction field of R, let E be an algebraic extension of F, and let S be the integral extension of R inside E. Let B be a basis for E over F. Multiply each basis element by something in R, so that the basis elements are all integral, hence they all lie in S. The fraction field of S includes F, and B, hence the fraction field of S includes all of E. Since S is contained in E, the fractions of S are no larger than E. Therefore the fraction field of S is E.
Let E/F and S/R be as above. Find a basis B for E/F in S and represent the elements of S as unique linear combinations of basis elements with coefficients in F. Since B defines a free R module, tensor with F to find a free F module, i.e. an F vector space, of the same rank. The tensor product includes B, hence it is all of E. Using the infix notation of tensor product, S×F = E.
Let the ring S be integral over R. Let S be an integral domain, whence R is also an integral domain. Let F be the fraction field of R and let E be the fraction field of S. Assume S contains and is spanned by n generators b1 through bn using coefficients in F. Of course more than S is spanned, perhaps all of E. In other words, S embeds in S×F, which embeds in E.
Let x be a nonzero element of S, and build a matrix M that represents multiplication by x, relative to these generators. In other words, the first row of M contains the coefficients for b1 through bn that produce x*b1, and so on. The entries of M lie in F, and not necessarily in R. Let d be the determinant of M. If d is 0 then there is a nonzero vector y such that yM = 0. You might need to dip into the fraction field F to build y, but you can always multiply through by a common denominator, so that the entries of y lie in R. View y as an element of S; then yx = 0, and since S is an integral domain with x and y nonzero, this is a contradiction. Therefore d is nonzero.
Let W be the inverse of M; thus W implements division by x. Since 1 is in S, represent 1 using b1 through bn, then multiply this vector by W to find 1/x. Thus 1/x is represented as a member of S×F. Every such inverse lies in S×F, so E lies in S×F, and E = S×F. This is the same result we saw above; however, in this case S need not be the entire integral ring inside E. S could be something less.
The integral closure of R in an extension S is the set of elements of S that are integral over R. This is a ring by corollary 4 above.
The ring R is integrally closed in S if R contains all the elements in S that are integral over R. Use corollary 5 to show the integral closure of R in S is integrally closed in S.
If S is not specified and R is an integral domain, then S is assumed to be the fraction field of R. R is integrally closed if it is integrally closed in its fraction field.
A ring is normal if it is integrally closed and noetherian.
Every ufd is integrally closed. Let R be a ufd with fraction field F, and let u be the root of a monic polynomial p(x) with coefficients in R. Thus x-u is a factor of p(x) in the ring of polynomials with coefficients in F. Since p is monic, it is a primitive polynomial. Apply Gauss' lemma: a constant multiple of x-u, times another primitive polynomial over R, equals p(x). Since the right side is monic, the lead coefficients of the two factors on the left multiply to a unit, hence each is a unit. Pull the unit out of the first factor and you are back to x-u, with coefficients in R, whence u is in R after all. For example, the integers are integrally closed in the rationals.
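Here is the special case R = Z in executable form, a sketch using the rational root theorem: a rational root p/q (in lowest terms) of a monic integer polynomial must have q dividing the lead coefficient 1, so the only candidates are the integer divisors of the constant term.

```python
def rational_roots(coeffs):
    # coeffs are integer coefficients of a monic polynomial, lead first.
    # Any rational root must be an integer dividing the constant term.
    assert coeffs[0] == 1 and coeffs[-1] != 0
    a0 = coeffs[-1]
    n = len(coeffs) - 1
    candidates = [d for d in range(-abs(a0), abs(a0) + 1)
                  if d != 0 and a0 % d == 0]
    return [x for x in candidates
            if sum(a * x**(n - i) for i, a in enumerate(coeffs)) == 0]

print(rational_roots([1, 0, -1, -6]))   # [2]: x^3 - x - 6 = 0 at x = 2
print(rational_roots([1, 0, -2]))       # []: sqrt(2) is not rational
```

Any rational number that is integral over Z shows up in such a list, and the list contains only integers, which is the statement that Z is integrally closed in Q.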
If two extensions of R are integrally closed, their join need not be. Adjoin the square root of -1, or the square root of 2, to the integers. Either extension alone is a ufd, and integrally closed. I proved this for Z[i], and I'll prove the same for Z[sqrt(2)] later on. Combine the two extensions, giving a ring with basis 1, i, sqrt(2), and sqrt(2)i. This is a composition of integral extensions, and is integral over Z, but is it integrally closed within its fraction field? Are we missing anything? Let w be the eighth root of 1, namely (sqrt(2) + sqrt(2)i)/2. Since w satisfies the monic polynomial x^8 - 1, w is integral over Z. w is also in the fraction field. Suppose w is in our ring, so that w is spanned by the basis, using integer coefficients. Separate w into real and imaginary components. In both components, 1 and sqrt(2) must span sqrt(½). Write a + b*sqrt(2) = sqrt(½), and double it, so that 2a equals an odd multiple of sqrt(2). This makes sqrt(2) rational, which is impossible.
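A floating point check of this counterexample, a sketch alongside the exact argument above: w = (sqrt(2) + sqrt(2)i)/2 satisfies x^8 - 1, and indeed the monic x^4 + 1, so it is integral over Z.

```python
w = (2**0.5 + 2**0.5 * 1j) / 2    # the primitive eighth root of 1
print(abs(w**8 - 1) < 1e-9)       # True: w is a root of x^8 - 1
print(abs(w**4 + 1) < 1e-9)       # True: in fact w satisfies x^4 + 1
```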
If S/R is a ring extension and S-R is multiplicatively closed, then R is integrally closed in S.
Suppose not, and let x in S-R be integral over R, with monic polynomial p of minimum degree, so that p(x) = 0. Write this as x * q(x) = -a0, where a0 is the constant term of p, and q is monic of smaller degree. Since x is in S-R, and S-R is multiplicatively closed, q(x) cannot lie in S-R; thus q(x) = c for some c in R. Then q minus c is a smaller monic polynomial that takes x to 0, which is a contradiction.
The converse of this theorem fails, as shown by Z inside Q. Since Z is a ufd, it is integrally closed in its fraction field, yet Q-Z is not multiplicatively closed: 2/3 times 3/2 is an element of Z.
Let R be an integral domain that is integrally closed in its fraction field F, and let T be a multiplicatively closed set in R. Is the fraction ring R/T integrally closed with respect to F?
Let x be a fraction that is integral over the ring R/T. Thus x^n is equal to a linear combination of lower powers of x. Look at the coefficients on these lower powers of x, and let d be the product of their denominators. Then consider (dx)^n. This multiplies our equation by d^n. The result expresses (dx)^n as a linear combination of lower powers of dx, with coefficients in R. (We need an integral domain here, to make sure the equation doesn't drop to 0 = 0.) Therefore dx is integral over R. Yet R is integrally closed, so dx lies in R. Since d is the product of elements of T, dx/d lies in R/T. Thus x is in R/T after all, and R/T is integrally closed.
Recall that Z, the ring of integers, is integrally closed. Applying the above, Z localized about any prime p is also integrally closed.
Let S be an integral extension of R. Let H be an ideal in S with G = H∩R. Verify that G is an ideal in R. We're going to show S/H is integral over R/G, but first, some thoughts on the / operator. I'm sure you've noticed - / is heavily overloaded. Even a C++ programmer could get confused. / can mean traditional division, division in a fraction ring or fraction module, a fraction ring by a set of denominators, field extension, ring extension, quotient group, quotient module, quotient ring, or quotient space, to name a few. Sorry for the confusion.
Now where were we? The rings S and R contain the ideals H and G respectively, and the quotient ring S/H is an integral extension of the quotient ring R/G. Verify the following steps.
If x and y are in the same coset of G in R, they are in the same coset of H in S.
If x and y are in different cosets of G in R, they are in different cosets of H in S. Turn this around to prove it; assume x-y = z, where z is in H. Yet z must also be an element of R, so z lies in G, and x and y are in the same coset of G in R.
Mapping x to x carries cosets of G into cosets of H, and the map is an embedding.
One maps to one, and R/G is a subring of S/H.
Select any u in S and let p be the monic polynomial that proves u is integral. In other words, p(u) = 0. Apply the quotient map from S onto S/H to u and to the coefficients of p, and the image of u in S/H is integral over R/G.
Let S/R be an integral extension as above, and let T be a multiplicatively closed set in R. Show that the fraction ring R/T embeds in the ring S/T. There are no new denominators, so distinct classes in R/T are not going to merge in S/T.
Since both rings contain T/T, or 1, S/T is a ring extension of R/T. Show S/T is an integral extension of R/T. If the numerator of a fraction is x, write p(x) = 0, where p is monic, with coefficients from R. The obvious ring homomorphism carries S into S/T. Here x is mapped to x/1, or xw/w if T does not contain 1. In any case, we can apply the homomorphism to x, and to the coefficients of p(x). Thus x/1 ∈ S/T is integral over R/T. Remember that an integral element can be multiplied by anything in the base ring and the result remains integral. Multiply by 1/j ∈ R/T, and x/j is integral over R/T. Thus S/T is an integral extension of R/T. This does not hold if T strays outside of R; an example will be given later.
If S/R is an integral extension, and S is an integral domain, then S is a field iff R is a field.
Assume R is a field. Given x in S-R, write p(x) = 0, where p is monic. Suppose p is the product of two smaller polynomials. When evaluated at x, the result is 0, and since S is an integral domain, x is a root of one of the two smaller polynomials. Therefore we can assume p is irreducible. This represents a field extension of R, hence x is invertible.
Conversely, assume S is a field. Let x be a nonzero element of R and let y be its inverse in S. Write p(y) = 0 where p is monic of degree n. Multiply through by x^(n-1); as before, y collapses to an expression with coefficients in R, and y lies in R after all.
Let S/R be an integral extension, let H be a proper ideal of S, and let G be H ∩ R. We showed earlier that S/H is integral over R/G.
Assume H is a prime ideal. This means xy in H implies x is in H or y is in H. This certainly holds when we restrict to R, so G is prime in R. The quotient rings S/H and R/G are integral domains, and by the previous theorem, one is a field iff the other is a field. Thus H is maximal in S iff G is maximal in R.
Now assume G is prime in R. We want to lift this to a prime ideal H lying over G. Let T be the set R-G, which is multiplicatively closed. We showed above that S/T is an integral extension of R/T. Select a maximal ideal in S/T and let H be the numerators of this ideal. By correspondence, H is a prime ideal in S, and H/T gives the same maximal ideal back again.
Let C = H ∩ R. If a fraction in R/T and another fraction in H/T represent the same class, establish a common denominator, so that the first numerator a is still in R, and the second numerator b is still in H. To equate the fractions, write d(a-b) = 0 for some d in T. Since 0 lies in H, a prime ideal, and d does not lie in H, a-b lies in H, and so does a. This numerator is in both R and H, hence in C. Conversely, every fraction in C/T is in H/T and in R/T. Therefore C/T = H/T ∩ R/T.
Since H/T is prime, C/T is prime. And since H/T is maximal, C/T is maximal. However, G/T is the largest ideal in R/T; this is because RG is a local ring. The saturation of C has to be G. Since H is saturated, and contains C, H contains the saturation of C, which is G. Thus H contains G in R, and since H misses T, nothing more than G. H is a prime ideal lying over G.
Remember that H is maximal iff G is maximal. There are primes over primes, and maximals over maximals.
Intersection with R implements a map from prime ideals in S onto prime ideals in R. For a fixed element x in R, a prime G in R misses x iff each prime H in S lying over G misses x. Base open sets pull back to base open sets, and contraction to R implements a continuous function from spec S onto spec R.
The preimage of G is called the fiber of G, sometimes spelled fibre. This is simply the set of prime ideals in S that intersect R in G. There is at least one, and there may be more than one. Let R be the integers, and let S be the gaussian integers, i.e. R adjoin i, where i is the square root of -1. Review the primes of S. 5 is a prime element in R, and generates a prime ideal in R. There are two primes lying over 5, generated by 2+i and 2-i. These are separate prime ideals in the fiber of 5. However, the fiber of 7 is just 7, because 7 is an inert prime.
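The split/inert behavior can be checked by hand. An odd prime p splits in the Gaussian integers exactly when it is a sum of two squares (equivalently, p = 1 mod 4), since a^2 + b^2 = (a+bi)(a-bi). The little search below is my own illustration; note it finds 5 = 1^2 + 2^2, and 1+2i is an associate of 2-i, namely i times 2-i.

```python
def two_squares(p):
    # find (a, b) with a*a + b*b == p, if any; then p = (a+bi)(a-bi)
    # splits in the Gaussian integers
    for a in range(1, int(p**0.5) + 1):
        b2 = p - a * a
        b = round(b2**0.5)
        if b > 0 and b * b == b2:
            return (a, b)
    return None

print(two_squares(5))   # (1, 2): 5 = (1+2i)(1-2i)
print(two_squares(7))   # None: 7 remains inert
```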
One prime ideal in the fiber of P cannot contain another. Suppose there are two prime ideals Q1 and Q2, lying over P, such that Q1 contains Q2. Localize about P, so that S/T is integral over R/T, where T = R-P. By prime correspondence, the prime ideals Q1 and Q2 become distinct prime ideals in S/T, with one containing the other. As shown above, Q1/T intersect R/T = P/T, and similarly for Q2/T. Both ideals lie over the maximal ideal P/T in R/T, hence both ideals are maximal in S/T. One cannot contain the other, and therefore primes over P cannot contain one another.
An ascending or descending chain of prime ideals in S contracts to an ascending or descending chain of prime ideals in R. If two consecutive primes in the chain contracted to the same prime P in R, then P would have two primes Q1 and Q2 in S lying over it, with one containing the other, which is impossible.
What about chains that go beyond infinity? If the union of an ascending chain of prime ideals in S is prime, or is contained in a larger prime ideal, this prime ideal in S contracts to a prime ideal in R that is the union of, or contains the union of, the foregoing prime ideals in R. It is properly larger than each one in the R chain. A similar result holds for the intersection of a descending chain of prime ideals. Use transfinite induction to show that ascending or descending chains of prime ideals (beyond infinity) in S, contract to same in R.
This does not imply chains in R lift to chains in S. Yes, P1 might contain P2 in R, and there exists Q1 and Q2 lying over P1 and P2, but that doesn't mean Q1 contains Q2.
If S is a finitely generated R module, then the fiber of P is finite. Localize about P, so that RP becomes a local ring. The primes in S, lying over P, persist in the ring S/T, and remain distinct. Furthermore, they lie over a maximal ideal in the local ring, namely PP, hence they are maximal. A finitely generated module over a local ring is semilocal, and has finitely many maximal ideals. Hence the fiber of P is finite.
If S is integral over R, the Jacobson radical of R is the intersection of R and the Jacobson radical of S.
Let x lie in jac(R), and let M be a maximal ideal in S. M ∩ R is maximal in R, and contains x, hence M contains x. Every maximal ideal of S contains x, hence jac(S) contains jac(R).
Conversely, let x lie in jac(S) and in R. Now 1-xy is a unit for each y in S, and thus for each y in R. A unit of S that lies in R is a unit of R. This places x in jac(R).
Let S/R be an integral ring extension. We saw earlier that S/T is integral over R/T if T is a multiplicative set drawn from R. However, if T is drawn from S, this may not be the case.
Let S be an integral domain, and let the prime Q in S lie over the prime P in R. If T = S - Q, then localization produces RP and SQ. If a/b is in RP, and becomes 0 in SQ, then some denominator kills a, which can't happen in an integral domain, hence RP is a subring of SQ. Here is an example where SQ is not integral over RP.
Let K be a field and let S equal K[u], where u is an indeterminate. In other words, S is the ring of polynomials in u with coefficients in K. Let R equal K adjoin u^2-1. Successive powers of u^2-1 have ever higher degrees, and form a basis for R as a K vector space. Thus R comprises the polynomials whose terms all have even degree.
S is a free R module, generated by 1 and u. Being finitely generated, S is integral over R. Specifically, u satisfies x^2 - 1 = u^2 - 1, a monic equation with coefficients in R.
Let u-1 generate Q in S. Since S is a pid, and u-1 is irreducible, Q is prime. In fact Q is maximal, with a quotient field isomorphic to K.
Intersect with R, and P is the set of polynomials in K[u2-1] that are divisible by u-1, or those polynomials having 0 constant term. The quotient R/P is the set of constants, which is K. Thus P is also a maximal ideal, as it should be.
Let v = 1/(u+1) in SQ. Suppose v is integral over RP, with monic polynomial f(v) = 0. Multiply through by (u+1)^n, and by the product d of the denominators of the coefficients of f. The result is an equation in K[u]. Remember that u+1 is irreducible, hence prime; it divides every term beyond the leading term, hence it also divides the leading term, which is d. Remember how d was built. Each coefficient of f exhibits a denominator in R-P, that is, a polynomial with a nonzero constant term, evaluated at u^2-1. Such an expression is not divisible by u+1, since u^2-1 vanishes at u = -1. The product d of these denominators is not divisible by u+1. This is a contradiction, hence v is not integral, and SQ/RP is not an integral extension.
When R is an integral domain, integrally closed is a local property. First we need to show that fractions and integral closure commute.
Let S be a ring extension of R, where either R or S could have zero divisors, and let C be the integral closure of R in S. Let T be a multiplicatively closed set in R. If T does not contain 1, toss 1 in. This doesn't change the ring S/T, or its subrings; it is merely a convenience. We can represent x by x/1, rather than xw/w for some w in T.
In an earlier section we showed that C/T is integral over R/T. We only need show that C/T includes all the elements of S/T that are integral over R/T. Let x/y be a fraction in S/T that is integral over R/T. If x/y is 0 then we are done, so assume nothing in T kills x. Write p(x/y) = 0, where p is monic, and has coefficients in R/T. Let d be a common denominator for the coefficients of p, and multiply through by (dy)^n. This builds a monic polynomial, of the same degree, with root dx, and coefficients in R. Thus dx lies in C. Remember that d is in T, and so is y, hence x/y is in C/T. Take the integral closure, then the fraction ring, or vice versa; the result is the same.
Now let R and S be integral domains, with S an extension of R. Let T be a multiplicatively closed set in R. If R is integrally closed in S then take the integral closure, which is R, then the fractions by T giving R/T. This is the same as the integral closure of R/T in S/T. Thus R/T is integrally closed in S/T.
As a special case, set S to the fraction field of R, whence R integrally closed implies RP is integrally closed for each prime ideal P.
Now for the converse. Let C be the integral closure of R in S, and assume every localization RP is integrally closed in SP. You can jump up to C, then localize, or localize and then take the integral closure. The result is the same. Thus CP = RP for each prime ideal P.
Let x be an element of C. Select any prime ideal P, and x embeds in CP via x/1. (This is where we need S to be an integral domain.) Thus x is in every CP, and in every RP. This places x in R. Therefore R is integrally closed.
Being integrally closed with respect to an integral domain S, or with respect to the fraction field of R, is a local property. Once again it is sufficient to localize about maximal ideals, rather than all prime ideals.
Don't assume that an integral extension of an integrally closed ring remains integrally closed. For example, start with Z, which is a ufd, and integrally closed. Adjoin q, the square root of -3. Clearly Z[q] is an integral extension of Z. Let w = (q+1)/2. Note that w is not in Z[q]; in fact it is the center of the base cell of the lattice. Yet w is in the fraction field of Z[q], and w^3 = -1, hence w is integral over Z[q]. Therefore Z[q] is not integrally closed.
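A quick symbolic check of this example (sympy; the verification is mine, the example is from the text): w = (q+1)/2 satisfies w^3 = -1, and also the monic quadratic w^2 - w + 1 with integer coefficients, so w is integral over Z, let alone Z[q].

```python
from sympy import I, sqrt, expand

q = sqrt(3) * I            # q = sqrt(-3), so q**2 == -3
w = (q + 1) / 2            # the center of the lattice's base cell

# w is a primitive sixth root of unity: w^3 = -1, so w is a root of
# the monic polynomial t^3 + 1 over Z.
assert expand(w**3 + 1) == 0

# w also satisfies the monic quadratic t^2 - t + 1, with coefficients
# in Z, which directly exhibits w as integral over Z.
assert expand(w**2 - w + 1) == 0
```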
The ring homomorphism h(R) into S is integral if S is integral over h(R). This follows the precedent of a flat homomorphism (mapping R into a flat ring S), a faithful homomorphism, etc.
Set h to the identity map, and an integral extension S/R becomes an integral homomorphism.
h induces a continuous map from spec S into spec R. When h is integral, this map becomes bicontinuous.
First assume h embeds R into S, and let vF be closed in spec S. Let E be the intersection of F and R. If every prime P in R containing E comes from at least one prime Q in S containing F, then the closed set vF maps onto the closed set vE, and the function becomes bicontinuous. Let's see if we can lift P up to Q containing F.
Review the procedure for lifting P up to Q. We localized about P, then selected any maximal ideal in SP. But we could start with any proper ideal, such as FP, and raise this up to a maximal ideal. This pulls back to a prime ideal Q in S, containing F, and lying over P. Each P is covered, vF maps to vE, and the function is bicontinuous.
Next let h map R into S. This is a composition of two maps, from R onto h(R), and the embedding of h(R) into S. The latter induces a bicontinuous function from spec S into spec h(R), as shown above. Since prime ideals correspond under a ring homomorphism, h also induces a bicontinuous function. Combine these, and h(R) into S induces a bicontinuous function from spec S into spec R.
Let S, T, and M be ring extensions of R, and let h(S) be an integral homomorphism into T. Then tensor S, T, and h with M. S×M becomes a ring, and an R algebra, such that multiplication in this ring is performed per component. Recall that the induced homomorphism from S×M into T×M maps (x,y) to (h(x),y). The image of (x1x2,y1y2) is (h(x1x2),y1y2), or (h(x1)h(x2),y1y2), or (h(x1),y1) times (h(x2),y2). The induced map is an R module homomorphism, an S module homomorphism, an M module homomorphism, and a ring homomorphism. We want to show it is an integral homomorphism.
Let (x,y) be a pair generator in T×M, and let p(x) be the monic polynomial that proves x is integral over S. The lead coefficient is 1; pair this with 1 to get 1 cross 1, the identity element in the ring S×M. Pair the next coefficient with y, the next one with y^2, the next one with y^3, and so on. These are all elements of S×M. Evaluate this polynomial at (x,y), and pull out a common factor of y^n, where n is the degree of p. This gives p(x) cross y^n, and since the former is 0 in T, the expression is 0 in T×M. Therefore (x,y) is integral, and the induced ring homomorphism from S×M into T×M is integral.
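In standard tensor notation, writing x ⊗ y for the pair (x,y), the computation runs as follows (a sketch of the same argument):

```latex
% p(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_0 = 0 in T; pair a_{n-i} with y^i.
(x \otimes y)^n + (a_{n-1} \otimes y)\,(x \otimes y)^{n-1} + \cdots + a_0 \otimes y^n
  \;=\; \Bigl(\sum_{i=0}^{n} a_i x^i\Bigr) \otimes y^n
  \;=\; p(x) \otimes y^n \;=\; 0 .
```

Every term carries the same factor y^n in the second slot, which is what lets bilinearity collect the terms into p(x) ⊗ y^n.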
If M is a fraction ring of R, tensoring with M implements a form of localization. Thus localization is a special case of this theorem. If P is a prime ideal of R, h becomes an integral ring homomorphism from SP into TP.
Let S and T be integral R algebras. This means there are functions f(R) onto U, in the center of S, and g(R) onto V, in the center of T, such that S is integral over U and T is integral over V. S×T is an integral R algebra.
First, S×T is an R algebra, with multiplication performed per component on pairs, then extended to sums of pairs. Let K be the tensor product of U and V, wherein S×T becomes a ring extension of K, using the generators of S and T and the relations of S and T, as rings over U and V. An earlier section gives a complete characterization of S×T.
Let (x,y) be a pair generator in S×T. Let p be a monic polynomial with coefficients in U and root x, and let q be a monic polynomial with coefficients in V and root y.
K is a quotient ring of R, just as U and V are quotient rings of R. S×T is a K algebra, and an R algebra by mapping R onto K. We only need show (x,y) is integral over K, whence S×T becomes an integral R algebra.
Pair each coefficient of p with 1 to build a monic polynomial with coefficients in K. Evaluate this polynomial at (x,1); every term carries 1 in the second component, so the terms collect into p(x) cross 1, which is 0 cross 1, or 0. Thus (x,1) is integral over K. Similarly, pair 1 with each coefficient of q, and (1,y) is integral over K.
Since multiplication takes place per component, (x,1) times (1,y) is (x,y). Also, (x,1) and (1,y) commute with each other, and with everything in K. Adjoin (x,1) and (1,y) to K. Since both elements are integral, and they commute, the resulting ring is spanned, as a K module, by the products (x,1)^i times (1,y)^j, with i below the degree of p and j below the degree of q. As shown in an earlier section, every element of such a finitely generated K module is integral over K. In particular (x,y), the product of (x,1) and (1,y), is integral over K, and the tensor product of integral algebras is integral.
Let S/R be an integral extension. In an earlier section we showed that an ascending or descending chain of prime ideals in S contracts to a corresponding chain in R. In this section a chain in R lifts up to a chain in S. The lift is not unique, but for any such lift, the chain in S contracts back to the original chain in R. As a corollary, the dimension of R, determined by its longest chain, equals the dimension of S.
If the chain in R starts with P1, find Q1 in S lying over P1. This starts the inductive process. From here we will go up, or down, for ascending or descending chains respectively.
Assume the chain is ascending, and move to P2 containing P1. Remember that Q1 is an ideal in S, with Q1 ∩ R = P1. In an earlier section we proved there is a prime ideal Q2 lying over P2, that contains Q1. By induction, the entire countable chain lifts.
If the chain goes beyond infinity, use transfinite induction. Let U be the union of an ascending chain of prime ideals in R, and let V be the union of the overlying prime ideals in S. Of course V need not be prime; but assume U is, or U is contained in a larger prime. Since V does not contain 1 it is a proper ideal. Again, employing the earlier theorem, there is a prime ideal in S, lying over U, and containing V. Therefore, arbitrary chains in R lift to arbitrary chains in S.
Descending chains do not lift so easily. S must be an integral domain, with R integrally closed in its fraction field K. First a couple of lemmas.
Lemma 1: Integral Closure Equals Radical Ideal
Let C be the integral closure of R in S. If H is an ideal in R, let H′ be the extension of H into C. The integral closure of H in S equals rad(H′) in C. We don't usually talk about the integral closure of an ideal; that notion is a special case, defined just for this theorem.
First assume x is in the integral closure of H. This means there is a polynomial p with p(x) = 0, lead coefficient 1, and all other coefficients in H. Since x is in S, and is integral over R, x lies in C. Move x^n to one side, and what remains is an expression in x and various elements of H, which is an element of H′. Thus x^n is in H′, and x is in rad(H′).
Conversely, let x^n be a finite sum of pairwise products from H and C: c1h1 + c2h2 + c3h3, etc. Let the elements c1, c2, c3, etc. act as generators for a new ring over R, somewhere between R and C. Call this ring V. Since V is R adjoin finitely many integral elements, V is a finitely generated R module.
Let w = x^n. Thus w is in HV. Multiply h1c1 by any z in V, and find h1*(c1z), which is in HV. Thus w drives V into HV.
Since multiplication by w is an R module endomorphism from V into V, and since V is a finitely generated R module, the action of w can be represented by a matrix M. The representation may not be unique, but there is a matrix M which designates the image of each generator. The ith row determines w times the ith generator of V as an R module.
Let V be generated by g1 g2 g3 … gn, as an R module. Everything in V is a linear combination of these generators, using coefficients from R. If instead we use coefficients from H, the result lies in HV. Conversely, consider any element in HV. This is a sum of products xy for x in H and y in V. Represent y as a linear combination of our generators with coefficients in R. Multiply by x, and the coefficients lie in H. Add this up over all pairs xy and the coefficients still lie in H. Therefore, the R module HV is the span of the generators of V, with coefficients from H.
Since w maps V into HV, the matrix M, based on the generators g1 through gn, consists entirely of elements of H.
Let p be the characteristic polynomial of M, i.e. the polynomial whose roots are the eigenvalues of M. By Cayley-Hamilton, p(M) = 0.
The matrix M^i represents multiplication by w^i. Put the polynomial together, and anything in V times p(w) is 0. However, V contains 1. Therefore p(w) = 0. Replace w with x^n, and p(x^n) = 0 becomes a monic polynomial with root x. Since the entries of M come from H, all coefficients (other than the lead 1) lie in H. Therefore x is integral over H.
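The determinant trick can be seen in miniature with sympy (my own toy example, with H the multiples of 3 in Z): a matrix whose entries lie in H has a characteristic polynomial whose nonlead coefficients lie in H, and the matrix satisfies that polynomial.

```python
from sympy import Matrix, symbols, zeros

t = symbols('t')

# Toy example (mine, not from the text): H = (3) in R = Z,
# and every entry of M lies in H.
M = Matrix([[3, 6],
            [9, 12]])

# Characteristic polynomial: t^2 - 15*t - 18.  The nonlead
# coefficients are sums of products of entries of M, so they lie in H.
p = M.charpoly(t)
coeffs = p.all_coeffs()
assert coeffs == [1, -15, -18]
assert all(c % 3 == 0 for c in coeffs[1:])

# Cayley-Hamilton: p(M) is the zero matrix.
result = zeros(2, 2)
for i, c in enumerate(coeffs):
    result += c * M**(len(coeffs) - 1 - i)
assert result == zeros(2, 2)
```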
As a corollary, the integral closure of H in S is an ideal in C, namely rad(H′).
What happens if H = R? The integral closure of R is C, and on the other side, the extension of R into C is all of C, and rad(C) within C = C. Yes indeed, C = C, and the lemma holds true.
As an application of this lemma, let S = K, the fraction field of R. Since R is integrally closed in K, C = R. The extension of H into R is still H. Thus rad(H) is the integral closure of H. For instance, R could be the integers, and H the multiples of 9. The only prime ideal containing H is the multiples of 3, hence the integral closure of H is the multiples of 3. It's easy to see where 3 comes into the picture: x^2 - 9 = 0.
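In Z the radical is easy to compute, so the example can be checked mechanically (sympy; `radical_generator` is a helper of my own, not from the text):

```python
from sympy import primefactors, prod

def radical_generator(n):
    """Generator of rad((n)) in Z: the product of the distinct
    primes dividing n."""
    return prod(primefactors(n))

assert radical_generator(9) == 3      # rad((9)) = (3)
assert radical_generator(12) == 6     # rad((12)) = (6)

# 3 is integral over H = (9): it satisfies x^2 - 9 = 0, a monic
# polynomial whose only other coefficient, -9, lies in H.
assert 3**2 - 9 == 0
```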
Lemma 2: Keeping the Coefficients of the Minimum Polynomial inside H
Here is a second lemma, which uses the first. Let R have fraction field K, with R integrally closed in K. Let H be an ideal in R, and let x be integral over H. Thus x is the root of a monic polynomial e(x), whose remaining coefficients lie in H.
Since x is algebraic over K, it satisfies an irreducible polynomial f(x) over K, and f(x) is a factor of e(x). Let the field extension L/K split f(x).
L is a ring extension of R. L is going to play the role of S/R in the previous lemma. Let C be the integral closure of R in L, which contains x. The first lemma says the integral closure of H forms an ideal in C. We'll need this in just a moment.
Each root of f(x) is a root of e(x), and is integral over H. Thus C contains all the conjugates of x. Show that the coefficients of f(x) are integral over H. The constant term, for instance, is ± the product of the roots of f, each root belonging to L and integral over H, and since the integral closure of H forms an ideal in C, the constant term is integral over H. The other coefficients of f are expressions in the roots of f, thus all the coefficients of f (save the lead coefficient) are integral over H. But remember, these coefficients also lie in K. Since R is integrally closed in K, these coefficients lie in R. Each coefficient of f lies in R, and is integral over H.
Invoke the first lemma again, this time with K/R as the ring extension. The integral closure of R is R, and as shown above, the integral closure of H is rad(H). Therefore the coefficients of f(x) lie in rad(H), inside R. If H happens to be prime, its own radical ideal, the coefficients of f lie in H. If H = R, the coefficients of f lie in R.
Assume H is prime, or all of R. Since f is monic, with its nonlead coefficients in H, f can be used to prove x is integral over H. We don't need e(x) any more, f will serve as the minimum polynomial for x.
Now return to the case where S is an integral domain, and an integral extension of R, and R is integrally closed in its fraction field K. Thus S ∩ K = R.
Let Q1 lie over P1, with P2 properly contained in P1. Let J be the extension of P2 into S. Since Q1 contains the extension of P1, and P1 contains P2, Q1 contains J.
Remember that there is some prime in S lying over P2, and this prime ideal contains J. Therefore J ∩ R = P2, and nothing more.
Map S, and J, into the ring of fractions with denominators in S-Q1. This is traditional localization about Q1. Then bring in the denominators in the multiplicatively closed set R-P2. Remember that denominators can be combined; call the resulting set of denominators T. J misses S-Q1, and J misses R-P2. Assume for the moment that J misses their product T. Since S is an integral domain, J/T remains a proper ideal in S/T. Drive J/T up to a maximal ideal, which is prime. This pulls back to a prime ideal Q2 in S. Since Q2 includes J, and misses R-P2, it intersects R in precisely P2. In other words, Q2 lies over P2. Since Q2 misses S-Q1, it is contained in Q1. We have lifted P2, and by induction we can lift a countable descending chain. However, we have yet to prove that J misses T. That's where the lemmas come in.
Let T1 = S-Q1, and let T2 = R-P2, so that T is the set of products t1t2, with t1 in T1 and t2 in T2. Since J misses T1, J/T1 is proper in S/T1. J/T1 cannot contain a fraction with numerator in R-P1, for that numerator is also a denominator in T1, and would pull 1 into J/T1. If we can also show that J/T1 contains no fraction with a numerator in P1-P2, then J/T1 is disjoint from T2, and remains proper in S/T1/T2, which is the same as S/T. So the goal is to show that nothing in J/T1 is equivalent to a fraction having a numerator from P1-P2.
Let y/z be a fraction in J/T1. Apply the first lemma to S/R. The integral closure of R in S is all of S. Thus the integral closure of P2 is equal to rad(J). Since y is in J it is in rad(J), and y is integral over P2.
Let y satisfy a monic irreducible polynomial f(y) over K. By the second lemma, f has coefficients in P2, and is the minimum polynomial for y.
Suppose y/z = x/1 for some x in R-P2. Write z = y/x in the fraction field of S.
Start with f(y) and leave the lead coefficient alone. Divide the next coefficient by x, the next one by x^2, and so on to the constant, which is divided by x^n. This new polynomial f′ has y/x as a root. Thus z is the root of a monic polynomial f′ over K.
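Explicitly, if f(y) = y^n + c_{n-1}y^{n-1} + … + c_0, the new polynomial is:

```latex
f'(t) = t^n + \frac{c_{n-1}}{x}\,t^{n-1} + \frac{c_{n-2}}{x^2}\,t^{n-2}
        + \cdots + \frac{c_0}{x^n},
\qquad
f'\!\Bigl(\frac{y}{x}\Bigr) = \frac{f(y)}{x^n} = 0 .
```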
Adjoining z is the same as adjoining y; they both give the same field extension over K. Thus f′ is irreducible.
Now z is still a member of S, and is integral over R. Apply the second lemma, and z is the root of an irreducible monic polynomial dividing f′, (which must equal f′), and the coefficients of f′ lie in R.
Each coefficient of f is now some power of x times its counterpart in f′, and the coefficients of f lie in P2. Since x does not lie in P2, the coefficients of f′ must. Thus z is integral over P2, and z is in rad(J). That puts z inside Q1, yet z is drawn from S-Q1. Therefore y/z cannot equal x/1, and J/T1 does not have any fractions equivalent to a fraction with a numerator from T2. That completes the proof.
What if the chain descends beyond infinity? Let the intersection of a descending chain of prime ideals in R be prime, or contain a prime. Call this prime P, and extend it into an ideal J in S. The intersection of J and R is still P. Let T1 be S minus the intersection of Qi for all the primes that have been lifted heretofore. This is a multiplicatively closed set that misses J. Let T2 be R-P, and build T as above. J/T1 does not bring in any numerators outside of P; the proof is as above. Map J/T1 to a proper ideal J/T in S/T, raise to a maximal ideal, then pull back to Q in S, lying over P, and inside all the primes of S that have come before. Transfinite induction lifts any descending chain from R up to S.
If C is the integral closure of R in S, then the same is true of the corresponding polynomial rings; C[x] is the integral closure of R[x] in S[x]. But first we need a lemma.
Lemma: The Factors Live in C[x]
Assume R and S are integral domains. If f(x) and g(x) are monic polynomials in S[x], and f*g lies in C[x], then f and g both lie in C[x].
Embed S in its fraction field, and let E be a field extension that splits f and g. Every root of f*g is integral over C, hence contained in the integral closure of C in E. Each coefficient of f (or g) is an expression in the roots, and lies in the integral closure of C in E. Yet these coefficients also lie in S, and since C is integrally closed in S, they lie in C. Both f and g come from C[x].
This can be generalized to arbitrary commutative rings R and S. If every root of f*g is integral over C, the above reasoning holds. Adjoin each root to S as needed, to build a ring extension E. (I'll illustrate this below.) The coefficients of f (or g) are integral over C, and in S, hence in C. We only need show each root of f*g is integral over C.
Perhaps f*g already has some roots in S. If z is such a root, it is integral over C, and contained in S, and since C is integrally closed in S, z is in C. Divide f*g by x-z, using synthetic division, and all the coefficients are still in C. Repeat this for all the roots that happen to lie in S, and call the quotient w(x).
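Synthetic division uses only addition and multiplication, never division, which is why the quotient's coefficients stay inside C. A minimal sketch (my own helper, with integer coefficients for the demonstration):

```python
def synthetic_division(coeffs, z):
    """Divide the polynomial with the given coefficients (highest degree
    first) by (x - z).  Returns (quotient coefficients, remainder).
    Only + and * are used, so the results stay in the coefficient ring."""
    quotient = [coeffs[0]]
    for c in coeffs[1:]:
        quotient.append(c + z * quotient[-1])
    return quotient[:-1], quotient[-1]

# (x^3 - 6x^2 + 11x - 6) / (x - 1) = x^2 - 5x + 6, remainder 0;
# all coefficients remain integers.
q, r = synthetic_division([1, -6, 11, -6], 1)
assert q == [1, -5, 6]
assert r == 0
```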
If w = 1 we are done, so let's assume w is at least quadratic. Since w has no roots in S, I'm going to adjoin one, building a ring extension, the first in a tower of extensions that will eventually become E. Call this root t, satisfying w(t) = 0. Because w is monic, this ring, call it S2, consists of polynomials in t, with coefficients in S, up to (but not including) the degree of w. The coefficients of w lie in C, but think of w as a polynomial with coefficients in S2. Now w has at least one root in S2, namely t. This root t, and any other roots of f*g that are in S2, are integral over C. Divide through by x-t, and do the same for any other roots. This builds a new monic polynomial w2 with coefficients in S2.
Adjoin the root u, via the polynomial w2, building the ring S3. u is still a root of f*g, and still integral over C, along with any other roots of w2 in S3. Divide out these roots, building a new monic polynomial w3. Repeat this process until f*g splits in E, and all roots are integral over C. That completes the lemma.
If f(x) is in C[x], all coefficients are integral over R, and integral over R[x]. Since x lies in R[x], x is integral over R[x] by default. Thus f is a sum of products of integral elements, and f is integral over R[x].
Conversely, let the polynomial f lie in S[x], with f integral over R[x]. f satisfies a monic polynomial p(t) of degree n, with coefficients in R[x]. Select k so that x^k has higher degree than f, and higher degree than each coefficient of p. Let f′ = f - x^k. Substitute f = f′ + x^k into p(t). Expand each term using the binomial theorem, and gather the terms divisible by f′ together. What remains is p(x^k), a monic polynomial in R[x] with lead term x^(kn). The rest is f′ times something, which I will call g(x). Move this to the other side of the equation. Thus (-f′) times g(x) lies in R[x], and also in C[x]. Since k exceeds the degree of f, -f′ is monic. The product is also monic, hence g is monic. Invoke the lemma given above, and -f′ lies in C[x], whence f lies in C[x].
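The bookkeeping of this substitution, under the stated choice of k, can be written out:

```latex
% p(t) = t^n + c_{n-1}(x)\,t^{n-1} + \cdots + c_0(x),
% with c_i \in R[x] and \deg c_i < k.
0 = p(f) = p(f' + x^k) = p(x^k) + f'\,g(x),
\qquad\text{so}\qquad
(-f')\,g(x) = p(x^k) = x^{kn} + c_{n-1}(x)\,x^{k(n-1)} + \cdots + c_0(x).
```

Each lower term of p(x^k) has degree below kn, precisely because every c_i has degree below k, so p(x^k) is monic.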
In summary, the integral closure of R[x] in S[x] is C[x].
As a corollary, R integrally closed in S implies R[x] is integrally closed in S[x]. Apply this to R integrally closed in its fraction field. If F is the fraction field of R, then R integrally closed implies R[x] is integrally closed in F[x]. Now F[x] is a pid, and a ufd, and is integrally closed in its fraction field F(x), the quotients of polynomials with coefficients in F. If z ∈ F(x) is integral over R[x] then it is integral over F[x], and lies in F[x], whence it lies in R[x]. Therefore R integrally closed implies R[x] is integrally closed.
By induction, the same holds for finitely many indeterminates. For instance, R[x,y] is integrally closed in its fraction field F(x,y), and so on.
Let W be an infinite set of indeterminates, and consider the ring R[W] inside its fraction field F(W). Let z be a quotient of polynomials, exhibiting finitely many indeterminates from W. Assume z is integral over R[W]. A monic polynomial p makes this happen, and together, the coefficients of p exhibit finitely many indeterminates from W. Restrict attention to R adjoin finitely many indeterminates, covering p and z. Now z is integral over R adjoin x1 through xn, which is integrally closed, hence z lies in this base ring. Therefore an arbitrary polynomial extension of R remains integrally closed.
Let S/R be an integral extension and let f(R) be a ring homomorphism into an algebraically closed field K. This map can be extended to all of S.
The image of R in K is an integral domain, hence the kernel is a prime ideal P. Find a prime Q in S lying over P. Map Q to 0; Q will become the kernel of f(S).
It is enough to map the cosets of Q into K. Recall that S/Q is integral over R/P, and both rings are integral domains. Enclose these rings in their fraction fields. Integral elements are algebraic, thus the field extension is algebraic. Extend the function f to cover the fraction field of R/P. This may bring in some additional elements of S/Q; that's ok. Further extend this field homomorphism to map the fraction field of S/Q into K. Restrict to a ring homomorphism on S/Q, and you're done.