Throughout this chapter, rings are commutative. Yes, noncommutative rings have radical ideals too, and I'll explore those in a later chapter.

Let H be an ideal of the ring R. The radical of H, written rad(H), is the intersection of all prime ideals containing H. Verify this is an ideal. If there are no prime ideals containing H then rad(H) = R. However, as long as H is a proper ideal, it lives in a maximal ideal, which is prime, hence rad(H) is also proper.

If H is already the intersection of prime ideals then rad(H) = H. It follows that rad(rad(H)) = rad(H).

Here is an equivalent definition of rad(H). The radical of H is the set of elements x such that x^n lies in H for some positive integer n.

If H = R then every x is in H, and rad(H) = R, as it should. So assume H is a proper ideal.

Let x^n lie in H, hence in every prime ideal containing H. Since these ideals are prime, each contains x; thus x is in every prime ideal containing H, and x is in rad(H). Conversely, assume x^n is never in H, and let S be the multiplicative set consisting of the powers of x. Since H and S are disjoint, there is an ideal maximal among those containing H and missing S, and such an ideal is prime. This prime ideal contains H and misses x, whence x is not in rad(H).
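As a concrete check of this characterization, take R = Z and H = 12Z. A power of x is divisible by 12 exactly when both 2 and 3 divide x, so rad(H) = 6Z. A brute-force sketch in Python (the helper name and the power bound are my own):

```python
def in_radical(x, n, max_power=20):
    """True if some power of x, up to max_power, is divisible by n.
    Only the residue of the power mod n matters, so reduce as we go."""
    p = 1
    for _ in range(max_power):
        p = (p * x) % n
        if p == 0:
            return True
    return False

# residues mod 12 with a power in the ideal 12Z: exactly the multiples of 6
print([x for x in range(12) if in_radical(x, 12)])   # [0, 6]
```

Note that 2 is not in the radical: its powers mod 12 cycle through 2, 4, 8, 4, 8, … and never reach 0, since no power of 2 picks up the factor of 3.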

This characterization is probably the origin of the term "radical ideal", for radical means nth root, and rad(H) is the set of radicals of elements of H. However, this characterization only applies when R is commutative.

rad(H):  x    y    z
H:       x^2  y^3  z^5

Set H = 0 to get the nil radical of R. This is written nil(R).

Using our alternate definition, the nil radical is the ideal consisting of all nilpotent elements.
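In Z/n the nilpotent elements can be listed directly. Raising to the power n always suffices to reveal nilpotence, since n exceeds every exponent in the prime factorization of n. A sketch:

```python
def nil_radical(n):
    """nil(Z/n): the elements with some power equal to 0.
    The exponent n is always large enough."""
    return [x for x in range(n) if pow(x, n, n) == 0]

print(nil_radical(12))   # [0, 6], the multiples of 6
print(nil_radical(8))    # [0, 2, 4, 6], the multiples of 2

# closure under addition, as the ideal property demands
nil = set(nil_radical(12))
assert all((a + b) % 12 in nil for a in nil for b in nil)
```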

A ring is reduced if its nil radical is 0. An integral domain has no zero divisors, no nilpotent elements, and is therefore reduced.

Prime ideals in R/rad(H) correspond to prime ideals in R containing rad(H), so 0 is the intersection of the prime ideals of R/rad(H), and R/rad(H) is reduced.

Assume R has exactly one prime ideal P. Every maximal ideal is prime, hence P is maximal and unique, and R is a local ring. Furthermore, P = nil(R), the intersection of all the prime ideals. Divide by the nil radical and find a field.

Conversely, if R/nil(R) is a field, then nil(R) is a maximal ideal, which is the only prime ideal.
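Z/4 is a small example: its nilpotents form the ideal (2), everything outside that ideal is a unit, so (2) is the one prime ideal, and the quotient is the field Z/2. A sketch:

```python
n = 4
nil = [x for x in range(n) if pow(x, n, n) == 0]                    # nil(Z/4) = {0, 2}
units = [x for x in range(n) if any(x * t % n == 1 for t in range(n))]
print(nil, units)
# everything outside the nil radical is a unit, so (2) is the unique
# maximal (and prime) ideal, and Z/4 mod (2) = Z/2 is a field
assert sorted(nil + units) == list(range(n))
```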

If every x in R satisfies x^n = x, for some n > 1, then prime ideals are maximal.

Divide out by any prime ideal, and everything in the quotient ring still satisfies x^n = x. Remember that the quotient is an integral domain. For x nonzero, cancel x to get x^(n-1) = 1, whence x^(n-2) is the inverse of x. Every nonzero element is invertible, the quotient ring is a field, and our prime ideal is maximal.
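Z/6 illustrates the theorem with n = 3: every element satisfies x^3 = x, and in each quotient by a prime ideal, x^(3-2) = x is its own inverse. A sketch:

```python
n = 6
assert all(pow(x, 3, n) == x for x in range(n))   # x^3 = x throughout Z/6
# quotients by the prime ideals (2) and (3) are Z/2 and Z/3;
# there x^(n-2) = x inverts each nonzero x, so both quotients are fields
for p in (2, 3):
    assert all(x * x % p == 1 for x in range(1, p))
```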

Let H be the intersection of finitely many ideals in R. We will show that the radical of the intersection is the intersection of the radicals.

If x^n is in H it is in each of the ideals whose intersection is H, so the radical of the intersection lies in the intersection of the radicals. Conversely, suppose some power of x is in each of the ideals, and let n be the largest of these exponents. Then x^n is in every one of the ideals, and x^n is in H.

The radical of the product of finitely many ideals is the radical of their intersection.

Since the product ideal is contained in the intersection, the radical of the product is contained in the radical of the intersection. Conversely, let x lie in the radical of the intersection, hence x^n is in each of the ideals. If there are k ideals, x^(nk) lies in the product, thus the radical of the intersection is contained in the radical of the product.

As a corollary, rad(H^n) = rad(H). This is because H^n is the product of n copies of H, and H intersected with itself n times is still H.
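In Z these identities can be checked with explicit generators: (a) ∩ (b) = (lcm(a,b)), (a)(b) = (ab), and rad((a)) is generated by the product of the distinct primes dividing a. A sketch (the helper rad_gen is my own):

```python
from math import gcd

def rad_gen(a):
    """Generator of rad((a)) in Z: the product of the distinct primes dividing a."""
    r, rest, p = 1, a, 2
    while p * p <= rest:
        if rest % p == 0:
            r *= p
            while rest % p == 0:
                rest //= p
        p += 1
    return r * rest if rest > 1 else r

lcm = lambda u, v: u * v // gcd(u, v)
a, b = 4, 6
assert rad_gen(lcm(a, b)) == lcm(rad_gen(a), rad_gen(b)) == 6   # rad of intersection = intersection of radicals
assert rad_gen(a * b) == rad_gen(lcm(a, b))                     # rad of product = rad of intersection
assert rad_gen(a ** 3) == rad_gen(a)                            # rad(H^n) = rad(H)
```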

The set of nilpotent elements forms an ideal. We already know this, since rad(0) is always an ideal, but let's prove it anyway.

Let x^n = 0 and let y^m = 0. Since R is commutative, (xy)^n = x^n y^n = 0. Expand (x+y)^(m+n) by the binomial theorem; each term contains x^i y^(m+n-i), where i ≥ n or m+n-i ≥ m, so every term is 0, and (x+y)^(m+n) = 0.

This result does not apply to a noncommutative ring. Consider the 2 by 2 matrices. Start with the zero matrix and place a 1 in the upper right, or the lower left. These two matrices are nilpotent, yet their sum has determinant -1, and is a unit.
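The two matrices in question can be multiplied out directly; a quick sketch in Python, using plain nested lists:

```python
def mmul(A, B):
    """Product of 2 by 2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

N1 = [[0, 1], [0, 0]]     # 1 in the upper right
N2 = [[0, 0], [1, 0]]     # 1 in the lower left
zero = [[0, 0], [0, 0]]

assert mmul(N1, N1) == zero and mmul(N2, N2) == zero   # both square to 0
S = [[N1[i][j] + N2[i][j] for j in range(2)] for i in range(2)]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
assert det == -1
assert mmul(S, S) == [[1, 0], [0, 1]]   # the sum is its own inverse, a unit
```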

Let x be nilpotent, with x^n = 0, and let y be a unit in R. Synthetic division terminates: dividing 1-x into 1 yields the quotient 1 + x + x^2 + … + x^(n-1) with no remainder, so 1-x is a unit, and likewise 1+x. In fact, y±x is a unit. Write it as y*(1 ± y^(-1)x), the product of two units, since y^(-1)x is again nilpotent.
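In Z/8, for example, 2 is nilpotent with 2^3 = 0, and the terminating series 1 + 2 + 4 = 7 inverts 1 - 2. A sketch:

```python
n, x = 8, 2                       # 2^3 = 0 in Z/8, so 2 is nilpotent
inv = (1 + x + x * x) % n         # the terminating geometric series, 7
assert (1 - x) * inv % n == 1     # so 1 - x is a unit

y = 3                             # a unit in Z/8
assert any((y + x) * t % n == 1 for t in range(n))   # y + x is again a unit
```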

A polynomial p(x) in R[x] is nilpotent iff all its coefficients are nilpotent.

If each coefficient is nilpotent then each term is nilpotent, and since the sum of nilpotent elements is nilpotent (shown above), p(x) is nilpotent.

Conversely, if p(x) is nilpotent then so is its constant coefficient a_0. Subtract a_0 to get another nilpotent polynomial, this time showing a_1 is nilpotent, and so on. This direction remains valid for the formal power series and Laurent series in x. However, one can build a power series whose coefficients are all nilpotent, yet the series itself is not nilpotent. Choose coefficients whose pairwise products are 0. As you march down the series, these coefficients require ever larger exponents to reach 0. Of course the base ring has to support coefficients with this behavior, but it is not hard to build such a ring, as an infinite direct product.

If we are interested in R[x,y], apply the theorem once to adjoin x, and then again to adjoin y. A polynomial in two variables is nilpotent iff all its coefficients are nilpotent. This generalizes to finitely many indeterminates, and then to an arbitrary collection of indeterminates. After all, a given polynomial only uses a finite number of these indeterminates.

The polynomial p(x) is a unit iff all its coefficients are nilpotent, except a_0, which is a unit.

We already showed that a unit plus a nilpotent yields a unit, and that a polynomial with nilpotent coefficients is nilpotent, so one direction is done; we need only look at the converse, where p is known to be a unit. Suppose some unit polynomial does not fit this pattern, and let p be such a polynomial of least degree. Write pq = 1, and compare constant terms: the constant coefficients of p and q multiply to 1, so each is a unit.

If p or q is a constant then the higher coefficients in the "other" polynomial are each multiplied by a unit to give 0, and they must be 0. So both polynomials have degree at least 1.

Let p have coefficients a_0, a_1, a_2, … through a_n, and let q have coefficients b_0, b_1, b_2, … through b_m. Here a_0 and b_0 are the constant coefficients, and a_n and b_m are the lead coefficients on x^n and x^m respectively. Look at the highest power of x in pq, and a_n b_m = 0. Assume by induction that a_n^(r+1) kills b_(m-r) through b_m. This is true when r = 0.

Expand pq and let c be the coefficient on x^(n+m-r-1). As long as r is less than m+n-1, this is not the constant term, so c = 0. Multiply c by a_n^(r+1); the result is still 0. The coefficients b_(m-r) through b_m are killed by a_n^(r+1), so only one term of c survives, namely a_n b_(m-r-1). This too must be killed by a_n^(r+1). Hence b_(m-r-1) is killed by a_n^(r+2), and that completes the inductive step.

The steps r = 0 through m-2 establish the hypothesis at r = m-1: a_n^m kills b_1 through b_m, all the coefficients of q other than the constant.

Take one more step, at r = m-1, which is less than n+m-1 since n is at least 1. The surviving term now involves b_0: a_n^(m+1) kills b_0, which is a unit, thus a_n^(m+1) = 0 and a_n is nilpotent.

A unit minus a nilpotent is a unit. Subtract a_n x^n from p to get another unit polynomial in R[x] of lesser degree. Since a_n is nilpotent, this smaller polynomial fits the pattern iff p does; so it too is a unit polynomial that fails the pattern, contradicting the selection of p.
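Z/4 provides the smallest interesting instance of the theorem: 2 is nilpotent, so 1 + 2x should be a unit in (Z/4)[x], and in fact it squares to 1. A sketch:

```python
def polymul(p, q, n):
    """Multiply coefficient lists (low degree first) over Z/n."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % n
    return out

p = [1, 2]                             # 1 + 2x: unit constant, nilpotent 2
assert polymul(p, p, 4) == [1, 0, 0]   # (1 + 2x)^2 = 1, so p is its own inverse
```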

Apply this theorem twice to show that a polynomial in R[x,y] is a unit iff its constant term is a unit, and all other coefficients are nilpotent. This generalizes to finitely many indeterminates, and then to an arbitrary collection of indeterminates.

Next look at the formal power series in R[[x]]. Assume pq = 1, and once again a_0 and b_0 are units. As it turns out, a unit for a constant term is all we need. Assume a_0 is a unit. Use synthetic division to divide p into 1. The resulting quotient series is the inverse of p, hence p is a unit in R[[x]].

This generalizes to a finite number of indeterminates.
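Synthetic division here amounts to the recurrence b_0 = a_0^(-1) and b_k = -a_0^(-1)(a_1 b_(k-1) + a_2 b_(k-2) + …). A sketch over the rationals, computing the first few coefficients of 1/(1-x):

```python
from fractions import Fraction

def series_inverse(a, terms):
    """First `terms` coefficients of 1/p, where p = sum a[i] x^i and a[0] is a unit."""
    b = [Fraction(1) / a[0]]
    for k in range(1, terms):
        s = sum((a[i] * b[k - i] for i in range(1, min(k, len(a) - 1) + 1)),
                Fraction(0))
        b.append(-s / a[0])
    return b

# 1/(1 - x) is the geometric series 1 + x + x^2 + ...
assert series_inverse([1, -1], 5) == [1, 1, 1, 1, 1]
```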

If R is a field or division ring, a series over R is a unit iff its constant term is nonzero. The nonunits are all the series that start with 0. This is an ideal, generated by all the indeterminates. Call this ideal M. Since M contains all the nonunits, every proper ideal H lives in M. That makes M the one and only maximal ideal, hence the power series ring is a local ring.

If all the coefficients of p(x) kill all the coefficients of q(x), then pq = 0. The converse is not always true, but it is if q is minimal, i.e. if q is a nonzero polynomial of least degree that kills p.

If q is constant then b_0 kills p, and the theorem is true. So let q have degree m, with m > 0.

We know a_n b_m = 0. Now a_n q kills p, and has lower degree than q. Yet the degree of q was minimal among the nonzero polynomials killing p. Thus a_n q = 0, and a_n kills all the coefficients of q.

Since p kills q and a_n kills q, p minus its lead term also kills q. Think of this as p1, with the lead term gone. The lead coefficient of p1*q shows a_(n-1) b_m = 0. Now a_(n-1) q kills p, and has degree below m, so as above, a_(n-1) q = 0, and a_(n-1) kills all the coefficients of q. Continue this process until all the coefficients of p kill all the coefficients of q.

Here is a counterexample when the degree of q is not minimal. Take a field such as Z/2 and adjoin the indeterminates a, b, c, and d. Mod out by ac, bd, and ad+bc. In other words, these expressions are set to 0 whenever they appear. Now (ax+b)*(cx+d) = acx^2 + (ad+bc)x + bd = 0, but ad and bc are not 0.

There must then be a smaller polynomial that kills ax+b, in this case, cd.

If p(x) kills any nonzero polynomial, let q be a nonzero polynomial of least degree killed by p. The coefficients of p all kill the coefficients of q, and since q has a nonzero coefficient, the coefficients of p are all zero divisors.
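For a small instance, take (Z/4)[x] and p = 2 + 2x. The constant q = 2 kills p, q certainly has least degree, and accordingly every coefficient of p is a zero divisor. A sketch:

```python
def polymul(p, q, n):
    """Multiply coefficient lists (low degree first) over Z/n."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % n
    return out

p, q = [2, 2], [2]                      # p = 2 + 2x, q = the constant 2, over Z/4
assert polymul(p, q, 4) == [0, 0]       # q kills p
assert all(c * 2 % 4 == 0 for c in p)   # each coefficient of p is a zero divisor
```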

On the other hand, it is possible to build a polynomial p whose coefficients are zero divisors, yet p is not a zero divisor. Adjoin a b c and d, and mod out by ac, bd, and cd. Each of these letters is now a zero divisor. Let p = ax+b, and suppose p is a zero divisor. Let q have least degree such that p*q = 0. Thus the coefficients of p all kill the coefficients of q. Any coefficient killed by a is a multiple of c. Any coefficient killed by b is a multiple of d. Each coefficient in q is a multiple of cd, yet cd = 0, hence q = 0, which is a contradiction. Therefore p is not a zero divisor.

The Jacobson radical of R, written jac(R), is the intersection of the maximal ideals of R. Since there is at least one maximal ideal, jac(R) is a proper ideal in R.

Remember that nil(R) is the intersection of all prime ideals, and since maximal ideals are prime, jac(R) contains nil(R).

A ring is Jacobson semisimple if jac(R) = 0. This implies a reduced ring, with nil(R) = 0, since jac(R) contains nil(R). Of course a ring could be reduced, and not Jacobson semisimple. Let R be Z localized at 2, which is an integral domain and a local ring. There is one maximal ideal, which becomes the Jacobson radical, and it is nonzero. Since 0 is a prime ideal in an integral domain, the ring is reduced, but not Jacobson semisimple.

Given an element y in R, three conditions are equivalent to y lying in jac(R). The first is the definition, and the second assures the invertibility of 1-xy.

  1. y is in every maximal ideal.

  2. For every element x, 1-xy is invertible.

    Let y lie in jac(R) and suppose 1-xy has no inverse. Thus 1-xy generates a proper ideal. Let H be a maximal ideal containing 1-xy. Now H contains 1-xy, and H contains y, hence xy; adding, H contains 1. This contradiction shows 1-xy is invertible for every x.

  3. y kills every simple R module. Remember that a simple module is R mod a maximal ideal, which is a field.

    Suppose M is a simple R module, and y*M is nonzero. In other words, y does not kill M. Let v be an element of M with yv nonzero. Since M is simple, v generates M, and yv also generates M. There is some x satisfying xyv = v. Thus 1-xy kills the nonzero element v, and 1-xy has no inverse. This contradicts condition (2) above. Therefore y kills every simple R module.

    Now complete the circle from (3) back to (1). Assume y kills every simple R module. Let H be any maximal ideal. Let M be R/H, a simple R module. Remember that yM = 0. Thus y*1 lies in H. Do this for every H and y is in every maximal ideal, and in jac(R).
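Condition (2) can be tested by brute force in a small ring. In Z/12 the maximal ideals are (2) and (3), so jac should be their intersection, (6). A sketch:

```python
n = 12
units = {x for x in range(n) if any(x * t % n == 1 for t in range(n))}
# condition (2): y is in jac(R) iff 1 - x*y is a unit for every x
jac = [y for y in range(n) if all((1 - x * y) % n in units for x in range(n))]
assert jac == [0, 6]   # the intersection of the maximal ideals (2) and (3)
```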

If S is R adjoin arbitrarily many indeterminates, jac(S) = nil(S). This is counterintuitive, since jac(R) need not equal nil(R).

Let G be any ideal in R, and let H be all polynomials with coefficients in G. Verify that H is an ideal in S, and H is the extension of G into S.

If P is a prime ideal in R containing G, let Q be the extension of P into S. Note that Q contains H. Let's show that Q is a prime ideal in S.

Let u and v be polynomials not in Q, yet uv lies in Q, and among all such pairs let u and v have the fewest possible terms. The coefficient on the highest power of x in uv is the product of the two lead coefficients, and it lies in P; since P is prime, one of the two lead coefficients lies in P, say it is the lead coefficient of u. Subtract away the lead term of u, and u still lies outside of Q, while uv remains in Q, yet the pair has fewer terms. This is a contradiction, hence Q is prime.

If u is a polynomial in rad(H) then it is in each such Q. Its coefficients are in P, for each prime P containing G, hence the coefficients of u are in rad(G). Thus rad(H) is contained in the extension of rad(G).

Now let Q be an arbitrary prime ideal in S containing H. Contract back to R by looking at the constant polynomials in Q. This is a prime ideal P in R, containing G. The extension of P is a prime ideal inside Q, still containing H, so we may as well ratchet Q down to the extension of P. Now let the coefficients of u all lie in rad(G). These coefficients lie in every prime P containing G, so u lies in the extension of each such P, hence in every prime ideal Q containing H, and in rad(H). In other words, the extension of rad(G) is contained in rad(H). The two sets are equal: rad(ext(G)) = ext(rad(G)).

Extend R into the polynomials in x, as above, then extend this ring, and its ideals, into the polynomials in y. The resulting ring consists of polynomials in x and y with coefficients in R. Apply the previous theorem twice, so that R now extends into the polynomials of x and y. With ideals and prime ideals extending into x and y, radical and extension commute. This generalizes to many indeterminates.

Returning to R[x], set G = 0, and ext(nil(R)) = ext(rad(0)) = rad(ext(0)) = nil(S). The polynomials with coefficients in nil(R) form nil(S).

The Jacobson radical always contains the nil radical, so it is enough to show that a polynomial u that is not nilpotent, not in nil(S), is not in jac(S) either. That will prove nil(S) = jac(S).

Let u be a polynomial in S that is not nilpotent, and consider 1-xu, where x is an indeterminate of S. In an earlier section we characterized the units of S: the constant term must be a unit and all other coefficients must be nilpotent. If 1-xu were a unit, the coefficients of u would all be nilpotent, placing u in the extension of nil(R), which is nil(S). Yet u does not lie in nil(S), so 1-xu is not a unit. By characterization (2) above, u is not in jac(S). That completes the proof; nil(S) = jac(S).

If nil(R) = 0, as when R is reduced, then S is reduced, and Jacobson semisimple.

Power series behave differently, with jac(S) based on jac(R), rather than nil(R). Let S be the formal power series of R in finitely many indeterminates, and let u be a power series drawn from S, with constant term a_0. Again, using characterization (2), u is in jac(S) iff 1-vu is a unit for every v in S. When is 1-vu a unit? As shown earlier, a power series is a unit iff its constant term is a unit, so 1-vu is a unit iff 1-b_0*a_0 is a unit, where b_0 is the constant term of v. This holds for every b_0 iff a_0 is in jac(R), again by characterization (2). Therefore u is in jac(S) iff its constant term is in jac(R). This is more than an extension of jac(R); the other coefficients are unconstrained.

Let R be a reduced ring, and let S be a multiplicatively closed set in R. Each prime ideal P of R becomes a prime ideal in R/S, or all of R/S if P and S intersect. And the prime ideals in R/S pull back to prime ideals in R missing S.

Suppose R/S is not reduced, with (a/b)^n = 0 in R/S and a/b nonzero. Then u*a^n = 0 in R for some u in S. Since a/b is nonzero, ua is nonzero. Now (ua)^n = u^(n-1)*(u*a^n) = 0, ua is in nil(R), and R is not reduced. This is a contradiction, hence R/S is reduced. Any localization of a reduced ring is reduced.

Conversely, assume x^n = 0 for some nonzero x, whence nil(R) contains x and R is not reduced. The elements that kill x form a proper ideal, since 1 does not kill x. Let P be a maximal ideal containing this ideal, and localize about P. Now (x/1)^n = 0, and x/1 is nonzero, since everything that kills x lies in P. Therefore R_P is not reduced.

Put this all together and reduced is a local property, relative to maximal ideals.