An earlier chapter defined the jacobson radical as the intersection of all maximal ideals; but rings were assumed to be commutative. Here is a more general definition for noncommutative rings that is backward compatible, i.e. it resurrects the previous definition when R is commutative.

The jacobson radical of R, written jac(R), is the intersection of all maximal left ideals. When R is commutative, left ideals are ideals, and jac(R) becomes the intersection of maximal ideals.

We also saw, in the aforementioned chapter, 3 equivalent characterizations of jac(R). Those apply here, plus 2 more.

  1. y is in every maximal left ideal.

  2. For every element x, 1-xy is left invertible.

    Let y lie in jac(R) and suppose 1-xy has no left inverse. Thus 1-xy generates a proper left ideal. Let H be a maximal left ideal containing R*(1-xy). Now H contains 1-xy and y, hence it contains xy, and their sum 1. This contradiction shows 1-xy is left invertible for every x.

  3. y kills every simple left R module.

    Suppose M is a simple left R module, and y*M is nonzero. In other words, y does not kill M. Let v be an element of M with yv nonzero. Since M is simple, v generates M, and yv also generates M. There is some x satisfying xyv = v. Thus 1-xy kills v; a left inverse w would give v = w(1-xy)v = 0, so 1-xy has no left inverse. This contradicts condition (2) above. Therefore y kills every simple left R module.

    Now complete the circle from (3) back to (1). Assume y kills every simple left R module. Let H be a maximal left ideal. Let M be the left R module consisting of the cosets of H in R, as R acts on these cosets. If M contains a proper nonzero submodule, it pulls back to a proper left ideal strictly containing H, which is impossible. Therefore M is a simple left R module.

    Remember that y kills every simple left R module. This means y carries the coset of 1 to the trivial coset, i.e. y*1 lies in H. Thus y lies in every maximal left ideal H, and y is in jac(R).

To summarize, y is in jac(R) iff y kills every simple left R module.
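
To make conditions (1) and (2) concrete, here is a minimal Python sketch over Z/8, a hypothetical example not drawn from the text. Z/8 has one maximal ideal, (2), so the radical is {0, 2, 4, 6}, and 1-xy should be a unit whenever y is in the radical.

    # conditions (1) and (2) in R = Z/8, whose only maximal ideal is (2)
    n = 8
    units = {u for u in range(n) if any(u * v % n == 1 for v in range(n))}
    maximal = [{2 * k % n for k in range(4)}]   # the ideal (2) = {0, 2, 4, 6}

    # condition (1): the radical is the intersection of the maximal ideals
    rad = set.intersection(*maximal)
    assert rad == {0, 2, 4, 6}

    # condition (2): 1 - x*y is a unit for every x when y is in the radical
    assert all((1 - x * y) % n in units for y in rad for x in range(n))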

Remember, a module is simple iff it is isomorphic to the cosets of H, for some maximal left ideal H, viewed as a left R module. Let's summarize again. The element y is in jac(R) iff y kills every left module R/H, where R/H represents the cosets of a maximal left ideal H.

Bringing all this together, y is in jac(R) iff y is in the intersection of the annihilators of all the simple left modules R/H.

Look at annihilators in general. If M is a left module, let K be the annihilator of M, i.e. the elements of R that take M to 0. Clearly K is a left ideal, but it is also a right ideal. If y is in K and c is in R, cM lies inside M, and y kills cM as surely as it kills M. Thus yc kills M and yc is in K.

Since jac(R) is the intersection of annihilators, it is a two sided ideal. This is counterintuitive. Take the intersection of all the maximal left ideals, and find a two sided ideal.

It doesn't come up very often, but let R contain a left identity e that is not 1. A maximal left ideal does not contain e, for that brings in the whole ring. The jacobson radical is still the intersection of the maximal left ideals. Say e-xy is left invertible if something times e-xy equals e. Verify that (1) implies (2). For condition (3), note that ev = 0 iff Rv = 0, i.e. all scalar multiples of v in M are 0. The elements killed by R are a submodule of M, and since M is simple, this is all or nothing. If eM = 0 then yM = 0, so assume e kills nothing in M, and yv is nonzero. For some x, xyv = ev (whatever ev is), e-xy kills v, and e-xy is not left invertible. That completes (2) → (3), and (3) → (1) is the same.

Return to the world of rings with 1, and develop a fourth condition for y ∈ jac(R) as follows.

  4. 1-xyz is a unit for every x and z.

    Since jac(R) is an ideal, xy, yz, and xyz are already in jac(R). It is enough to show 1-y is a unit for y in jac(R).

    By condition (2) (setting x = 1), 1-y is left invertible. Let u be its left inverse. Write u-uy = 1, or u = 1+uy. Since u is right invertible (its right inverse is 1-y), 1+uy is right invertible. By condition (2), 1+uy is also left invertible. Therefore 1+uy is a unit, u is a unit, and the inverse of u, namely 1-y, is a unit.

    Conversely, if 1-xyz is a unit for every x and z, then 1-xy is left invertible for every x, and y belongs to jac(R).

Using condition (4), jac(R) is characterized by 1-xyz being a unit. The same condition falls out if we define jac(R) as the intersection of maximal right ideals. This yields another counterintuitive result, the left jacobson radical is the same as the right jacobson radical. That is why it is simply called the jacobson radical. It is the intersection of the maximal left ideals, or the intersection of the maximal right ideals, or the intersection of the maximal left and right ideals.

Finally there is a fifth condition for y lying in jac(R). Notice that this too is symmetric, showing once again that the left jacobson radical equals the right jacobson radical.

  5. jac(R) is the largest ideal H, such that the elements 1-y, for all y in H, are units. The same holds if H is the largest left ideal or right ideal having this property.

    Let H be any left ideal having this property. This means 1-y is left invertible, and since xy is in H for any x, 1-xy is left invertible. By condition (2), y is in jac(R), and our left ideal H lies in jac(R).

    If H is a right ideal, then 1-yx is right invertible for every x, y is in jac(R) (from the other side), and H is in jac(R).

    Conversely, jac(R) is an ideal that I will call H. You can think of it as a left ideal or right ideal if you like, but H is still a two-sided ideal. By condition (4), 1-xyz is a unit for every y in H. Set x = z = 1, and 1-y is a unit, satisfying condition (5). Jac(R) is thus the largest (left / right / two-sided) ideal satisfying condition (5).

Let H be a nil ideal, or a nil left ideal if you prefer. This means everything in H is nilpotent.

Let y be any element of H, with y^n = 0. Verify that 1-y is invertible, with inverse 1+y+y^2+y^3+…+y^(n-1). This holds for every y in H, so by condition (5), H lies in jac(R).
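
Here is a quick Python check of the geometric series inverse, using a strictly lower triangular integer matrix as the nilpotent element (an illustrative sketch, not from the text):

    def matmul(a, b):
        n = len(a)
        return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    n = 3
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    y = [[0, 0, 0],
         [2, 0, 0],
         [5, 7, 0]]        # strictly lower triangular, so y^3 = 0

    y2 = matmul(y, y)
    assert matmul(y2, y) == [[0] * n for _ in range(n)]    # y^3 = 0

    one_minus_y = [[I[i][j] - y[i][j] for j in range(n)] for i in range(n)]
    geom = [[I[i][j] + y[i][j] + y2[i][j] for j in range(n)] for i in range(n)]
    assert matmul(one_minus_y, geom) == I                  # (1-y)(1+y+y^2) = 1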

All the nil ideals, left or right or two sided, lie in jac(R).

A nilpotent element need not lie in jac(R). The n×n matrices over a field form a simple ring, with jac(R) = 0, yet there are plenty of nilpotent matrices: any matrix, for instance, whose nonzero entries lie strictly below the main diagonal.

A ring R is jacobson semisimple if its jacobson radical is 0. Here are some examples of jacobson semisimple rings.

Let R be a vector space with basis b, where multiplication acts componentwise on coordinates. Setting the coefficient on each bi to 0 in turn creates maximal left ideals. These intersect in 0, hence R is jacobson semisimple.

Let R be a pid with infinitely many primes. If x is a nonzero element it has a unique, finite factorization, and belongs to finitely many prime ideals. Thus x cannot belong to all of them, and R is jacobson semisimple.

Using a similar proof, the half quaternions are jacobson semisimple. Suppose x nonzero is in jac(R), and |x| = f. f cannot be 1, else x is a unit. Let M be the maximal left ideal containing p, where p is a prime that does not divide f. R is a left pid, so M is generated by some g, with |g| dividing |p| = p^2. Again, g is not a unit, so its norm is p or p^2. With x in M, x is a left multiple of g, so p divides |x| = f, which is impossible.

A simple ring has only one proper ideal, namely 0, and this must be the jacobson radical.

Let H be a nil ideal in R. Every nil ideal lies in J, so J contains H, and J has a well defined image in R/H. By correspondence, this image lies in every maximal left ideal of R/H. If R/H is jacobson semisimple, the image is 0, J lies in H, and J = H.

Use this criterion to find jac(R) when R is the n×n lower triangular matrices over a division ring. Let H be the ideal of matrices with zeros down the diagonal. Note that H^n = 0. R/H is the ring of n×n diagonal matrices, which is a vector space of dimension n. That ring is jacobson semisimple, as shown above, hence H = jac(R).
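
A Python spot check of this example with n = 3 and integer entries standing in for the division ring (illustrative): H is closed under multiplication by lower triangular matrices on either side, and any product of three elements of H vanishes.

    import random

    n = 3

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def rand_lower(strict):
        return [[random.randint(-5, 5) if j < i + (not strict) else 0
                 for j in range(n)] for i in range(n)]

    def in_H(m):   # strictly lower triangular, i.e. zeros down the diagonal
        return all(m[i][j] == 0 for i in range(n) for j in range(n) if j >= i)

    for _ in range(100):
        r = rand_lower(strict=False)
        h1, h2, h3 = (rand_lower(strict=True) for _ in range(3))
        assert in_H(matmul(r, h1)) and in_H(matmul(h1, r))   # H is an ideal
        assert matmul(matmul(h1, h2), h3) == [[0] * n for _ in range(n)]  # H^3 = 0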

Let J = jac(R), and consider the quotient ring Q = R/J. By correspondence, maximal left ideals in R map to maximal left ideals in Q, and since all maximal left ideals in R contain J, this is a bijection. Also by correspondence, the quotient module R/H, where H is one of these maximal left ideals, is isomorphic to Q/H. They are isomorphic as R modules, but these particular R modules are also Q modules, since J, inside H, carries everything to 0. Therefore the simple modules of R are the simple modules of Q.

Any y in jac(Q) pulls back to something in the intersection of all the maximal left ideals of R. Yet this intersection is J, the kernel, hence Q is jacobson semisimple.

Let x in R map to a unit in Q, and let y map to its inverse. Write xy = 1+z, where z is in J. By condition (4), 1+z is a unit, hence x is right invertible; the same argument applied to yx makes x left invertible, so x is a unit in R. Thus the preimage of a unit in Q is a unit in R. The quotient map induces a group homomorphism from the units of R onto the units of Q. The kernel is the set of units that map to 1, which is 1+J; note that every element 1+z, for z in J, is indeed a unit.
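
A tiny Python illustration with R = Z/8 (a hypothetical example): J = (2), Q = R/J = Z/2, every unit of R reduces to the unit of Q, and the kernel of the map is exactly 1+J.

    # R = Z/8, J = jac(R) = (2), Q = R/J = Z/2
    n = 8
    J = {0, 2, 4, 6}
    units_R = {u for u in range(n) if any(u * v % n == 1 for v in range(n))}

    assert {u % 2 for u in units_R} == {1}        # units map onto units of Q
    assert units_R == {(1 + z) % n for z in J}    # kernel of the map is 1 + J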

If R is left artinian, and J is its jacobson radical, then J is nilpotent.

There are no infinite descending chains, so the powers of J stabilize at some ideal H. Suppose H is nonzero. Consider the family of left ideals U where H*U is nonzero. When U = H the result is HH = H, since the powers of J have stabilized, so this family is nonempty.

Start with any such U and find a smaller left ideal in the family, then a smaller one; dcc says this cannot continue forever. In other words, we may assume U is minimal with respect to HU ≠ 0.

Choose an element c in U with Hc nonzero. Note that Hc is a nonzero left ideal inside U. Also, HHc = Hc, hence Hc is not killed by H. Since U is minimal, Hc = U.

What does the product Hc look like? Since c is a single element of R, Hc is merely the elements of H scaled by c on the right. This is already a left ideal. We don't have to take finite sums, or anything inconvenient like that.

Since Hc = U, there is some y in H satisfying yc = c. Thus (1-y)c = 0. However, y is in J, so 1-y is a unit, and c = 0. This is a contradiction, thus H = 0, and J is nilpotent. The jacobson radical of a left artinian ring is nilpotent.

Remember that J contains all nil and nilpotent ideals. Thus J is the largest nilpotent ideal, and all nil ideals are nilpotent, with exponent bounded by the exponent of J. The exponent of J is, in turn, bounded by the length of the descending chain J ⊇ J^2 ⊇ J^3 ⊇ …, if such a bound can be established.

In contrast, let R be noetherian, such as Z localized at p. The jacobson radical is generated by p, and is not nilpotent. In fact there are no nilpotent elements at all.

The jacobson radical J of a ring R is nil if: R is left artinian; R is an algebraic K algebra; R is a K algebra that is finite dimensional as a K vector space; or R is a K algebra that is an infinite dimensional K vector space with the cardinality of K exceeding the dimension of R. These are sufficient, but not necessary. The first condition was covered in the previous section, so let's look at the other three.

Let K be a field or division ring, and let R be a K algebra. This means R is a K vector space, containing a copy of K, and K is in the center of R.

Let J be the jacobson radical and let x be a nonzero element in J. We will show that x is algebraic iff x is nilpotent. Of course x does not belong to K, as the nonzero elements of K are units, and units avoid all maximal left ideals.

One direction is obvious, so assume x is algebraic, a root of some nonzero polynomial p. Normalize coefficients, so that the lowest degree term has coefficient 1. If this is the constant term then p(x) = q(x)+1, where q has no constant term; q(x) is in J, hence p(x) is a unit, and cannot equal 0. So the lowest degree term is x^n, where n > 0.

Write p(x) = x^n*(q(x)+1) = 0. Again, q(x) lies in the ideal J, and q(x)+1 is a unit, forcing x^n = 0. Therefore x is nilpotent.

If everything in J is algebraic then J is a nil ideal. In particular, when R is an algebraic extension of K, J is algebraic, and nil. Every nil ideal is in J, so J is the largest nil ideal.

To illustrate, adjoin x to the rationals, and set x^5 = 0. A polynomial is nilpotent iff it has no constant term, i.e. iff it is a multiple of x. Thus x generates the largest nil ideal, and J.
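
Here is a minimal Python model of this ring, with elements of Q[x]/(x^5) stored as lists of five rational coefficients (the representation is illustrative):

    from fractions import Fraction

    N = 5   # working in Q[x]/(x^N)

    def mul(a, b):
        """Multiply two truncated polynomials, discarding x^N and beyond."""
        out = [Fraction(0)] * N
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                if i + j < N:
                    out[i + j] += ai * bj
        return out

    def nth_power(a, k):
        out = [Fraction(1)] + [Fraction(0)] * (N - 1)
        for _ in range(k):
            out = mul(out, a)
        return out

    zero = [Fraction(0)] * N
    x = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 2)
    assert nth_power(x, 5) == zero          # x^5 = 0

    # an element with no constant term is a multiple of x, hence nilpotent
    y = [Fraction(0), Fraction(2), Fraction(0), Fraction(1, 3), Fraction(0)]
    assert nth_power(y, 5) == zero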

If R is a finite dimensional K vector space, then R has dcc on K subspaces. Left ideals are K subspaces, so R is left artinian as an R module, and by the previous theorem J is a nilpotent ideal, containing all nil ideals.

Let R be an infinite dimensional K vector space, and let the cardinality of K exceed the dimension of R. Since K is infinite, the nonzero elements K* have the same cardinality as K. Let y lie in J, and consider the elements a-y for all a in K*. Since 1-y/a is a unit, multiply by a, and a-y is a unit. The inverses of these units are also units. They cannot be linearly independent over K, for the dimension of R is too small. Select a finite linear combination of these inverses, with nonzero coefficients, that sums to 0.

b1/(a1-y) + b2/(a2-y) + b3/(a3-y) + … + bn/(an-y) = 0

Multiply through by the common denominator and find a polynomial in y that equals 0. Remember that K is in the center of R, which is why this expression becomes a traditional polynomial with all the coefficients on the left.

This makes y algebraic, and nilpotent, provided p is not the zero polynomial. Evaluate p at a1, i.e. replace y with a1. Each term drops out except for the first term:

b1 (a2-a1) (a3-a1) (a4-a1) … (an-a1)

The values a1, a2, a3, etc. are distinct, so all the factors are nonzero, and p(a1) is nonzero. p is a nontrivial polynomial, and y is algebraic. Everything in J is algebraic, and nilpotent, and J is a nil ideal.

As an example, let R be finitely or countably generated over the reals. Finite products of adjoined generators span R, as a real vector space, and the dimension of R is countable. This is less than the cardinality of the reals, hence jac(R) is a nil ideal.

Let R = K[[y]], the formal power series in y. This is a local ring, with maximal ideal generated by y. The only nil ideal in R is 0. The jacobson radical is generated by y, and y is certainly not nilpotent. The jacobson radical is not nil, yet K can be as large as you like. Somehow the dimension of R increases along with K. Let's see how this happens.

Following the proof above, the elements 1/(a-y) have to be linearly independent for all nonzero a in K. Scale these elements to a/(a-y), and the set is still independent. If v is the inverse of a, then a/(a-y) = 1 + vy + v^2y^2 + v^3y^3 + … Suppose n of these series, for v1 through vn, can be combined to produce 0. Truncate these series at n terms, giving vectors of length n. These vectors combine to produce the 0 vector. In other words, these vectors are linearly dependent. Yet these vectors form a vandermonde matrix, which is nonsingular. This is a contradiction, hence all the elements 1/(a-y) are linearly independent, and the dimension of R is at least the size of K.
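
The vandermonde step can be spot checked in Python with exact rational arithmetic (the particular values v1 through v4 are illustrative): the truncated series give the rows of a vandermonde matrix, and its determinant is the product of the differences of the v's, nonzero when they are distinct.

    from fractions import Fraction
    from itertools import combinations
    from math import prod

    vs = [Fraction(1), Fraction(1, 2), Fraction(-3), Fraction(2, 5)]
    n = len(vs)

    # row i is the n term truncation of 1 + v*y + v^2*y^2 + ... for v = vs[i]
    rows = [[v ** k for k in range(n)] for v in vs]

    def determinant(m):
        """Exact determinant by fraction preserving gaussian elimination."""
        m = [row[:] for row in m]
        det = Fraction(1)
        for col in range(n):
            pivot = next(r for r in range(col, n) if m[r][col] != 0)
            if pivot != col:
                m[col], m[pivot] = m[pivot], m[col]
                det = -det
            det *= m[col][col]
            for r in range(col + 1, n):
                factor = m[r][col] / m[col][col]
                for c in range(col, n):
                    m[r][c] -= factor * m[col][c]
        return det

    # nonsingular: the determinant is the product of the differences
    assert determinant(rows) == prod(b - a for a, b in combinations(vs, 2))
    assert determinant(rows) != 0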

A ring R is semisimple iff it is jacobson semisimple and it exhibits dcc on its principal left ideals. (You can of course make a similar statement on the right.)

Assume R is left semisimple. Thus R is a finite direct product of simple left modules. Exclude these modules from the direct product one at a time. Each omission creates a left ideal whose quotient module is simple, hence these left ideals are maximal. They intersect in 0, hence R is jacobson semisimple. R is also artinian, so one direction is complete.

For the converse, assume R is jacobson semisimple with dcc on principal left ideals. Thus every nonzero left ideal contains a minimal principal left ideal. This is in fact a minimal left ideal, for a smaller left ideal would contain another principal left ideal inside it.

Let U1 be a proper minimal left ideal in R. If you can't find one, then every nonzero element generates all of R, including 1, hence every nonzero element is invertible, and R is a division ring, which is semisimple. So assume U1 exists.

Since U1 is minimal it is simple. Thus any left ideal in R is going to include all or none of U1. Since J = 0, some maximal left ideal M1 does not contain U1. Because U1 is simple, M1 and U1 are disjoint. (Remember that disjoint submodules share 0, and only 0.)

If U1 and M1 do not span all of R, then together they make a larger proper left ideal, which is impossible. Thus R = U1 * M1.

Inside M1, choose a minimal U2. This misses some maximal left ideal M2, so that U2 * M2 = R.

Let V2 = M1 ∩ M2. Since V2 is part of M2, it is disjoint from U2. Take any x in M1 and write it as e+f, for e ∈ M2 and f ∈ U2. Now e is in M2 and M1, hence in V2. Therefore M1 = U2 * V2. Put this all together and R = U1 * U2 * V2.

Let U3 be a minimal principal left ideal in V2, and let M3 be maximal, with U3 * M3 = R. Let V3 = V2 ∩ M3. Verify that V2 = U3*V3. Then find U4 in V3, and so on, for as long as possible.

At each step, R is the direct product U1 * U2 * U3 * … * Un * Vn.

R = U1 * M1
M1 = U2 * V2
V2 = U3 * V3
V3 = U4 * V4
V4 = U5 * V5

Whenever R is the direct product of left ideals, each left ideal is principal. It is generated by the image of 1 in that particular R module. Therefore M1 V2 V3 V4 … forms a descending chain of principal left ideals, and by assumption, such a chain is finite. It must end in Vn = 0. In other words, the last Un completes R as an R module. R is the direct product of simple left R modules, and R is semisimple.

Let R be jacobson semisimple. By symmetry, R is left semisimple iff it is right semisimple. Therefore R is left artinian iff it is right artinian. In fact, dcc on principal left ideals, or on principal right ideals, implies semisimple, hence noetherian and artinian from both sides.

The ring R is semiprimary if the jacobson radical J is nilpotent, and R/J is semisimple.

Recall that a ring is left semisimple iff it is right semisimple, and J is a two sided ideal, so we don't have to talk about left or right semiprimary. The ring is either semiprimary or it is not.

A semisimple ring is jacobson semisimple, and is trivially semiprimary.

Let R be left artinian with jacobson radical J. Recall that J is nilpotent. R/J is jacobson semisimple, and left artinian courtesy of R, so as per the previous section, R/J is semisimple. This makes R semiprimary. Every artinian ring, left or right, is semiprimary.

For a semiprimary ring that is not left artinian, let R be right artinian but not left artinian; the right handed version of the previous paragraph shows R is still semiprimary.

The Hopkins-Levitzki theorem states that a left artinian ring is also left noetherian. First, a lemma about modules over a semiprimary ring.

Let R be semiprimary and let M be a dcc (or acc) R module. Let J be the jacobson radical of R.

Let M0 = M. Let M1 = J*M, the span of elements of J times elements of M inside M. Let M2 be J^2*M, and so on. This descending chain of submodules stops at 0, because J is nilpotent.

Let F be a quotient module at level k in this filtration. Specifically, F = Mk/Mk+1. Note that F is an R module.

Let c be an element of F, a coset representative drawn from Mk. Now c is spanned by elements of J^k*M, and multiplying by J on the left drives c into Mk+1. Thus F is killed by J. This makes F a well defined R/J module.

R/J is semisimple, and any module over a semisimple ring is semisimple, hence F is semisimple. This makes F a direct sum of simple modules.

The submodule Mk is dcc (or acc), and so is the quotient module F. The sum is finite, and F is the finite direct product of simple modules.

Let F1 be M0/M1. This is a finite product of simple modules, and by induction, M1 has a finite composition series. Add on the modules in F1, and M has a finite composition series. This makes M both noetherian and artinian.

If R is semiprimary, and M is a left R module, M is artinian iff M is noetherian.

Now for the golden result. View the ring R as a left R module. If R is artinian it is semiprimary, and by the above it is noetherian. In other words, left artinian implies left noetherian.

More specifically, R is left artinian iff it is left noetherian and semiprimary.

If R does not contain 1 then the proof breaks down, and you can build an artinian ring that is not noetherian. Start with any abelian group having this property, and let the product of any two elements be 0, whence every subgroup becomes an ideal.

Consider the group of rational numbers between 0 and 1 with powers of p in the denominator. Add rational numbers mod 1, and let the product of any two numbers be 0. In a proper ideal H, find a fraction with the greatest power of p in the denominator; it generates the ideal, bringing in all such fractions, and all fractions with lesser powers of p. Ideals can get larger forever, with higher powers of p in the denominator, but they can't get smaller forever.
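
Here is a small Python sketch of this group for p = 2 (illustrative): the ideal generated by 1/p^k is finite and contains every ideal below it, so chains descend only finitely far, while ascending chains run forever.

    from fractions import Fraction

    p = 2

    def ideal(k):
        """Ideal generated by 1/p^k: its multiples mod 1."""
        g = Fraction(1, p ** k)
        return {(m * g) % 1 for m in range(p ** k)}

    # ascending forever: each generator absorbs all the previous ideals
    assert ideal(1) < ideal(2) < ideal(3) < ideal(4)

    # but below a fixed ideal there are only finitely many elements
    assert sorted(ideal(2)) == [Fraction(0), Fraction(1, 4),
                                Fraction(1, 2), Fraction(3, 4)]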

Let H be a left ideal inside the jacobson radical of R, and let M be a finitely generated R module. If H*M = M then M = 0. This is Nakayama's lemma.

Select the fewest generators sufficient to span M.

Start with 0 and build an ascending chain of proper submodules of M. Given any such chain, let U be the union of its submodules. U spans all of M only if it includes all the generators; since there are finitely many, they would all appear together in some member of the chain, which is proper. Thus U is a proper submodule, the conditions of zorn's lemma are satisfied, and there is some T, a maximal proper submodule of M.

The quotient M/T is a simple R module. T is a left submodule, so HT lies in T. Since H maps M onto M, H maps M/T onto M/T.

Replace M/T with R/G, the left cosets of a maximal left ideal G in R. We can do this because M/T is simple. Now H lies in J, which lies in G. The action of H drives everything into G. Thus H*(M/T) = 0. In other words, H drives M into T. Remember that H is supposed to map M onto all of M. This is a contradiction, therefore M = 0.

A variation of this theorem applies when H is nilpotent. M need not be finitely generated, and H need not lie in jac(R), but HM = M as before. Multiply by H on the left and H^2M = HM, which equals M. Repeat this until a power of H becomes 0; whence M = 0.

Let Q be a quotient module of M, and assume HQ = Q. If M is finitely generated then so is Q. Put H in jac(R), with M finitely generated, or let H be nilpotent. By Nakayama, Q = 0, and the kernel is all of M.

Here is an example where Nakiama doesn't work. Let R be the localization of Z about p. In other words, R is the fractions without p in the denominator. This is a local ring whose maximal ideal J (also the jacobson radical) is generated by p.

Let Q be the rationals, with R acting on Q via multiplication. Since everything in Q is divisible by p, JQ covers all of Q. Nakayama does not apply, because J is not nilpotent, and Q is not a finitely generated R module.

Let R be left artinian, and let J be its jacobson radical. Mod out by J and look at the quotient ring Q. The maximal left ideals of R and the maximal left ideals of Q correspond. Q is artinian and jacobson semisimple. As shown above, Q is semisimple. Q is the finite direct product of matrix rings. As with any direct product of rings, a maximal left ideal is maximal in one component and complete in all the others. Select one component, the n×n matrices over a division ring D. A maximal left ideal in this ring is the matrices whose rows lie in a fixed subspace of dimension n-1. The maximal left ideals of Q have been characterized, and these lift uniquely to the maximal left ideals of R.

If Q is commutative, each matrix ring collapses to a field. The only lesser subspace here is 0. There are finitely many maximal ideals, each with one of the fields omitted.

For this section, rings are commutative.

A semilocal ring has finitely many maximal ideals.

As shown in the previous section, an artinian ring is semilocal.

Of course a local ring is semilocal.

If R is local, and B is a ring with a homomorphic image of R in B, and B is a finitely generated R module, then B is semilocal. This is a bit technical, so hang on.

Just to review, R acts on B via the homomorphism into B, then multiplication within B. Thus every ideal of B is an R module. With R mapping an ideal into itself, the quotient ring is also an R module.

Let M be the maximal ideal of R, and let W be a maximal ideal of B. Let K be the quotient field B/W. Both B and K are R modules.

M*B is the span of elements in M times elements in B. R maps M into M, hence MB is an R module. B times MB is just more pairs from M cross B, hence MB is also an ideal of B.

Suppose MB is not contained in W. Hence MK, the image of MB in K, is a nonzero R submodule of K. As we saw with MB earlier, multiplication by elements of K carries MK into itself, thus MK is an ideal in K. The only nonzero ideal in K is K, thus MK = K.

Now K, the quotient of B, is a finitely generated R module. Since R is local, M is the jacobson radical. Thus jac(R) maps K onto K, and by Nakayama's lemma, K = 0. This is a contradiction, hence MB lies in W.

MB lies in W for every maximal ideal W, hence MB lies in jac(B).

The maximal ideals of B and the maximal ideals of B/MB correspond. Let V be the quotient ring B/MB. Since MB is an R module, R maps MB into itself. This makes V an R module, as well as a ring. Since M carries B into MB, V is also an R/M module.

Let F be the field R/M. Now V is an F module, or an F vector space. In fact V is a finite dimensional F vector space, since B is a finitely generated R module.

Remember that R maps into B, with 1 mapping to 1, so F maps into V, with 1 mapping to 1. The kernel of this map is an ideal, namely 0, hence F embeds in V.

Every ideal in V can be scaled by F, hence every ideal is an F vector space, with a well defined dimension. Since V is finite over F, infinite chains are not possible, and V is noetherian and artinian.

A commutative artinian ring has finitely many maximal ideals, hence V is semilocal, and B is semilocal.

Using the method above, or any other means, assume B is a semilocal ring, and mod out by its jacobson radical, so that B becomes jacobson semisimple. The product of the maximal ideals is contained in their intersection, and is 0. Also, maximal ideals are pairwise coprime. Apply the chinese remainder theorem, and B is isomorphic to the direct product of rings B/Wi. Each of these component rings is a field. Therefore a semilocal ring, mod its jacobson radical, is a direct product of fields.
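
Here is a quick Python check with B = Z/30, a hypothetical example: the maximal ideals are (2), (3), (5), their intersection is 0, and the chinese remainder map is a ring isomorphism onto Z/2 × Z/3 × Z/5.

    # B = Z/30 is semilocal, with maximal ideals (2), (3), (5) and jac(B) = 0
    n, moduli = 30, (2, 3, 5)
    crt = lambda b: tuple(b % m for m in moduli)

    # bijective, since the map is injective on n = 2*3*5 elements
    assert len({crt(b) for b in range(n)}) == n

    # and a ring homomorphism: it respects addition and multiplication
    for a in range(n):
        for b in range(n):
            assert crt((a + b) % n) == tuple((x + y) % m for x, y, m in
                                             zip(crt(a), crt(b), moduli))
            assert crt((a * b) % n) == tuple((x * y) % m for x, y, m in
                                             zip(crt(a), crt(b), moduli))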

Now return to the case where R is local and maps into B, and B is a finitely generated R module. We already showed B is semilocal. Mod out by the jacobson radical and the quotient is a ring, and an R module, and a direct product of fields.

Next, mod out by the maximal ideal W. This is a ring homomorphism, and an R homomorphism, and it selects one of the fields in the direct product, rather like a projection. Let K be the resulting field.

Remember that M carries B into W, hence M kills K. Thus K is an R/M module. Let F = R/M, and F maps into K. In fact F embeds into K.

Since B is finitely generated over R, K is finitely generated over F. K is a finite field extension of F, and this holds for each K across the direct product. Note, these fields need not be isomorphic to F or to each other, as shown by the following example.

Map Z/7 into the field of order 49 cross the field of order 343. Map 1 to 1, as you should. The two component fields are not isomorphic to each other, nor to F; yet each is a finite extension of F.

Let R be a subring of S. Without more information, it is difficult to correlate the two jacobson radicals. A very strange integral domain R could embed in its fraction field S, so that jac(R) is large while jac(S) = 0. Or, S could be a complicated algebra over a field R, whence jac(R) is 0 and jac(S) is nonzero.

Here are a couple of theorems when S has a particular relationship to R, as a left R module.

Assume S, as a left R module, is the direct sum of R and T. Let y lie in jac(S) and in R. Since 1 is also in R, 1-y is in R. 1-y is right invertible in S; split its right inverse into a + b, with a in R and b in T, so that (1-y)a + (1-y)b = 1. With 1-y in R, the first product stays in R, and the second product stays in T. These products are 1 and 0 respectively, hence a is the right inverse of 1-y in R. Hence 1-y is right invertible in R.

Multiply y on the right by anything in R, and the result is in R, and in jac(S), since jac(S) is an ideal. Therefore 1-yx is right invertible in R, and by the right handed version of condition (2), y is in jac(R). Intersect jac(S) with R and the result lies in jac(R).

Next let S be a finitely generated left R module, such that the generators g1 through gn commute with R. (They need not commute with each other.) Let J = jac(R).

Let M be any simple left S module. Since M is cyclic, give it the generator c, so that M = Sc. As an R module, M is generated by g1c through gnc.

Let J act on M, giving J*M, which is an R submodule of M. But is this an S submodule? Premultiply by gi; since gi commutes with J, gi*(J*M) = J*(gi*M), which lies back in J*M. Every element of S is an R combination of the generators, thus JM is an S submodule of M.

Remember that M is a finitely generated R module, and J = jac(R). If JM = M, then apply Nakayama's lemma, and M = 0. This is a contradiction, hence JM is a proper submodule of M. Both M and JM are S modules, and M is a simple S module, hence JM = 0.

Every y in J kills M. This holds for every simple S module M, so by condition (3), y is in jac(S). Therefore jac(R) lies in jac(S).

Let S0, S1, S2, … be an ascending chain of rings, each a subring of the next. Let R be a subring of S0, with J = jac(R). Assume each Si is a finitely generated left R module, or if you prefer, a finitely generated left Si-1 module, with generators commuting as in the previous theorem. Let U be the union of the rings Si. As you might guess, J lies in jac(U).

Let y lie in J. Take any x in U and place x in Si for some i. By the above, y is in jac(Si). 1-xy is left invertible in Si, and in U. This holds for every x, hence y is in jac(U), and J is in jac(U).

In some cases we can push this past countability by transfinite induction. The key to the above paragraph is that J is contained in each jac(Si). If this is the case, then J is in jac(U). The other thing you need is the successor step. If you know, for some reason, that jac(Si) lies in jac(Si+1), then J lies in jac(Si) lies in jac(Si+1), and that completes the inductive step. J lies in every jacobson radical in the chain, even an uncountable chain.

Let S be the ring of n×n matrices over R. Remember that ideals in R and in S correspond. If J is jac(R), then the n×n matrices over J form jac(S).

Let y be a matrix that is everywhere 0, except for a single entry that lies in J. We will show below that y lies in the jacobson radical, using condition (2). This holds for each such y, and since jacobson radical is an ideal, the matrices over J all lie in jac(S).

Consider 1-xy for any matrix x. Now xy has a single nonzero column, with entries in J, so 1-xy is that column subtracted from the identity matrix. The entry where this column crosses the main diagonal is 1 minus an element of J, a unit in R. For convenience, premultiply by an invertible matrix that scales that row by the inverse of the unit. This will not change the left invertibility of 1-xy. Now all the diagonal entries are 1. Negate the off diagonal entries to build a left inverse; this works because the off diagonal part is a single column with 0 on the diagonal, and such a matrix squares to 0. Thus 1-xy is left invertible, and y lies in jac(S).

Conversely, let H be the ideal of R that corresponds to jac(S). In other words, jac(S) is the matrices over H. We showed above that H contains J. Let's show H lies in J.

Let y be the identity matrix scaled by anything in H. Let x be the identity matrix scaled by anything in R. 1-xy is a matrix that is constant down the main diagonal and zero elsewhere. Let the diagonal entries equal u. Let w be the left inverse of 1-xy; then w scaled by u on the right equals the identity matrix. In particular, each diagonal entry of w times u equals 1. This makes u left invertible, and since x was arbitrary, everything in H lies in J. That completes the proof.

R is jacobson semisimple iff S is jacobson semisimple.
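
A brute force Python check of the forward direction in a small case (illustrative): R = Z/4, J = (2), S = the 2×2 matrices mod 4. A matrix over a commutative ring is invertible iff its determinant is a unit, so 1-xy can be tested through determinants.

    from itertools import product

    n = 4                                  # R = Z/4, J = (2) = {0, 2}
    J = (0, 2)

    def matmul(a, b):
        return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % n
                           for j in range(2)) for i in range(2))

    def one_minus(m):
        return tuple(tuple((int(i == j) - m[i][j]) % n for j in range(2))
                     for i in range(2))

    def invertible(m):                     # det must be a unit of Z/4, i.e. odd
        return (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % 2 == 1

    ys = [((a, b), (c, d)) for a, b, c, d in product(J, repeat=4)]
    xs = [((a, b), (c, d)) for a, b, c, d in product(range(n), repeat=4)]
    assert all(invertible(one_minus(matmul(x, y))) for y in ys for x in xs)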

Continuing the above, let H be a two sided ideal of R. This defines a quotient ring R/H and a ring homomorphism from R onto R/H, which extends to a ring homomorphism from S onto the n×n matrices over R/H, wherein all the matrix entries are reduced mod H.

Consider GLn(R), the invertible matrices over R. These remain invertible when reduced mod H. Thus GLn(R) maps into GLn(R/H). In this case the map is a group homomorphism. Reduce mod H and multiply two matrices together, or multiply first and reduce mod H; the result is the same.

When H lies in J, the homomorphism is surjective. Let the matrix x represent an element of GLn(R/H), which means x is invertible mod H. Let y represent the inverse, thus xy is 1 plus some matrix over H. This is the identity matrix plus some matrix drawn from jac(S). Such a matrix, 1+j, is invertible in S, hence xy is invertible, and x is right invertible. The same argument applied to yx makes x left invertible, so x lives in GLn(R).

A von neumann ring, also called von neumann regular or von neumann semisimple, has, for each element y, some x satisfying yxy = y. Here x is like a pseudo inverse of y.

If y has left or right inverse x, then yxy = y, and x will suffice. If y is a unit then multiply by y inverse on the left and right, and x has to be the inverse of y.

Let H be a principal left ideal generated by y, and let e = xy. Note that e belongs to H, e is idempotent, since e*e = x(yxy) = xy = e, and y = yxy = y*e, so e generates y, and H. Thus every principal left ideal is generated by an idempotent.

Conversely, let y generate H, and let the idempotent e also generate H. For some x and z, xy = e, and ze = y. Write yxy = ye = zee = ze = y. Therefore, R is von neumann iff every principal left ideal is generated by an idempotent. By symmetry, the same holds for principal right ideals.
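
A quick Python search over Z/6, which is von neumann since it is Z/2 cross Z/3, a product of fields (an illustrative example): each y has a pseudo inverse x, and e = xy is an idempotent generating the same principal ideal.

    n = 6    # Z/6 = Z/2 x Z/3 is von neumann
    for y in range(n):
        x = next(x for x in range(n) if (y * x * y) % n == y)   # pseudo inverse
        e = (x * y) % n
        assert (e * e) % n == e                                 # idempotent
        assert {(r * y) % n for r in range(n)} == \
               {(r * e) % n for r in range(n)}                  # Ry = Re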

Let a left ideal H in a von neumann ring R have two generators e and f. These generate their own principal ideals, so replace them with idempotents. Now e and f are idempotent, and H is still spanned.

Set c = f-f*e. Note that c and e span f (and hence H), and c*e = 0.

Let b be the idempotent of R*c, so that b = xc. b*e = xc*e = 0. Two half orthogonal idempotents span H.

Consider e+b-eb. Premultiply by e and b, and get e and b respectively. Thus e+b-eb generates H, and is contained in H, and H is principal.

Repeat this n times and every finitely generated left ideal is principal. By symmetry, the same holds for finitely generated right ideals.

Let R be von neumann, and let J be jac(R). Let y belong to J, and by condition (4), 1-xy is a unit. Write y = yxy, hence y*(1-xy) = 0, and y = 0. R is jacobson semisimple. If R also has dcc on principal left ideals, combine this with jacobson semisimple, and R is semisimple.

If R is the direct product of von neumann rings, select an xi for each yi, building an x satisfying yxy = y. Thus R is von neumann. The direct sum is also von neumann, though an infinite direct sum does not contain 1.

Assume R is a matrix ring over a division ring D. A left ideal is a subspace, spanned by a basis of n vectors or less. Put these basis vectors in a matrix e, with the remaining rows set to 0. Linear combinations of these vectors span the subspace, and left multiplication by a matrix produces a linear combination of these vectors, thus e generates the left ideal. Every left ideal in this ring is principal.

If a row is nonzero in column 1, move it to the top, using an invertible permutation matrix. This does not change the span of e. Then use gaussian elimination to clear out the rest of column 1. If the first column is 0 then there is a row of 0's somewhere; move that to the top. Now the first row and column of e are set.

Move on to the submatrix starting at row 2 column 2. Move some row up to row 2 if possible, and clear out the rest of column 2, above and below row 2, leaving only e2,2 in column 2, or move the 0 row up to row 2. Repeat this until e is upper triangular, possibly with some gaps on the main diagonal.

Scale the rows of e so the main diagonal is 0 or 1. Then verify that e^2 = e. This is an idempotent that generates our left ideal. This holds for every left ideal, hence the matrix ring is von neumann. Here are four examples of how e, a 5 by 5 matrix, might look after normalization. The number of 1's on the main diagonal is the dimension of the subspace spanned by e.

1 7 0 5 0
0 0 0 0 0
0 0 1 9 0
0 0 0 0 0
0 0 0 0 1

0 0 0 0 0
0 1 5 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0 0 1

1 2 1 0 4
0 0 0 0 0
0 0 0 0 0
0 0 0 1 6
0 0 0 0 0

1 0 3 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0 0 1
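
These four candidates can be checked mechanically; the following Python snippet verifies that each one squares to itself. Integer arithmetic suffices for the check, though in general the entries come from the division ring.

    def matmul(a, b):
        n = len(a)
        return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    examples = [
        [[1,7,0,5,0],[0,0,0,0,0],[0,0,1,9,0],[0,0,0,0,0],[0,0,0,0,1]],
        [[0,0,0,0,0],[0,1,5,0,0],[0,0,0,0,0],[0,0,0,1,0],[0,0,0,0,1]],
        [[1,2,1,0,4],[0,0,0,0,0],[0,0,0,0,0],[0,0,0,1,6],[0,0,0,0,0]],
        [[1,0,3,0,0],[0,1,0,0,0],[0,0,0,0,0],[0,0,0,1,0],[0,0,0,0,1]],
    ]
    for e in examples:
        assert matmul(e, e) == e        # each normalized matrix is idempotent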

Take the direct product of such matrix rings, and every semisimple ring is von neumann. This ring is also acc.

Conversely, assume R is von neumann and left acc. Every left ideal is finitely generated, and since R is von neumann, every left ideal is generated by some idempotent e. Let f = 1-e, and the ring is the direct product R*e + R*f. R*e is a summand, and R is semisimple.

With R von neumann, any of the four chain conditions - left acc, right acc, left dcc, right dcc - makes R semisimple, whence all four chain conditions apply.

The rationals are a field, hence von neumann, but the integers are not. The subring of a von neumann ring need not be von neumann. However, an ideal H in a von neumann ring R, treated as a subring without 1, is von neumann. A principal left ideal in H, generated by y, sits inside the principal left ideal Ry, which leads to an idempotent e; e lies in H and generates Ry. Principal left ideals are generated by idempotents, and H is von neumann.

If S is the quotient of a von neumann ring R, pull y ∈ S back to y′ in R, find x′ in R such that y′x′y′ = y′, and map this forward to S, making S von neumann.

If M is a semisimple module, let R be the ring of endomorphisms of M. Given f, we need g, such that f = fgf. Let K be the kernel of f, with complement U in M. Thus f is injective on U, with image V. Let g map V back onto U, reversing f, and let g be anything on the complement of V. Verify fgf = f, on K, on U, and on M. Thus the ring of endomorphisms is von neumann, although the selection of g is not unique.
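
For matrices over the reals, this pseudo inverse is the familiar Moore-Penrose pseudoinverse; numpy computes it even for singular f, and f g f = f holds up to floating point. A quick illustrative check:

    import numpy as np

    f = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],       # row 2 = 2 * row 1, so f is singular
                  [0.0, 1.0, 1.0]])

    g = np.linalg.pinv(f)                # Moore-Penrose pseudoinverse
    assert np.allclose(f @ g @ f, f)     # the von neumann identity fgf = f
    assert np.allclose(g @ f @ g, g)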

Let M be a module over a division ring K. In other words, M is a K vector space. If U is a subspace, assign it a basis, then extend the basis to all of M. The "rest" of the basis defines V, with U+V = M. Therefore M is semisimple.

Let M = K^n; the endomorphisms of M, i.e. the n×n matrices over K, form a von neumann ring. But we already proved that above.

Take an infinite direct product of matrices over K to find a nonartinian von neumann ring, which is not semisimple.

A variable is boolean if it is either true or false, often represented by 1 and 0 respectively. Thus boolean logic involves variables that are true or false, and boolean circuitry, which drives your computer and mine, is based on solid state switches that are either on or off.

A ring R is boolean if every element is idempotent. We will see that such a ring actually consists of boolean vectors, arrays of boolean variables that are either true or false.

One direction is easy. Let R be the direct product of arbitrarily many copies of Z/2. Every element is idempotent, thus R is a boolean ring. The direct sum also gives a boolean ring, though an infinite direct sum does not contain 1. Bring 1 back in if you like; whence every element of R has almost all its components 0, or almost all its components 1. In fact any boolean ring without 1 can be enhanced in this way. If e is idempotent then 1-e is idempotent. Multiply by f on either side and find something in the original ring, which is idempotent. Multiply 1-e by 1-f and find 1 + something in the original ring, hence another idempotent. R with 1 adjoined is boolean.

Now, without knowing the structure of R, let all the elements of R be idempotent. Using the binomial theorem, x+x = (x+x)^2 = x^2 + 2xx + x^2 = x + 2x + x. Thus 2x = 0, and R has characteristic 2. Add anything to itself and get 0, like a switch turning on and off.

Similarly, x+y = (x+y)^2 = x^2 + xy + yx + y^2 = x + xy + yx + y. Thus xy + yx = 0, and since R has characteristic 2, xy = yx, and R is commutative.

Since R is a Z/2 vector space, give it a basis, and it looks like an array of switches that can be on or off. Addition is implemented by the bitwise xor operator. In some cases multiplication is the & operator, although this may depend on the basis.
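
Here is the bit vector picture in Python (illustrative): integers stand for boolean vectors, xor implements addition, & implements multiplication, and the boolean ring identities all check out.

    import random

    add = lambda x, y: x ^ y     # bitwise xor
    mul = lambda x, y: x & y     # bitwise and

    for _ in range(1000):
        x, y, z = (random.getrandbits(8) for _ in range(3))
        assert mul(x, x) == x                                   # idempotent
        assert add(x, x) == 0                                   # characteristic 2
        assert mul(x, y) == mul(y, x)                           # commutative
        assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))   # distributive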

If R has no zero divisors, let x and y be distinct nonzero elements, and write (x+y) * xy = x^2y + xy^2 = xy + xy = 0. Since x ≠ y, x+y is nonzero, hence xy = 0, which is a contradiction. The only boolean ring with no zero divisors is Z/2.

Mod out by any prime ideal, giving a quotient free of zero divisors, namely Z/2. This is a field, hence every prime ideal is maximal.

Let H be an ideal generated by x and y. Note that x+y+xy lies in H, and generates x and y (multiply it by x, or by y), and hence H. Thus H is principal. Repeat this n times and every finitely generated ideal is principal.
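
In the bit vector picture, x+y+xy is the union x|y, which indeed recovers x and y on multiplication. A quick Python check (illustrative):

    import random

    for _ in range(1000):
        x, y = random.getrandbits(8), random.getrandbits(8)
        g = x ^ y ^ (x & y)                  # x + y + xy in the boolean ring
        assert g == x | y                    # the union of the two bit sets
        assert g & x == x and g & y == y     # g generates both x and y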

Since yyy = y, R is von neumann. If R contains 1, and satisfies any of the chain conditions, R is semisimple. This was discussed in the previous section. A commutative semisimple ring is the finite direct product of fields. Let F be one of these fields, a boolean subring that is an integral domain. F has to be Z/2, hence R = (Z/2)^n. This is the familiar ring of boolean vectors, and the only possible noetherian or artinian boolean ring.

A boolean ring that is finite, or a finite dimensional Z/2 vector space, is artinian, and isomorphic to (Z/2)^n. Normally I would reconfirm that our ring contains 1, but a finite boolean ring always has 1. This is the case for Z/2, so proceed by induction on the dimension of the boolean vector space. Let M be a maximal ideal of R. The quotient ring is also boolean, and is, by induction, a direct product of fields of order 2. If the quotient has size 4 or more, it has a proper ideal, which lifts to an ideal between M and R. This is a contradiction, hence the quotient is Z/2. M is a smaller boolean ring, so represent it as the direct product of boolean variables g1 through gn. The trivial coset of M is represented by 0; let a represent the other coset. Complete the ring by defining a*gi for each gi.

M is an ideal, so agi is a sum of generators of M. Remember that gigj = 0 for j ≠ i. Since (agi)gi = a(gigi) = agi, agi cannot include any other gj. This holds for each j, hence agi is either 0 or gi.

If each agi = 0 then you have landed on your feet; that's what we want. However, if agi = gi, then let b = a+gi, and bgi = 0. Do this for each i, and the new generator is orthogonal to the others, and the sum of generators becomes 1 in R.

Let P be the power set of a fixed set F. Turn the elements of P, i.e. the subsets of F, into a boolean ring as follows. Let ∅ = 0 and let F = 1. Let intersection become multiplication. Let symmetric difference, i.e. the union minus the intersection, become addition.

I'll prove one of the ring properties; the others are done similarly. Let's prove addition is associative. Let A B and C be subsets of F. An element x in F could be in none, 1, 2, or all 3 of these sets. For each of these 8 possibilities, evaluate (A+B)+C and A+(B+C). Show that x is in the former iff it is in the latter. For instance, let x lie in A and B, but not C. It is in the union and the intersection of A and B, hence x is not in A+B. Since x is not in C, it is not in (A+B)+C. On the other hand, x is in B, and not B∩C, hence x is in B+C. Now bring in A, which also contains x, and x drops out. The resulting subset of F is the same, and addition is associative. Verify the other ring properties in this manner, and since B∩B = B, the result is a boolean ring.
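
Python sets implement both operations directly, ^ for symmetric difference and & for intersection, so the ring axioms can be spot checked at random (an illustrative sketch):

    import random

    F = range(10)

    def random_subset():
        return frozenset(x for x in F if random.random() < 0.5)

    for _ in range(1000):
        A, B, C = random_subset(), random_subset(), random_subset()
        assert (A ^ B) ^ C == A ^ (B ^ C)           # addition is associative
        assert A & (B ^ C) == (A & B) ^ (A & C)     # distributive
        assert A & A == A                           # every element idempotent
        assert A ^ A == frozenset()                 # characteristic 2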

Notice that subsets can be represented by arrays of 1's and 0's, where the ith bit indicates whether the element xi belongs to that particular subset. Now multiplication, or intersection, performs a bitwise and operation, and addition looks like an xor operation, as one would expect from a ring with characteristic 2. The power set ring has become a classic boolean ring, based on boolean vectors and bitwise operators.