Earlier chapters have explored the simple ring, having no intermediate ideals, and the simple module, having no intermediate submodules. In particular, the simple module is well characterized; it is isomorphic to the cosets of H in R, acted upon by R, where H is a maximal left ideal. This is a field when R is commutative.

Let E be the ring of endomorphisms of a simple module M. The kernel of f ∈ E is M or 0. If the kernel is M then the image is 0; this is the trivial endomorphism, which is 0 in E. If the kernel is 0 then f is injective. Its image is nonzero, and has to be all of M, since M is simple. Thus f is injective and surjective, an automorphism, with an inverse function. Every nonzero element of E is invertible, so E is a division ring.

Remember that M is isomorphic to R/H, where H is a maximal left ideal. Once this isomorphism is established, something in M is associated with 1 in R. Each endomorphism on M defines, and is defined by, the image of 1 in M. If 1 moves to y, then f(x) = f(x*1) = x*f(1) = xy. Yet x and x+w represent the same coset for every w in H, and f(x+w) = xy + wy, so wy must lie in H. In other words, f is well defined iff Hy lies in H. With this in place, f is equivalent to right multiplication by y across the cosets of H in R.
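The simplest instance to compute with is R = Z and H = 7Z, so M = Z/7. A small Python sketch (the modulus 7 and the helper names are illustrative choices, not from the text) confirms that every endomorphism is right multiplication by the image of 1, and that the nonzero endomorphisms are all invertible:

```python
p = 7  # R = Z, H = 7Z, M = Z/7; an illustrative stand-in

def endo(y):
    """Right multiplication by y on the cosets of 7Z in Z."""
    return lambda x: (x * y) % p

# additivity: f(a+b) = f(a) + f(b) for every pair of cosets
f = endo(3)
assert all(f((a + b) % p) == (f(a) + f(b)) % p
           for a in range(p) for b in range(p))

# every nonzero endomorphism is invertible: for each y != 0 there is z with yz = 1,
# so E minus 0 is a group -- E is a division ring (here, the field F_7)
inverses = {y: z for y in range(1, p) for z in range(1, p) if (y * z) % p == 1}
assert len(inverses) == p - 1
```

The same computation with any prime p illustrates the claim that the characteristic of E matches the characteristic of R/H.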

If R is commutative then H is an ideal, and Hy automatically lies in H for every y. The endomorphism ring E is isomorphic to the field R/H, that is, to M.

If R is not commutative, you can still map 1 to any integer multiple of 1, or to anything else in the center of R. These all map H into H. Mapping 1 to 1 gives the identity automorphism. Add this automorphism to itself again and again to map 1 to the various integer multiples of 1. Use this to show the characteristic of E equals the characteristic of R/H, which is either prime or 0. It is prime if the characteristic of R is finite.

As an example, let R be the 2×2 matrices over a division ring D. Let H be the submodule that is 0 down the left column. This is a subspace of dimension 1, in a space of dimension 2, hence a maximal left ideal. The quotient M, which is our simple module, can be represented by matrices that are 0 on the right. These are the cosets of H. Map the identity matrix into M and find 1 in the upper left. This is 1 in M, and its image determines the automorphism. Map this to y in the upper left without trouble, because Hy lies in H. In fact Hy = 0. But try mapping 1 to z in the lower left. Show that Hz does not lie in H. The only R endomorphisms of M scale M by elements of D on the right, even though M looks like D². The endomorphisms form a division ring, as they should.
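This example is small enough to verify by brute force. A Python sketch over D = F_5 (the choice of field and the helper names are mine) checks that Hy lies in H when y sits in the upper left, but Hz escapes H when z sits in the lower left:

```python
from itertools import product

p = 5  # work over D = F_5, an illustrative stand-in for a division ring

def mat_mul(a, b):
    """2x2 matrix product mod p."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def in_H(m):
    """True when the left column is zero, i.e. m represents an element of H."""
    return m[0][0] == 0 and m[1][0] == 0

# H = all matrices whose left column is zero
H = [((0, a), (0, b)) for a, b in product(range(p), repeat=2)]

E11 = ((1, 0), (0, 0))   # "1 in the upper left": the image of the identity in M
y = ((3, 0), (0, 0))     # y = 3 in the upper left
z = ((0, 0), (3, 0))     # z = 3 in the lower left

# Hy lies in H (in fact Hy = 0), so 1 -> y gives a well-defined endomorphism
assert all(mat_mul(h, y) == ((0, 0), (0, 0)) for h in H)

# Hz does NOT lie in H, so 1 -> z is not well defined on the cosets of H
assert any(not in_H(mat_mul(h, z)) for h in H)
```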

This generalizes to n×n matrices over D. Set the first column to 0 for the maximal ideal H, and set columns 2 through n to 0 for the quotient module M. Map 1 in the upper left to y in the upper left, and that's all you can do. Once again E = D.

The module M is semisimple, or completely reducible, if every submodule is a summand of M. Thus, if V is a submodule of M, then M is V cross W for some submodule W.

A simple module satisfies the definition of semisimple by default, having no proper nonzero submodules.

Z/p * Z/p is a semisimple Z module, while Z/p² is not. Set V to the multiples of p in the latter, and there is no subgroup W wherein V*W resurrects the group. That would contradict the unique decomposition of a finite abelian group.
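A brute-force check with p = 3 makes this concrete (the subgroup list for Z/9 is hardcoded, since Z/9 has only three subgroups):

```python
# Z/9 has exactly three subgroups: 0, the multiples of 3, and everything
subgroups_z9 = [{0}, {0, 3, 6}, set(range(9))]

V = {0, 3, 6}   # the multiples of p = 3 inside Z/9
complements = [W for W in subgroups_z9
               if V & W == {0}
               and {(v + w) % 9 for v in V for w in W} == set(range(9))]
assert complements == []   # no W with V * W = Z/9, so Z/9 is not semisimple

# In Z/3 x Z/3, the first factor has the second factor as a complement
V2 = {(a, 0) for a in range(3)}
W2 = {(0, b) for b in range(3)}
sums = {((v[0] + w[0]) % 3, (v[1] + w[1]) % 3) for v in V2 for w in W2}
assert V2 & W2 == {(0, 0)} and len(sums) == 9   # V2 * W2 is the whole group
```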

A ring R is semisimple if it is a semisimple left R module. The submodules of the integers Z are the various multiples of n, and none of these are summands, hence Z is not a semisimple ring.

If M is semisimple, every submodule of M is semisimple. Let U be a submodule of M, and let V be a submodule of U. We are looking for W such that V*W = U.

Since M is semisimple, M = U*S for some S. Similarly, M = V*T. Let W = U ∩ T. Suppose v + w + s = 0, where v ∈ V, w ∈ W, and s ∈ S. Now v + w lies in U, and U and S are disjoint, so s = 0. Thus v + w = 0; but w ∈ T, and V and T are disjoint, so v = w = 0. V, W, and S are linearly independent.

If x is an element of M, x can be written as a+b, where a is in U and b is in S. Next write a as c+d, for some c in V and d in T. Then write d as e+f, for some e in U and f in S. In other words, a = c+e+f. Yet there is only one way to write a as components from U and S, namely a+0, hence f = 0, and d = e. Now d belongs to both U and T, so d belongs to W. An arbitrary element x is equal to c+d+b, from V, W, and S respectively. Therefore M is the direct product of V, W, and S.

If x is an element of U, write it as c+d+b, as above. Since U and S are independent, b = 0, and x = c+d, from V and W respectively. Since U is spanned by independent modules V and W, it is equal to V*W. Thus V is a summand of U. Since V was arbitrary, U is a semisimple module.

The quotient of a semisimple module is semisimple. Let V be a submodule of the quotient and let U be the preimage of V. Since U is semisimple, write U as K*S, where K is the kernel of the homomorphism. Then let T be a summand of U in M, whence M = K*S*T. Let W be the image of T. The cosets of K are uniquely represented as sums of elements from S and T. Yet the cosets of K are precisely the elements of the quotient module, and the coset representatives in S and T correspond to the elements of V and W respectively. Thus the quotient is V*W, and is semisimple.

The converse is not true, even for commutative rings. Let M = Z/p², with kernel Z/p and quotient Z/p. Both kernel and quotient are simple, but M is not semisimple.

If M is a nontrivial semisimple module, it contains a simple submodule.

Let g be a nonzero element in M. Zero is a submodule of M that does not contain g. The union of an ascending chain of submodules missing g is again a submodule missing g. By Zorn's lemma, let W be a maximal submodule of M that misses g.

Let T be a submodule such that T*W = M. If T is not simple then, being semisimple, it is the cross product of proper submodules T1 and T2. Since W is maximal, W*T1 contains g, and W*T2 also contains g. Write g = w1 + t1 = w2 + t2; since W, T1, and T2 are linearly independent, t1 = t2 = 0, so g belongs to W. This is a contradiction, hence T is a simple module.

Using the above lemma, M is semisimple iff it is spanned by simple modules.

Assume M is semisimple. Let U be the submodule of M that is spanned by all the simple submodules inside M. If U is not M, write M = U*W, whence W is a nontrivial semisimple module. By the above lemma, W contains a simple module, which by definition lies in U; yet U and W are disjoint. This is a contradiction, therefore M is spanned by simple modules.

If U and V are submodules, and V is simple, then V is disjoint from U, or wholly contained in U. Anything else would make V ∩ U a proper nonzero submodule of V.

Assume M is spanned by simple modules, and let U be an arbitrary proper submodule of M. The simple submodules cannot all lie inside U, so let S be a simple module disjoint from U. This is the beginning of a chain of increasing sets of independent simple modules that are independent of U. At each step, bring in another independent simple module. It is independent of all the simple modules that have come before, and the new span remains disjoint from U. If you have built an infinite ascending chain, take the union to find an even larger set of independent simple modules. Their span remains disjoint from U, so we're ok. By Zorn's lemma, there is a maximal set of independent simple modules that are all independent of U. Let V be the direct sum of these simple modules, i.e. their span.

By construction, U and V are independent. If they span all of M then U is a summand of M, and we are done. So suppose some x in M lies outside U*V. Everything in M is spanned by simple modules, including x. Write x as a finite sum of nonzero elements drawn from simple modules. At least one of these elements lies outside the span of U and V, else x would lie in that span. Let this element y belong to the simple module C. Since y does not belong to U*V, C meets U*V only in 0. We could then add C to our collection of simple modules independent of U. This contradicts the maximality of V. Thus U*V spans all of M, our arbitrary submodule U is a summand, and M is semisimple.

In summary, M is semisimple iff it is spanned by simple modules. In fact, M is the direct sum of simple modules. Demonstrate this by setting U = 0 in the above proof.
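Vector spaces over a field are the archetypal semisimple modules: the lines are the simple submodules, and the greedy construction in the proof becomes ordinary basis extension. A Python sketch over F_2 (the subspace and helper names are illustrative):

```python
from itertools import product

def span(gens, dim=3):
    """All F_2-linear combinations of the generators, as a set of vectors."""
    return {tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % 2
                  for i in range(dim))
            for coeffs in product((0, 1), repeat=len(gens))}

U_gens = [(1, 1, 0)]        # an arbitrary proper submodule U of M = F_2^3
U = span(U_gens)

# greedily collect lines (simple submodules) that stay independent of U,
# mirroring the Zorn's lemma argument
complement = []
for e in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    if e not in span(U_gens + complement):
        complement.append(e)

W = span(complement)
assert U & W == {(0, 0, 0)}                  # U and W are independent
assert len(span(U_gens + complement)) == 8   # together they span all of F_2^3
```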

As a corollary, the direct sum of semisimple modules is semisimple. Each is a direct sum of simple modules, and the result is a direct sum of simple modules, which is semisimple.


If R is a left semisimple ring, then every left R module is also semisimple.

Let U be a cyclic R module, with a generator g. Let H be the left ideal in R that maps g to 0. Think of g as 1*g, whence x becomes xg. Now xg and yg are different iff x and y lie in different cosets of H in R, so U is isomorphic to the cosets of H in R. In other words, U is a quotient module of R, and the homomorphic image of a semisimple module is semisimple. Therefore U is semisimple.

Remember that a module is semisimple iff it is spanned by simple modules. Let M be any R module. Each element of M generates a cyclic submodule, which is semisimple by the above. Every semisimple module is spanned by simple modules. Therefore all of M is spanned by simple modules, and M is a semisimple module.

Let M be a finite direct sum of n left simple R modules. Build a tower of submodules of M by bringing in the simple modules, one at a time. Each submodule inside the next establishes a quotient, which is the simple module that you just brought in. This is a composition series for M, hence M is noetherian and artinian.

Conversely, assume M is an infinite direct sum of simple modules. Bring them in one at a time, as before, to build an infinite ascending chain of submodules. At each step the complement, i.e. the direct sum of all the other modules, forms an infinite descending chain. M is neither noetherian nor artinian.

The semisimple module M is noetherian and artinian iff it is a finite direct product of simple modules.

If R is a semisimple ring, represent 1 as a finite sum of elements from some of the simple modules of R, which are in this case left ideals of R. To illustrate, assume 1 is [e1,e2,e3,0,0] across 5 simple modules. Let x be a nonzero element in the fourth component, and evaluate x*1. Each x*ei lies in the left ideal Mi, so x*1 lies in the first three components, while x lies in the fourth. Thus x*1 is not x, and that is a contradiction. The representation of 1 is nonzero in every component, and that means R is a finite direct product of simple left ideals. A semisimple ring is noetherian and artinian.

With 1 spread across all the components of R, evaluate ei*1. The result has to be ei, which is 0 in every component other than i. Thus eiej = 0 for i ≠ j, and ei² = ei. The components of 1 are orthogonal idempotents.

Multiply xi by 1 and get xi, thus xiej is 0, except for xiei, which gives xi back again. Apply this to x*ej, where the ith component of x is xi. Remember that xiej drops out for i ≠ j. Thus right multiplication by ej is the projection of x onto xj, which is a module homomorphism from R onto its components.
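These identities are easy to watch in a concrete semisimple ring. Taking R = M_2(F_5), with 1 = E11 + E22 splitting R into its two column ideals (my notation), a Python sketch verifies orthogonality, idempotence, and the projection property:

```python
p = 5  # R = M_2(F_5), an illustrative semisimple ring

def mul(a, b):
    """2x2 matrix product mod p."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

E11 = ((1, 0), (0, 0))
E22 = ((0, 0), (0, 1))

# the components of 1 are orthogonal idempotents
assert mul(E11, E11) == E11 and mul(E22, E22) == E22
assert mul(E11, E22) == ((0, 0), (0, 0))
assert mul(E22, E11) == ((0, 0), (0, 0))

# right multiplication by each idempotent projects onto a column ideal
x = ((2, 3), (4, 1))
assert mul(x, E11) == ((2, 0), (4, 0))   # projection onto the first column
assert mul(x, E22) == ((0, 3), (0, 1))   # projection onto the second column
```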

It's interesting to see why an infinite direct product of simple modules does not build a semisimple ring. Let R be the direct product of infinitely many copies of Z/3, or any other field for that matter. Verify that R is a ring with 1, and the simple R modules are the components, the various copies of Z/3. At first it seems like R is semisimple. The submodules that come to mind are the direct products of some, but not all, of the components. If V is the direct product of the odd numbered components, then it has a summand W, the product of the even numbered components.

However, there is a submodule that you might not think of right away. Let V be the direct sum of the component rings, and suppose V has a disjoint summand W, so that V*W = R. Let x be a nonzero member of W, with a nonzero value in the jth component. Multiply by 1j, the identity of the jth component, to show that the jth copy of Z/3 belongs to W. This simple module also belongs to V, hence V and W are not disjoint after all. There is no summand, and R is not a semisimple module; not a semisimple ring. The direct sum of these fields makes a semisimple ring, if you don't mind the fact that it does not contain 1.
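The missing identity in the direct sum can be seen directly if elements are stored sparsely, each recording only its finite support (a Python sketch; the representation is mine):

```python
# Elements of the direct sum of infinitely many copies of Z/3, stored as
# sparse dicts {component: nonzero value}; every element has finite support.
def mul(a, b):
    """Componentwise product mod 3, keeping only nonzero entries."""
    return {k: (a[k] * b[k]) % 3 for k in set(a) & set(b) if (a[k] * b[k]) % 3}

e = {k: 1 for k in range(10)}   # a candidate identity, supported on components 0..9
x = {10: 2}                      # lives in a component beyond e's support

# e * x = 0 != x, so no finitely supported element can serve as 1
assert mul(e, x) == {}
```

Any candidate identity has finite support, and fails on a component beyond that support; this is the precise sense in which the direct sum "does not contain 1."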

The Artin Wedderburn theorem completely characterizes semisimple rings. (As usual, rings contain 1.) Such a ring is represented as a finite direct product of simple artinian rings. Then simple artinian rings are analyzed. Each is a ring of matrices over a division ring, as described here. Finally these simple rings are combined in a direct product to reproduce the original, semisimple ring.

If R is a semisimple ring it is noetherian and artinian, the finite direct product of simple left ideals M1, M2, M3, etc. Let 1 in R comprise ei in Mi. As shown above, each ei projects R onto Mi.

If R is commutative then each ei is the two sided multiplicative identity for its module. These idempotents create a direct product of rings. Each component ring is a simple R module, a simple Mi module, and a simple ring. Since simple commutative rings are fields, R is a finite direct product of fields. But what if R is not commutative?

The order doesn't really matter, so put isomorphic modules together into blocks. For instance, M1, M2, and M3 could be isomorphic simple left R modules, and together they span a block B1. M4 and M5 span B2, and so on. Each block will become a two sided ideal of R.

Left multiplication by x keeps each Mi within itself, as they are all left R modules. But right multiplication by x is a left module homomorphism that could map M1 somewhere else. Since M1 is simple, the kernel of this map is M1 or 0, and the image of this map is 0 or something isomorphic to M1. Since M1x is simple (or 0), it lies entirely inside or outside of B1. Suppose M1x lies outside of B1. Let S be the submodule spanned by B1 and M1x. Build a composition series of R, starting with 0 ⊂ B1 ⊂ S ⊆ R. B1 contributes quotients M3, M2, and M1, and S/B1 has, somewhere in it, a quotient isomorphic to M1. That makes 4 composition factors isomorphic to M1; but as a direct product of modules, R only has 3 quotients isomorphic to M1, grouped together as M1, M2, and M3. The composition factors are unique by Jordan Holder, so this is a contradiction. M1x lies in B1, and the same for M2x and M3x. By linearity, B1x lies in B1, and B1 is a two sided ideal.

The same holds for B2, B3, and all remaining blocks in R.

Now consider B1 * x4, where x4 lies in M4. This has to lie in M4, because M4 is a left R module. But it also lies in B1 as shown above. These are disjoint, hence B1 * x4 = 0. Anything in B1 times anything in B2 is 0. In fact the product of elements from any two different blocks is 0. Multiplication in R takes place per block, with no interactions between blocks. The blocks are independent from one another. R is the direct product of its ideals B1 * B2 * B3 ….

Separate 1 into e1+e2+e3 within B1, e4+e5 within B2, and so on. These are orthogonal idempotents, and they are the right identities for their respective blocks. Write 1*x = x, and expand 1 as above. Also expand x as a sum over xi. Since blocks are independent ideals, (e1+e2+e3)(x1+x2+x3) lies in B1, (e4+e5)(x4+x5) lies in B2, and so on. 1*x has to equal x1+x2+x3+x4+x5+…, hence (e1+e2+e3)(x1+x2+x3) = x1+x2+x3. Therefore e1+e2+e3 is the left identity for B1, and B1 is a ring with 1. The same holds for B2, B3, etc; each block is a ring. R is a direct product of rings, where each ring is a direct product of isomorphic simple left R modules.

If B1 is not a simple ring, it has a proper ideal H. Project R onto B1, and let R act on B1 from the left, hence H is a left R module inside B1. And B1 is a submodule of R, hence B1 is semisimple. Thus B1 = H*G. By Jordan Holder, H factors into left simple modules isomorphic to M1, and so does G. In this example there are 3 in total; perhaps H has two and G has one. Now 1 in B1 projects onto G and H, giving eG and eH. eH projects B1 onto H, from the left or from the right, H being a two sided ideal. It pulls xH out of x, and therefore commutes with B1. Since eG = 1 - eH, it too commutes with B1. Therefore B1 is a direct product of rings G*H.

If either G or H is not a simple ring, then repeat the process. Finally R is the finite direct product of simple rings, where each ring holds one or more of the original simple left R modules of R.

Let B be one of these rings, hence B is a simple ring, and a left artinian R module. Suppose B has an infinite descending chain of left ideals. These are B modules, but since B is a summand of R, they are also R modules. That contradicts dcc, hence B is a left artinian ring.

B is the summand of a semisimple R module, hence B is a semisimple R module. Let U be a submodule of B, as a B module. Every B module is an R module, hence U is an R submodule. There is V such that U*V = B. B acts on V the same way, whether B is part of R or not; the rest of R is thrown away. Thus V is a B module. Each B submodule of B is a summand, and B is a semisimple ring.

Just for this paragraph, pretend we didn't know B was part of R. A simple left artinian ring, whether it comes from R or not, is left semisimple. At this point I'm going to use a forward reference, and call upon a theorem from the next chapter. The Jacobson radical of B, written jac(B), is a two sided ideal, and since B is simple this is 0. In other words, B is Jacobson semisimple. A ring that is both left artinian and Jacobson semisimple is left semisimple.

However you get there, B is left semisimple, and B is a finite direct product of left simple modules. If some of these component modules are not isomorphic, put like modules together into blocks as shown above. Each block is its own ideal, yet B has no proper ideals, so B is the finite direct product of copies of the same simple left B module M.

Let B comprise n copies of M. The simple module M, and the value of n, are well defined, as established by Jordan Holder.

An endomorphism of B, as a left B module, determines, and is determined by, the image of 1. If 1 maps to y, then x maps to xy. The endomorphisms of B as a left B module are right multiplication by the various elements of B. The ring of endomorphisms corresponds to the elements of B, and is in fact isomorphic to B. In this case it is easier to analyze the ring of endomorphisms of B, and that in turn equals B.

Write B as the direct product M1 * M2 * …Mn, where each module Mi is isomorphic to M. A direct product is a product in the category of B modules. That means a B module homomorphism f, from any domain Y into B, defines, and is defined by, its projection into each component. Let M3 be the third component, and look at the functions from Y into M3. If Y is itself a direct product of B modules, then f can act independently on each component, and f acts on the whole by linearity. In other words, f defines, and is defined by, f on each component of Y. Set Y to B, which is a direct product of simple modules Mi, and let f map B into each component Mj; then f is essentially a B module homomorphism from Mi into Mj for each i and j from 1 to n. This is n² module homomorphisms, which we can arrange in a matrix - but what does a module homomorphism look like?

Each module homomorphism is a map from a copy of M into another copy of M. This is an endomorphism on M. These endomorphisms form a division ring, as described at the top of this chapter. Call this division ring D. Thus each entry in the n by n matrix is an element of D.

If f and g are two endomorphisms from B into B, what does f+g look like? For each i and j, f and g establish microfunctions from Mi into Mj, which are elements of D. These microfunctions are added together by adding the corresponding elements of D. Indeed, this is how D was defined, the ring of endomorphisms on M. Therefore, functions are added together by adding their corresponding matrices.

It is not surprising that functions are composed by multiplying the corresponding matrices. Follow fg as it maps Mi into Mj. This is the ith row of the first matrix dotted with the jth column of the second. At the heart of it, matrix entries multiply together within the division ring D, as endomorphisms of M are composed. This is the key that makes it all work.
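A quick numerical check with B = M_2(F_7) (an illustrative choice): composing two right-multiplication endomorphisms is again right multiplication, by the product of the two matrices taken in the appropriate order, so the matrix entries really do multiply in D as the endomorphisms compose:

```python
p = 7  # B = M_2(F_7), an illustrative simple artinian ring

def mul(a, b):
    """2x2 matrix product mod p."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def right_mult(y):
    """The endomorphism of B (as a left B module) sending x to x*y."""
    return lambda x: mul(x, y)

y = ((2, 3), (1, 4))
z = ((5, 0), (6, 2))
f, g = right_mult(y), right_mult(z)

# the endomorphism is determined by the image of 1
assert f(((1, 0), (0, 1))) == y

# composing f then g is right multiplication by y*z: (x*y)*z = x*(y*z)
x = ((1, 2), (3, 4))
assert g(f(x)) == right_mult(mul(y, z))(x)
```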

Put this all together and the endomorphisms of B are the n×n matrices over D. This in turn is isomorphic to B. The simple left artinian ring B has been characterized.

Given such a ring B, write it as a ring of matrices based on D and n. Then apply jordan holder to resurrect the same length n of its composition series, and the same simple modules, having the same endomorphisms D. There is one simple left artinian ring, up to isomorphism, for each division ring D and each positive integer n.

Finally the semisimple ring can be characterized. A semisimple ring is the finite direct product of simple artinian rings, where each component ring is the n×n matrices over a division ring.

The structure of a semisimple ring is symmetric, i.e. matrices over a division ring look the same from the left or the right. Therefore R is left semisimple iff it is right semisimple. Given this, you'll understand if I sometimes say R is semisimple, without specifying left or right.

If R is a simple ring, and x is a nonzero element of the center of R, then the two sided ideal generated by x is all of R, and in particular contains 1. This makes x left and right invertible, i.e. a unit. The center of R is a field.

Assume R is simple and artinian, and write R as the n×n matrices over D. A copy of D exists in R, namely the identity matrix scaled by D. The center of D, which is a field, lives in the center of R as constant diagonal matrices. Conversely, the center of a ring of matrices consists of the constant diagonal matrices with entries drawn from the center of D. Therefore the center of R is the center of D, expressed as diagonal matrices.
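For a small case this can be checked exhaustively. With D = F_3, a brute-force scan of all 81 matrices in M_2(F_3) (a sketch, with parameters chosen so the scan is fast) finds exactly the constant diagonal matrices in the center:

```python
from itertools import product

p = 3  # D = F_3; M_2(F_3) has only 81 elements, so scan them all

def mul(a, b):
    """2x2 matrix product mod p."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

ring = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)]

# the center: matrices commuting with everything in the ring
center = [m for m in ring if all(mul(m, x) == mul(x, m) for x in ring)]

# exactly the constant diagonal (scalar) matrices, a copy of F_3
assert set(center) == {((t, 0), (0, t)) for t in range(p)}
```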

Apply this to each component ring to find the center of a semisimple ring.

The Artin Wedderburn theorem characterizes simple artinian rings. Here are a couple of simple rings that are not artinian.

Let R be the finite matrices over a field F, that live in an infinite grid. Rows run from 1 to infinity, and columns run from 1 to infinity, but each matrix has a finite number of nonzero entries, and fits within the framework of an n×n block for some n.

Any such nonzero matrix generates, as a two sided ideal, all the n×n matrices in its block. And any one of these generates all the matrices that have n+1 rows and n+1 columns. Any one of these generates the matrices that are n+2 by n+2, and so on. Therefore R is a simple ring.
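The generation step comes down to matrix units: E_ri * A * E_jc isolates the (i,j) entry of A and moves it to position (r,c), so any nonzero entry reaches every position. A sketch in M_2(F_3) (the specific matrix A is an arbitrary choice):

```python
p = 3  # work in a 2x2 block over F_3, an illustrative stand-in

def mul(a, b):
    """2x2 matrix product mod p."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def unit(i, j):
    """The matrix unit E_ij: 1 at position (i, j), 0 elsewhere."""
    return tuple(tuple(1 if (r, c) == (i, j) else 0 for c in range(2))
                 for r in range(2))

A = ((0, 2), (0, 0))    # any nonzero matrix; here A[0][1] = 2 != 0
i, j, val = 0, 1, 2     # location and value of a nonzero entry of A
inv = next(v for v in range(1, p) if (v * val) % p == 1)

# E_ri * A * E_jc puts A[i][j] at position (r, c) and 0 elsewhere, so
# scaling by inv recovers every matrix unit -- the ideal is everything
for r in range(2):
    for c in range(2):
        m = mul(mul(unit(r, i), A), unit(j, c))
        scaled = tuple(tuple((inv * e) % p for e in row) for row in m)
        assert scaled == unit(r, c)
```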

Let Un be the left ideal of matrices that are zero beyond column n, and let Vn be the left ideal of matrices that are zero in columns 1 through n. The Un build an ascending chain of left ideals, and the Vn build a descending chain of left ideals; thus R is neither noetherian nor artinian.

U1 is generated by any of its nonzero elements, hence it is a simple left R module. The same is true of the matrices that are nonzero only in the second column, only the third, and so on. Therefore R is spanned by simple modules, and is a semisimple ring. Really - a semisimple ring that is not artinian? It works because R doesn't contain 1. That would be the infinite identity matrix, which is not part of R.

Let T be the ring of matrices that have finitely many nonzero entries in every row, though each column, going down forever, could have infinitely many nonzero entries. Verify this is a ring. To illustrate, let the top row of A be nonzero in columns 2, 4, and 6. Multiply by B, and the top row of AB becomes nonzero only where there is a nonzero entry in rows 2, 4, or 6 of B. This is the union of three finite sets, and is finite, hence the top row of AB, and every row of AB, has finitely many nonzero entries.
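Row-finite matrices can be stored sparsely, one finite dictionary per row, and the argument above is exactly what makes the multiplication routine terminate. A Python sketch (the representation and the sample matrices are mine):

```python
# Row-finite infinite matrices, stored sparsely as {row: {col: value}}.
# Each output row only consults finitely many rows of the second factor,
# so the product of two row-finite matrices is row-finite.
def mul(a, b):
    out = {}
    for r, row in a.items():
        acc = {}
        for k, v in row.items():               # finitely many terms per row of a
            for c, w in b.get(k, {}).items():  # each consulted row of b is finite
                acc[c] = acc.get(c, 0) + v * w
        out[r] = {c: v for c, v in acc.items() if v}
    return out

# the example from the text (0-indexed): the top row of A hits columns 2, 4, 6
A = {0: {2: 1, 4: 1, 6: 1}}
B = {2: {0: 5}, 4: {0: -5, 9: 1}, 6: {1: 3}}   # rows 2, 4, 6 of B, each finite

AB = mul(A, B)
assert AB == {0: {9: 1, 1: 3}}   # the column-0 contributions cancel: 5 + (-5) = 0
```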

Thinking geometrically, each row is a point in generalized euclidean space, and each matrix corresponds to a linear transformation from Ej into itself.

The identity matrix is 1 in T, and implements the identity map on Ej.

R is a subring of T, and corresponds to the linear transformations that flatten most of Ej to 0, and operate meaningfully on a finite dimensional subspace of Ej.

Another ring, call it S, sits between R and T. Within S, every matrix is 0 beyond some finite column, but there can be something in every row. Once again S does not contain 1. These are the transformations that operate on all of Ej, but push it into a finite dimensional range. To recap, R ⊂S ⊂T, and only T contains 1.

In any of these three rings, Un and Vn are valid left ideals, hence these rings are neither noetherian nor artinian.

In R S or T, U1 is a simple module, as are the other column modules. R and S are spanned by these modules, but T is not. The identity matrix is not a finite sum of columns. Thus R and S are semisimple rings, but T is not. Not surprising, since T contains 1.

R is simple, so let's look for a two sided ideal in S or in T. A nonzero entry anywhere generates, courtesy of R, all the finite matrices of R. This includes the matrix that has 1 in the upper left and 0 elsewhere. Premultiply this by a matrix with anything you like in the left column, to get the very same thing in the left column. Postmultiply this by a matrix to move our arbitrary vertical sequence to the second column, the third column, and so on. This builds all of S, thus S is a simple ring. Furthermore, S is the smallest nonzero ideal in T, and is included in every other ideal. Thinking geometrically, a map from Ej into a finite subspace of itself can be premultiplied, and postmultiplied, by any map on Ej, and the resulting map still carries Ej into a finite subspace.

What does the quotient ring T/S look like? Let M be any matrix not in S, and let v(r) be the rightmost column in row r with a nonzero entry. Premultiply by a matrix that moves the first nonzero row to the top. Because M is not in S, some row below has a larger value of v(r); move that row up to the second position. Again there is a row below with a larger v(r); move that into position 3. Continue this forever, so that a matrix from T extracts the rows of M that build a strictly increasing sequence v(r).

Postmultiply by a matrix that moves column v(1) over to column 1, v(2) to column 2, v(3) to column 3, and so on. Now M is lower triangular.

Premultiply by a diagonal matrix that scales the rows of M, so that the diagonal of M is 1.

The upper left block of M, say 7 rows and 7 columns, is invertible, and its inverse, call it C, is also lower triangular. Move to 8 rows and 8 columns, and extend C to be the inverse of M. This is backward compatible; the first 7 rows of C have not changed, and multiplied against M they still produce the first 7 rows of the identity. Extend C to rows 9, 10, 11, and so on down the matrix. Since C is lower triangular, every row of C is finite, and C lives in T. Thus the two sided ideal generated by any matrix outside of S contains 1, and T/S is a simple ring.
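The row-by-row inversion can be sketched in code: forward substitution fills in each row of C from the diagonal leftward, and enlarging the block never disturbs the rows already computed (the 4×4 sample matrix is an arbitrary illustration):

```python
def invert_unitriangular(m, n):
    """Inverse C of the leading n x n block of m, where m is lower
    triangular with 1s on the diagonal; C satisfies C*M = I."""
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        c[i][i] = 1
        for j in range(i - 1, -1, -1):
            # forced by (C*M)[i][j] = 0: c[i][j] = -sum_{k>j} c[i][k]*m[k][j]
            c[i][j] = -sum(c[i][k] * m[k][j] for k in range(j + 1, i + 1))
    return c

M = [[1, 0, 0, 0],
     [-1, 1, 0, 0],
     [2, -1, 1, 0],
     [0, 3, -1, 1]]

C3 = invert_unitriangular(M, 3)
C4 = invert_unitriangular(M, 4)

# backward compatible: enlarging the block keeps the earlier rows of C
assert [row[:3] for row in C4[:3]] == C3

# and C4 really is a left inverse of the 4x4 block
prod = [[sum(C4[i][k] * M[k][j] for k in range(4)) for j in range(4)]
        for i in range(4)]
assert prod == [[1 if i == j else 0 for j in range(4)] for i in range(4)]
```

Since each row of C is computed from finitely many earlier entries and is lower triangular, the same recipe extends down an infinite matrix, keeping every row finite.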

There is another ring between R and T; I'll call it G because there are no more letters between R and S. A matrix in G has finitely many nonzero entries in every row, as inherited from T, and finitely many nonzero entries in every column. Verify this is a ring. Take the transpose of AB, which is the transpose of B times the transpose of A. These transposed matrices are finite in each row, and as shown above, their product is finite in each row. Thus the transpose of that product, which is AB, is finite in every column.

G contains 1, the identity matrix.

G contains R.

Let a matrix M lie in the intersection of G and S, and suppose M is not in R. Then M has infinitely many nonzero entries. Since M belongs to S, these all lie in the first n columns for some n, hence some column is nonzero infinitely often. This pulls M out of G. Therefore S and G intersect in R.

Let M be a matrix in R, that lives in an n×n block. Premultiply by a matrix A in G. Below some row j, A is 0 in its first n columns. Thus A*M is 0 below row j. By symmetry, M*B is 0 beyond some column. The product is still in R, and R is a two sided ideal.

Mod out by R and consider the quotient ring G/R. If a matrix M is not in R it is not in S either, for G intersect S is R. So M is unbounded in its rows and its columns. Multiply by two matrices from G, one on the left and one on the right, to make M lower triangular with ones down the main diagonal. The next step is inverting M, but the inverse, C, may not lie in G. Let M be 1 on the main diagonal and -1 on the subdiagonal. M is a unit in T, but its inverse is 1 on and below the main diagonal, going down forever, and that is not in G. We can't simply premultiply by something to get 1.

I'm not sure if G/R has proper ideals, but it has subrings. Let a matrix M be in the set B if there is a bound l on the number of nonzero entries per row. Add two such matrices and add their bounds. Multiply two such matrices and multiply their bounds. Thus B is a subring.

By symmetry, there is a subring of matrices with a bound on the number of nonzero entries per column. Intersect these subrings to find a smaller subring of matrices with a bound on the number of nonzero entries per row and column. All of these sets contain 1 and R, and are proper subrings of G/R.