I have casually mentioned modules from time to time in previous chapters; let's define them now. There is of course much more to say about rings, but some of this material applies to modules and rings simultaneously. For instance, one can localize a ring, or a module. A ring or a module could be noetherian or artinian. Semisimple rings are analyzed using modules, and so on. So the time for modules is now.
A left module consists of a ring R, an abelian group G, and a function f that maps R cross G into G. Adopt the additive notation for the abelian group G, whence 0 is the identity element and + is the group operator. The ring has its usual notation, with + for addition and * (or juxtaposition) for multiplication.
To be a module, f must respect group addition, both in G and in R. In other words, f(x,a) + f(x,b) = f(x,a+b), and f(x,a) + f(y,a) = f(x+y,a). Note that f(0,a) = f(0-0,a) = f(0,a)-f(0,a) = 0. Thus 0 squashes G down to 0. Using similar reasoning, f(x,0) = 0.
If n is a positive integer, show by induction that n*f(x,a) = f(n*x,a) = f(x,n*a). Use inverses in the abelian group to generalize this to negative values of n.
Beyond this, f(x,f(y,a)) = f(xy,a). This is a form of associativity; at least it looks that way when you use multiplicative notation: (xy)a = x(ya).
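The identities above can be watched in a small concrete case. Here is a minimal sketch, with my own choice of example (not from the text): the ring Z acting on the abelian group G = Z/12 via f(x,a) = x*a mod 12.

```python
# The ring Z acting on the abelian group G = Z/12, via f(x, a) = x*a mod 12.
N = 12

def f(x, a):
    """The module action, a function from R cross G into G."""
    return (x * a) % N

x, y, a, b = 5, 7, 3, 10

# f respects addition in G and in R
assert f(x, (a + b) % N) == (f(x, a) + f(x, b)) % N
assert f(x + y, a) == (f(x, a) + f(y, a)) % N

# 0 in R squashes G down to 0, and f(x, 0) = 0
assert f(0, a) == 0
assert f(x, 0) == 0

# the associative law: f(x, f(y, a)) = f(xy, a), i.e. (xy)a = x(ya)
assert f(x, f(y, a)) == f(x * y, a)
```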
If H is a left ideal in R, and a belongs to H, f(x,a) = x*a defines a left module. In other words, a left ideal in R is a left R module.
Another example: multiply cosets of the left ideal H by elements of R on the left to get another R module. If a is a cosrep of H and u is in H, then x times a+u is xa+xu, and xu is in H, so the map on cosets is well defined. If H were a two sided ideal the module would be the quotient ring R/H, but when H is a left ideal it is just a left R module.
The cosets of one left ideal inside another form a left R module. Similarly, the cosets of a submodule form a new module, but this is really the image of a module homomorphism, and I'll get to that later.
Given a ring homomorphism f from R into S, every S module is also an R module. Let R act on group elements as its image f(R) would. Some algebra shows this is indeed an R module.
Any abelian group is a module over the integers, where n*a is iterative addition, or iterative addition on -a if n is negative. Since the ring is the integers, denoted Z, I will call this a Z module.
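As a sketch of this, iterative addition can be written out directly; the group Z/12 below is a hypothetical concrete choice, and the result agrees with ordinary modular multiplication.

```python
# Iterated addition in the abelian group G = Z/12, making G a Z module.
N = 12

def times(n, a):
    """n*a by iterated addition, or iterated addition on -a if n is negative."""
    if n < 0:
        return times(-n, (-a) % N)
    total = 0
    for _ in range(n):
        total = (total + a) % N
    return total

assert times(5, 7) == (5 * 7) % N
assert times(-3, 4) == (-3 * 4) % N
assert times(0, 9) == 0
```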
Right modules exist as well, in which f takes G cross R into G, and satisfies the analogous identities on the right. Right ideals, and cosets thereof, become right modules. If R is commutative, left and right modules are indistinguishable, and are simply called modules.
Let M be a module containing a and b, such that b = xa for some x in R. In other words, b is in the image RM. Write 1*b = 1*(xa) = xa = b. Therefore b is in the image RM iff 1*b = b. (This assumes R contains 1, but that is my default assumption.)
When 1*b = b for all b in M, the module M is a unitary module. This happens iff the image RM is all of M.
Any module M can be written as the direct product of modules U cross V, where U is unitary. Let U be the image 1*M, which is a unitary module. That is, 1*1*b = 1*b. And let V be the subset of M satisfying 1*V = 0. Verify that both U and V are submodules.
Suppose b lies in both U and V. Since b is in U, b = 1*a for some a, and since b is in V, 1*b = 0. Thus 1*1*a = 0; but 1*1*a = 1*a = b, hence b = 0, and U∩V = 0.
For any x in M, let y = x - 1*x, so that 1*y = 0, and y is in V. Now x = 1*x+y, the sum of two elements taken from U and V.
Suppose x has multiple representations in U cross V. Subtract the two representations and multiply by 1. This shows their U components must agree. Subtract these away, and their V components must also agree. The decomposition of x is unique, and M = U cross V.
Verify that elements of M are added and scaled in concert with their components in U and V. Therefore M is the direct product of U and V, as modules.
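To watch the U cross V split happen, here is a hypothetical non-unitary module of my own construction: let R = Z/6 act on G = Z/6 through the idempotent 3, so that 1 acts as multiplication by 3 rather than as the identity.

```python
# R = Z/6 acting on G = Z/6 via f(x, a) = 3*x*a mod 6.  Since 3*3 = 3 mod 6,
# the action is associative, but 1 does not act as the identity.
N = 6

def f(x, a):
    return (3 * x * a) % N

# sanity check: f(x, f(y, a)) = f(xy, a)
for x in range(N):
    for y in range(N):
        for a in range(N):
            assert f(x, f(y, a)) == f((x * y) % N, a)

M = set(range(N))
U = {f(1, a) for a in M}               # the image 1*M, a unitary module
V = {a for a in M if f(1, a) == 0}     # the part squashed to 0 by 1

assert U == {0, 3}
assert V == {0, 2, 4}
assert U & V == {0}

# every x in M splits as 1*x + (x - 1*x), with the pieces in U and V
for x in M:
    u = f(1, x)
    v = (x - u) % N
    assert u in U and v in V and (u + v) % N == x
```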
The submodule V isn't very interesting; it is squashed down to 0 by all of R. U is where the action is. Therefore I will assume all modules are unitary, unless stated otherwise.
A submodule is a module contained in another module. The submodule is a subgroup of the original module, and R carries the submodule into itself.
The intersection of submodules is another submodule. As usual, the submodule generated by a set S is the intersection of submodules containing S. If S contains one element, the submodule is cyclic. If S is finite the submodule is finitely generated.
If a module is generated by S, the module consists of all finite sums xi*si, where xi comes from the ring and si is a generator.
Let M be a cyclic module generated by the element s. Let H be the elements of R that drive s into 0. Verify that H is a left ideal, hence the action of R only depends on the coset of H in R. If x is a cosrep of H in R, associate x with xs. Members of M add as their corresponding cosets of H add in R, and by associativity, y moves xs to (yx)s. M is isomorphic to the cosets of H in R, under the action of R. If R is commutative, M = R/H.
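A small sketch with R = Z (commutative, so M = R/H): take the cyclic Z module generated by s = 4 inside Z/12. The numbers here are my own illustrative choices.

```python
# The cyclic Z module generated by s = 4 inside Z/12 is {0, 4, 8}.
# H, the integers that drive s to 0, is the ideal 3Z, and x -> x*s
# matches cosets of H with elements of M.
M = {(4 * x) % 12 for x in range(12)}
assert M == {0, 4, 8}

# H, reduced mod 12 for inspection, is the multiples of 3
H = {x for x in range(12) if (4 * x) % 12 == 0}
assert H == {0, 3, 6, 9}

# the action of x on s only depends on the coset of H: x and x+3 agree
for x in range(12):
    assert (4 * x) % 12 == (4 * (x + 3)) % 12
```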
You could probably write this section yourself.
A module homomorphism is a function from a module M into or onto another module, that respects addition and multiplication by R. The kernel K is the subset of M that maps to 0. Verify that K is a submodule of M, and the cosets of K correspond to distinct elements in the image of M.
Conversely, any submodule K of a module M determines cosets of K in M, and these cosets form the quotient module M/K. This quotient module is isomorphic to the image of M under any homomorphism f with kernel K.
Correspondence follows in the usual manner. Submodules of M containing K correspond 1 for 1 with submodules in the quotient module M/K, and if L contains K, M/L is isomorphic to (M/K) / (L/K). Modules are abelian groups, so you get most of this from group correspondence. You only need prove multiplication by R doesn't cause any trouble; and it doesn't.
There is something new here; module homomorphisms from U into V can be added together. Let (f+g)(x) = f(x) + g(x). Show that f+g is a module homomorphism, respecting addition and scaling by R. Thus the module homomorphisms from one module into another form an abelian group.
If R is commutative, a module homomorphism f can be scaled by c in R, so that (cf)(x) = cf(x). Apply cf to dx and get cf(dx), or cdf(x), or d(cf)(x), hence cf is another module homomorphism. We needed R to be commutative to slide d past c. Thus the homomorphisms from U into V form another R module.
Let M be a left R module. As you know, a module homomorphism from M into itself is called an endomorphism. Verify the steps below to show that these endomorphisms form a ring.
The endomorphisms form an abelian group via (f+g)(x) = f(x) + g(x). This was discussed in the previous section.
The product of endomorphisms f and g is f followed by g. (Definition)
The product of f followed by g can be written f*g, or fg. (Convention)
The product fg satisfies the criteria for a module homomorphism. It is after all just the composition of two homomorphisms.
Multiplication is associative. Either way it's f followed by g followed by h.
Multiplication distributes over addition.
f*(g+h) = fg + fh and (g+h)*f = gf + hf.
The zero and identity homomorphisms correspond to 0 and 1 respectively.
The endomorphisms form a ring.
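The ring laws above can be watched in a small case. Every Z endomorphism of Z/12 is multiplication by some constant c, namely f(1), so the sketch below (my example, not from the text) represents endomorphisms that way.

```python
# Endomorphisms of the Z module Z/12: each is multiplication by c = f(1).
N = 12

def make(c):
    """The endomorphism a -> c*a mod 12."""
    return lambda a: (c * a) % N

F, G, H = make(5), make(7), make(4)

for a in range(N):
    # addition of endomorphisms: (f+g)(a) = f(a) + g(a)
    assert (F(a) + G(a)) % N == make(5 + 7)(a)
    # multiplication: f followed by g, i.e. g(f(a))
    assert G(F(a)) == make(5 * 7)(a)
    # distributivity: f*(g+h) = fg + fh
    assert (G(F(a)) + H(F(a))) % N == (F(a) * (7 + 4)) % N
    # zero and identity endomorphisms correspond to 0 and 1
    assert make(0)(a) == 0 and make(1)(a) == a
```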
The module automorphisms of M form a multiplicative group, generally nonabelian, inside the ring of endomorphisms.
The left module M is also a right module, when acted upon by a different ring, i.e. the ring of endomorphisms. If x is in M, and f is an endomorphism, then xf = f(x).
Given two rings R and S, M is an RS bimodule if M is a left R module and a right S module, and (R*M)*S = R*(M*S).
An example of a bimodule is any left R module M, where S is the ring of R endomorphisms of M, written on the right. Since f, from S, is a module endomorphism, y*f(x) = f(yx). Write this in infix notation as: y(xf) = (yx)f, satisfying the conditions of a bimodule.
Conversely, a bimodule M is a left R module, and if f is an element of S, it rearranges the elements of M; thus a function from M into itself. Since f respects the action of R, and addition in M, it is a valid M endomorphism. Since M is a right S module, the sum f+g has to be f(x) + g(x), and fg is f followed by g. This agrees with the definition of the ring of endomorphisms. In other words, S is a subring of the ring of endomorphisms of M.
When R is a ring, R acts as a left R module, which admits a right R module S, namely the R endomorphisms of R. Each endomorphism defines, and is defined by, the image of 1. Equate the function f with the element f in R, wherein f(1) = f. The function f is the same as right multiplication by f, and yes, this is a left R module homomorphism. Adding functions is the same as adding the corresponding elements, and fg, as functions, is multiplication by fg. Therefore S is the same as R.
Don't confuse module endomorphisms with ring endomorphisms. Complex conjugation fixes 1, and swaps i and -i. This is a ring endomorphism that fixes 1, but there is no left R module endomorphism that fixes 1, other than the identity map. It's a different world.
A set of elements S in a module M is linearly independent if a finite linear combination of these elements totals to 0 only when the coefficients are all 0. As a corollary, each linear combination produces a unique element of M. If two different combinations produce x, subtract them to find a nontrivial linear combination that produces 0.
The independent set S is a basis if it spans (generates) all of M. Thus S is a basis iff each element in M is represented by a unique linear combination of members of S.
A free module is a free object in the category of modules. Show that a free module is a direct sum of copies of R. Let 1, in the ith component, act as a generator gi. Map gi to anything in a module V, and the rest of R, in that component, must follow. Since f is determined per component, it is determined for the direct sum, and the map is unique. This satisfies the definition of a free object, and it makes the direct sum a free module.
A free object in any category is unique up to isomorphism, so any free module, properly labeled, looks like so many copies of R.
Note that the free module M is both a left and a right module.
Given a free module M, the generators, as described above, form a basis. Everything in M is a unique linear combination of these generators. That's pretty much the definition of a direct sum.
A free module M has a basis, but how about the converse? Let M be a module with basis S. Let w be any element of S. Map the ring R into the module M via x*w. This is a module homomorphism from R, as a left R module, onto the submodule generated by w. Since w is part of an independent set, the map is injective, hence a module isomorphism. Now every element w in S generates a submodule of M that looks just like R. Since S is a basis, the members of M correspond 1-1 to linear combinations of the elements in S. Furthermore, addition and scaling are implemented by performing the same operations on basis coefficients. Thus M is isomorphic to the direct sum of copies of R, one copy for each basis element. M is a free module.
In summary, M is free iff M has a basis.
Let's revisit a couple of our cherished assumptions. When I say R is a free R module, I'm referring to unitary modules. Otherwise things go wrong. Let M be a module that is not unitary, with 1*z = 0 for some nonzero z. Let 1 in R map to z. If R is free then there is a module homomorphism f with f(1) = z. But f(1) = f(1*1) = 1*f(1) = 1*z = 0, while f(1) = z is nonzero. This is a contradiction, thus modules must remain unitary.
We also run into trouble if R does not contain 1. Let R (commutative) have characteristic 2, generated by x and y, where xy = 0. Thus R consists of sums of powers of x and powers of y, with no constant terms. It's not clear what unitary means without 1, but nothing in R is driven to 0 by all of R, so R looks like a unitary R module.

Suppose R is free, and g is a generator, a polynomial in x and y. Strip the y terms off of g; this does not change xg. If g has degree n greater than 1, such as x3+x, set f(g) equal to x. Now f(xg) = xf(g) = x2, so f carries a polynomial of degree n+1, namely xg, to x2. What is the image of x under f? Let f(x) be a polynomial h. It doesn't matter if h has y terms; they're going to get squashed. By linearity, f(x*x) = x*h. Apply this to xg and f(xg) = g*h. Here is where any y terms of h get killed, since every term of g is a power of x, and xy = 0. If h is nonzero, g*h has degree at least 3, while f(xg) = x2. If h = 0 then f(xg) = 0, which is also a contradiction. The only possible generators are x, y, or x+y.

Let g = x, and let f(x) = y, whence f(yx) = y*f(x) = y2. But yx = 0, so f(0) = y2, which is impossible. By symmetry y is not a generator. If g = x+y, let f(g) = g. This is compatible with the identity map, but also with f(x) = g and f(y) = 0. The function is not unique, as it should be for a free module. With this in mind, I will continue to assume rings contain 1, and modules are unitary, so that R is a free R module.
The direct sum of free modules is free. Take the union of the basis elements across the board to build a basis for the sum.
The converse is not true. Z/6 is the direct product of Z/2 and Z/3, yet the two summands are not free Z/6 modules. Map 1 in Z/2 to 1 in Z/6, and 1+1 maps to 1+1, whence 0 maps to 2, which is impossible.
If R is a division ring, an R module is also called an R vector space. Every vector space has a basis, and is thus a free R module, with a specific dimension, which is the size of the basis. This was proved earlier.
Once a homomorphism is defined on the basis of M, it is defined on all of M. That's what we mean by a free object. Let M = Rn, n copies of R, and consider the endomorphisms of M. Write the image of g1 as a linear combination of g1 through gn, and put this in the top row of a matrix F. Let the image of g2 be the second row, and so on. The function f is now faithfully represented by the matrix F. Each member of M is a vector x of length n, i.e. the coefficients on g1 through gn, and x*F (using matrix multiplication) becomes f(x). Two functions can be added by adding their matrices, and function composition is matrix multiplication. The ring of endomorphisms of a free module Rn is the same as the ring of n×n matrices over R. The module automorphisms correspond to the nonsingular matrices. We already explored the connection between matrices and functions when R was a field, but it's true for any ring, even a noncommutative ring.
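Here is a minimal sketch of the matrix picture for M = R2, taking R = Z/5 as an illustrative ring (the matrices are my own hypothetical choices). Rows of F hold the images of the generators, x is a row vector, x*F computes f(x), and f followed by g corresponds to the product F*G.

```python
from itertools import product

P = 5  # work over R = Z/5

def mat_vec(x, F):
    """Row vector times matrix, mod P: x*F computes f(x)."""
    n = len(F)
    return tuple(sum(x[i] * F[i][j] for i in range(n)) % P for j in range(n))

def mat_mul(F, G):
    """Matrix product, mod P: composition of endomorphisms."""
    n = len(F)
    return [[sum(F[i][k] * G[k][j] for k in range(n)) % P for j in range(n)]
            for i in range(n)]

F = [[2, 1], [0, 3]]   # f sends g1 to 2g1+g2 and g2 to 3g2
G = [[1, 4], [2, 0]]

for x in product(range(P), repeat=2):
    # f followed by g is (x*F)*G, which equals x*(F*G)
    assert mat_vec(mat_vec(x, F), G) == mat_vec(x, mat_mul(F, G))
```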
When R is a field, every R module is a free module, also known as a vector space. The size of the basis is the rank or dimension of the module. In linear algebra we call this the dimension of the vector space, and it is well defined. If a vector space has rank 3, you cannot find some other basis with 2 elements, or 4. It's 3; no more and no less. Well, the same holds for integral domains.
This is a substantial generalization, from fields to integral domains, so you might think it is difficult to prove, but it's easy once you see the trick. Embed the rings in their fractions, like the integers in the rationals, then invoke the corresponding theorems for vector spaces. The rank of the free module is fixed, because the rank of its vector space is fixed. The proof is still a bit sketchy, because we don't yet know what fractions are, but I think it's pretty intuitive, and the missing details will come later.
If this proof seems familiar, it's because you've seen it before. When R is the integers, a free R module is a free abelian group, and we already proved a free abelian group has a well defined rank, by embedding the integers in the rationals. Let's do the same thing here.
Let R be an integral domain, and let F be the fractions of R, like the rationals on top of the integers. Suppose M is a free R module with two different ranks j and k. If j and k are infinite, we're talking about direct sums, not direct products.
Since M has two different representations, there is an implicit module isomorphism between Rj and Rk. Let k be larger than j, and let e be the isomorphism from Rj onto Rk. For our purposes, e need only be an epimorphism. Naturally, e is defined by its action on the basis, and the image of each of the j basis elements is a linear combination of the k basis elements. For instance, e(x1) might be 3y1 + 15y7 - 83y22.
Embed each copy of R in a copy of F; thus Rj lives in Fj, and Rk lives in Fk. Extend e, in a natural way, to the entire vector space Fj. If d is a divisor, then e(x1/d) = (3/d)y1 + (15/d)y7 - (83/d)y22. This extends e from R to F, and by adding components together, e is defined on the entire domain Fj.
Since e respects addition in R, it does so when divisors are attached. In other words, e remains a linear map.
If c is an element of F, write c as a fraction a/b. The original function was an R module homomorphism, so e respects multiplication by a. The extension carries the divisor through from the domain to the range, thus e respects multiplication by 1/b. Put this all together and e respects multiplication by c in F. Thus e is an F homomorphism, a linear transformation from one vector space into another.
Let y/d be a fraction in one of the components in the range. The original map was onto, so something in the domain maps to y. Divide through by d to find something that maps to y/d. The extended linear transformation is onto. One vector space maps onto another vector space with a larger dimension, and that is impossible. Therefore k cannot be larger than j.
If M is a free module, it is isomorphic to Rj for some j, and j is fixed. M has a well defined rank. This was true when R was a field, and it is also true for integral domains. It is actually true for all rings, but that is several chapters ahead.
Realize that we proved something stronger here; a free R module cannot map onto a free R module of larger rank, just as a vector space cannot map onto a larger vector space. Using the very same proof, a free R module cannot embed in a free R module of lesser rank, just as a vector space cannot fit inside a smaller vector space. Build the extended map e on vector spaces as above, and show that it remains injective. Let x be a sum of fractions from various components of Fj, having a common denominator d. Multiply through by d, and find something in Rj that maps to 0. This contradicts the assumption that e was originally injective on Rj.
If the module homomorphism is injective, j is no larger than k. If the module homomorphism is surjective, j is no smaller than k. If the free modules are isomorphic, j = k. This mirrors the corresponding theorems for vector spaces, and it applies even if j or k are infinite.
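The dimension count behind the surjective case can be sketched numerically. The matrix below is hypothetical (echoing the 3, 15, -83 combination above): a map from Z2 to Z3 is a 2×3 integer matrix, and its extension to the rationals has rank at most 2, so it can never cover a 3 dimensional space. Rank is computed over the fractions, as in the proof.

```python
from fractions import Fraction

def rank(rows):
    """Rank of an integer matrix, computed over the rationals by elimination."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# e maps the two generators of Z^2 to combinations of three generators
e = [[3, 15, -83], [1, 0, 7]]
assert rank(e) == 2   # the image spans a 2 dimensional subspace of Q^3, never all of it
```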
Every module M is the homomorphic image of some free module F. Let the set S generate M, even if S is every element of M, and let F be a free module whose basis elements correspond to the generators in S. This correspondence defines the module homomorphism. That is, x in F maps to x in S, and the map on F follows from there.
The kernel is all the elements of F that map to 0, and as expected, this is a submodule of F.
If S is finite then F and M are finitely generated. If the kernel of the homomorphism from F onto M is also finitely generated, then M is finitely presented. Each generator of the kernel is called a relator. Thus a finitely presented module can be described using finitely many generators and relators. Review the relators / relations of a group; the concepts are the same for modules.
Here is a simple example. Let R be the ring of integers. Let 1 and q act as generators for a free module over R. At this point the module is Z*Z. Bring in one relation, 7q = 2. Now perform algebra as usual, but replace each instance of 7q with 2 as you go. The module is infinite, but it is finitely presented, with two generators and one relation. Though not obvious at first, it is isomorphic to Z, generated by 4q-1.
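The claim at the end can be sketched as a computation. Represent elements as pairs (a,b) standing for a*1 + b*q, so the relation 7q = 2 is the vector (-2,7). The map phi(a,b) = 7a + 2b kills the relation, hence factors through the quotient, and it carries 4q-1 to a generator of Z.

```python
from math import gcd

def phi(a, b):
    """Map a*1 + b*q to the integer 7a + 2b."""
    return 7 * a + 2 * b

assert phi(-2, 7) == 0      # the relation 7q - 2 maps to 0
assert gcd(7, 2) == 1       # so phi is onto Z

# 4q - 1 is the pair (-1, 4); it maps to 1, a generator of Z
assert phi(-1, 4) == 1

# spot check: everything phi kills is a multiple of the relation vector
for a in range(-20, 21):
    for b in range(-20, 21):
        if phi(a, b) == 0:
            assert 7 * a == -2 * b
```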
When a module is finitely generated, with n generators, endomorphisms can be represented by n×n matrices over R that map each generator to a linear combination of said generators. If the module M is free then matrices and endomorphisms correspond 1 for 1. This was described earlier. If M is not free then a nontrivial linear combination of generators is equal to 0. Make this the first row of the matrix, thus mapping g1 to 0; or set the first row of the matrix to 0, which also carries g1 to 0. The representation is not unique, but still every endomorphism has at least one representation as a matrix. Matrix addition and multiplication correspond to the sum and composition of their endomorphisms, respectively.
Let the ring R be the direct product of rings R1 R2 … Rn. These component rings need not be the same. Let M be a left R module. For each i in 1 to n, let Mi be the submodule produced by multiplying Ri by M. Remember that R*Ri is still Ri, hence Mi is a valid left R module.
Let x be an element in M. Write 1*x = x. (No trouble here, since modules are unitary.) Replace 1 with the sum of 1's from the component rings, and x is the sum of entries from the respective submodules. Thus the submodules span M.
If a nontrivial sum of elements from the submodules gives 0, let M1 participate in this linear combination. Thus x1 in M1 is spanned by the remaining submodules. Multiply by 1 in R1, and x1 remains x1. However, all the elements in the other modules drop to 0, hence x1 = 0. The submodules are independent, and M is the direct product of these submodules.
If each submodule Mi is a free Ri module with rank j, then write Mi = Rij. This holds for each i. Rearrange the rings, so the product is (R1R2R3…Rn)j. This makes M a free R module of rank j.
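A small sketch of this decomposition, using R = Z/6, which is the direct product of Z/2 and Z/3. The component identities inside R are the idempotents 3 and 4, and the steps above (spanning, independence, the split of 1) can all be checked directly; M = R itself is my illustrative choice.

```python
# R = Z/6 = Z/2 cross Z/3.  The component 1's are the idempotents e1 = 3
# and e2 = 4, since 3+4 = 1 and 3*4 = 0 mod 6.  Take M = R.
N = 6
e1, e2 = 3, 4
assert (e1 + e2) % N == 1 and (e1 * e2) % N == 0
assert (e1 * e1) % N == e1 and (e2 * e2) % N == e2

M1 = {(e1 * a) % N for a in range(N)}   # the submodule R1*M
M2 = {(e2 * a) % N for a in range(N)}   # the submodule R2*M
assert M1 == {0, 3} and M2 == {0, 2, 4}

# the submodules are independent and span M: x = e1*x + e2*x
assert M1 & M2 == {0}
for x in range(N):
    assert ((e1 * x) % N + (e2 * x) % N) % N == x
```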
For any left ideal H, H*M is the submodule generated by the products xy, where x is in H and y is in M. The products themselves are not sufficient; you need all finite sums of these products in order to generate the submodule. Let's consider an example.
Set R = F[x,y], the polynomials in x and y with coefficients taken from the field F, say the reals. Let x and y generate H. Thus H contains the polynomials with 0 constant term. Let the module M be free with basis e1 and e2. The product submodule H*M has generators xe1, xe2, ye1, ye2, and must include the element xe1 + ye2. Suppose this is the product of some polynomial in H and something in M. The polynomial must divide both x and y, which is impossible. Thus H*M is more than just products of elements. We saw the same thing when multiplying ideals. The product H2 includes x2+y2, which cannot be the product of two polynomials taken from H, assuming the coefficients are real numbers.
To show associativity of A*B*M, where A and B are ideals and M is a module, take finite sums of products of elements from the two ideals, then apply that to the module, and compare that with finite sums of products from B*M, and apply A on the left. In either case the result is the same; all finite sums of triple products from A B and M. Note that A could be a left ideal, but B must be a two sided ideal.
Given submodules H and J of a left module M, the conductor ideal [H:J] is the set of elements y in R such that y*H lies in J. (y times everything in H lies in J.) Note that this is a left ideal.
Most books will write this [J:H], but I like to think of y driving H into J, left to right.
The annihilator of H is the conductor ideal [H:0]. This is the left ideal that drives H into 0, i.e. everything in R that kills, or annihilates, H.
The modules H and J could be ideals in a ring, whence the conductor ideal drives one ideal into another.
A prime number has two distinct positive factors, 1 and itself. A simple group has two distinct normal subgroups, 1 and itself. A simple ring has two distinct ideals, 0 and itself. A simple left R module M has two distinct submodules, 0 and itself. Since 0 is always a submodule of M, M must be nontrivial, with no intermediate submodules between 0 and itself.
A simple module is also called an irreducible module.
If g is a nonzero element of a simple module M, g generates M, else it would span a proper submodule. Thus a simple module is cyclic, and any nonzero element acts as generator.
I'm using the fact that M is unitary here. Since g is the same as 1*g, g generates itself, and builds a nonzero submodule, which has to be all of M. At the very least, g has to generate something.
One way to create a simple left module is to start with a maximal left ideal H in R. The cosets of H define a left R module M that is not trivial. For instance, 1 is not in H, so 1+H and 0+H are distinct cosets. Now a nonzero proper submodule of M would define a left ideal strictly between H and R, which is impossible. Thus M is simple.
The converse is also true. If M is simple, let g be any nonzero element of M, which acts as a generator for M. Let H be the left ideal of R that maps g to 0. We saw above, in the Submodules section, that this cyclic module is isomorphic to the cosets of H under the action of R. If H is part of a larger, proper left ideal J in R, then the cosets of H in J become a proper submodule of M. Since M is simple, this can't happen, thus M is isomorphic to the cosets of H in R, where H is a maximal left ideal.
If R is commutative, every simple R module is R mod a maximal ideal, which is a field.
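A quick sketch with R = Z: the maximal ideal 5Z gives the simple Z module Z/5, a field, and every nonzero element generates everything.

```python
# Z/5 = Z mod the maximal ideal 5Z, a simple Z module.
N = 5

def generated(g):
    """The submodule of Z/5 generated by g."""
    return {(x * g) % N for x in range(N)}

assert generated(0) == {0}
for g in range(1, N):
    assert generated(g) == set(range(N))   # any nonzero element generates
```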
R may be a simple ring, but not a simple R module, if it has intermediate left ideals. Review the matrices over a field.
When R is both a ring and a simple R module, any nonzero x generates all of R, and is left invertible. This makes R a division ring. Thus when R is a ring, it is a division ring iff it is a simple left R module, iff it is a simple right R module.
An earlier chapter explored descending chains of subgroups, wherein each is a normal subgroup of the previous. Each normal subgroup implies a factor group, also called a quotient group. If the quotient groups are simple, and if the chain is finite, the quotient groups form the composition series of G. For instance, the chain D15 contains Z/15 contains Z/3 contains 0 has the composition series Z/2, Z/5, Z/3.
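The factor groups in the D15 chain can be read off from the subgroup orders: each quotient's size is the index of one subgroup in the next.

```python
# Orders along the chain D15 > Z/15 > Z/3 > 0, and the quotient sizes.
chain = [30, 15, 3, 1]
factors = [chain[i] // chain[i + 1] for i in range(len(chain) - 1)]
assert factors == [2, 5, 3]   # the composition series Z/2, Z/5, Z/3
```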
Like unique factorization into primes, G has a unique composition series. The factor groups may appear in a different order, depending on how you build the chain, but you always get the same simple groups. The same is true of modules. Build a descending chain of submodules. Since modules are abelian groups you don't have to worry about "normal" here; every submodule acts as a kernel for a module homomorphism, and implies a quotient module. Each chain can be refined, until the quotients are simple. Once this is done, the composition series is unique.
The proof is exactly the same. Review Zassenhaus, and Jordan Holder, and verify that scaling on the left by R causes no trouble. That's all you need.
This assumes the chains of M are finite. If M contains infinite chains, anything goes. Consider Z, as a Z module. Submodules could be 2Z, 4Z, 8Z, etc, giving factor groups of Z/2 forever. Or submodules could be 3Z, 9Z, 27Z, etc, giving factor groups of Z/3. There is no unique factorization here.