Ask a high school student to describe algebra, and he'll tell you about variables and equations. Fair enough, but when a ring is called an algebra, the word has a specific technical meaning.
The ring S is an R algebra, relative to a base ring R, just as S might be an R module. The base ring R should be stated if it is not implied by context.
The ring S is a left R algebra if S is a unitary left R module, and the action of R can be interchanged with multiplication in S. For every a in R and every x and y in S:
a(xy) = (ax)y = x(ay)
This definition is easier to understand if we assume all rings contain 1, and ring homomorphisms map 1 to 1, which is the default assumption throughout this book. Set x = 1 in S, and ax becomes an element of S; we may as well call it f(a). This is a function from R into S. Since S is an R module, f respects addition in R. How about multiplication? In the following, a and b are elements of R, and y lies in S.
f(ab) = (ab)1 = a(b1) = af(b) = a(1f(b)) = (a1)f(b) = f(a)f(b)
f(1) = (1∈R) times (1∈S) = 1 (S is a unitary module)
f(a)y = (a1)y = a(1y) = a(y1) = y(a1) = yf(a)
The function f respects multiplication, as well as addition, and it carries 1 to 1. In other words, f defines a ring homomorphism from R into S. Furthermore, since f(a) and y commute, R maps into the center of S.
Conversely, let f be a ring homomorphism from R into the center of S. This makes S an R module, and since 1 maps to 1, S is a unitary module. Use the properties of a ring homomorphism, and the fact that the image lies in the center of S, to prove that a(xy) = (ax)y = (xa)y = x(ay), and you have completed the circle. The ring S is an R algebra iff a ring homomorphism carries R into the center of S.
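To see the criterion in action, here is a small sanity check in Python with sympy; the choice of R = Z and S = the 2×2 integer matrices is mine, purely for illustration. The map f(a) = a times the identity matrix carries Z into the center of S, and the algebra condition holds.

    # Sketch: R = Z acting on S = the 2x2 integer matrices via f(a) = a*I.
    # f lands in the center of S, and a(xy) = (ax)y = x(ay) for the induced action.
    from sympy import Matrix, eye

    def f(a):
        return a * eye(2)          # the scalar matrix, the image of a in S

    x = Matrix([[1, 2], [3, 4]])
    y = Matrix([[0, 1], [5, -2]])
    a = 7

    assert f(a) * x == x * f(a)                        # f(a) is central in S
    assert f(1) == eye(2)                              # 1 maps to 1
    assert a * (x * y) == (a * x) * y == x * (a * y)   # the algebra condition
    assert f(3 * 5) == f(3) * f(5)                     # f respects multiplication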
The image of ab in S is f(a)f(b), which is the same as f(b)f(a), or the image of ba. Any noncommutative aspects of R are left on the cutting-room floor.
It is equivalent to think of S as a right R algebra, R still mapping into the center of S, so we may as well call it an R algebra and be done with it.
Technically, the homomorphism is part of the algebra. It is the action of R on S. Sometimes R is a subring of S, so that R embeds in S. In this case the homomorphism is the identity map.
A subalgebra is a subring of S that is also an R algebra.
If S is a division ring then S is a division algebra.
If S is a flat R module then S is a flat R algebra.
If S is a finitely generated R module then S is a finite R algebra. However, S is a finitely generated R algebra, or an algebra of finite type, if S is finitely generated over R as a ring. Adjoin finitely many elements to R, apply all ring operations (creating a ring of polynomials), and mod out by certain relations.
If the homomorphic image of R is commutative, that image forms an R module and a finite R algebra. Thus Z/n is a Z algebra. Trivially, R (commutative) is an R algebra.
Every ring is a Z algebra. Map 1 to 1, and Z maps into the center of S.
Polynomials, power series, and Laurent series, in multiple indeterminates that may or may not commute with each other, are examples of R algebras over a commutative base ring R of coefficients.
The complex numbers and the quaternions are real algebras.
If S is an R algebra, then the ring of n×n matrices over S is an R algebra. Map R onto the scalar matrices, a times the identity matrix, so that 1 goes to the identity matrix. The polynomials over S also form an R algebra.
Within the category of R algebras, a morphism is a ring homomorphism that gives a commutative diagram with respect to R. That is, f takes S1 into S2, and the composite map R → S1 → S2 agrees with the given map R → S2.
If R is commutative then R is an R algebra. The identity map takes R into the center of R. Let S be another R algebra, with h(R) in the center of S. We can go from R to R to S, or from R directly to S. Thus h is a perfectly good morphism in the category of algebras. It is also the only morphism that makes the diagram commute. Therefore R is the universal object.
If a theorem holds for R algebras, it holds for all rings, since the category of rings is the category of Z algebras. To prove this we need to show that every ring homomorphism respects multiplication by n (an arbitrary element of Z), making it a morphism between Z algebras, but this is a consequence of 1 mapping to 1 in a ring homomorphism. Verify that there is no trouble if one or both rings have finite characteristic. The action of Z is still preserved.
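As a quick illustration of the finite characteristic case, here is a sketch in plain Python; the choice of Z/6 and Z/3, with reduction mod 3 as the ring homomorphism, is mine. The action of Z passes across the map even though the characteristics differ.

    # Sketch: the ring homomorphism Z/6 -> Z/3 (reduce mod 3) preserves the Z action.
    def f(x):                      # ring homomorphism Z/6 -> Z/3, carrying 1 to 1
        return x % 3

    for n in range(-12, 13):       # n runs over a window of Z
        for x in range(6):         # x runs over Z/6
            assert f((n * x) % 6) == (n * f(x)) % 3   # f(n.x) = n.f(x)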
We're going to prove that the tensor product of two R algebras is another R algebra, but first we need a lemma about associative homomorphisms.
Throughout this section, R is commutative, else the tensor product of R modules wouldn't even be an R module, much less an R algebra.
Let S be an R algebra. Let T be S×S, the tensor product of S and S. If S were commutative, and T were viewed as an S module, then T would simply be S, since S×S = S. But S need not be commutative, and in any case, we're viewing S and T as R modules. For instance, if S is R2, the direct product of R and R, then T becomes R4.
S is a ring, possibly without 1, and multiplication in S implements a bilinear map from S cross S into S. Since T is universal, there is a module homomorphism h from T into S that agrees with multiplication in S. Apply h to (x,y), giving xy in S, thus mirroring x times y in S cross S. Pair this with z in S, map this into T, and apply h again, giving an element in S that is roughly described as h(h(x,y),z). This has to be xyz. Yet the same thing arises from h(x,h(y,z)). The function h is associative, and there is always such a function h whenever S is an R algebra.
Now for the converse. Let S be an R module, and let T = S×S. Let h be an R module homomorphism from T into S, such that for every x, y, and z in S, h(h(x,y),z) = h(x,h(y,z)). Define multiplication in the module S as x*y = h(x,y). Here x and y start out in S cross S, map to the pair generator (x,y) in T, and then map to an element of S through the function h.
We need to verify the properties of a ring. Addition is already an abelian group, since S is an R module, so look at multiplication. Consider xy+xz in S. Every tensor product is bilinear, whence (x,y)+(x,z) = (x,y+z) in T. Since h is a module homomorphism it respects addition, thus h(x,y)+h(x,z) = h((x,y)+(x,z)) = h(x,y+z). Translate to multiplication in S, and xy+xz = x*(y+z). Multiplication distributes over addition.
Since 0 crossed with anything yields 0 in T, 0 times anything in S equals 0.
Use h to derive the product xy, then multiply by z, running through h a second time. Since h is associative, multiplication is associative. That's enough to prove S is a ring, although it may not be a ring with 1.
Now let's show S is an R algebra. Since R is commutative, the action of R on S is the same from the left or the right. Thus ax is the same as xa. Within T, a acting on (x,y) is (xa,y), which is the same as (x,ay). Follow this up with h, which is an R module homomorphism, and within S, a(xy) = (ax)y = x(ay). This satisfies the criteria of an R algebra, as described in the introduction. S is an R algebra iff it has an associative function h from T into S.
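Here is a sketch of the converse direction in plain Python, taking S = Z×Z as a Z module (my choice of example) and defining h on pair generators by componentwise multiplication; only pure tensors are checked.

    # Sketch: S = Z x Z, with h on the pair generator (u,v) given by
    # componentwise multiplication.  h is bilinear and associative, and the
    # multiplication x*y = h(x,y) it induces is the usual one on Z x Z.
    def add(u, v):
        return (u[0] + v[0], u[1] + v[1])

    def scale(a, u):               # the action of a in R = Z
        return (a * u[0], a * u[1])

    def h(u, v):                   # h on the pair generator (u,v)
        return (u[0] * v[0], u[1] * v[1])

    x, y, z = (2, -1), (3, 5), (-4, 7)
    a = 6

    assert h(h(x, y), z) == h(x, h(y, z))             # associative
    assert h(x, add(y, z)) == add(h(x, y), h(x, z))   # multiplication distributes
    assert h(scale(a, x), y) == h(x, scale(a, y)) == scale(a, h(x, y))  # R action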
Now for the tensor product of finitely many R algebras. Let S1, S2, S3, … Sn be R algebras, whose tensor product is T. Let U = T×T. Since tensoring is commutative and associative, write U this way.
U = (S1×S1) × (S2×S2) × (S3×S3) × … × (Sn×Sn)
This isomorphism is accomplished by permuting generators. In other words, T×T is isomorphic to U as given by the above formula.
Since S1 is an R algebra, there is a module homomorphism h1 from S1×S1 into S1 with the associative property. Find such a homomorphism hi for each algebra Si. Tensor the domains of these homomorphisms to get U, and tensor the ranges to get T. This induces a new homomorphism h from U into T that operates per component. Since the component homomorphisms are associative, h is associative. Fold in the isomorphism equating U ⇔ T×T, and find a homomorphism from T×T into T. If there are three algebras, for instance, and h is to act on the generators x1 x2 x3 and y1 y2 y3, it acts, through h1, on (x1,y1), as though they were together, as they are in U, and similarly for (x2,y2) and (x3,y3). Each action is associative, and h is associative. This makes T an R algebra.
In summary, the tensor product of finitely many R algebras gives another R algebra. Furthermore, multiplication in the tensor product T, as defined by h, is compatible with multiplication in each ring, as defined by hi. In other words, multiplication is performed per component. This does not mean the individual rings Si are commutative, but they do commute with each other under multiplication in the ring T.
If each ring Si contains 1, let e be the element in T represented by (1,1,1,…1). Multiply e by x in T, which multiplies 1 by xi in each ring. The result is x, hence e is the multiplicative identity for T. If the component rings contain 1 then so does their tensor product.
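A minimal sketch of the identity element, with two component rings and multiplication acting per component on pure tensors (the sample values are arbitrary):

    # Sketch: multiplication per component on pure tensors; e = (1,1) acts as 1.
    def mult(u, v):
        return (u[0] * v[0], u[1] * v[1])

    e = (1, 1)
    for u in [(2, 3), (-5, 7), (0, 4)]:
        assert mult(e, u) == mult(u, e) == u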
The converse may not hold. The even integers form a Z module, and so does Q. Their tensor product is Q, containing 1, but the component ring 2Z does not contain 1.
Yes, T is an R algebra, but it is also an Si algebra for each Si. (This assumes Si is commutative, whence T is an Si module.) The aforementioned homomorphism h is still associative. If it is an Si module homomorphism then that proves T is an Si algebra.
Let v be an element of Si and consider h applied to v acting on (x,y). The action of v on T, or on T×T, is implemented by multiplying xi by v. Thus xi becomes vxi. The isomorphism onto U rearranges generators, but doesn't change vxi. Now h is compatible with hi, which implements multiplication in Si, hence the ith component becomes vxiyi. This is once again the action of v on T. Therefore h respects the action of v, and is an Si module homomorphism, whence T is an Si algebra.
Let R be a commutative ring and let S and T be R algebras, such that R embeds in S and in T. In other words, the homomorphism that carries R into S, and into T, is a monomorphism. For clarity, let's say that S and T intersect in the subring R. Other than R, S and T are assumed disjoint.
I'm going to talk about the generators of S, sometimes as a ring, and sometimes as an R module. These are quite different, and should not be confused. If S is R[x], the polynomials in x with coefficients in R, then the single indeterminate x generates all of S as a ring. In other words, S is a finitely generated R algebra, or an R algebra of finite type. However, as an R module, S is infinitely generated, with generators 1, x, x2, x3, x4, and so on.
Thanks to the previous theorem, we know the tensor product S×T is a ring, and an R algebra. We would like to describe this ring in terms of R, and the generators of S and T, as rings, but we have to start by looking at the generators of S and T as R modules first, since the tensor product operates on modules, not rings.
As a warmup, look at polynomial rings. Let S be R adjoin x1 x2 x3 etc. These indeterminates may or may not commute with each other. Similarly T is R adjoin y1 y2 y3 etc. If S and T can be described as R modules, using generators, then a procedure builds the tensor product S×T. The generators of S are all the finite products of x1 x2 x3 etc, including the empty product, which is 1. Similarly, the generators of T are all the finite products of y1 y2 y3 etc. If the rings are noncommutative, then x3x5 will be a different generator from x5x3. Armed with these generators, we can build S×T.
Cross each generator of S with a generator of T to get a generator of S×T. This is a product of x indeterminates times a product of y indeterminates. Examples include 1, x2y5, x73, x4y1y6, etc.
In general, relations KS and KT, described below, are folded into the kernel of the tensor product, but here S and T are free R modules, and there are no relations. That leaves only the action of R. If c is in R then (xc)y = x(cy). Since R commutes with indeterminates by convention, this is already part of the free ring. As an R module, S×T is generated by products of indeterminates in S times products of indeterminates in T.
The previous section describes multiplication within S×T. Polynomials in x, and in y, are multiplied in the usual way. The x and y indeterminates commute past each other and regroup - even if they do not commute among themselves. If all rings are commutative then the tensor product is simply R[x1,x2,x3…y1,y2,y3…], the ring of polynomials in all indeterminates.
Now assume S and T are arbitrary ring extensions of R. Write S as the quotient of a free ring. If x1 x2 x3 etc act as generators for S, then S is the image of the polynomial ring R[x1,x2,x3…], having kernel KS. Similarly, let T be the quotient of a polynomial ring having kernel KT. As an R module, S is generated by all finite products of these generators - and similarly for T. Take the cross product to find generators in S×T. This time we have kernels KS and KT that must be folded into the tensor product. Multiply the relations that generate KS by all the products over y, and all the products over x by the generators of KT, to build a kernel that characterizes the tensor product S×T.
You can generate the free group over S×T as an R module, or as a ring, as you please. The latter representation has all the indeterminates as generators, with relations that cause x and y to commute - these relations being necessary only if we are working in a noncommutative world wherein indeterminates do not commute with each other by default. In the same way, you can generate KS times products of indeterminates from y, and products of indeterminates from x times KT, as an R module, or as an ideal in the aforementioned ring. It doesn't matter as long as the same kernel, i.e. the same set, is generated. Easier, then, to generate the ideal in the free ring. Just use the generators of KS and KT, as ideals within their respective rings, leading to S and T. That produces each kernel crossed with the other polynomials, which is what we want. This gives a remarkably concise description of S×T. Put the indeterminates together and let them generate a polynomial ring wherein x and y commute past each other, and put the two kernels together and let them generate an ideal. The quotient ring is S×T. Furthermore, multiplication within the tensor product agrees with multiplication in S and in T, i.e. per component.
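Here is a sketch of this description with sympy; the choice of example is mine, with R = Q, S = Q[x]/(x2-2), and T = Q[y]/(y2-3). The tensor product is Q[x,y] mod the two relations, and multiplication is polynomial multiplication followed by reduction.

    # Sketch: S = Q[x]/(x^2 - 2), T = Q[y]/(y^2 - 3), both Q algebras.
    # Their tensor product is Q[x,y] mod (x^2 - 2, y^2 - 3), with basis 1, x, y, xy.
    from sympy import symbols, expand, reduced

    x, y = symbols('x y')
    relations = [x**2 - 2, y**2 - 3]

    def mul(f, g):
        # multiply in the tensor product: expand, then reduce mod the relations
        return reduced(expand(f * g), relations, x, y)[1]

    assert mul(x, x) == 2                   # multiplication agrees with S
    assert mul(y, y) == 3                   # and with T
    assert mul(x, y) == mul(y, x) == x*y    # x and y commute past each other
    assert mul(x + y, x + y) == 2*x*y + 5   # (x + y)^2 = 5 + 2xy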
If S and T are R algebras, they need not embed in their tensor product. This is demonstrated by Q and Z/p, which are both Z algebras. Their tensor product is 0, thus neither algebra embeds in S×T. However, when R embeds in S and T, as described above, then S and T both embed in S×T.
Suppose u is a nonzero element of S that becomes 0 in S×T; in other words, (u,1) is equivalent to 0. In the aforementioned example, we moved the action of p in R from 1 back to u, whence (u,1) was equivalent to (up,1/p). If you try to do the same thing here, p becomes a unit in R, which becomes a unit in S. Since u is nonzero and p is a unit, up cannot equal 0. Nor can p pass from u to 1, since 1*p = p is nonzero in R, and in T. Both S and T embed in S×T, and the embedding is accomplished by crossing the original ring with 1.
Multiplication in S×T occurs per component, and these components are copies of S and T. Therefore the ring S×T is almost the direct product of S and T. Almost - except S and T share a common instance of R. After all, (1,u), for u in R, is the same as (u,1). And the intersection does not go beyond R. Suppose (u,1) is equivalent to (1,v). Move R between components, and the second component of the first pair is always in R, as is the first component of the second pair. If they are equal, then they both become (x,y) for x and y in R. This in turn is (xy,1) in R cross 1. (R,1) is the only intersection of (S,1) and (1,T) in the tensor product, just as R is the only intersection of S and T.
In the ultimate generalization, let f map R onto a subring U in the center of S, and let g map R onto a subring V in the center of T. S is a U module and a U algebra, and T is a V module and a V algebra. Cross multiply the generators of U as an R module with the generators of S as a U module to generate S as an R module. Do the same for T, vectoring through V, then evaluate the tensor product S×T as an R module.
U is a quotient ring, and a cyclic R module generated by 1. In other words, U only needs one generator, namely 1. V is also a quotient ring. Let K = U×V. As the tensor product of two quotient rings, K is another quotient ring, whose ideal is the sum of the ideals that built U and V. These ideals can be generated, as R modules, in any way you please. In any case, they contribute to the kernel of the tensor product S×T.
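For a concrete instance of the sum-of-ideals claim, here is a bookkeeping sketch in plain Python; the choice of R = Z, U = Z/4, V = Z/6 is mine. The ideals are (4) and (6), their sum is (2), so K should be Z/2.

    # Sketch: U = Z/4 and V = Z/6 as quotients of R = Z.
    # K = U x V is Z mod the sum of the ideals, (4) + (6) = (gcd(4,6)) = (2).
    from math import gcd

    assert gcd(4, 6) == 2          # so K = Z/2

    # the map sending a pure tensor (u,v) to uv mod 2 kills the relations
    # coming from (4) and (6), so it is well defined on K = Z/2
    for u in range(8):
        for v in range(12):
            assert (u * v) % 2 == ((u + 4) * v) % 2 == (u * (v + 6)) % 2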
With U as a base, S is a free ring mod some polynomial relations, and similarly for V and T. As mentioned earlier, a generator of S, as an R module, is a generator of U, namely 1, times a finite product of x indeterminates in the free ring that defines S. Cross this with a similar generator of T to find a generator of S×T. This gives a product of x variables times a product of y variables, as we have seen before. All products times all products are necessary, to generate S×T as an R module. Alternatively, these can be produced, as a free ring, by the individual x and y variables, assuming x and y commute past each other.
The action of R, from S to T, can always be accomplished through U and V. In other words, the relations of K as a tensor product are sufficient to pass R between S and T. In fact these relations remain in force, thus an element from U and an element from V can always be combined to make an element from K. S×T is a K module.
Let p be a polynomial relation, with coefficients in U, that defines S as the image of a free ring. Cross the coefficients with 1 in V to become coefficients in K. As an R module, p joins with 1 in V and with every finite product of y variables from T. This is part of the kernel of S×T. As an ideal, p could be multiplied by anything in V, which is implied by K, and then by any combination of y variables, which are already part of the free ring. Once again p stands alone as a generator of the ideal beneath S×T. This holds for all polynomials that define S and T as quotient rings.
Put this all together and S×T is K adjoin the generators of S as a U ring, adjoin the generators of T as a V ring, mod the polynomial relations that define S and T. Once again S and T run independently, with K as their common base.
Since R need not embed, the tensor product could collapse to 0. Let S and T be rings of characteristic 2 and 3 respectively. As Z modules, U is the integers mod 2 and V is the integers mod 3. K = U×V is 0, and with S×T a K module, S×T has to be 0. Seen another way, any nonzero element of T can be divided by 2, which is a unit in T; this multiplies the corresponding element of S by 2, driving it to 0.
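Here is a bookkeeping sketch of the collapse in plain Python, using Z/2 and Z/3 as stand-ins for rings of characteristic 2 and 3.

    # Sketch: 2 is a unit in characteristic 3 (2*2 = 4 = 1 mod 3), so a pure
    # tensor (a,b) can be rewritten as (a, 2*(2b)) = (2a, 2b) = (0, 2b) = 0.
    for a in range(2):             # a stands in for an element of characteristic 2
        for b in range(3):         # b for an element of characteristic 3
            assert (2 * ((2 * b) % 3)) % 3 == b % 3   # b = 2*(2b) in char 3
            assert (2 * a) % 2 == 0                   # 2a = 0 in char 2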
Assume S and T are ring extensions of R that are contained in an even larger ring E. The compositum of S and T, written S+T, is the smallest subring of E that contains S and T. The compositum always exists; it is the intersection of all subrings containing S and T. This may seem familiar; we analyzed the compositum of two field extensions in an earlier chapter.
If there is no larger ring containing S and T, yet they still share a common subring R, let E be their tensor product. Refer to the previous section for a characterization of S×T. (S,1) and (1,T) lie in the compositum, and that brings in all of E. Therefore S+T within S×T is S×T. However, if S and T are embedded in some other ring E, the compositum could be different.
Let S and T lie in a larger ring E, and ask whether S×T = S+T. The tensor product is constructed from S and T, so we say it equals the compositum if it is isomorphic to the compositum, with S mapping onto S and T mapping onto T. In other words, the tensor product embeds into E, as a subring of E, and becomes the compositum.
That's the definition of equality; now assume S×T embeds in E, having an image H. Since H is a ring containing S and T, S+T is contained in H. We saw above that the compositum within the tensor product is the tensor product. Therefore the compositum is isomorphic to the tensor product iff the tensor product embeds in E.
Within the tensor product, S and T commute past each other. If this is not the case in E, the tensor product will never embed, so we may as well assume S and T commute within E.
Take a closer look at the map f from S×T into E. Of course S and T map onto themselves. Multiplication in the tensor product occurs per component, so a typical pair generator like (x,y) is the product of (x,1) and (1,y). We want f to be a ring homomorphism, hence f(x,y) has to equal x times y in E. In other words, the pair generator (x,y) in S×T maps to xy in E. This holds for all pair generators, and sums thereof, hence the entire embedding has been defined.
Is f well defined? A linear relation in the kernel of the tensor product maps to 0 in E, by the distributive property of multiplication in E. A relation of the form (xd,y) = (x,dy) maps to (xd)y = x(dy), which holds by the associative property of multiplication in E. Thus f is well defined on the tensor product.
Verify that f is a ring homomorphism. f certainly respects addition in the free group of pair generators that lies over the tensor product, and since f carries the kernel to 0, f respects addition in S×T. Since both sides are rings, you can verify f respects multiplication term by term, and invoke the distributive property from there. Here again we use the fact that S and T commute in E, just as they do in S×T.
In summary, the tensor product equals the compositum iff S×T embeds in E, iff the aforementioned map f is a monomorphism. Here is an example wherein f does not embed.
Let S equal Q adjoin the cube root of 2, a real field extension, and let T = Q adjoin a complex cube root of 2. The intersection of S and T is a field extension of Q. Its dimension cannot be 3, else the intersection includes S and T, and S = T. Thus its dimension is 1. The extensions are disjoint, other than Q, and the earlier formulas apply.
If x and y are the respective cube roots of 2, i.e. the elements that generate S and T as rings, the tensor product is Q[x,y] mod x3-2 mod y3-2. This is a vector space of dimension 9. Of course it is - the tensor product of two vector spaces of dimension 3 has to have dimension 9. However, the compositum is Q adjoin the cube root of 2, adjoin a primitive cube root of 1, which has dimension 6. Since dimension is an invariant, the tensor product cannot equal the compositum, even up to isomorphism.
Let's show this another way. Suppose the tensor product embeds in the complex plane, with x mapping to the real cube root of 2 and y mapping to a complex cube root of 2. Within the complex plane, xy + y2 = -x2, which is not the case in S×T. In other words, x2 + xy + y2 is a nonzero element of the tensor product that maps to 0 in E. The map f is not a monomorphism, and the tensor product is not equal to the compositum.
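Here is a sketch with sympy that checks both counts; the use of x - y as a primitive element of the compositum is my choice. The relation x2 + xy + y2 maps to 0 in the complex numbers, while the compositum has dimension 6 over Q, not the 9 of the tensor product.

    # Sketch: x goes to the real cube root of 2, y to a complex cube root of 2.
    # The element x^2 + x*y + y^2, nonzero in the 9 dimensional tensor product,
    # maps to 0 in C, and the compositum Q(x,y) has dimension 6 over Q.
    from sympy import Rational, sqrt, I, expand, minimal_polynomial, symbols, degree

    t = symbols('t')
    c = 2**Rational(1, 3)          # the real cube root of 2
    w = (-1 + sqrt(3)*I) / 2       # a primitive cube root of 1
    xv, yv = c, c*w                # the images of x and y in the complex numbers

    assert expand(xv**2 + xv*yv + yv**2) == 0   # the relation that kills f

    m = minimal_polynomial(xv - yv, t)          # x - y generates the compositum
    assert degree(m, t) == 6                    # dimension 6, not 9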