I debated, for a time, whether to start with groups or fields, whether to start at the bottom or the top. Then I remembered my high school algebra, which included a brief section on fields. Groups and rings were not touched. The teacher talked about fields because we are all familiar with fields. We understand the reals and the rationals, so it is a good beginning. In contrast, groups and rings can be somewhat abstract. Of course fields become abstract too, as soon as you extend them, but at least the foundation is familiar. So I have decided to open with fields.
A field is a set with two operators, + and *, that are commutative and associative, and * distributes over +.
There is an identity element for +, that I will call 0, and an identity element for *, that I will call 1. Thus 0+x = x, and 1*x = x.
Every x has an additive inverse y, such that y+x = 0, and every nonzero x has a multiplicative inverse y, such that y*x = 1.
Write 0*x = (0+0)*x = 0*x + 0*x, and subtract 0*x from both sides, giving 0*x = 0. This is familiar; 0 times anything is 0.
In theory, 0 and 1 could be the same thing. Since 1*x = x, and 0*x = 0, then every x is 0, and the field has only the element 0. This is not interesting, and so a field, by definition, has 1 different from 0.
Familiar fields include the rationals Q, the reals R, and the complex numbers C, with the usual plus and times operators. You can verify all the properties yourself.
There are many fields between Q and C, but Q lies at the bottom. Every field has to include 1, and 1+1+1 etc brings in all the positive integers. Each of these has an opposite, hence the negative integers. Everything has an inverse, thus the reciprocals. Multiply everything together to get all the fractions, or Q.
Another field is the integers mod p, denoted Z/p. Addition and multiplication, and their properties, carry over from the integers. The opposite of x is p-x, or x itself if x is 0. We only need show every nonzero x is invertible. Since x and p are coprime, you can solve kx + lp = 1, whence k becomes the inverse of x. That makes Z/p a finite field.
If n is composite, the integers mod n are not a field. If n is 35, 5 has no inverse. We really need p to be prime.
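To make the inverse computation concrete, here is a small Python sketch (the helper name is mine, not part of the text) that solves kx + lp = 1 by the extended Euclidean algorithm, and shows how a composite modulus like 35 fails:

```python
def inverse_mod(x, n):
    """Return the inverse of x mod n via the extended Euclidean
    algorithm, or None when gcd(x, n) > 1 and no inverse exists."""
    r0, r1 = n, x % n
    k0, k1 = 0, 1              # running coefficients on x
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        k0, k1 = k1, k0 - q * k1
    if r0 != 1:                # gcd(x, n) > 1: x is a zero divisor
        return None
    return k0 % n

# In Z/7, a field, every nonzero element is invertible.
print([inverse_mod(x, 7) for x in range(1, 7)])  # [1, 4, 5, 2, 3, 6]

# In Z/35, with 35 composite, 5 shares a factor with 35 and has no inverse.
print(inverse_mod(5, 35))  # None
```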
The order of a field is its size, i.e. the number of elements in the field. This is often infinite, but can be finite, as with Z/p. Finite fields are discussed in the next chapter.
The characteristic of a field is the smallest positive integer that becomes 0. Add 1 to itself repeatedly; if the sum reaches 0 after p steps, then the characteristic is p. If the sequence never becomes 0, running up the positive integers forever, then the characteristic is 0. Q is the smallest field of characteristic 0, contained in all the others, and Z/p is the smallest field of characteristic p, contained in all the others. This because every field contains 1.
A subfield is contained within another, like the reals within the complex numbers.

Let a field or ring have characteristic p. Assuming multiplication is commutative, (a*b)^p = a^p times b^p. This is clear, but here is something less obvious: (a+b)^p = a^p + b^p. Expand the left side by the binomial theorem, and note that all the terms in between a^p and b^p have a binomial coefficient divisible by p. Anything added to itself p times over is 0. This is due to the distributive property: x+x+x+… = x*(1+1+1+…) = x*p = x*0 = 0. The middle terms all drop out, leaving a^p + b^p.
The map f(x) = x^p is a ring homomorphism, respecting both addition and multiplication. This is called the Frobenius homomorphism.
Apply f again and again, and raising to the p^k power is also a homomorphism.
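A quick numerical check of this identity, assuming Python's math.comb for binomial coefficients:

```python
from math import comb

p = 7  # a prime; the integers mod 7 have characteristic 7

# The middle binomial coefficients C(p,k), 0 < k < p, are divisible by p,
# so those terms of (a+b)^p vanish in characteristic p.
print([comb(p, k) % p for k in range(1, p)])  # [0, 0, 0, 0, 0, 0]

# Hence the Frobenius map x -> x^p respects addition in Z/p.
assert all((a + b) ** p % p == (a ** p + b ** p) % p
           for a in range(p) for b in range(p))
```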
A field is ordered if its nonzero elements can be split into two sets, P and N, such that P contains x iff N contains -x, and P is closed under addition and multiplication. The reals and all its subfields are ordered; P is positive and N is negative.
Use these sets to define a < relation: x < y iff y-x is in P. Since x-x is 0, x is not less than x. Any two distinct elements can be compared: y-x lies in either P or N, so exactly one of x < y and y < x holds. One is always bigger than the other. Finally if x < y < z, then x < z. This because z-x = z-y + y-x, and the last two terms are in P, so z-x is in P. The field is linearly ordered.
Show that a < b & c < d implies a+c < b+d, and a < b & 0 < c implies ac < bc. These follow from the fact that P is closed under addition and multiplication.
Note that -1 times -1 is 1. If -1 were positive, 1 would be positive too, and 1 and -1 cannot both be positive; hence -1 is negative, and 1 is positive. Successive integers are all positive. If the field has characteristic p, successive integers eventually reach -1, whence -1 is positive, which is impossible. Every ordered field has characteristic 0.
C has characteristic 0, but is not ordered. Neither i nor -i can be positive, their squares being -1.
If a field automorphism on the reals respects order, then it doesn't change a thing. 0 maps to 0, and 1 maps to 1, and that fixes Q. Every real number x is bracketed between converging sequences of rational numbers, some less than x and some greater than x. Since the rationals stay put, and order is preserved, x must stay put as well.

A ring is more general than a field; multiplication need not commute, and elements are not necessarily invertible. Examples include the integers, the integers mod n, the Gaussian integers, polynomials over any other ring, and n by n matrices over any other ring.
A division ring is halfway between a ring and a field. Division is well defined, but multiplication might not commute. Every nonzero x has an inverse y, such that yx = xy = 1.
Division rings are pretty rare, and not used very often. The most common is the quaternions over an ordered field. Since ij ≠ ji, multiplication does not commute. If s = a+bi+cj+dk, let t = a-bi-cj-dk be the conjugate of s, and let |s| = a^2+b^2+c^2+d^2. Then s times t = |s|, and for s nonzero, the norm |s| is a sum of squares, hence positive. The base is a field, so this norm is invertible. The inverse of s is then t/|s|. Thus the quaternions over the rationals, or the reals, form a division ring. The inverse of 1+2i+8j+10k is (1-2i-8j-10k)/169.
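Here is a sketch of quaternion arithmetic over Q in Python; the class and method names are mine, chosen for illustration:

```python
from fractions import Fraction

class Quaternion:
    """Quaternions a + bi + cj + dk over the rationals: a sketch of the
    division ring described above."""
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = map(Fraction, (a, b, c, d))

    def __mul__(self, o):
        # Hamilton's rules: i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j.
        a, b, c, d = self.a, self.b, self.c, self.d
        e, f, g, h = o.a, o.b, o.c, o.d
        return Quaternion(a*e - b*f - c*g - d*h,
                          a*f + b*e + c*h - d*g,
                          a*g - b*h + c*e + d*f,
                          a*h + b*g - c*f + d*e)

    def conjugate(self):
        return Quaternion(self.a, -self.b, -self.c, -self.d)

    def norm(self):
        # Sum of four squares: positive whenever s is nonzero.
        return self.a**2 + self.b**2 + self.c**2 + self.d**2

    def inverse(self):
        n = self.norm()
        t = self.conjugate()
        return Quaternion(t.a/n, t.b/n, t.c/n, t.d/n)

s = Quaternion(1, 2, 8, 10)
print(s.norm())                    # 169 = 1 + 4 + 64 + 100
one = s * s.inverse()
print(one.a, one.b, one.c, one.d)  # 1 0 0 0
```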
Most of this chapter pertains specifically to fields, but some of the theorems are valid for division rings as well. For instance, a division ring, like a field, has a characteristic, which is p or 0, and always contains the base field Q or Z/p.

Since a field has two operators, a field homomorphism f should respect addition and multiplication. Apply f to 0+0: f(0) + f(0) = f(0), hence f maps 0 to 0. Similarly, f maps 1 to 1.
Apply f to 1+1+1…, and f maps positive integers to positive integers. Then consider f(x)+f(-x) and f maps negative integers to negative integers. Finally apply f to x*y = 1, and f(1/x) = 1/f(x). The base field, either Q or Z/p, maps onto the same field within the range.
If some nonzero x maps to 0, then f(1/x) = 1/f(x) = 1/0, which is impossible. Only 0 maps to 0, and that makes f a monomorphism. The domain embeds in the range as a perfect copy of itself. This applies to division rings as well as fields.

Let K be a field or division ring, and let E be a structure where addition, and multiplication by members of K, are well defined. This is called a K vector space.
Let b be a set drawn from E, possibly infinite. A linear combination of elements of b might look like 4b1 + 7b5 - 3b9. The empty linear combination, denoted 0, is also valid. This sum is of course 0. b is a linearly independent set if only the trivial linear combination yields 0. There is just no other way to get 0.
b is linearly independent iff each v, spanned by b, has a unique representation. If two different linear combinations yield v, subtract them to find a nontrivial linear combination that yields 0. Conversely, if a linear combination yields 0, then bring in 0 to find a second linear combination that yields 0.
To illustrate, think of b as the three unit vectors in real space, running along the three axes x y and z. b1 = [1,0,0], b2 = [0,1,0], and b3 = [0,0,1]. Now each point in R^3 is a unique linear combination of the three vectors b1, b2, and b3. [3,5,7] = 3*b1 + 5*b2 + 7*b3, and it can't be anything else.
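Independence can be checked mechanically; assuming numpy is available, stack the vectors into a matrix and compare its rank to the number of vectors:

```python
import numpy as np

b = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])          # the three unit vectors b1, b2, b3

# Full rank: only the trivial combination yields 0, so b is independent.
print(np.linalg.matrix_rank(b))    # 3

# The unique representation of [3,5,7] comes from solving a linear system.
v = np.array([3, 5, 7])
print(np.linalg.solve(b.T, v))     # [3. 5. 7.]

# Append a dependent vector, b1 + b2, and the rank stops growing.
c = np.vstack([b, [1, 1, 0]])
print(np.linalg.matrix_rank(c))    # still 3
```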
b is a basis if it is linearly independent, and spans all of E. Thus every v in E is a unique linear combination drawn from b. Let's prove that a basis exists.
Let C be an ascending chain of linearly independent sets, each set containing an independent vector that was not present in the set before. If C is an infinite chain of sets, let U be the union of all these sets. Verify that U is linearly independent, another set in the chain. If a linear combination of vectors drawn from U is 0, those vectors are all present in one of the earlier sets, yet that set is linearly independent. Therefore U is linearly independent as well. C climbs up to the sky, adding new independent vectors along the way, and sometimes taking the union of all that has gone before.
To start the process, choose any linearly independent set in E, or select any nonzero element b1 from E. If this spans all of E then we're done. Wait a minute - how do you know b1 is linearly independent? What happens if 4b1 = 0? Well, multiply on the left by 1/4, and get 1*b1 = 0. By definition, a K vector space is a unitary K module. In simpler terms, 1 continues to be the multiplicative identity. If x is any vector in E, 0+x = x, and 1*x = x. Scale a vector by 1 and get the same vector back again. Thus b1 = 0, which is false, since b1 was chosen nonzero. Therefore b1 is linearly independent, and seeds the chain.
If our base set, as selected above, does not span all of E, then it is the base of one or more ascending chains. In fact there may be many such chains, swarms and swarms of them. In the plane, you could start with b1 = [1,0], pointing along the x axis, but then there are lots of choices for b2, namely any vector that does not lie on the x axis. By Zorn's lemma, there is a maximal linearly independent set, at the top of its chain, that I will call b. This maximal set is a basis. Suppose it does not span all of E, and let v be anything not spanned by b. Add v to b and suppose some linear combination yields 0. This combination must include v. Something like 5b5 - 2b9 + 7v = 0. Solve for v, and v is already spanned. Notice that we need the properties of division for this to work. We have to divide through by 7 to solve for v, which is feasible because 7 is invertible. Thus b already spans v, and b is a basis.
E is isomorphic to the direct sum of copies of K, one copy for each bi in b. Given any v in E, write it as a linear combination of elements of b. If v = 3b2 + 9b3, then the corresponding sequence in the direct sum is 0,3,9,0,0,0,…. This can be reversed, hence the map is a bijection.
Show that the map respects addition and scaling by K. Add u and v by adding their linear combinations, which adds coefficients, which is exactly how you add two strings in the direct sum. Multiply v by a in K by multiplying the linear combination that yields v on the left by a. This multiplies each coefficient by a, which is exactly how you multiply the string by a. The operations are the same on both structures. Thus the vector space E acts like so many copies of K running in parallel. This is called a free K module. Each copy of K spins freely, and is acted upon independently of the others.
The dimension of E over K is the size of b. R^3 is 3 dimensional over the reals, as you would expect, because it is spanned by the 3 unit vectors along the x, y, and z axes. Remember that b might be infinite, perhaps uncountably infinite.
Is dimension well defined? Might there be another basis with a different dimension for the same space?
Suppose b and c are two different bases for E, and are not the same size. First suppose b is infinite and c is finite. Each element of c is spanned by b, hence each element of c is a finite linear combination of basis elements from b. Finitely many basis elements of b are sufficient to span c, which spans E. The remaining elements of b are then spanned by these earlier elements of b, and b is not an independent set, which is a contradiction.
Next let b and c be finite of size l and m respectively, where l < m. Reorder the elements of b if necessary, so that the representation of c1, relative to the basis b, places a nonzero coefficient on b1. Now replace b1 with c1. Since the former b1 is still spanned by c1 and the rest of b, the space is the same. Verify that the new set b, containing c1, is still linearly independent. Suppose some linear combination involving c1 yields 0. Replace c1 with its b equivalent, which includes b1 with a nonzero coefficient. There is just this one instance of b1, so solve for b1, and b1 is spanned by the rest of b, contradicting the independence of b.
Next do the same for c2, as a linear combination of c1, b2, and the rest of b. Realize that c2 cannot be spanned by c1 alone. Replace b2 with c2, and show that E is still spanned, and the modified version of b is still linearly independent.
Do the same for c3, c4, and so on. After l iterations the set b has been transformed into the first l vectors in c. These still span all of E, including the last m-l vectors of c, hence c was not a basis after all. Every finite basis has the same size.
Finally let b and c be infinite bases for a vector space. Or if you prefer, let the space spanned by b live in the space spanned by c. Represent each element of b as a linear combination of elements of c. Let f be a function from the set b into the finite subsets of c. Specifically, f maps bi to those basis elements from c that are used to represent bi. If a finite subset of c contains m elements, at most m independent vectors can be spanned by these m elements. We just proved this in the last paragraph. At most m elements of b map to a given finite subset of c having size m. The cardinality of b is no more than the cardinality of c, plus 2 times the pairs from c, plus 3 times the triples from c, and so on. If the cardinality of c is s, we have s + 2s^2 + 3s^3 + 4s^4 etc, and this produces the same cardinality s. Thus b is no larger than c. If the spaces contain each other, or are isomorphic, then run this argument in the other direction and c is no larger than b. Therefore b and c have the same cardinality.
The dimension of E is well defined. In fact K has the invariant dimension property. Any vector space E over K has a certain dimension, and the spaces over K, of dimension l, are, in some sense, all the same. They all look like l copies of K running in parallel. Of course E may have more structure, besides being a vector space, that is not captured by a simple direct sum.

Let E and F be K vector spaces, where K is a field or division ring. Let g be a K linear function from E into F. Thus g(x+y) = g(x) + g(y), and ag(x) = g(ax).
Let Z be the kernel, that portion of E where g(Z) = 0. Verify that Z is a K vector space.
Let R be the range of g, the image of g in F. Verify that R is a K vector space.
Let b be a basis for Z, and then add more basis elements c to cover the rest of E. Thus b&c is a basis for all of E.
Let g(u) = v, and write u as a linear combination of b and c. Since everything in b maps to 0, b doesn't matter; only the coefficients on c matter.
For each ci in c, let di = g(ci). Suppose some linear combination of d yields 0. Pull this back to a linear combination of c that winds up in Z. Since b spans Z, we find a nontrivial linear combination of b&c that equals 0, and that is impossible. Therefore d is a basis for R.
Put this all together and the dimension of the kernel plus the dimension of the image equals the dimension of the K vector space. This is usually applied when the dimension is finite.
Let's look at an example in real space. A simple transformation squashes 3 space down onto the xy plane. If a line rises up from the plane at an angle, it is pushed back down into the plane, and distance is compressed by a ratio that is a function of the angle of inclination. If the angle is 90 degrees, the entire line is squashed into a point, and distance is scaled by 0.
Under this transformation, the kernel is the z axis, and the quotient space, or image space, is the xy plane. Let [0,0,1] be a basis for the kernel. Extend this basis by [1,1,1] and [-1,1,1] to cover all of 3 space. The image of these two basis elements is [1,1] and [-1,1], which is a basis for the xy plane. A 3 dimensional space maps onto a 2 dimensional space with a 1 dimensional kernel.
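The squash map above can be checked numerically, assuming numpy; rank plus nullity recovers the dimension of the domain:

```python
import numpy as np

# Squash 3-space onto the xy plane: g([x, y, z]) = [x, y].
g = np.array([[1, 0, 0],
              [0, 1, 0]])

rank = np.linalg.matrix_rank(g)    # dimension of the image
nullity = g.shape[1] - rank        # dimension of the kernel
print(rank, nullity)               # 2 1, summing to 3

# The two basis vectors chosen above map to a basis of the plane.
print(g @ np.array([1, 1, 1]))     # [1 1]
print(g @ np.array([-1, 1, 1]))    # [-1  1]
```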
If E and F have the same finite dimension l, g is injective iff g is surjective. If g covers all of F, the dimension of the kernel is l - l, or 0, the kernel is 0, and g embeds. Conversely, if g embeds, the dimension of the kernel is 0, the dimension of the image is l, and the image cannot be a proper subspace of F else the dimension of F would be larger than l. Thus g becomes an isomorphism, and the two vector spaces are equivalent.
It follows that g cannot embed a space into a smaller space, nor map a space onto a larger space.

Let F/E/K be a tower of field extensions, so that K is a subfield of E, which is a subfield of F. (These could all be division rings.)
Let x be a basis for E as a K vector space, and let y be a basis for F as an E vector space. Let z be the set of products of elements of x times elements of y. Consider linear combinations of z, with coefficients taken from K, and multiplied on the left. For any element v in F, v is spanned by y, using coefficients in E. Each of these coefficients is spanned by x, using coefficients in K. Thus z spans v for every v, and z spans all of F.
If z spans 0, group terms together to write 0 as a linear combination of basis elements drawn from y. Since y is a basis, all coefficients are 0. Each coefficient is a linear combination of elements drawn from x, and all those coefficients are 0, hence all the coefficients on z are 0. Therefore z is a basis for F written as a K vector space.
The product of the dimensions F over E and E over K gives the dimension of F over K. The dimension of F over K is infinite iff at least one of the subdimensions is infinite.

An extension of the field K is a possibly larger field F with K as subfield. This is written F/K.
Remember that F is a K vector space. The dimension of the extension is the dimension of the vector space. The extension is called finite if the dimension is finite. A quadratic extension has dimension 2.
The product of the dimensions of two successive field extensions is the dimension of the composite extension. This follows directly from the previous theorem on dimensionality.
Let F be an integral domain that contains the field K. Again, F is a K vector space. If the dimension of F is infinite, F need not be a field. Let K be the reals and let F be K[x], polynomials with real coefficients. The powers of x can act as a basis for this vector space. Since x has no inverse, this is an integral domain that is not a field.
However, if F/K is finite, F is a field. Let x be a nonzero element in F and consider the map x*F. By cancellation, the map is injective. It is also a K linear map. In other words, the map respects addition and scaling by elements in K. The map has a trivial kernel, hence the range has the same dimension as the domain. The domain and range are both F, so multiplication by x maps F onto F. For some y in F, x*y = 1. This makes y the inverse of x, and this holds for all nonzero x, hence F is a field.
This generalizes to a larger integral domain E over K, wherein each x belongs to a smaller integral domain F inside E, with F/K finite. F is a field, and x is invertible. This holds for all x, hence E is a field.

When it was determined that the square root of 2 was not a fraction, we simply waved it into existence. It was "adjoined" to Q. Other numbers were soon created, including the cube root of 2, and the solution to x^5+3x+1 = 0. Eventually these were all subsumed in the real numbers, but that was not complete either. There was no square root of -1, so that was waved into existence too.
If F is a field, and x is an element outside of F, F[x] is the smallest ring that contains F and x, subject to any constraints placed on x. In the simplest case, there are no constraints. Here x is just a variable, also called an indeterminate. A ring must be closed under addition and multiplication, that's the definition of a ring, so the presence of x implies x+1, and x+2, and 2x, and 17x, and x^2, and x^3+5, and all the polynomials in x. Nothing else is required beyond these polynomials, hence F[x] is the ring of polynomials with coefficients in F.
The base does not have to be a field. We adjoined i to the integers to make the Gaussian integers, and found a very useful structure, a ufd, that helps characterize the pythagorean triples among other things. This said, it is easier to start with a field. In fact Z[x] is sometimes analyzed in the context of Q[x]. Watch what x does to the rationals, then restrict attention to the integers. We did this on occasion, when I invoked the properties of arithmetic across the entire complex plane, and then said, "Naturally these same properties apply to the integer grid of points in the complex plane." So for the time being, adjoin x to a field F.
If F is a field, and x is an element outside of F, F(x), using parentheses rather than brackets, is the smallest field that contains F and x, subject to any constraints placed on x. If x is an indeterminate, as described above, F(x) includes all the polynomials in x, with coefficients in F, because they are all implied by addition and multiplication. Thus F(x) contains F[x]. But F(x) has to be a field. Everything has to be invertible. The elements of F are already invertible, but x is not. No polynomial, times x, yields 1. So bring in the rational function 1/x. In fact you need every rational function, with a polynomial in the numerator and a polynomial in the denominator. These are typically put in lowest terms. Write 4/5 instead of 8/10, and write (x+1)/(x-1) instead of (x^2+2x+1)/(x^2-1). This makes sense only because F[x] is a ufd. The numerator and denominator have a gcd g(x), and this can be pulled out of the top and bottom, creating a rational function in lowest terms.
Z(x) is the same as Q(x). The parentheses imply a field, so every integer has to have an inverse, and that pushes Z up to Q. Then x comes in and it's Q(x), quotients of polynomials in x with rational coefficients.
Multiple elements can be adjoined, such as x and y. They can be brought in one at a time, or all in one go; the result is the same. F[x][y] = F[y][x] = F[x,y]. They are all the polynomials in x and y, with coefficients in F.
The extension F[x] is finite, of dimension n, if the powers of x stop just before x^n. A convenient basis is 1 through x^(n-1). But multiplication remains valid, so x^(n-1) times x has to be something. It has to be a linear combination of the lesser powers of x. This is a constraint on x. Thus x is no longer an indeterminate, it is an adjoined element that satisfies some polynomial: x^n equals some linear combination of lower powers of x. Move everything to the left and x is the root of some polynomial p(x) of degree n. We've certainly seen this before. Adjoin a root of x^2-2 to get the square root of 2. Adjoin a root of x^2+1 to get the complex numbers. Adjoin a root of x^2+x+1 to Z to get the Eisenstein integers. And so on.
Conversely, assume x is a root of some polynomial p(x) with coefficients in F. Since F is a field, divide through by the lead coefficient, so that the lead coefficient is 1. This is called a monic polynomial. Keep multiplying x by x by x, until x^n collapses back to a linear combination of lower powers of x. The basis is finite, and the extension is finite of dimension n.
In summary, F[x] is a finite extension iff x satisfies some polynomial p(x).
An adjoined element is called algebraic if it satisfies some polynomial. It is, in some sense, subject to algebra.
If x is algebraic over F, is F[x] really an algebraic extension? This seems like a silly question, but it's not. Let y be another element in F[x]. Is y algebraic? If y is not algebraic then the powers of y go on for ever, and the dimension of F[y] is infinite. However, F[y] lives in F[x]. An infinite dimensional F vector space cannot live inside a finite dimensional F vector space. This is a contradiction, hence y is algebraic over F. In fact the dimension of F[y] is a factor of the dimension of F[x]. If F[x] has dimension 6 over F, then F[y] has dimension 6, 3, 2, or 1 (if y belongs to F).
When x is algebraic over F, perform polynomial math in F[x] as usual, but replace each nth power of x with its lower degree polynomial as you go. Division is the only tricky operation. Sometimes there are shortcuts, but there is a general approach that always works. Let y be a member of F[x], which is written as a polynomial in x, of degree n-1 or less. Multiply y by 1, x, x^2, x^3, etc, which is the action of y on the basis. Each product is another polynomial in x. Let the coefficients on these products become rows in an n by n matrix. The n rows correspond to the products y*x^i. The top row is y. Verify that this matrix implements multiplication by y.
In the same way, another matrix implements multiplication by 1/y, assuming y is invertible. The product of the two matrices is multiplication by y, and then 1/y, which is the identity map. The two matrices are inverses of each other. Therefore division by y is the inverse of the matrix that implements multiplication by y. If the matrix is singular then y has no inverse.
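As a tiny worked instance of this matrix method, take F = Q and adjoin x with x^2 = 2, then invert y = 1 + x. The representation below is an illustrative sketch, not part of the text:

```python
from fractions import Fraction

# Work in Q[x] with x^2 = 2, basis {1, x}.  Let y = 1 + x.
# Row i of M holds the coefficients of y * x^i in the basis {1, x}.
# y * 1 = 1 + x           -> [1, 1]
# y * x = x + x^2 = 2 + x -> [2, 1]
M = [[Fraction(1), Fraction(1)],
     [Fraction(2), Fraction(1)]]

det = M[0][0]*M[1][1] - M[0][1]*M[1][0]   # -1: nonzero, so y is invertible
Minv = [[ M[1][1]/det, -M[0][1]/det],
        [-M[1][0]/det,  M[0][0]/det]]

# The top row of the inverse matrix is 1/y in the basis {1, x}.
print(Minv[0])    # coefficients of -1 + x, i.e. 1/(1+sqrt 2) = sqrt 2 - 1
```

Indeed (1 + √2)(√2 - 1) = 2 - 1 = 1, confirming the top row.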
Another procedure is an application of the gcd algorithm, made possible by the fact that F[x] is a euclidean domain. Let p(x) be the polynomial that makes x algebraic, and let y(x) be some other polynomial that is coprime to p. The gcd algorithm with backtracking finds the inverse of y mod p.
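The gcd approach can also be sketched in Python; the polynomial helpers below are my own minimal versions, with coefficients stored low degree first:

```python
from fractions import Fraction

def trim(a):
    """Strip trailing zero coefficients, keeping at least one entry."""
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def polydivmod(a, b):
    """Quotient and remainder of a by b, coefficients in Q."""
    a = a[:]
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        c = a[-1] / b[-1]
        q[shift] = c
        for i, bc in enumerate(b):
            a[shift + i] -= c * bc
        trim(a)
    return trim(q), a

def polymulmod(a, b, p):
    """Multiply a and b, then reduce mod p."""
    prod = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            prod[i + j] += u * v
    return polydivmod(trim(prod), p)[1]

def inverse_mod_poly(y, p):
    """Invert y mod p by the extended Euclidean algorithm,
    assuming y and p are coprime."""
    r0, r1 = p[:], y[:]
    k0, k1 = [Fraction(0)], [Fraction(1)]
    while r1 != [0]:
        q, r = polydivmod(r0, r1)
        r0, r1 = r1, r
        qk = [Fraction(0)] * (len(q) + len(k1) - 1)
        for i, u in enumerate(q):
            for j, v in enumerate(k1):
                qk[i + j] += u * v
        nk = [Fraction(0)] * max(len(k0), len(qk))
        for i, u in enumerate(k0): nk[i] += u
        for i, u in enumerate(qk): nk[i] -= u
        k0, k1 = k1, trim(nk)
    # r0 is the gcd, a nonzero constant; scale so that y * k0 = 1.
    return trim([c / r0[0] for c in k0])

p = [Fraction(-2), Fraction(0), Fraction(1)]   # x^2 - 2
y = [Fraction(1), Fraction(1)]                 # 1 + x
inv = inverse_mod_poly(y, p)
print(inv)                       # coefficients of -1 + x, as before
print(polymulmod(y, inv, p))     # y times inv reduces to 1 mod p
```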
Speaking of 1/y, is every y invertible? Is the extension a field? Is F[x] the same as F(x)? If x is transcendental then F[x] is certainly not a field, unless you upgrade it to F(x) to bring in all the fractions. If x is algebraic, the extension is a field iff p(x) is irreducible. As shown in the previous section, a finite extension is a field iff it is an integral domain. We only need ask whether the extension has any zero divisors. Assume g(x)*h(x) = 0, which really means g(x)*h(x) is a multiple of p(x). This is certainly possible if p is reducible. Let g and h be the two factors of p, and they are zero divisors, not units, and F[x] is not a field. Conversely, if p is irreducible then it is prime. Once again this is due to unique factorization in the polynomials over F. So p must divide g or h, yet g and h both have lower degree. There are no zero divisors, F[x] is an integral domain, and F[x] is a field.
Most of the time p is irreducible, by assumption or by construction, yet we continue to write F[x], even though the result is a field, and F(x) might be clearer notation.
Let s be an element not in F, though perhaps s lives in a structure containing F, and let E be F(s), that is, the field containing F and s. Assume E/F is finite, thus s is the root of some irreducible polynomial p(x) over F. If s is the root of some other polynomial q(x), then use the gcd algorithm to show s is the root of a common polynomial r(x). Since r(x) divides p(x), and p is irreducible, r = p, or at least r is an associate of p. Make all polynomials monic, and r = p. Thus there is one irreducible polynomial associated with F(s).
Had we chosen t instead of s, we might get a different polynomial, even though the extension is the same. Extend the reals into the complex numbers by adjoining i, a root of x^2 + 1, or 2i, a root of x^2 + 4. These are different polynomials, with different roots, yet they generate the same field extension.
Let x be algebraic over F, generating a field, and let y be algebraic over F[x]. Multiply dimensions to show F[x][y] is finite, hence F[x,y] is an algebraic extension. Any expression in x and y, such as xy + 7x - 5y + 6, is algebraic, and the root of some polynomial whose degree is no larger than the dimension of F[x,y]. The square root of 2 plus the cube root of 17 is the root of some irreducible polynomial of degree 2, 3, or 6.
If you really want to follow up with this example, let a be the square root of 2 and b the cube root of 17, so that a^2 = 2 and b^3 = 17, and let s = a+b. Compute the powers of s, starting with s^0 = 1, each power a linear combination of the six basis elements a^i*b^j. Write s^6 as a linear combination of 1 through s^5; it's an exercise in linear algebra, 6 simultaneous equations in 6 unknowns.
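Here is that exercise carried out in exact rational arithmetic; the dictionary representation of Q[a,b] is my own choice for illustration:

```python
from fractions import Fraction

# Elements of Q[a,b], with a^2 = 2 and b^3 = 17, stored as dicts keyed
# by (i, j), i < 2 and j < 3: the coefficient on a^i * b^j.
def mul(u, v):
    w = {}
    for (i, j), x in u.items():
        for (k, l), y in v.items():
            c = x * y
            i2, j2 = i + k, j + l
            if i2 >= 2: i2 -= 2; c *= 2    # reduce a^2 to 2
            if j2 >= 3: j2 -= 3; c *= 17   # reduce b^3 to 17
            w[(i2, j2)] = w.get((i2, j2), Fraction(0)) + c
    return w

s = {(1, 0): Fraction(1), (0, 1): Fraction(1)}   # s = a + b
powers = [{(0, 0): Fraction(1)}]                 # s^0 = 1
for _ in range(6):
    powers.append(mul(powers[-1], s))

# Solve s^6 = sum of c_k s^k: six equations, one per basis element
# a^i b^j, in six unknowns, by Gaussian elimination over the rationals.
basis = [(i, j) for i in range(2) for j in range(3)]
rows = [[powers[k].get(e, Fraction(0)) for k in range(6)]
        + [powers[6].get(e, Fraction(0))] for e in basis]
for col in range(6):
    piv = next(r for r in range(col, 6) if rows[r][col] != 0)
    rows[col], rows[piv] = rows[piv], rows[col]
    rows[col] = [x / rows[col][col] for x in rows[col]]
    for r in range(6):
        if r != col and rows[r][col] != 0:
            rows[r] = [x - rows[r][col] * y
                       for x, y in zip(rows[r], rows[col])]

c = [rows[k][6] for k in range(6)]
print(c)   # s^6 = c[0] + c[1] s + ... + c[5] s^5
```

Moving everything to the left turns these coefficients into the degree 6 polynomial satisfied by the square root of 2 plus the cube root of 17.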
Here is an example of an infinite algebraic extension. Start with the rationals and adjoin the pth root of 2 for each prime p. Each polynomial x^p-2 is irreducible by Eisenstein's criterion. Each extension, on its own, has dimension p, and if the composite extension were finite, there would not be room for a p dimensional extension for some large p. Thus the algebraic extension is infinite.
If x is not algebraic it is transcendental. The extension looks like polynomials in x. But x doesn't have to be an indeterminate. For instance, x could be π. There is no polynomial p(x) with π as a root. (This will be proved in another chapter.) So you could write polynomials in π, just as you might write polynomials in x. Of course the former has meaning in the reals, while the latter is just symbolic. Still, the two structures are isomorphic. The number e is also transcendental, satisfying no polynomial.
By definition, an algebraic number is algebraic over Q, and a transcendental number is transcendental over Q.
The polynomials over the rationals are countable, and each has finitely many roots, hence the algebraic elements over the rationals are countable. The reals are uncountable, which means almost all of the reals are transcendental numbers.
If x is transcendental over F, is F[x] a transcendental extension? Again this seems like a silly question, but it's not. Let h be a polynomial in x. Suppose h is algebraic, so that h satisfies some polynomial g. Expand g(h) to get a polynomial in x that is equal to 0. Yet x is transcendental. Therefore F[x] is a purely transcendental extension. A similar proof shows F(x) is a purely transcendental extension. Let h be the quotient of two polynomials, expand g(h), set this equal to 0, clear denominators, and find a polynomial in x that equals 0, again contradicting the transcendence of x.

An element g, or a set of generators g, combined with a base B, "generates" a larger structure C, if C is the smallest structure containing B and g. This is an abstract definition, because words like generate, generator, and finitely generated, are highly overloaded. One can generate a group, monoid, module, vector space, ring, or field, to name a few.
Let the base be a field K, and let the structure of interest be a vector space. A set of generators g, consisting of g1, g2, and g3, implies all the linear combinations of g1, g2, and g3, using coefficients in K. This is necessary to have a vector space containing K and g, and it is also sufficient; we don't need anything else. If the generators happen to be independent, then g forms a basis for the vector space.
If K is a field, the ring generated by x is K[x]. This is the ring of polynomials in x with coefficients in K. All these polynomials are implied by addition and multiplication, as per the definition of a ring. In contrast, the field K(x), generated by x, includes the rational functions in x, the quotients of polynomials, so that everything is invertible, as per the definition of a field. It all depends on what is being generated, doesn't it?
A field, or ring, or vector space, is finitely generated over K, if it can be produced by K and a finite number of generators. If one generator is enough then the extension is simple. Z adjoin i, to get the Gaussian integers, is a simple ring extension.
Don't confuse a finite field extension with a finitely generated field extension. K[x], where x is transcendental, is not finite, because its basis, the powers of x, is infinite; yet it is finitely generated by x.
If F/K is a finite field extension, take any element s in F and consider successive powers of s, until one of them is spanned by the earlier powers. This defines a polynomial p(x), with root s. The extension K(s) is a subfield of F. Now F is a finite extension of K(s) with a lower dimension. Repeat the above procedure until the field becomes F. Thus F is K adjoin a finite set of elements. Every finite field extension is finitely generated.
If x is an indeterminate, K[x] is a ring extension with a countably infinite dimension (as a K vector space); use the powers of x as a basis. However, the field extension K(x) needs even more basis elements to handle the denominators. Let's see what happens when K is the real numbers. The field K(x) includes 1/(x+c) for every real number c. Suppose a linear combination of these fractions sums to 0. Identify each element of K(x) with its corresponding function in the xy plane. Thus 1/(x+c) becomes a real valued function: add c to x and take the reciprocal. We can graph this; it's a hyperbola, defined everywhere except at -c. If a linear combination of these reciprocals drops to 0 in K(x), that same combination of functions must be identically zero along the x axis. Yet this is impossible near -c: the other functions are bounded near -c, while 1/(x+c) approaches infinity. Therefore the fractions 1/(x+c) form an independent set, and the field extension K(x) has an uncountable basis. Of course, if K is countable, such as the rationals, then the field K(x) has countable dimension over K.
As mentioned earlier, K[x] is finitely generated as a ring, with only one generator, namely x. The polynomials are the smallest ring containing K and x. Similarly, K(x) is finitely generated as a field, being the smallest field containing K and x. We've looked at K(x) generated as a K vector space, but how about K(x) generated as a ring? Suppose K(x) is finitely generated as a ring. Each generator is a reduced quotient of polynomials. Since K[x] is a ufd, there are finitely many primes in each denominator. Together the generators bring in finitely many primes downstairs, and no new primes are created by addition or multiplication. Since there are infinitely many primes, select one not on the list, and its reciprocal is not spanned. Therefore K(x) is not a finitely generated ring over K. Of course this assumes there are infinitely many primes - a prime in this case being an irreducible polynomial. Verify this by adapting Euclid's proof: multiply finitely many irreducible polynomials together, add 1, and the result has an irreducible factor not on the list.
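Euclid's argument can be checked by machine. The sketch below (my own illustration) works over GF(2), encoding a polynomial as a Python int whose bit k holds the coefficient of x^k, so 0b111 stands for x^2+x+1:

```python
# Euclid's argument for infinitely many primes, carried over to
# irreducible polynomials over GF(2).

def pmul(a, b):
    # polynomial product over GF(2) (carry-less multiplication)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, b):
    # remainder of a divided by b over GF(2)
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

known = [0b10, 0b11]        # x and x+1, the irreducibles of degree 1
prod = 1
for q in known:
    prod = pmul(prod, q)
n = prod ^ 1                # the product of the list, plus 1
# n leaves remainder 1 modulo everything on the list, so its
# irreducible factors are new: the list of "primes" was incomplete
assert all(pmod(n, q) == 1 for q in known)
```

Here n comes out as x^2+x+1, the first irreducible of degree 2, exactly as Euclid would predict.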
The same proof shows finitely many rational numbers cannot conspire to build (generate) all of Q.

A linear factor is a polynomial of degree 1, such as x-7.
A polynomial splits over a field K if it can be separated into linear factors. For instance, x^2+3x+2 splits over the rationals, with factors x+1 and x+2.
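The factorization is easy to double-check by expanding; two polynomials of degree 2 that agree at more than two points are the same polynomial:

```python
# (x+1)(x+2) = x^2 + 3x + 2, checked at eleven integer points,
# more than enough to pin down a degree 2 polynomial
for x in range(-5, 6):
    assert (x + 1) * (x + 2) == x * x + 3 * x + 2
```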
Let S be a set of polynomials taken from K[x]. The splitting field for S is the smallest extension that splits all the polynomials in S. Call this extension F/K. Note that F is the splitting field for S over any intermediate extension between F and K as well.
If U is the set of all roots of all polynomials in S, then F = K(U). If no such F exists, because K is not embedded in a larger field, or that field does not split all of S, build one: adjoin the roots of p(x) one by one, until the field splits p(x), and do this for every polynomial p in S.
If the set S is uncountable, create the algebraic closure, as described in the next section. This field splits every polynomial in S; in fact it splits every polynomial, period. Extract the subfield that is minimal and still splits S.
If p(x) has degree n, repeatedly adjoin the roots of p to show the splitting field of p has dimension at most n!. For example, split x^3-2 over the rationals. The real root creates an extension of dimension 3, and a subsequent quadratic extension brings in the complex roots, giving a splitting field of dimension 6.

A field K is closed if it contains all its algebraic elements. In other words, every polynomial in K[x] splits. For example, the complex numbers form a closed field. Every polynomial with complex coefficients has a complex root u. (This is not obvious; we'll prove it later.) Divide by x-u and find a smaller polynomial, which also has a root, and continue, until the polynomial splits. Thus it is enough to show every polynomial in K[x] has a root in K.
The closure of K is a minimal field extension of K that splits every polynomial. Thus the closure of C is C, and the closure of R is C.
Let F be the closure of K, and suppose F is not closed. Adjoin u, algebraic over F. Then u is the root of some polynomial whose coefficients are all algebraic over K. Adjoin these coefficients, then u, giving a finite extension of K, hence u is algebraic over K. Thus u is contained in F after all, and the closure of a field is closed.
To find the closure of the rationals, adjoin all the roots of all the polynomials with rational coefficients, giving a subfield of the complex numbers that is closed. This is well defined, because we are working within a larger framework, i.e. the complex plane. In general, you have to do some work to prove the closure exists. This is easy if K is countable, so that the algebraic elements are countable. Adjoin them one at a time, building an ascending chain of algebraic extensions, and then take the union. This is the closure of K. But if K is uncountable we need more machinery from set theory. You can skip this if you like.
First bound the size of algebraic extensions of K, so we don't wind up considering the set of all sets. Let c be the cardinality of K. Even if every polynomial is irreducible, there are c + c^2 + c^3 + … such polynomials. This is countably infinite if c is finite, or it is equal to c otherwise. Upgrade c to infinity if it was finite. At worst each polynomial contributes n roots, and that still leaves the cardinality at c. The closure is no larger than c.
Build the set of all algebraic extensions of K. A chain of extensions adds a new algebraic element, then another, then another, and so on, perhaps to infinity. The union of such a chain is another algebraic extension. Thus each ascending chain of extensions has an upper bound, the maximum set in the chain or the union of the chain. Use Zorn's lemma to find a maximal field F. Any additional algebraic elements not in F would contradict maximality. And a proper subfield misses certain algebraic elements, and is not closed. Thus F is the closure of K.

Let c() be a field isomorphism from K onto L, and extend this to c(u) = v, where u is transcendental over K and v is transcendental over L. This becomes an isomorphism from K[u] to L[v], or K(u) to L(v). Polynomials in u become polynomials in v, with coefficients in K mapping to coefficients in L. Reverse this isomorphism by turning v back into u, and running the coefficients through c in reverse, which can be done since c is an isomorphism.
A similar result holds if u and v are algebraic, with p(u) = 0, and q(v) = 0, where q is the image of p. In other words, the coefficients of p are mapped, via c, to the coefficients of q. Since c is an isomorphism from K onto L, p is irreducible iff q is irreducible. Factor p over K, and apply c to find a factorization of q over L.
As you multiply polynomials in u together, the product collapses, according to p. In the same way, polynomials in v collapse according to q. Since c maps p to q, polynomials collapse in exactly the same way in both worlds. Thus the isomorphism c has been extended from K(u) onto L(v).
The above can be applied to the case of K = L, where the base field is mapped onto itself. If u and v both lie in a larger field F, we are building an automorphism on a subfield of F, though it's not clear that it will extend all the way up to an automorphism on F. More on this below.
If c fixes K, and u and v are roots of the same polynomial p, then the isomorphism extends. Conversely, assume some ring isomorphism maps u to v, and fixes K. Raise u and v to successive powers, until they are spanned by lower powers of u and v respectively. Since the map is an isomorphism that fixes K, the resulting polynomials are identical, and u and v are both roots of p(x).
We've certainly seen this before. Conjugation in the Gaussian integers, Eisenstein integers, and the eighth cyclotomic extension permutes the roots of an irreducible polynomial. This is by design. We want conjugation to be an automorphism that maps the ring or field onto itself, respecting addition and multiplication, while fixing the integers underneath, and the only way to do that is to map each root of p(x) to another root of p(x). Thus i maps to -i in the complex plane, both roots of x^2+1, and so on.
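A quick sanity check on conjugation in the Gaussian integers, sketched with Python's built-in complex numbers:

```python
def conj(z):
    # conjugation on the Gaussian integers: a + bi -> a - bi
    return complex(z.real, -z.imag)

a, b = 3 + 4j, 2 - 7j
# conjugation respects addition and multiplication
assert conj(a + b) == conj(a) + conj(b)
assert conj(a * b) == conj(a) * conj(b)
# it fixes the ordinary integers underneath
assert conj(complex(5, 0)) == 5
# and it permutes the roots of x^2 + 1, sending i to -i
assert conj(1j) == -1j
```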
If E and F are the algebraic closures of K and L respectively, and c maps K onto L, then there is an extension of c that maps E onto F. This is one of those statements that seems obvious, just map u to v, over and over again, until all the algebraic elements are accounted for. That's fine if the dimension of E/K is countable. If it is uncountable then you need to argue as follows.
An ascending chain of isomorphisms maps another element, and another, and another, each isomorphism an extension of the previous. The union of all these isomorphisms is another isomorphism. Every chain is bounded, and by Zorn's lemma, there is a maximal isomorphism from a subfield of E to a subfield of F, consistent with c mapping K onto L. If this does not map all of E, then let u be unmapped, where u is the root of an irreducible polynomial p with coefficients in the domain. Since p(x) is irreducible, its image q(x) is also irreducible over the range, and has some root v. Extend the isomorphism to map u onto v. This contradicts the maximality of c, hence c extends to an isomorphism on all of E.
Set K = L in the above, and let c fix K. There is one algebraic closure of K, up to isomorphism. Any closure E/K can be mapped, isomorphically, onto F/K. If you look at the 3 cube roots of 2, they seem quite different. One is real, on the x axis, and the other two are 120 degrees around in the complex plane. And yet, these roots are indistinguishable. Pick one and adjoin it to the rationals, and the resulting field is the same, no matter which one you pick. Then adjoin another root to complete the splitting field. Once again it doesn't matter which one.
Let L/K be a splitting field for an irreducible polynomial p(x). In other words, L contains all the roots of p(x), and is generated by those roots.
Let u and v be two such roots. There is an isomorphism c() from K(u) onto K(v), as described above. I'm going to extend this isomorphism up to a compatible automorphism on L. Yes, this can always be done, though that is not obvious at the outset.
Since the dimension of K(u) = the dimension of K(v) = the degree of p, neither can properly contain the other. If K(u) = K(v), then c, from K(u) into K(v), is an injective linear transformation from an n dimensional K vector space into itself, hence it is onto, and it defines an automorphism on K(u). Does this happen? Sure. Assume K(u) is a quadratic extension, which means u is a root of x^2-bx+c. By the quadratic formula, u = (b+sqrt(b^2-4c))/2. Subtract this from b to get the other root v. Thus K(u) is the same field as K(v).
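Here is the quadratic case in concrete terms, using x^2 - x - 1 as a hypothetical example of mine; its roots are the golden ratio and its conjugate:

```python
import math

b, c = 1.0, -1.0                        # x^2 - bx + c = x^2 - x - 1
u = (b + math.sqrt(b * b - 4 * c)) / 2  # quadratic formula
v = b - u                               # the other root, already in K(u)
assert abs(u * u - b * u + c) < 1e-9
assert abs(v * v - b * v + c) < 1e-9
```

Since v = b - u is a K linear expression in u, K(v) lies inside K(u), and by symmetry the two fields coincide.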
If K(u) ≠ K(v), there is more work to do. Remember that the ring of polynomials over a field exhibits unique factorization. Factor p(x) over K(u). We know p(x) is irreducible over K, but it may factor in K(u). In fact it must, since u is a root. At a minimum, pull out all the powers of x-u. So there is some factorization for p(x). Apply c() to the coefficients of the resulting polynomials. This gives the factorization of p(x) over K(v). Each factor x-u maps to x-v.
Return to K(u), where p has been factored into smaller polynomials with coefficients in K(u). Since v is another root, different from u, there is some irreducible polynomial q(x), a factor of p(x), with q(v) = 0.
Let c(q) = r. In other words, r is the image of q, the result of applying c() to the coefficients of q(x). Once again r is irreducible over K(v).
Let w be any root of r(x), and map v onto w. L contains w, since it contains all the roots of p(x). K(u)(v) has a basis of u^i*v^j with coefficients from K. Multiplication of basis elements is reduced mod p(x) and q(x). At the same time, K(v)(w) has a basis v^i*w^j with coefficients in K, and the same rules for multiplication, using c(p) and c(q) to collapse the polynomials in v and w respectively. The rings are symbolically the same. These are two isomorphic fields that intersect in K(v). Their dimensions over K(v), and over K, are equal.
If w is contained in K(u,v), then c is an automorphism on L or a subfield of L. If not then there is more work to do.
Repeat the above procedure. Factor q(x) in K(u,v) and let s(x) be the factor with s(w) = 0. Map s(x) to t(x) in K(v,w), let z be a root of t(x), and let c(w) = z. Extend the isomorphism from K(u)(v)(w) onto K(v)(w)(z). This cannot continue forever, because p(x) has a finite number of roots. Eventually c becomes an automorphism on L, or a subfield of L.
Assume the automorphism acts on a field E, a subfield of L. If E is not all of L then there is more work to do. As above, assume L is a splitting field for some polynomial p, which is irreducible over K. Let u lie in L, but not in E, and let u be a root of p(x). Factor p in E, so that u is the root of some irreducible factor p1. In other words, p = p1*q1. Apply c to the coefficients of p1, giving p2. Thus p2*q2 = c(p) = p. Since c is an automorphism on E, p2 is irreducible, having some root v. Since v is also a root of p, it is present in L. Extend c by mapping u to v. The dimension of E(v) over E is the same as the dimension of E(u) over E. If v lies in E(u), c becomes an automorphism on a larger field; otherwise there is more work to do.
From here the reasoning is as above. Factor p1(x) over E(u), then apply c() to the coefficients of the resulting polynomials, giving the factorization of p2(x) over E(v). Each factor x-u maps to x-v. Return to E(u), where p1 has been factored into smaller polynomials with coefficients in E(u). Since v is another root, different from u, there is some irreducible polynomial q1(x), a factor of p1(x), with q1(v) = 0. Let c(q1) = q2. Once again q2 is irreducible over E(v). Let w be any root of q2(x), and map v onto w. L contains w, since it contains all the roots of p(x). This is where we need L to be a splitting field for p. E(u)(v) has a basis of u^i*v^j with coefficients from E. Multiplication of basis elements is reduced mod p1(x) and q1(x). At the same time, E(v)(w) has a basis v^i*w^j with coefficients in E, and the same rules for multiplication, using p2 and q2 to collapse the polynomials in v and w respectively. The rings are symbolically the same. These are two isomorphic fields that intersect in E(v). Their dimensions over E(v), and over E, are equal. If w is contained in E(u,v) then c is an automorphism on L, or a subfield of L; otherwise there is more work to do. Continue mapping roots of p onto roots of p, extending the isomorphism all the way up to L. If L is larger than the field generated by the roots of p, yet L is still a splitting field, then bring in the roots of the other polynomials, one by one, and push the automorphism all the way up to L. Use Zorn's lemma if L/K is uncountable.
Remember that u and v, at the start, were arbitrary. Within the splitting field of p(x), one root looks just like another. Any root can be mapped onto any other, and the isomorphism extends to an automorphism on L. We only require that all the roots of p(x) be present.
Since conjugates are indistinguishable, they should have the same multiplicity. Let u and v be distinct roots of an irreducible polynomial p(x) over K. Embed these roots in the splitting field L/K. An isomorphism maps u to v, so factor p(x) in K(u), and apply the isomorphism, and factor p(x) in K(v). The linear factors x-u and x-v correspond. The number of factors x-u in one polynomial equals the number of factors x-v in the other. The multiplicity of the two roots is the same. These roots are arbitrary, hence the multiplicity of all the roots of an irreducible polynomial is the same. The number of distinct roots times the common multiplicity gives the degree of p(x). This multiplicity is almost always 1. It can be higher for inseparable extensions, but I'll save that for another day.
The extension of c mapping one transcendental element u to another transcendental element v, within the same field L, is an endomorphism on L, but need not be an automorphism, or compatible with any automorphism. Let K be any field, let u be an indeterminate, let L = K(u), and let v = u^2, which is another transcendental element in L. The isomorphism c() maps u onto v and fixes K. Note that K(u,v) = K(u). Thus the map carries K(u) into itself. If you want to think in terms of u, c() doubles the degree of every term of every polynomial. Since u has no preimage, the map is not onto; it is not an automorphism. And we can't extend it, because it is already defined on all of L.

An extension F/K is normal if, for any irreducible polynomial p(x) in K with a root in F, p(x) splits in F. If F has one root, it has them all.
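Returning to the u → u^2 example for a moment, the degree doubling map can be checked on polynomial coefficient lists (a sketch; list index k holds the coefficient of u^k):

```python
def pmul(p, q):
    # product of polynomials given as coefficient lists over K
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def cmap(p):
    # the endomorphism c: substitute u -> u^2, doubling every degree
    out = [0] * (2 * len(p) - 1)
    for k, a in enumerate(p):
        out[2 * k] = a
    return out

p, q = [1, 2], [3, 0, 1]              # 1 + 2u  and  3 + u^2
# c respects multiplication: c(p*q) = c(p)*c(q)
assert cmap(pmul(p, q)) == pmul(cmap(p), cmap(q))
# every image has zero coefficients in all odd degrees, so u = [0, 1]
# has no preimage: c carries K[u] into itself but is not onto
assert all(cmap(p)[k] == 0 for k in range(1, len(cmap(p)), 2))
```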
The intersection of arbitrarily many normal extensions is normal.
A purely transcendental extension is normal, by the above definition, since it contains no roots of irreducible polynomials over K, beyond the elements of K itself. This said, I will define a normal extension as algebraic; that's usually what we mean by normal anyway.
Here are some equivalent criteria for a normal extension, starting with the definition: (1) every irreducible polynomial over K with a root in F splits in F; (2) F is a splitting field for a set of polynomials over K; (3) F = K(G) for a set of generators G, such that F contains every conjugate of every generator; (4) every embedding c of F into a larger field, fixing K, is an automorphism on F. Assume criterion 3, and let c embed F into a larger field, fixing K.
Recall that c is determined by its action on the generators. Each generator is a root of some p(x), and is mapped to another root of p(x), which is contained in F. Thus c maps F into F.
At this point you might be tempted to invoke dimensionality, to say c maps F onto F, but F/K might not be finite. G could be an infinite set of generators.
Look at any generator gi in G. This is a root of p(x), and all the roots are present, and c maps these roots to roots. Since c maps a finite set into itself, and c is injective, c is also surjective. In other words, c defines a permutation on the roots of p(x).
Let s be a member of F, which means s is a polynomial in finitely many generators g1, g2, g3 … gn. Each generator runs through a cycle, according to c. These cycles have lengths l1, l2, l3 … ln. Let m be the least common multiple of these lengths. Apply c m times over, and the generators g1 through gn return to their original positions. Therefore c^(m-1)(s) is the preimage of s under c. Thus c is invertible on F.
Like any field homomorphism, c is injective, and now c is surjective. Therefore c is an automorphism on F. Every embedding of F into a larger extension remains an automorphism on F.
If F is not normal then let u lie in F, while v, a conjugate of u, does not. Embed F in its algebraic closure, which I will call E. Build an isomorphism c from K(u) onto K(v). Extend c up to the splitting field of p(x), and then up to all of E. Restrict c to F, and c maps F into E, but does not keep F within F, since c(u) = v. This is a contradiction, hence F is normal over K.
When proving normality, the third of these four criteria is the most common. Start with a field K and adjoin all the roots of p(x). In fact, adjoin all the roots of all the polynomials in a set, even an infinite set. These adjoined roots act as generators. The conjugates of each generator are present - that's the way we built the extension - hence the extension is normal.
The normal closure of an algebraic extension E/K is the smallest extension F/E/K that is normal. If E lives within the context of a larger field, or within its algebraic closure, then F is well defined; it is the intersection of all normal extensions containing E.
If E/K = K(G), for some set of generators G, then bring in all the conjugates of G to build F. By criterion 3 above, F is normal. And nothing less than F would do.
If necessary, let G be all of E. Bring in all the conjugates of E, i.e. build a splitting field for all the polynomials with roots in E. The result is the same field F, the normal closure of E.
Return to the example of Q adjoin the real cube root of 2. This is not normal, because the other cube roots are not present. Bring in the complex cube roots, and the higher field extension of dimension 6 over Q is normal. Every u in this field implies all its conjugates.
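Numerically, the three conjugate roots of x^3 - 2 look like this (a quick check with Python's complex arithmetic):

```python
import cmath

cbrt = 2 ** (1 / 3)                   # the real cube root of 2
w = cmath.exp(2j * cmath.pi / 3)      # a primitive cube root of unity
roots = [cbrt, cbrt * w, cbrt * w**2]
# all three conjugates satisfy x^3 - 2 = 0
assert all(abs(r**3 - 2) < 1e-9 for r in roots)
# two of them are complex, 120 degrees around the plane
assert sum(1 for r in roots if abs(r.imag) > 1e-9) == 2
```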
Let F be normal over K, and let c be any isomorphism from one subfield of F onto another. Let F live in its algebraic closure E. Extend the isomorphism up to an isomorphism from E onto E, which is an automorphism on E. Restrict this automorphism to F, and by criterion 4, the restriction is an automorphism on F. Thus c extends up to an automorphism on F.

The composition of normal extensions need not be normal. The easiest example is the rationals adjoin sqrt(2), adjoin sqrt(sqrt(2)). (The latter is also known as the fourth root of 2.) Adjoining the square root of anything, at any time, is normal, because both roots, u and -u, are included. So the first extension has ±sqrt(2), and the second extension has ±sqrt(sqrt(2)). The latter is a root of x^4-2, which is irreducible by Eisenstein's criterion. So if the entire extension of dimension 4 is normal then this polynomial should split. However, the two complex roots, the fourth root of 2 times i and the fourth root of 2 times -i, are not part of this real extension. You have to toss in i to build a normal extension of dimension 8.
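The four roots of x^4 - 2, and the failure of the real extension to split it, can be seen directly:

```python
r = 2 ** 0.25                          # the real fourth root of 2
roots = [r, -r, r * 1j, -r * 1j]       # all four roots of x^4 - 2
assert all(abs(z**4 - 2) < 1e-9 for z in roots)
# two roots are complex, so the real field Q(r), of dimension 4,
# cannot split x^4 - 2; adjoin i for a normal extension of dimension 8
assert sum(1 for z in roots if abs(z.imag) > 1e-9) == 2
```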
This example shows the difficulty of extending an automorphism up to a higher field that is not a splitting field over the field fixed by the original automorphism. Let c be the automorphism that swaps +sqrt(2) and -sqrt(2). Clearly c fixes Q. Try to extend c to the field of dimension 4 by mapping the fourth root of 2 to one of its conjugates. On the real line there are only two choices, the fourth root of 2 and minus the fourth root of 2. Both of these, when squared, become +sqrt(2), thus the extension of c fixes sqrt(2); however c maps sqrt(2) to -sqrt(2). Thus c cannot be extended up to the second field, even though the second extension is normal over the first. Since c fixes Q, c extends to fields that are normal over Q, as is the case when i is brought in.

An irreducible polynomial with coefficients in K factors uniformly in F/K if all its irreducible factors have the same degree. This leads to yet another criterion for a normal extension.
Assume uniform factoring, and assume an irreducible p(x) has a root u in F. Then x-u is one of the factors of p over F, so by uniformity all the factors of p are linear, hence all the roots of p are in F. This makes F a splitting field for any polynomial with a root in F, hence F is normal.
Conversely, assume F/K is a normal extension. Consider an irreducible polynomial p(x). If F is a splitting field for p(x), it factors uniformly into linear pieces, and we are done. Otherwise let E = F adjoin the roots of p(x). Since all the roots of p(x) are included, E is also normal over K.
Let q and r be any two irreducible factors of p(x), as p is factored over F. Let u be a root of q and let v be a root of r. Remember that u and v are in E, but not in F.
Let L = K adjoin the roots of p(x). Thus L is a subfield of E, and L is also normal.
Intersect both F and E with L. Now F and E are much smaller; in fact both are finite extensions of K. After intersection, E = L, and F is a proper subfield of L. Note that F is still normal, the intersection of two normal extensions.
We would like to know that q and r have not changed. Since p splits in L, and since q is the product of linear factors taken from p, all the coefficients of q lie in L. These same coefficients were present in F, and are still present in F. Intersecting F with L has not changed q. Similarly, intersecting F with L has not changed r. They are still polynomials over F, and still irreducible factors of p(x).
Build F(u) and F(v) by starting with K(u) and K(v), which are isomorphic. Let c be the isomorphism between K(u) and K(v). Extend c to an automorphism on L. Since F is a normal subfield of L, c is an automorphism on F. Also c maps u to v, so c is an isomorphism from F(u) onto F(v).
Now u is a root of q and v is a root of r, both polynomials being irreducible over F. The extensions are isomorphic, so F(u) and F(v) have the same dimension over F. This is the degree of q and the degree of r; both degrees are equal. Since q and r were arbitrary, p factors uniformly.
If p has degree 11, 11 being prime, and p is irreducible over K, and F is normal over K, then p splits in F, or p remains irreducible over F. An example is x^11-2, which is irreducible over Q. Adjoin i to the rationals, a normal extension, and p splits over Q[i], or it remains irreducible. In fact it remains irreducible, for an extension of dimension 2 cannot contain an extension of dimension 11, as would be the case if Q[i] brought in even one root of x^11-2.

The ideas presented in this section can be applied to fields or rings, though they are usually introduced as part of field theory. I'll try to present the subject in both contexts simultaneously; I hope it's not too confusing.
An element v is algebraic over the base field K iff it satisfies a polynomial with coefficients in K. Failing this, v is transcendental. These definitions remain valid for K a commutative ring.
A set of elements S in a ring/field extension F/K is algebraically independent if there is no nonzero multivariable polynomial p(x1,x2,…xn), with coefficients in K, that, when evaluated at s1, s2, … sn, gives 0. Polynomials constructed from the elements of S are never 0, nor are they equal, else their difference would be a polynomial that evaluates to 0. Therefore, the algebraically independent elements of S act just like so many indeterminates, and generate a polynomial ring K[S] inside F. This is very much like linearly independent elements in a basis.
As an example, consider π and e in the reals. These are algebraically independent over the rationals, though we don't have the machinery to prove that just yet. This means any polynomial, evaluated at π and e, cannot come out 0. The following is impossible.
17π^3e^2 - 5π^2e^2 + 14π + 3e^7 + 2e - 19 = 0
The polynomials in π and e are just like polynomials in indeterminants s and t, with s set to π and t set to e. Each polynomial is unique, and each polynomial evaluated at π and e is unique, because π and e are algebraically independent. It is your choice whether you want to think of this as polynomials in s and t, entirely symbolic, or as a ring inside the reals generated by π and e. The structure is the same.
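Numerically, the displayed combination misses zero by a wide margin, as algebraic independence predicts (a sketch; the coefficients are just the ones from the example above):

```python
import math

pi, e = math.pi, math.e
value = 17 * pi**3 * e**2 - 5 * pi**2 * e**2 + 14 * pi + 3 * e**7 + 2 * e - 19
# no nonzero polynomial with rational coefficients vanishes at (pi, e);
# this particular one is nowhere close to 0
assert abs(value) > 1000
```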
Continuing this example, a rational function such as
(π^2 + 4e - 1) / (7e^3 + 6πe - 9π - 3)
cannot come out 0, for then the numerator, which is a polynomial, would have to equal 0. Let's explore this in general.
If K is a field, or a ufd that we might want to embed into a field, like Z into Q, polynomials extend to rational functions, i.e. quotients of polynomials. If a rational function in s1 s2 s3 … sn is 0, its numerator (a polynomial) has to be 0, and that is a contradiction. Nor can distinct fractions of polynomials be equal. The field generated by algebraically independent elements is K(S) inside F. (In this process, K is upgraded to a field if necessary.)
If v is algebraic over K[S], it is algebraic over K(S). That's pretty obvious, but how about the converse? Assume v is algebraic over K(S) via some polynomial p. Each coefficient of p is a quotient of polynomials from K[S]. Multiply through by a common denominator, and find a polynomial in v with coefficients in K[S] that equals 0. This makes v algebraic over K[S]. It doesn't matter whether S generates a ring or a field; the algebraic elements over K[S] or K(S) are the same.
Let a/b be a quotient of polynomials living in K(S). This is of course algebraic over K(S), in fact it belongs to K(S). By the above it is algebraic over K[S]. In particular, a/b solves the polynomial bx - a = 0. Thus there are no new independent elements in K(S). S still does the trick.
All of F is algebraic over K[S] iff F is algebraic over K(S). If S is a maximal independent set in F, generating polynomials K[S], then S is a maximal independent set generating rational functions K(S) in F.
Every permutation of the elements of S extends to a ring/field automorphism of K[S] or K(S) respectively. This can be extended to an automorphism on the algebraic elements above K(S).
Verify that u is transcendental over K[S] iff S ∪ u is an algebraically independent set. A polynomial p(u) over K[S] implies a polynomial p(S,u) over K, and conversely.
You can make the same claim for fields: u is transcendental over K(S) iff S ∪ u is an algebraically independent set. If u is transcendental over K(S) it is transcendental over K[S]. Adjoin u to K[S], giving the polynomial ring K[S][u], whence S and u together are algebraically independent over K. Conversely, assume independence, and suppose u is algebraic over K(S). Thus p(u) is 0 for some nonzero polynomial p with coefficients in K(S). Multiply through by the common denominator and find a polynomial in S and u that is equal to 0. This is a contradiction.
In either rings or fields, algebraically independent sets grow by adding transcendental elements.
Build an ascending chain of independent sets by adding transcendental elements. The union of such a chain remains independent. Apply Zorn's lemma to find a maximal algebraically independent set. This is called a basis, and it spans the transcendent space of the ring or field extension. If anything above the transcendent space were transcendental, it could be added to the basis to make a larger independent set. The basis is maximal, hence everything above is algebraic over the transcendent space, like the icing on the cake.
We call K[S] or K(S) a transcendent space because, when K is a field, the transcendent space is a vector space built from transcendental elements. Even when K is a ring, K[S] is a free K module, spanned by products of powers of indeterminates drawn from S.
The field extension F/K does not have one transcendent space; it may have many. I will illustrate with K(x), quotients of polynomials in x. Instead of x, adjoin x^2, giving a smaller transcendent space. This is a smaller cake, and it has more icing. The element x is algebraic over x^2, namely its square root. The field F can be entirely transcendental, or it can be described as a transcendental extension followed by an algebraic extension.
It may not be possible to get rid of the icing on the cake, to make F entirely transcendental. Start with K(x), where x is an indeterminate, and adjoin v1, the square root of x. Then adjoin v2, the square root of v1. Then adjoin v3, the square root of v2. Continue this forever, building a field F that is algebraic over K(x). The intermediate field generated by vj looks like the polynomials, and quotients thereof, in vj, with coefficients in K. This brings in x, and K(x). We'll see below that a ring or field based on one indeterminate, such as x, cannot contain two algebraically independent elements. You can select one and only one indeterminate, x or something else. Let w be some other transcendental element in F. Now w lies in one of these intermediate subfields, generated by vj, for some j. It is trapped in the jth extension, and cannot generate all of F. The rest of F is an infinite algebraic extension over K(w). F always consists of a transcendent space in one indeterminate, and an infinite algebraic extension on top.
If the dimension of a transcendent space is the size of its basis, i.e. the number of indeterminates, does F/K have the invariant dimension property? Remember, there are many ways to build a transcendent space within F.
Consider the finite case first. Let the set A, consisting of a1 a2 a3 … an, be a maximal independent set, a basis for a transcendent space within F/K, and suppose B is any set of algebraically independent elements b1 b2 b3 etc, with more than n elements. This is very much like the corresponding proof for vector spaces, which was presented earlier in the section on Basis and Dimension. Replace a1 with b1, a2 with b2, and so on, until we run out of A.
Write b1 as the root of a polynomial over K[A], which is possible since everything in F is algebraic over K(A). Renumber elements, so that the polynomial with root b1 contains a1. Include b1, instead of a1, in the basis, and leave everything else the same, and a1 becomes algebraic over K[b1,a2,a3,…an]. With a1 in hand, all of A is at our disposal, and the rest of F remains algebraic. This does not guarantee b1 is independent from the rest of A. There may be some polynomial satisfied by b1, a7, and a9. That's ok; b1 is transcendental, and the remaining generators from A are algebraically independent, and together they span a subfield that leaves only icing - only algebraic elements in F.
Next write b2 as the root of a polynomial with coefficients in K[b1,a2,a3,…an]. This polynomial must involve one of a2 through an, since b1 and b2 are algebraically independent over K. Renumber so that it contains a2, and replace a2 with b2. The resulting set has a2 algebraic, then a1 algebraic, whence the rest of F remains algebraic. This reproduces the conditions described after we replaced a1 with b1. Continue this process through b3, b4, and so on up to bn, until every element of A has been replaced. Now everything in F is algebraic over K(b1,b2,…bn), so bn+1 is both algebraic and transcendental over its predecessors. This is a contradiction.
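The replacement procedure can be tried on a small concrete case. Here is a sketch with sympy, under my own choice of example: A = {x, y} is a transcendent basis for Q(x,y), and B = {x+y, xy} is another algebraically independent pair; the names b1 and b2 mirror the text.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
b1 = x + y      # plays the role of b1 in the text
b2 = x * y      # plays the role of b2

# b1 is a root of t - (x + y), a polynomial over Q[x, y] that involves x,
# so exchange x for b1: x = b1 - y is algebraic over Q(b1, y)
assert sp.expand((t - (x + y)).subs(t, b1)) == 0

# b2's polynomial over Q[b1, y] involves y, so exchange y for b2:
# y is a root of t^2 - b1*t + b2 over Q(b1, b2)
assert sp.expand((t**2 - b1*t + b2).subs(t, y)) == 0
```

With both replacements made, x and y are algebraic over Q(b1, b2), so no third element of B could be independent of the first two, just as in the proof.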
If one basis is finite, with n elements, it fixes the transcendent degree for all of F/K; every maximal transcendent space within F has degree n.
What happens when A and B are infinite? Again, the proof is very similar to the one for infinite bases of a vector space. Each bi is algebraic over K[A]. Thus bi is the root of a polynomial with coefficients in K[A]. This uses a finite set of indeterminates from A. Let bi correspond to this finite set. If this set has size n, it cannot span a transcendent space whose algebraic closure includes more than n elements from B; we just proved that in the last paragraph. Thus the size of B is no larger than the sum, over all n, of n times the number of tuples of length n drawn from A. This is the size of A. With B no larger than A, and A no larger than B, the size of the basis is determined.
Let F be an extension of E, which is an extension of K. Let x1 x2 x3 etc form a transcendent basis X for E over K, and let y1 y2 y3 form a transcendent basis Y for F over E. We will show that X∪Y gives a transcendent basis for F over K. Suppose p is a polynomial in X∪Y that is equal to 0. Collect terms, so that p becomes a polynomial in Y with coefficients in K[X], which lies in E. Since the yi are algebraically independent over E, every coefficient is 0. Each coefficient is in turn a polynomial in X over K, and since the xi are algebraically independent over K, all of its terms are 0. Thus p is the zero polynomial, and the union is algebraically independent.
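The collect-and-compare step can be illustrated with sympy; a sketch under my own small example, with X = {x}, Y = {y}, and a sample polynomial p:

```python
import sympy as sp

x, y = sp.symbols('x y')

# a polynomial in X ∪ Y, viewed as a polynomial in y
# with coefficients in K[x], which lies in E
p = sp.Poly(3*x**2*y**2 - x*y + 7, y)
coeffs = p.all_coeffs()
# if p were identically 0, independence of y over E would force each
# coefficient to vanish; each coefficient is a polynomial in x over K,
# whose terms would in turn vanish by independence of x
assert coeffs == [3*x**2, -x, 7]
```

Collecting in Y reduces the question to the independence of X over K, which is the heart of the argument.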
If X is the set of indeterminates in E, and A is the set of follow-on algebraic elements, the icing on the cake, then E = K(X,A). Everything in F is algebraic over E(Y), hence over K(X,Y,A), hence over K(X,Y), since A is algebraic over K(X). This makes X∪Y a maximal set of independent elements, i.e. a basis. The transcendent degree of F/K is the transcendent degree of E/K plus the transcendent degree of F/E. This is different from vector spaces, where dimensions are multiplied.
A larger vector space cannot fit inside a smaller one, and the same holds for transcendent spaces. Let F/K have transcendent degree n, and suppose an intermediate field E has a higher transcendent degree over K, possibly infinite. Extend a transcendent basis of E/K to a transcendent basis of F/K. Now F has degree beyond n, which is a contradiction.