Let R be a ring, and adjoin x, but let x*c = σ(c)*x, where c is a coefficient drawn from R, and σ is a ring endomorphism of R. This extension is called a twisted ring.

A polynomial is in normal form if its coefficients appear on the left. Multiply any three terms together and verify associativity. Each coefficient runs through σ a certain number of times, according to the powers of x on its left.

ax^k * bx^l * cx^m = aσ^k(b)σ^(k+l)(c)x^(k+l+m)

Apply the above to all terms in the expanded product of three polynomials, and multiplication becomes associative.
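
To see the bookkeeping in action, here is a minimal Python sketch of twisted multiplication (the helper names are mine), using complex coefficients with σ = complex conjugation as the twist; the three polynomials are arbitrary.

# A polynomial in normal form is a dict mapping exponent -> coefficient.
# The twist sigma is complex conjugation, which has period 2.
def sigma(c, k=1):
    return c.conjugate() if k % 2 else c

def tmul(p, q):
    # (a x^k)(b x^l) = a sigma^k(b) x^(k+l)
    out = {}
    for k, a in p.items():
        for l, b in q.items():
            out[k+l] = out.get(k+l, 0) + a * sigma(b, k)
    return {e: c for e, c in out.items() if c != 0}

p = {0: 1+2j, 1: 3-1j}
q = {0: 2j, 2: 1+1j}
r = {1: -1+4j, 3: 5}
assert tmul(tmul(p, q), r) == tmul(p, tmul(q, r))   # associativity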

Verify the distributive property, on the left and on the right. This works because σ(a+b) = σ(a) + σ(b). The other properties are straightforward. Therefore the twisted extension is a ring.

If σ is the identity map, R commutes with x, giving the standard polynomials R[x].

Since ring homomorphisms map 1 to 1, σ is never the zero map. If σ(1) were 0 then x = x*1 = σ(1)*x = 0*x = 0, and we would be back to R.

If σ is not the identity map, the ring is noncommutative. Of course, R could be noncommutative to begin with.

When R has characteristic p, σ can be set to the frobenius homomorphism, mapping c to c^p.
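
The frobenius map is additive because the middle binomial coefficients of (a+b)^p are divisible by p. A quick sanity check in Python, over the integers mod 7 (any prime would do):

# (a+b)^p = a^p + b^p mod p, so c -> c^p is a ring homomorphism in characteristic p
p = 7
for a in range(p):
    for b in range(p):
        assert pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p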

Another extension is the twisted series, which is based on the formal power series R[[x]]. Once again xc = σ(c)x. If there is any ambiguity, I will use the term "twisted polynomials" for finite polynomials, and "twisted series" for infinite series.

Twisted laurent series are also possible, wherein each series can have finitely many terms with negative exponents. This assumes σ is an automorphism, so that constants can be pulled past negative powers of x. These rings are not fundamentally different from the twisted series, so I won't spend a lot of time on them. Just remember that a theorem regarding twisted series can usually be generalized to twisted laurent series, assuming σ is invertible.

If R is commutative, and there is no twist, units and nilpotents are well characterized. This section extends some of those results to the noncommutative world.

If σ has a nontrivial kernel, then for c in that kernel, xc = 0. In other words, x is a zero divisor. This is usually not what we want, so for the rest of this section, assume σ is injective.

If R is a domain, consider the lowest term in the product of two polynomials or two series. This looks like c*σ^k(d), which is always nonzero. Therefore the twisted extension of a domain is a domain.

Next let σ be an automorphism with a fixed period, such that σ^k is the identity map. Let R be commutative, or at least, let the nilpotent elements of R commute past each other. If c^n = 0 in R, expand (cx)^(kn) into cσ(c)σ^2(c)…σ^(kn-1)(c) times x^(kn). Since σ^k is the identity, at least n of these factors equal c; gather them together and the coefficient drops to 0. (Actually, (cx)^(kn-k+1) will do.) Thus cx is nilpotent. A similar result holds for c times any power of x.

Now consider a polynomial p with nilpotent coefficients. If p has l terms, expand p^n to find l^n products, as each of the l terms is placed in each of n positions. The trick is to choose n high enough. Let m be an exponent that kills every one of the l coefficients of p, so that c^m = 0 for each coefficient c. Then set n = mlk. By pigeonhole, at least mk instances of one of the terms of p appear in each product. Pull all these coefficients to the left; since powers of σ only matter mod k, at least m of them are run through the same power of σ. When σ^j(c) is raised to the m, the result is 0, just like c^m. After all, σ is an automorphism. Each product drops to 0, and p is nilpotent. With these stipulations in mind, nilpotent coefficients produce nilpotent polynomials.

You might be tempted to bring in another indeterminate y, with its periodic automorphism τ, and you can do that, as long as σ and τ commute. Let S be the twisted ring R[x], and show that τ is a periodic automorphism on S.

τ(x) * τ(c) = xτ(c) = σ(τ(c))x

τ(xc) = τ(σ(c)x) = τ(σ(c))x

This works because σ and τ commute. Apply the earlier result, and polynomials in y are nilpotent if their coefficients are nilpotent in S. Thus polynomials in R[x,y] are nilpotent if all coefficients are nilpotent in R.

In a twisted ring, a polynomial can be nilpotent without nilpotent coefficients. Let R = Z[u,v], with uv = 0. Let σ swap u and v. Now ux is a nilpotent polynomial, even though u is not nilpotent in R.

Add vx, another nilpotent polynomial, and get (u+v)x. Raise this to the nth power and get (u^n+v^n)x^n. Thus the sum of nilpotent polynomials need not be nilpotent.
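
Here is a small Python sketch of this example, with R = Z[u,v] mod uv = vu = 0 encoded by hand (positive keys hold powers of u, negative keys hold powers of v; the encoding and helper names are mine). It confirms that (ux)^2 = 0 while (u+v)x is not nilpotent.

# An element of R is a dict: key n > 0 is the coefficient of u^n,
# key n < 0 the coefficient of v^(-n), key 0 the constant term.
# sigma swaps u and v, i.e. negates the keys.

def rmul(r, s):
    out = {}
    for m, a in r.items():
        for n, b in s.items():
            if m == 0 or n == 0 or (m > 0) == (n > 0):   # mixed u,v terms vanish
                out[m+n] = out.get(m+n, 0) + a*b
    return {k: v for k, v in out.items() if v}

def radd(r, s):
    out = dict(r)
    for k, v in s.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def swap(r):                     # sigma
    return {-k: v for k, v in r.items()}

def tmul(p, q):                  # twisted multiply; p, q map exponent -> element of R
    out = {}
    for k, a in p.items():
        for l, b in q.items():
            c = b
            for _ in range(k):   # push b through sigma^k
                c = swap(c)
            out[k+l] = radd(out.get(k+l, {}), rmul(a, c))
    return {e: c for e, c in out.items() if c}

u, v = {1: 1}, {-1: 1}
ux = {1: u}                      # the polynomial u*x
s  = {1: radd(u, v)}             # the polynomial (u+v)*x

assert tmul(ux, ux) == {}        # (ux)^2 = u*sigma(u)*x^2 = uv*x^2 = 0
w = s
for _ in range(4):
    w = tmul(w, s)
assert w == {5: {5: 1, -5: 1}}   # ((u+v)x)^5 = (u^5 + v^5)x^5, not zero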

Since σ commutes with itself (σ taking the place of τ above), σ becomes an endomorphism on the twisted ring R[x]. If σ is injective on R, as is often the case, then σ is injective on R[x]. If σ is an automorphism of R then it extends to an automorphism of R[x]. Apply this to a nilpotent polynomial and find another one of the same order. Apply this to a unit and find another unit.

The units of R are still units in the twisted extension. This is because R is a subring of its extension.

If a polynomial or series is a unit, its constant coefficient must be a unit.

If R is a domain, the highest terms in two polynomials combine to create a nonzero term in the product. Therefore the units of the extension are precisely the units of R.

As in any other ring, 1-w is a unit when w is nilpotent. Divide 1 by 1-w using synthetic division, and the process stops.
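
As a quick illustration, here is a Python check using a nilpotent matrix as a stand-in for w; the identity (1-w)(1 + w + … + w^(n-1)) = 1 is the same in any ring, including the twisted extension.

import numpy as np

w = np.array([[0, 1, 3],
              [0, 0, 2],
              [0, 0, 0]])                  # strictly upper triangular, so w^3 = 0
one = np.eye(3, dtype=int)
inverse = one + w + w @ w                  # the quotient produced by synthetic division
assert ((one - w) @ inverse == one).all()  # (1-w)(1 + w + w^2) = 1 - w^3 = 1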

In the commutative world, a unit polynomial is always some unit in R plus a nilpotent polynomial, the converse of the above, but this is not the case here. We don't even need a twist to demonstrate this fact; simply choose an appropriate noncommutative base ring R. Start with Z and adjoin the indeterminate u, giving the polynomials in u. Then adjoin 1/u, so that u is a unit in R. Adjoin the indeterminate v, such that u and v do not commute. Then let v^n = 0. Now 1-vx is a unit, since vx is nilpotent. Multiply by u to get another unit, u-uvx. But uvx is not nilpotent, since u and v do not commute.

Let σ be an R endomorphism, building the twisted series over x. (Here σ need not be injective.) As mentioned earlier, a series has to start with a unit to be a unit. Premultiply by the inverse of this unit, so that our series f(x) starts with 1. Divide this into 1 by synthetic division. The quotient, g(x), begins with 1. Subtract 1*f(x) from 1, and the linear term of the "remainder" is -a_1x, where a_1 is the linear coefficient of f. This is the next term of g; multiply it by f(x) on the right and subtract. The remainder begins with a squared term, and the process continues from there. This builds g(x), such that g*f = 1. By symmetry, f is also right invertible, hence f is a unit. A series is a unit iff its constant term is a unit.

You know in your heart that the left inverse has to be the right inverse, but it's hard to believe they come out the same as they pass through σ. They do however. The inverse of 1 + ax + bx^2 + cx^3 … starts out:

1 - ax + {aσ(a)-b}x^2 + {aσ(b) - aσ(a)σ^2(a) + bσ^2(a) - c}x^3 + …
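
Here is a Python sketch of the synthetic division (the helper names are mine), with σ = complex conjugation; it builds the left inverse from the recurrence and checks the first few coefficients against the formula above.

def sigma(c, k=1):
    return c.conjugate() if k % 2 else c

def left_inverse(f, terms):
    # f[i] is the coefficient of x^i, with f[0] = 1.  Build g with g*f = 1 by
    # matching the coefficient of x^n:  sum over j of g[j]*sigma^j(f[n-j]) = 0.
    g = [1]
    for n in range(1, terms):
        g.append(-sum(g[j] * sigma(f[n-j], j) for j in range(n)))
    return g

a, b, c = 2+1j, -1+3j, 4-2j
g = left_inverse([1, a, b, c], 4)
assert g[1] == -a
assert g[2] == a*sigma(a) - b
assert g[3] == a*sigma(b) - a*sigma(a)*sigma(a, 2) + b*sigma(a, 2) - c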

A twisted laurent series is invertible iff it starts with a unit.

If R is noetherian, and x is adjoined without a twist, the resulting ring R[x] is noetherian, but the proof is not obvious. Please take some time to review hilbert's basis theorem. Once you are comfortable with the proof, you can extend it to twisted rings.

σ must be a ring automorphism, thus a coefficient can pass through x from left to right by reversing the action of σ. It is now a matter of convention whether we keep coefficients on the left or on the right.

If you want the extended ring to be left noetherian, then R must be left noetherian, and the word ideal becomes shorthand for left ideal. I will assume R is right noetherian, and prove the same for the extended ring. In support of this, coefficients will be kept on the left of x as usual.

Given a right ideal W in the twisted polynomial ring R[x], let H be the set of lead coefficients of the polynomials in W. Is H in fact a right ideal? Given two polynomials in W, multiply the lesser polynomial by a power of x on the right, which lines up the lead coefficients. Add these together; the sum is in W, so the sum of the lead coefficients is in H. Thus H is closed under addition. If a polynomial has degree 7, multiply it on the right by σ^-7(c) to multiply its lead coefficient by c. This makes H a right ideal.

Select generators p_1 through p_n, having lead coefficients a_1 through a_n, as described in hilbert's basis theorem. Given a polynomial q(x) in W, let the sum of a_ic_i equal the lead coefficient of q. But remember, powers of x are required to make things line up. Beyond this, each c_i must be pulled back through σ, according to the degree of p_i. Since σ is an automorphism, there is no trouble. Each p_i can be multiplied by a constant, and then an appropriate power of x, whereupon their sum kills off the lead term of q. Repeat this until the degree of q is less than m, the highest degree among p_1 through p_n.

The polynomials of degree m-1, living in W, have lead coefficients that form a finitely generated ideal. This implies a finite set of polynomials capable of killing off the lead term. Do the same for degree m-2, m-3, and so on down to the constant polynomials of W. This builds a finite set of generators that span W as a right ideal in R[x], and that completes the proof.

In the same fashion, the twisted series over R is right noetherian. Review the corresponding proof for formal power series, and modify it for σ as above.

This theorem, for polynomials or series, can be invoked again and again, adjoining several indeterminates, provided the twist at each level is an automorphism on the prior ring.

The 0 twist (not an automorphism) does not produce a noetherian ring. Adjoin x without a twist, and let that be the base ring. R is noetherian, so R[x] is noetherian. Adjoin y, with a twist σ that maps x to 0, so that yx = σ(x)y = 0. This σ is not injective. A polynomial in normal form has coefficients from R, times powers of x, times powers of y. After multiplication, any term containing yx disappears.

Let xy generate the first left ideal. Anything containing y, times xy, is 0, so the ideal consists of polynomials whose terms end in a positive power of x times y; in particular xy^2 is not included. Bring in xy^2 for the next ideal, and xy^3 for the next, and so on. This is a properly ascending chain, hence the polynomial ring is not left noetherian. Note that the union of this chain is the two sided ideal generated by xy, which is still a proper ideal, not containing 1.

Let σ be an automorphism that twists the polynomials in R[x]. Remember that a normalized polynomial has its coefficients on the left. Let q(x) be such a polynomial. Evaluate q at some z in R, and we are working entirely within the ring R. The twist does not come into play. Therefore the roots of q can be found by traditional means. However, x-z need not divide q(x), either on the left or the right.

For a simple example, let R be the finite field of order 9. Start with the integers mod 3, and adjoin i, where i^2 = -1. Let σ be conjugation, mapping i to -i, and build the twisted ring accordingly.

Let q = x^2+1. The roots are i and -i. If q is the product (x+a)*(x+b), then expanding gives x^2 + (a+σ(b))x + ab, so a+σ(b) = 0 and ab = 1. Substitute a = -σ(b), and b times its conjugate = -1. This means b is ±1±i. None of these is the root i or -i.

The quadratic q(x) factors in two different ways, and each linear factor is irreducible. The twisted ring does not exhibit unique factorization.

(x + i+1) * (x + i-1)
(x + -i+1) * (x + -i-1)
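
These factorizations are easy to confirm by machine. A small Python check (the helper names are mine), representing GF(9) as pairs (a,b) = a + bi with arithmetic mod 3 and σ = conjugation; recall (x+a)(x+b) = x^2 + (a+σ(b))x + ab in the twisted ring.

def add(p, q): return ((p[0]+q[0]) % 3, (p[1]+q[1]) % 3)
def mul(p, q): return ((p[0]*q[0] - p[1]*q[1]) % 3, (p[0]*q[1] + p[1]*q[0]) % 3)
def conj(p):   return (p[0], -p[1] % 3)

zero, one, minus_one = (0, 0), (1, 0), (2, 0)
factorizations = [((1, 1), (2, 1)),    # (x + i+1)(x + i-1)
                  ((1, 2), (2, 2))]    # (x + -i+1)(x + -i-1)
for a, b in factorizations:
    assert add(a, conj(b)) == zero       # coefficient of x vanishes
    assert mul(a, b) == one              # constant term is 1
    assert mul(b, conj(b)) == minus_one  # b times its conjugate is -1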

Let D be a division ring and let σ be an automorphism on D. Given two polynomials p and q, use synthetic division to divide the smaller into the larger, on the left or on the right, leaving a remainder. As you run through the process, you will need to push certain values of D through the inverse of σ, to compensate for the twist that occurs when you multiply by a power of x. That is why σ has to be an automorphism.

Let g(x) be a polynomial that divides p and q on the left. If p = qh+j, thinking of h as quotient and j as remainder, then g divides j on the left. Conversely, if g divides j and q then g divides p. Use euclid's gcd algorithm to find g, the greatest common left divisor of p and q. This can be extended to a finite set of polynomials, giving the left gcd for the entire set.
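
The following Python sketch carries this out over D = GF(9) (pairs a + bi mod 3) with σ = conjugation: right division with remainder, pushing quotient coefficients through σ^-1, followed by euclid's algorithm for the left gcd. The sample polynomials and helper names are mine; they are built to share the left factor x + (1+i).

def add(s, t): return ((s[0]+t[0]) % 3, (s[1]+t[1]) % 3)
def neg(s):    return (-s[0] % 3, -s[1] % 3)
def mul(s, t): return ((s[0]*t[0] - s[1]*t[1]) % 3, (s[0]*t[1] + s[1]*t[0]) % 3)
def conj(s):   return (s[0], -s[1] % 3)
def inv(s):    # (a+bi)^-1 = (a-bi)/(a^2+b^2); mod 3 a nonzero norm is its own inverse
    n = (s[0]*s[0] + s[1]*s[1]) % 3
    return mul(conj(s), (n, 0))

ZERO = (0, 0)
def sig(c, k): return conj(c) if k % 2 else c       # sigma^k; period 2
def deg(p):    return max((i for i, c in enumerate(p) if c != ZERO), default=-1)

def tmul(p, q):                                     # (a x^k)(b x^l) = a sig^k(b) x^(k+l)
    out = [ZERO] * (len(p) + len(q))
    for k, a in enumerate(p):
        for l, b in enumerate(q):
            out[k+l] = add(out[k+l], mul(a, sig(b, k)))
    return out

def psub(p, q):
    n = max(len(p), len(q))
    p, q = p + [ZERO]*(n-len(p)), q + [ZERO]*(n-len(q))
    return [add(a, neg(b)) for a, b in zip(p, q)]

def rdiv(p, q):                                     # p = q*h + r with deg r < deg q
    h, r, n = [ZERO]*len(p), list(p), deg(q)
    while deg(r) >= n:
        d = deg(r) - n
        # lead term of q*(c x^d) is q[n]*sigma^n(c) x^(n+d); solve for c,
        # pulling through sigma^-n (= sigma^n here, since the period is 2)
        c = sig(mul(inv(q[n]), r[deg(r)]), n)
        h[d] = c
        r = psub(r, tmul(q, [ZERO]*d + [c]))
    return h, r

def left_gcd(p, q):                                 # last nonzero remainder
    while deg(q) >= 0:
        p, q = q, rdiv(p, q)[1]
    return p

g = [(1, 1), (1, 0)]                                # g = x + (1+i)
p = tmul(g, [(2, 1), (0, 2), (1, 0)])               # p = g * (x^2 + 2i x + (2+i))
q = tmul(g, [(0, 1), (1, 0)])                       # q = g * (x + i)
d = left_gcd(p, q)
assert deg(d) == 1                                  # the gcd is g, up to a unit
assert deg(rdiv(d, g)[1]) < 0                       # g left-divides the gcd
assert deg(rdiv(p, d)[1]) < 0 and deg(rdiv(q, d)[1]) < 0   # gcd left-divides p and q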

By symmetry, you can take quotients on the left to find the right gcd.

Let W be a right ideal. Since D[x] is right noetherian, W is finitely generated. Let g be the left gcd of the generators of W. Since euclid's gcd algorithm takes place entirely within W, g is in W. Thus every right ideal is principal, and every left ideal is principal. This is a noncommutative pid.

This is not the first time we have encountered such a ring. Recall the half quaternions from chapter 5.

Let W be a two sided ideal with left generator g and right generator h. Either will generate W as a two sided ideal. Perhaps we can use this to get a handle on the two sided ideals.

Let q be a monic polynomial of degree n, such that the left ideal generated by q equals the two sided ideal generated by q. If q is a constant it is a unit, and the ideal is the entire ring. If q is a power of x then its left ideal is automatically two sided; every term of every polynomial in that ideal has x to that power or higher.

Assume q has at least two terms. For any c in D, qc is some left multiple of q. Comparing degrees, qc = bq for some b in D. Multiplying by c on the right multiplies the coefficient of x^i by σ^i(c); matching this against bq, if q contains x^i then σ^i(c) = b. This holds for every degree i present in q, and for every c in D. The order of σ must therefore divide the difference in degree between any two terms of q. This is trivially satisfied if σ is the identity map, but then there isn't any twist. Thus the automorphism σ has some order k in the group of automorphisms of D, and all exponents present in q are equal mod k.

Next notice that qx has to equal fq, where f is linear. If f includes a constant then fq has a lowest term that is not present in qx. Therefore f is some unit u times x. Run all the coefficients through σ, premultiply by u, and get the original coefficients back again. Since the lead coefficient is 1, u = 1. Therefore, σ fixes the coefficients of q.

Conversely, suppose q(x) satisfies these conditions. (The base ring need not be a division ring.) Multiply by c on the right, and q is multiplied by some other constant on the left. Multiply by x on the right, and that is the same as multiplying by x on the left. Everything in the right ideal is in the left ideal. In this case q is called a strong generator. It is reasonable to mod out by q(x), producing a quotient ring.
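
A small Python check of a strong generator over the complex numbers with σ = conjugation (period 2); the particular q is my own choice, with real coefficients and only even exponents, and the helper names are mine.

def sigma(c, k=1):
    return c.conjugate() if k % 2 else c

def tmul(p, q):                        # dicts exponent -> coefficient
    out = {}
    for k, a in p.items():
        for l, b in q.items():
            out[k+l] = out.get(k+l, 0) + a * sigma(b, k)
    return {e: c for e, c in out.items() if c != 0}

q = {4: 1, 2: 2, 0: 3}                 # exponents equal mod 2, coefficients fixed by sigma
c, x = {0: 2+3j}, {1: 1}
assert tmul(q, c) == tmul(c, q)        # q*c lands back in the left ideal generated by q
assert tmul(q, x) == tmul(x, q)        # q*x = x*q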

If σ has infinite order, q can have only one term. Thus the ideals are generated by the powers of x.

If there is no twist, every polynomial acts as a strong generator for its two sided ideal.

If the base ring is a field F, the extension is a noncommutative dedekind domain. I'm getting ahead of myself here, because dedekind domains have not yet been introduced. The fundamental property of a dedekind domain is: every proper nonzero ideal is a unique product of prime ideals. This is yet another form of unique factorization. And of course a dedekind domain is commutative by definition, but if we waive that restriction, a twisted ring over F is dedekind.

If there is no twist, the result is F[x], which is a pid. Each ideal is principal, the product of two ideals is generated by the product of their generators, ideals and monic polynomials correspond 1 for 1, polynomials exhibit unique factorization, and prime polynomials correspond to prime ideals. Put this all together and ideals exhibit unique factorization. Following the same proof, every pid is a dedekind domain. When there is no twist, F[x] is a pid, and F[x] is dedekind. Setting this aside, assume σ is a nontrivial automorphism on F.

Since any ideal can be considered left or right, the product of two ideals is generated by the product of their strong generators.

The generator x is clearly maximal and prime. Divide out all powers of x from the other ideals, i.e. from the other strong generators. A strong generator now has a constant as its lowest term. Every other term is divisible by x^k, where k is the order of σ. Replace x^k with y, so that generators become polynomials in y. Now we can ignore the twist. The result, F[y], is a commutative pid. Following the generators in F[y], the ideals exhibit unique factorization. Bring x back in, and the twisted ring is a dedekind domain.

If the twist is not periodic, x is the only maximal / prime ideal, and the powers of x generate the only proper ideals.

Many of the properties of a dedekind domain apply, including the oft quoted, "To contain is to divide." Let u and v strongly generate two ideals. If v divides u, that is, u = zv for some z, then the ideal {v} contains the ideal {u}. Conversely, if {v} contains {u} then u lies in {v}, and u = zv for some z.

If u and v span the same ideal then they span each other, u = zv, v = yu, u = zyu, and zy = 1. Strong generators are associates iff they generate the same ideal.

If w is nonzero, let u strongly generate the ideal {w}. Every ideal containing w contains u. To contain is to divide, and by unique factorization, finitely many ideals divide/contain {u}. Therefore a nonzero element belongs to finitely many ideals.

The union and intersection of two ideals correspond to the gcd and lcm of their generators, respectively. Here the union means the smallest ideal containing both.

Let u and v generate two ideals, and let w generate their intersection. Since {u} and {v} contain {w}, w is a multiple of both u and v. If w is not the least common multiple, then some other ideal contains {w} and lies in both {u} and {v}. This contradicts the definition of intersection, hence w = lcm(u,v).

Let w generate the union of {u} and {v}. By containment, w is a factor of u and v. A larger factor would generate a smaller ideal that still contains {u} and {v}. This contradicts the definition of union, hence w = gcd(u,v).

As a corollary, two ideals are coprime, i.e. they span the entire ring, iff their strong generators are coprime, i.e. the gcd is a unit.

Although the chinese remainder theorem is usually applied to commutative rings, it is valid for a noncommutative ring, as long as the intersection of coprime ideals equals the product.

Let u be a strong generator for the ideal {u}. Since {u} is actually a two sided ideal, the ring can be reduced to cosets of {u}, which is the quotient ring. Let y be an arbitrary element, or polynomial if you prefer, in our twisted ring. The canonical cosrep is found by dividing y by u and taking the remainder, also known as y mod u. Since y maps to a specific value in the quotient ring, there can be only one value of y mod u. It doesn't matter whether you divide u on the left or the right. You may get a different quotient y/u, but you'll get the same remainder.

Let t be a composite polynomial, the product of coprime factors w_i. Assume t, and each w_i, is a strong generator for its ideal. Since the factors are coprime, the product t is also the least common multiple. As described in the previous section, the least common multiple generates the intersection. The product of the coprime ideals equals their intersection, and the chinese remainder theorem applies. The polynomials mod t are the direct product of the polynomials mod w_i.

Extend the reals to the complex numbers in the usual way. Then adjoin x, with complex conjugation as a twist. This is one of our twisted dedekind rings.

Let q(x) = x^2+1. Note that σ has period 2, and the exponents of q are equal mod 2. Also, the coefficients are fixed by σ. Therefore q is a strong generator. Its left ideal is its right ideal is its two sided ideal. Elements of the quotient ring can be computed by reducing polynomials mod q. In other words, replace each instance of x^2 with -1. Relabel x as j, and ix as k, and you have the quaternions.
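
A short Python sketch of this construction (the helper names are mine): complex coefficients, conjugation as the twist, and reduction mod x^2 + 1; the asserts confirm the familiar quaternion relations.

def sigma(c, k=1):
    return c.conjugate() if k % 2 else c

def tmul(p, q):                          # dicts exponent -> complex coefficient
    out = {}
    for e, a in p.items():
        for f, b in q.items():
            out[e+f] = out.get(e+f, 0) + a * sigma(b, e)
    return out

def mod_q(p):                            # reduce mod x^2 + 1: x^(n+2) -> -x^n
    out = {0: 0, 1: 0}
    for e, c in p.items():
        out[e % 2] += c * (-1)**(e // 2)
    return out

one, i, j, k = {0: 1}, {0: 1j}, {1: 1}, {1: 1j}      # j = x, k = i*x
neg = lambda p: {e: -c for e, c in p.items()}

assert mod_q(tmul(j, j)) == mod_q(neg(one))          # j^2 = -1
assert mod_q(tmul(k, k)) == mod_q(neg(one))          # k^2 = -1
assert mod_q(tmul(i, j)) == mod_q(k)                 # ij = k
assert mod_q(tmul(j, i)) == mod_q(neg(k))            # ji = -k
assert mod_q(tmul(j, k)) == mod_q(i)                 # jk = i
assert mod_q(tmul(k, i)) == mod_q(j)                 # ki = j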

Twisted rings can be used to generalize the quaternions. Start with a commutative ring R and adjoin i, where i is the square root of -m. Let σ map i to -i, complex conjugation. Note that σ has period 2, and fixes R.

Let q = x^2+t, for some t in R, and verify that q is a strong generator. The quotient ring consists of polynomials mod q, accomplished by replacing x^2 with -t.

If you want notation similar to the quaternions, let j = x and let k = ix. Verify that i^2 = -m, j^2 = -t, k^2 = -mt, and i*j, i*k, and j*k are each negated if the operands are interchanged. Set m = t = 1 to produce the traditional quaternions.

If an element is a+bi+cj+dk, let its norm be a^2 + mb^2 + tc^2 + mtd^2. Verify, algebraically, that norm respects multiplication. This is tedious if done by hand, so you might want to let a computer algebra system do the expansion, as in the sketch below.
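
Here is one way to do the verification with sympy (the function names are mine); the multiplication table i^2 = -m, j^2 = -t, k^2 = -mt, ij = -ji = k, jk = -kj = ti, ki = -ik = mj comes straight from the construction above.

import sympy as sp

a, b, c, d, e, f, g, h, m, t = sp.symbols('a b c d e f g h m t')

def qmul(p, q):
    # p = (p0, p1, p2, p3) stands for p0 + p1*i + p2*j + p3*k, likewise q
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - m*p1*q1 - t*p2*q2 - m*t*p3*q3,
            p0*q1 + p1*q0 + t*(p2*q3 - p3*q2),
            p0*q2 + p2*q0 + m*(p3*q1 - p1*q3),
            p0*q3 + p3*q0 + p1*q2 - p2*q1)

def norm(p):
    p0, p1, p2, p3 = p
    return p0**2 + m*p1**2 + t*p2**2 + m*t*p3**2

u, v = (a, b, c, d), (e, f, g, h)
assert sp.expand(norm(qmul(u, v)) - norm(u) * norm(v)) == 0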

Norm implements a multiplicative map into R. Norm is also a quaternion times its conjugate. Thus v is a unit iff |v| is a unit.

Take the norm of uv = 0, and |u| times |v| = 0; zero divisors map to zero divisors. Zero divisors map to 0 if R is an integral domain.