- Complete DVR, Local Field
- All the Powers of p
- Completion = Inverse Limit
- Hensel's First Lemma
- Hensel's Second Lemma
- Hensel's Third Lemma
- The Chinese Remainder Theorem, Completion
- Completing a PID
- The Splitting Problem in the Completion
- Extending the Valuation of a CDVR
- Separable implies Simple
- Kernel of R[x] onto S

This chapter expands upon valuation rings, so you need to be familiar with those theorems. Yet the chapter is called "local fields". How do we get from rings to fields? For starters, a valuation ring R is an integral domain, so we can embed R in its fraction field F. Beyond this, R has a unique maximal ideal M, so mod out by M to find the residue field K. In many cases K and F are related. And there are other fields that arise from the completion of R. This will lead to a definition of a local field.

Let R be a dvr, with fraction field F, and maximal ideal M, and residue field K. Since R is a pid, let t generate M. The ideals of R are now the powers of M, generated by the powers of t.

Step back for a moment and simply assume R is dedekind.
Mod out by the ideal M^{j+1} to give a quotient ring S.
By ideal correspondence, let H be the image of M^{j} in S.
There is nothing between 0 and H,
since that would pull back to something between M^{j} and M^{j+1} in R.
This makes H a simple S module,
hence it is isomorphic to S mod a maximal ideal.
Since M drives M^{j} into M^{j+1}, M drives H into 0.
Thus H is isomorphic to S mod a maximal ideal containing M.
The maximal ideal is M, and H = S/M.
By correspondence, this is the same as R/M.

Now H is an R module, and with MH = 0, H is an R/M module,
or a K vector space.
It is also isomorphic to K.
Therefore, H = K.
This holds for every quotient M^{j}/M^{j+1}.

When M is principal, generated by t,
you can map K onto each quotient without resorting to the axiom of choice.
The isomorphism between K and the simple R module H is determined by the image of 1.
Map 1 to t^{j}, and the isomorphism is established.
Each successive quotient is isomorphic to K in a canonical fashion.
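To see this concretely, here is a small numeric sketch in python, with R = **Z** localized about p, so that M is generated by t = p and K = **Z**/p. All the computation happens inside S = R/M^{j+1}, and the function and variable names are my own.

```python
# Numeric sketch of M^j/M^(j+1) = K, for R = Z localized about p.
# We compute inside S = R/M^(j+1), i.e. the integers mod p^(j+1).
p, j = 7, 3
mod = p ** (j + 1)

# H, the image of M^j in S, is the set of multiples of p^j.
H = {(k * p ** j) % mod for k in range(mod)}

# Map K onto H by sending 1 to t^j, i.e. k -> k * p^j.
image = {(k * p ** j) % mod for k in range(p)}

assert image == H      # the map is onto
assert len(H) == p     # H has exactly p elements, one per element of K
```

The map also respects addition mod p, so it is the canonical isomorphism described above.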

Now return to R a dvr. Let R′ be the completion of the valuation ring R, and let F′ be the completion of the field F, using the valuation metric. Now R′ is a complete dvr, also known as a cdvr. The elements of R′ are uniquely represented by power series in K[[t]], and the elements of F′ are laurent series. Addition and multiplication follow the usual polynomial rules, though there may be carry operations depending on R. You may want to review the section that describes these representations.

For illustration, take R = **Z**_{7} (**Z** localized about 7),
t = 7, M = 7*R, and K = **Z**/7 (**Z** mod 7).
Take the completion of this dvr to get the 7-adic numbers.
Represent the integer 423,
which is contained in both the cdvr R′ and the original ring R, as follows.
We're really converting to base 7.
The resulting sequence is {3,4,1,1,0,0,0…}.
This is the reverse of the usual notation for base conversions.
The constant term comes first and the higher powers of 7 flow off to the right.
The left right confusion comes from
our conventions on polynomials,
with the constant term at the right,
and power series, with the constant term at the left.
The completion of R extends polynomials into power series, and that switches things around.
This can be especially confusing when you add two 7-adic numbers together.
The carry operation flows to the right, rather than the left.
Just take what you learned in elementary school and reflect it.
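Here is the conversion in python; the function name is mine. The digits come out with the constant term first, as above.

```python
def p_adic_digits(n, p, count):
    """First `count` digits of n as a p-adic integer, constant term first."""
    digits = []
    for _ in range(count):
        digits.append(n % p)   # next digit, between 0 and p-1
        n //= p                # shift the remaining powers of p down
    return digits

# 423 = 3 + 4*7 + 1*49 + 1*343
assert p_adic_digits(423, 7, 7) == [3, 4, 1, 1, 0, 0, 0]
```

Python's floor division and remainder handle negative integers as well, producing the repeating high-order digits of numbers like -3.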

The cdvr is determined by the residue field K and the carry rules.
In the above example, K is **Z** mod 7, and the arithmetic carry rules apply.
Remove these rules, and the cdvr becomes the formal laurent series in t
with coefficients in K.
Thus 4t + 5t = 2t, rather than 2t + t^{2}.

The carry rules do not change from one digit to the next.
If 3+5 = 1+t+2t^{2}, then multiply through by t^{j},
and the same relationship holds at position j in the series.
This is valid for multiplication as well.
The spillover need not be confined to the next digit,
as it is with the p-adic integers; it could ripple all the way down the series.
The cdvr is uniquely determined by the field K and the carry rules for the sums and products within K,
i.e. in position 0.
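The two arithmetics are easy to contrast in python. Here is a sketch of addition digit by digit, with and without the arithmetic carry rules of the p-adic integers; digit sequences have the constant term first, and the function names are mine.

```python
p = 7

def add_with_carry(a, b):
    """Add two p-adic digit sequences, the carry flowing to the right."""
    out, carry = [], 0
    for x, y in zip(a, b):
        total = x + y + carry
        out.append(total % p)
        carry = total // p
    return out

def add_formal(a, b):
    """Add in the formal power series K[[t]], K = Z/p: no carries at all."""
    return [(x + y) % p for x, y in zip(a, b)]

# 4t + 5t in the 7-adics is 63 = 2*7 + 49, so the carry spills into t^2.
assert add_with_carry([0, 4, 0], [0, 5, 0]) == [0, 2, 1]
# In the formal series, 4t + 5t is simply 2t.
assert add_formal([0, 4, 0], [0, 5, 0]) == [0, 2, 0]
```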

Remember that R could be a pid, with a maximal ideal M. Localize about M to get a dvr. If M is principal, generated by t, then the maximal ideal, after localization, is also generated by t. Complete this to get the cdvr. K = R/M has not changed, from the pid, to the dvr, to the cdvr.

Let R be a pid with maximal ideal M. Localize to get the dvr. The completion of R gives the M-adic integers, and the completion of F gives the M-adic numbers. The M-adic integers are the formal power series in K, and the M-adic numbers are the formal laurent series. The latter is the fraction field of the former. If M is principal, generated by p, we refer to these structures as the p-adic integers and the p-adic numbers respectively.

A local field is the fraction field of a cdvr,
having characteristic 0 and finite residue field.
This is the aforementioned laurent series, and it necessarily entails carry rules.
Without carry rules K has some finite characteristic p, and K[[t]] has the same characteristic.
If K[[t]] has characteristic 0,
then 1+1+1+… has to overflow into t, and t^{2}, and so on,
as illustrated by the p-adic integers.

This is not a universal definition. Some say the reals or rationals, with the usual distance topology, form a local field. Some allow any characteristic, or any residue field.

A uniformizer of a local field is an element with valuation 1,
a generator of the maximal ideal.
This is anything in M but not in M^{2}.

If **Z**/p is the integers mod p, and **Z**/p^{2} is the integers mod p^{2},
there is a natural embedding of **Z**/p into **Z**/p^{2}.
This is the multiples of p inside the integers mod p^{2}.
And all this can be embedded in **Z**/p^{3},
which embeds in **Z**/p^{4}, and so on.
The result is an abelian group denoted **Z**/p^{∞}.
For the rest of this section I will call this group G.

Let R be the localization of **Z** about p.
Thus R is the set of fractions with denominators not divisible by p.
R is a normal subgroup of the rationals under addition.
Let H be the quotient group.
Each member of H is a coset of R.
Given a rational x, subtract something in R so that x is between 0 and 1.
In lowest terms, let x = d/mp^{k}.
With m and p coprime, find integers a and b such that bm + ap^{k} = 1.
Note that b and p are coprime.
Write a/m + b/p^{k} = 1/mp^{k}.
Multiply through by d to get x.
Remember that d is coprime to p.
Subtract da/m, since it belongs to R, and db/p^{k} is the same member of H.
Once again adjust by an integer to find w/p^{k}, where 0 < w < p^{k},
and w is not divisible by p.
Conversely, subtract two distinct such elements, and the result does not lie in R,
so distinct representatives yield distinct cosets.
The elements of H have been characterized.

Expand the rationals in [0,1) in base p. The fractions with finite expansion (analogous to a terminating decimal) are the elements of H. These are added together in the usual way, then reduced mod 1, to keep the result in [0,1).

Stop at the first k digits,
and find the subgroup **Z**/p^{k}.
This naturally embeds in the next subgroup **Z**/p^{k+1}, and so on.
Therefore, H = G.
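The reduction of a rational to its representative w/p^{k} can be carried out in python with exact fractions. This follows the bm + ap^{k} = 1 argument above, using a modular inverse in place of the bezout coefficients; the function name is mine.

```python
from fractions import Fraction

def coset_rep(x, p):
    """The representative w/p^k in [0,1) of the coset x + R,
    where R is Z localized about p."""
    k, den = 0, x.denominator
    while den % p == 0:        # strip the powers of p from the denominator
        den //= p
        k += 1
    if k == 0:
        return Fraction(0)     # x already lies in R
    m, pk = den, p ** k        # m is coprime to p
    w = (x.numerator * pow(m, -1, pk)) % pk
    return Fraction(w, pk)

# 5/18 = 5/(2*9); its coset mod R is 7/9, since 5/18 - 7/9 = -1/2 lies in R.
assert coset_rep(Fraction(5, 18), 3) == Fraction(7, 9)
assert coset_rep(Fraction(4, 7), 3) == 0
```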

G is not a subgroup of the p-adic integers. One group is torsion and the other is torsion free.

What are the subgroups of G?
Start with w/p^{k}, and add it to itself as often as you like.
Adding it to itself m times is the same as multiplying by m,
and since w is coprime to p, the multiples w×m cover all the integers mod p^{k}.
The subgroups are **Z**/p^{k},
the fractions in base p with k digits or less.
This is an unusual group, since it is artinian, but not noetherian.
Subgroups increase in size forever,
but a descending chain is always finite, terminating in **Z**/p → 0.

If you like category theory,
take a moment and describe G as a direct limit.
Embed 0 in **Z**/p,
in **Z**/p^{2},
in **Z**/p^{3}, and so on.
This is an infinite ascending chain of finite groups.
Since the chain is linearly ordered, it is partially ordered, as the direct limit requires.
Map 1 (the generator) in **Z**/p to 0.1 (as a base p fraction between 0 and 1).
Map 1 in **Z**/p^{2} to 0.01, and so on.
Note that the diagram commutes.
We can go from x in one group to y in a larger group, then over to z in G,
or we can follow the homomorphism directly from x to z.
Either way we wind up at z.

Assume another group H creates a commutative diagram with our chain of finite groups.
Given z in G, pull back to any x, e.g. some x in **Z**/p^{5},
and follow x to y in H.
Our new function f maps z to y.
The commutative nature of the two diagrams (into G and H respectively) shows f is well defined.
It doesn't matter which x we select.

Since each z has some preimage x, f is forced. We can't set f(z) to anything but y, else the composite diagram (with G and H together) would not commute. Thus f is uniquely determined on all of G.

Is f a morphism in the category of groups?
Pull z_{1} and z_{2} back to x_{1} and x_{2},
selecting a finite group that contains both preimages.
Each homomorphism into G is injective,
so x_{1} and x_{2} faithfully represent z_{1} and z_{2}.
Follow these up to y_{1} and y_{2} in H.
Compose these maps, from z to x to y, and f is a group homomorphism.
That completes the proof.
G is the initial object, and the colimit,
and the direct limit.

The same holds for any infinite ascending chain of modules, rings, or fields, provided each embeds in the next. Collapse the entire structure into something that is infinitely wide. We're basically taking the union of everything, and equating x with f(x) for each internal embedding. The result is the direct limit of the ascending chain.

Even this can be generalized to a partially ordered set of modules, such that each module embeds in the one above, and every two modules have an upper bound. But I digress.

Let H be an ideal in R such that the powers of H form an infinite descending chain.
This implies a sequence of quotient rings R/H^{n},
where R/H^{n+1} maps onto R/H^{n} via a ring homomorphism,
whose kernel is the cosets of
H^{n+1} in H^{n}.
This builds an ascending chain of rings,
each mapping onto the one below,
with R/H at the bottom.
The completion of this system is, by definition, the inverse limit.
When R is a dvr this is the same as the H-adic completion.
This will become clear as we proceed.

At each level, mark the cosets of H^{n+1} in H^{n}.
The bottom level is simply R/H.
You will need to designate specific cosreps (coset representatives) for these cosets at every level.
Let S be the set of infinite sequences of cosets,
selecting one cosrep at each level.
This will become our inverse limit, but we have some technical details to take care of,
like turning S into a ring.

Consider the first n terms of a sequence in S.
A correspondence map
carries these partial sequences to the elements of R/H^{n}.
Basically, the map adds up S_{1} through S_{n}, giving a unique element in R/H^{n}.
The first cosrep S_{1} specifies a coset of H in R,
the second specifies a coset of H^{2} in H, and so on.
Partial sequences can be added or multiplied together by mapping to R/H^{n},
applying the operation there, and mapping back.
Addition looks simple - add the elements S_{i} term by term - and sometimes it is that simple.
But remember, you must turn cosreps into cosets, add the cosets,
and turn the sum back into its designated cosrep.
This may or may not be the sum of the two original cosreps in R.

Good news, the ring properties are inherited from R/H^{n}.
However, we need to prove the sum and product are well defined across the entire sequence.

When you add two cosets of H^{n+1} in H^{n}, this does not change the sum of the cosets at the previous levels.
Addition remains consistent across S.

Multiply two partial sums of length n+1.
The last term in each sum is in the ideal H^{n},
so all the pairwise products that use these terms live in H^{n}.
This does not change the coset of H^{n} in lesser powers of H.
The cosets that have already been established do not change.
Thus multiplication is well defined across S,
and S is a ring.

By construction, a ring homomorphism maps S onto R/H^{n} -
simply restrict attention to the first n terms of the sequences of S.

Apply the aforementioned ring homomorphism from S to R/H^{n},
then map onto R/H^{m}, for m < n.
Alternatively, go straight from S to R/H^{m}.
The result is the same - the diagram commutes.

Is this a terminal object?
Let T be another limit of the system.
For any x in T, map x into each R/H^{n}, and pull these images back to a sequence in S.
You need to prove this map, from T into S, is well defined,
is unique, and is a ring homomorphism.
I'll leave the details to you.
Therefore S is the inverse limit of the system R/H^{n}.

The inverse limit is unique up to isomorphism, so a different set of cosreps produces essentially the same ring. There is one completion of R through H, up to isomorphism.

When H is the maximal ideal of a dvr, S is isomorphic to the H-adic completion of R as a valuation ring. In this case the cosreps are convenient - K times the powers of t, where K is the residue field and t generates H. Addition and multiplication are also convenient - following the power series rules with possible carry operations. But the inverse limit can exist for other rings, even rings with zero divisors.
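Here is a small python model of the inverse limit, using R = **Z** and H generated by p, truncated to finitely many levels. Instead of cosreps I store the element of R/H^{n} at each level n, which is the image under the correspondence map described above; all names are mine.

```python
# An element of the inverse limit, truncated: its residues mod p, p^2, ...
p, depth = 5, 6

def embed(n):
    """The image of an integer n at every level of the tower."""
    return [n % p ** k for k in range(1, depth + 1)]

def consistent(seq):
    """The inverse limit condition: each residue reduces to the one below."""
    return all(seq[k] % p ** k == seq[k - 1] for k in range(1, depth))

def add(a, b):
    return [(x + y) % p ** k for k, (x, y) in enumerate(zip(a, b), start=1)]

def mul(a, b):
    return [(x * y) % p ** k for k, (x, y) in enumerate(zip(a, b), start=1)]

x, y = embed(12), embed(-7)
assert consistent(x) and consistent(add(x, y)) and consistent(mul(x, y))
assert add(x, y) == embed(5) and mul(x, y) == embed(-84)
```

Addition and multiplication at level n never disturb the levels below, which is exactly the well definedness argued above.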

Since R maps onto each R/H^{n} in a consistent fashion,
R must map into S.
That's the definition of an inverse limit.
R embeds in S iff the powers of H intersect in 0.
If the powers of H intersect in a nonzero element u then u and 0 produce exactly the same sequence in S.
In fact J, the intersection of the powers of H, maps to 0 in S.
Conversely, any u outside of J lies outside H^{n} for some n, and yields a nontrivial coset of H^{n} in R/H^{n},
and a nonzero sequence in S.
R/J embeds in S.

If J = 0,
the points of R can be turned into a metric space, even though R may not be a valuation ring,
or even an integral domain.
It's really the same metric
you've seen before.
Let the valuation of x, or v(x), equal m, where m is the greatest integer
such that H^{m} contains x.
If x lies in J, let v(x) = infinity.
(You should really mod out by J, so that the intersection is 0.)
The norm of x is then c^{v(x)}, for a fixed c between 0 and 1.

Let v(x) = m, and v(y) = n, for m < n.
Their sum (or difference) lies in H^{m}.
If it lies in H^{i}, for i > m, then subtract y, and x lies in H^{i}.
This is a contradiction, hence the valuation of the sum is the lesser valuation.
If m = n, v(x-y) is at least m.
Use these properties to prove symmetry, and the triangle inequality - thus giving a metric space.
The completion of this metric space is the completion of R as an inverse limit.
This was described in valuation rings - no point in rehashing it here.
We are merely extending these results to other rings and ideals.
The only difference is, the topology, and its completion, are restricted to R.
We cannot extend these results to the fraction field of R -
in fact R may not even have a fraction field.

In some cases
completion through higher ordinals is well defined.
If the intersection of H^{n} is nonzero, call this intersection H^{ω}.
Then let H^{ω+1} be the square of H^{ω}.
Let H^{ω+n} equal H^{ω} raised to the n+1.
The intersection over all these ideals is H^{2ω}.
This continues through the ordinals,
but not all the ordinals, since R is itself a set.
The resulting system of rings with quotient maps going down
has an inverse limit, which is the completion of R.

In practice, higher ordinals are not used very often.
Hensel's three lemmas (presented below),
and the theorems that rely on these lemmas,
do not extend through a limit ordinal.
They only apply to a countable system of quotient rings R/H^{n},
such as a dvr.

The completion through any power of H is the same as the completion through H.
I'll illustrate with H^{2}.
At an intuitive level we are merely grouping the digits together into blocks of size 2,
like reading a number in base 100 instead of base 10.
Operations add or multiply 2 digits at a time, but the math is the same.

Technically, you want to build a map from one completion into the other,
and prove it is a ring isomorphism.
Any system of cosreps will do,
so add the first and second cosreps in the first system to get the first cosrep in the second system.
In other words, a coset of H in R, plus a coset of H^{2} in H,
becomes a coset of H^{2} in R.
Apply this correspondence all the way down the line.
It is unique and reversible.
The math in both systems is the same.
Thus we have a ring isomorphism between the two inverse limits.
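The digit blocking is easy to watch in python with ordinary integers, say H generated by p in **Z**: the base p digits of a number, grouped in pairs, are its base p^{2} digits. The helper function is mine.

```python
def digits(m, base, count):
    """The first `count` digits of m in the given base, constant term first."""
    out = []
    for _ in range(count):
        out.append(m % base)
        m //= base
    return out

p, n = 3, 200
a = digits(n, p, 8)          # completion through H: base p
b = digits(n, p * p, 4)      # completion through H^2: base p^2
# Pairing up base-p digits recovers the base-p^2 digits exactly.
assert [a[2 * i] + p * a[2 * i + 1] for i in range(4)] == b
```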

The completions are also equivalent as topological spaces.
When moving from H to H^{2}, you may have to lower the valuation by 1,
but this multiplies the distance by 1/c, which is a fixed constant.
The map is uniformly bicontinuous, and a homeomorphism.

Bear in mind,
you can't base a true valuation on H^{2}.
You have lost v(xy) = v(x) + v(y).
Consider x in H-H^{2}.
This has valuation 0 (relative to H^{2}).
Yet x^{2} has valuation 1, which is not v(x) + v(x).

These results extend to a descending sequence of modules.
(One possible filtration is M times the powers of H, where H is an ideal of R.)
The quotient modules build an inverse limit S,
consisting of cosreps of M_{i+1} in M_{i}.
Show that S is an R module.
We say M is complete if it equals its inverse limit S.
The valuation, or pseudo valuation,
wherein v(x) is the index of the least submodule containing x,
turns M into a metric space.
The topological completion equals the algebraic completion.
I'll let you verify all these results yourself.

Hensel developed three lemmas that build upon each other, just as Sylow is known for his three theorems. Of course the Sylow theorems are theorems, not lemmas. The distinction is somewhat subjective. The Sylow theorems can be used directly to prove things about finite groups, while Hensel's lemmas are used to prove other theorems, which are then used to prove results in algebraic number theory. Apparently Hensel's results are one step back, so they are called lemmas. No matter - a rose by any other name would smell as sweet. And these lemmas are indeed lovely.

Let R be a commutative ring with an ideal H, such that H^{2} = 0.
The idempotents in R correspond one for one with the idempotents in R/H.
Clearly idempotents in R are idempotent in R/H.
Suppose e and f are idempotents in R that map to the same idempotent in R/H.
In other words, f = e+b for some b in H.
Write (e+b)^{2} = e+b.
Since b^{2} = 0, we have b*(1-2e) = e^{2}-e.
The right side is 0, so if 2e-1 is a unit, then b is also 0.
This forces e = f, and the map from idempotents of R into the idempotents of R/H is injective.
We only need show 2e-1 is a unit.

Let e ∈ R be idempotent in R/H.
Thus e^{2}-e = v, for v in H.
Consider the element 2e-1, and square it to get 4e^{2}-4e+1, or 4v+1.
Since v is in H, v^{2} = 0, and (4v+1)*(4v-1) = -1.
Thus 2e-1 is a unit in R.
Idempotents upstairs map uniquely into idempotents downstairs.

To show the map is surjective,
let e be idempotent in R/H, so that e^{2}-e = v, for some v in H.
Set b = -v/(2e-1).

(e+b)^{2} = e^{2} - 2ev/(2e-1) + v^{2}/(2e-1)^{2}

= e^{2} - 2ev/(2e-1) { v^{2} is 0 }

= e + v - 2ev/(2e-1) { e^{2} = e+v }

= e - v/(2e-1)

= e + b

That completes the correspondence.
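The formula b = -v/(2e-1) can be exercised in python. A convenient example, my own choice, is R = **Z**/36 with H generated by 6, so that H^{2} = 0.

```python
# Lifting idempotents from R/H to R, for R = Z/36 and H = 6R (H^2 = 0).
R_mod, H_gen = 36, 6

def lift(e):
    v = (e * e - e) % R_mod            # v = e^2 - e lies in H
    inv = pow(2 * e - 1, -1, R_mod)    # 2e-1 is a unit, as proved above
    return (e - v * inv) % R_mod       # e + b, where b = -v/(2e-1)

# The idempotents of R/H = Z/6 are 0, 1, 3, and 4.
for e in (0, 1, 3, 4):
    u = lift(e)
    assert u * u % R_mod == u          # u is idempotent in R
    assert u % H_gen == e              # and u reduces to e mod H
```

For instance 3 lifts to 9 and 4 lifts to 28, the four idempotents of **Z**/36 being 0, 1, 9, and 28.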

Let the powers of an ideal H descend within a ring R.
The ring R/H^{n} is a quotient ring of R/H^{n+1}, via reduction mod H^{n}.
Note that H^{n} squared becomes 0 in the quotient ring.
Thus the idempotents of R/H^{n} correspond to the idempotents of R/H^{n+1}.
By induction, the idempotents of R/H^{n}, for each n, correspond to the idempotents of R/H.

Let S be the completion of R. If R is a dvr, R and S are both integral domains, with no idempotents, and there isn't much to talk about. But R could be any ring, whence its completion is the inverse limit, as described in the previous section.

Let e be an idempotent of S, which maps to a consistent set of idempotents in the quotient rings R/H^{n},
terminating in an idempotent e′ in R/H.
Let f be another idempotent, different from e.
Thus f differs in the n^{th} entry, for some n.
This becomes a different idempotent in R/H^{n},
which leads to a different idempotent f′ in R/H.
The idempotents of S map injectively into the idempotents of R/H.

Conversely,
start with an idempotent e_{1} in R/H.
Pull this back to an idempotent e_{2} in R/H^{2}.
By induction, find an idempotent e_{n} in R/H^{n}.
This defines an entry e in S.
Multiply e*e in S, and you multiply e_{n}*e_{n} in each ring R/H^{n}.
The result is always e_{n}, hence e*e = e, and e is idempotent.
The correspondence is complete.

If R/H has no idempotents, e.g. when H is prime, then S has no idempotents either.

Let S be the completion of R through an ideal H.
Let f(x) be a polynomial with coefficients in S.
Let f′ be the derivative of f.
Let q be the image of f in R/H, i.e. retain only the first "digit" of each coefficient.
Let q′ be the image of f′ in R/H.
Let v ∈ R/H be a root of q.
Let q′(v) be a unit in R/H.
There is a unique u in S, such that u is a root of f,
and u/H = v.
This is a generalization of
Hensel's first lemma.
To see this, set f(x) = x^{2}-x.
The element v is an idempotent, and 2v-1 is a unit.
This lifts to a unique idempotent u in S.

Since S is the inverse limit, ring homomorphisms map S into each R/H^{n}.
These reduce f(x) mod H^{n},
which is implemented by retaining the first n digits of the coefficients of f
and any x that you care to evaluate.
If u is a root of f, it is still a root when everything is reduced mod H^{n}.
Thus each root in S maps to a consistent set of roots, or a "root system", in the chain of quotient rings.

If u_{1} and u_{2} are different roots,
they differ in their n^{th} digits for some n.
Thus they map to distinct roots in R/H^{n}.
The roots upstairs map injectively into root systems downstairs.

Given a root system downstairs,
collect the successive cosets of the roots and build u in S.
Evaluate f(u), and the result is 0 in each R/H^{n}, hence the result is 0.
This makes u a root of f,
and that completes the correspondence.
We only need tie the root system back to R/H.
More accurately, a root in R/H should lift, uniquely, to a root system.
There are certain situations, not at all uncommon, where this is assured.
In these cases the roots of f in S correspond one for one with the roots in R/H.

Consider a quotient ring where the kernel squared is 0. If a root in the quotient ring lifts to a unique root in the original ring, this can be pushed all the way up to a root system by induction. (This was described in the previous section.) So the problem has been reduced to a quotient ring, one link in the chain.

Let H be an ideal of R, such that H^{2} = 0.
Let f be a polynomial over R, with q = f/H, the image of the polynomial in the quotient ring.
Let q(v) = 0, with q′(v) a unit in R/H.
We want to lift v up to u, such that f(u) = 0.
Also, f′(u) has to be a unit in R to keep the chain going.

Take the second criterion first; it's not hard. The elements of H are nilpotent, so suppose s in R maps to a unit t in R/H, and let w be a preimage of t^{-1}. Then s*w = 1+z for some z in H. Yet 1 plus a nilpotent is a unit, so s*w is a unit, and s is a unit. If q′(v) is a unit in R/H, then f′(u) will be a unit in R. No trouble there.

Let y be anything in the preimage of v. By the above, f′(y) is a unit in R. We wish to solve for b, such that f(y+b) = 0, and b lies in H.

Use the taylor expansion to write f(y+b) as f(y) + b*f′(y) +b^{2} times some other stuff.
Of course b^{2} = 0,
hence b = -f(y)/f′(y).
Since the denominator is a unit, there is one and only one solution.
And since q(v) = 0, f(y) lies in H, so b lies in H as required.
Set u = y+b for the unique lift of v.
That completes the proof.
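One step of this lift, in python: take R = **Z**/49 with H generated by 7, so that H^{2} = 0, and lift the root 3 of x^{2}-2 mod 7. The names are mine.

```python
# One Hensel step: R = Z/49, H = 7R, f = x^2 - 2, v = 3 a root mod 7.
p = 7

def f(x):  return x * x - 2
def fp(x): return 2 * x                      # the derivative

y = 3                                        # any preimage of v
b = (-f(y) * pow(fp(y), -1, p * p)) % (p * p)
u = (y + b) % (p * p)

assert b % p == 0                            # b lies in H
assert f(u) % (p * p) == 0                   # u is a root of f in R
assert u % p == 3                            # and u lifts v
```

Here u comes out to 10, since 10^{2} = 100 = 2 + 2×49.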

Sometimes a root v of q in R/H automatically makes q′(v) a unit.
In this case the roots in R/H and the roots in S correspond one for one.
We saw this in Hensel's first lemma.
If e was a root of x^{2}-x, then 2e-1 was a unit.

The same thing occurs when R/H is a field, so that its polynomial ring is a pid, and q and q′ are relatively prime. Then q and q′ generate 1, and if q drops to 0 at v, q′(v) must be a unit. Assuming q′ does not fall to 0 via the characteristic of R/H, q is coprime to q′ iff q contains no repeated irreducible polynomials in its unique factorization.

Apply the above
to the m^{th} root of an element c in R.
Assume m and c are both units in R/H,
and let f = x^{m}-c.
If x is a root then x is a unit, and so is mx^{m-1}.
Thus m^{th} roots in R/H and S correspond 1-1.

Let H be maximal in R, with residue field K. Let f be monic, and project it down to K. Assume there is a v in K such that f(v) = 0, and f′(v) is nonzero. The nonzero elements of K are units, so v lifts to a unique root u in S.

Assume S is an integrally closed integral domain. This happens when R is a dvr: its completion is again a dvr, where every nonzero element is a power of t times a series whose leading digit is a nonzero element of the field K, and a dvr is integrally closed. Since f is monic, a root of f in the fraction field of S is integral over S, and lies in S. The roots in K correspond with the roots in S, or the fraction field thereof.

Let R be **Z** localized about p, which is a dvr.
Let K be the residue field **Z** mod p.
Let S be the completion of R, which is also a dvr.
(You know this better as the p-adic integers.)
Let f be a monic polynomial in **Z** or in R, and push this down to a monic polynomial in K.
Assume f does not have a multiple root in K[x].
Let f have a root v, hence f = (x-v)*g(x).
The derivative, evaluated at v, is simply g(v).
Since v is not a root of g, this is nonzero.
The roots of f in K correspond one for one with the roots of f in the p-adic integers, or the p-adic numbers.

Let f = x^{m}-c.
If p does not divide m,
so that f and f′ are coprime,
then the m^{th} roots of c mod p correspond 1-1 with the m^{th} roots of c in the p-adic numbers.

Let R be a cdvr with metric |x| = c^{v(x)},
where v(x) is the valuation.
Set c to any real number between 0 and 1, as you prefer.
I'm going to talk about valuations, rather than metrics,
because it's one less thing to worry about.
Just remember that a large valuation is a small distance.

Let f(x) be a polynomial with coefficients in R.
If we can almost find a root x_{0} in R,
then there is a root x in R, not necessarily unique.
By almost, we mean v(f(x_{0})) > 2×v(f′(x_{0})).
The valuation of f is more than twice the valuation of the derivative.

Build a cauchy sequence x_{i} in R, starting with x_{0}.
The construction is recursive,
with the following conditions proved by induction.

Condition 1: v(x_{i+1}-x_{i}) ≥ 2i

Condition 2: v(f(x_{i})) - 2×v(f′(x_{i})) ≥ 2i

Set i = 0 to start.
We are given condition 2, right off the bat.
So assume condition 2 holds for x_{i}.

Remember Newton's method from calculus?
Start at (x,f(x)), and follow the slope down to the x axis.
That's where you should make your next guess.
Let's apply this here.
Let d = -f(x_{i}) / f′(x_{i}).
We know the denominator is nonzero, as condition 2 bounds its valuation.
The quotient exists in the fraction field of R.

In fact condition 2 tells us v(d) ≥ v(f′(x_{i})) + 2i.
Yet x_{i} and all the coefficients of f′ are taken from R,
hence d, with a higher valuation, belongs to R.

Let x_{i+1} = x_{i}+d, the next point in our sequence, and another point in R.
Now x_{i+1}-x_{i} = d,
and v(d) ≥ 2i, so condition 1 is satisfied.

Use taylor expansion to write f(x_{i}+d) =
f(x_{i}) +d*f′(x_{i}) + d^{2} times some other stuff that lives in R.
Expand f′(x_{i}+d) similarly.
By construction, f(x_{i}) + d*f′(x_{i}) drops to 0.
That sets f(x_{i}+d) = d^{2} times some stuff in R.
Thus v(f(x_{i+1})) ≥ 2×v(d).

We know that v(d) > v(f′(x_{i})).
This is clear when i > 0, and when i = 0 it is a given.
This means all the terms in the expansion of f′(x_{i}+d)
have valuations strictly greater than f′(x_{i}),
save the first.
The valuation of the sum is the least valuation,
hence v(f′(x_{i+1})) = v(f′(x_{i})).
The derivative remains nonzero
and always sits at the same valuation.

It's time to verify condition 2 at the next level.
Remember that v(f(x_{i+1})) is at least twice v(d).
So there's no harm in using 2×v(d).
This gives the expression 2×v(d) - 2×v(f′(x_{i+1})).

Replace f′(x_{i+1}) with f′(x_{i}), which doesn't change a thing.
Factor 2 out, and the expression is at least 2 times 2i, or 4i.
This is at least twice (i+1), as long as i is nonzero.
When i = 0, v(d) - v(f′(x_{0})) is positive as part of the given.
Double this to get at least 2, which is at least 2×(i+1), so we're good.
That completes condition 2, and the inductive step.

The difference between x_{m} and x_{n} is a finite sum over d_{i}.
The valuation of each d_{i} is at least 2m, hence v(x_{n}-x_{m}) ≥ 2m.
The sequence x_{i} is cauchy, with some limit x in R.
Since addition and multiplication are continuous under the valuation metric,
f(x) is the limit of f(x_{i}).
By condition 2, v(f(x_{i})) is at least 2i, and approaches infinity.
Therefore f(x) = 0, and x is a root of f.

If R is not complete, and an x_{0} can be found in R with
the required properties, the same sequence x_{i} exists in R,
and that implies a root in the completion of R.

What is the rate of convergence?
Each derivative is equal, in its valuation, to f′(x_{0}).
Let w = v(f′(x_{0})).
Let z = v(f(x_{0})).
Thus the first instance of d, denoted d_{0}, has valuation z-w.

Remember that each d_{i} has valuation ≥ w+2i,
or greater than w if i = 0.

Using the definition of d, v(d_{i+1}) = v(f(x_{i+1}))-w.
The first term on the right is at least 2×v(d_{i}),
which is at least v(d_{i})+w+2i.
Put this all together and get this.

v(d_{i+1}) ≥ v(d_{i}) + 2i { i > 0 }

v(d_{i+1}) > v(d_{i}) { i = 0 }

Each difference has higher valuation than the previous.
The first difference is z-w, and it steps up quickly from there:
z-w+1, z-w+3, z-w+7, z-w+13, …
An infinite sum of these differences stands between x and x_{0}.
When a sum, even an infinite sum, contains distinct valuations,
the valuation of the sum is the least valuation.
Therefore v(x-x_{0}) = z-w.
The approximation improves geometrically thereafter.

As an example,
let R be the dvr **Z**_{2}, and find the square root of -7.
There is no solution in **Q**, or R, but perhaps there is in the 2-adic integers, i.e. the completion of R.

Set f = x^{2}+7.
Reduce mod 2, and the polynomial becomes x^{2}+1.
This is the same as (x+1)^{2}.
Thus 1 is a repeated root,
and Hensel's second lemma is not applicable.
However, the third lemma comes into play.
Set x_{0} = 1.
Thus f(x_{0}) = 8, and f′(x_{0}) = 2.
Condition 2 is satisfied, and -7 has a square root in the 2-adic integers.
The above proof can be used to construct a root.
Here are the first few values of x_{i} with their associated precision.
Remember that a number like -3 is really 1011111111 etc, with ones repeating forever.
You need to convert negative numbers, and fractions,
to see the square root growing in precision.

x_{0} = 1 { 2 digits }

d_{0} = -4

x_{1} = -3 = 1011111… { 3 digits }

d_{1} = 8/3

x_{2} = -1/3 = 10101010… { 5 digits }

d_{2} = 32/3

x_{3} = 31/3 = 10101101010101… { 9 digits }

d_{3} = -512/93

x_{4} = 449/93 = 10101101000000110 { 17 digits }

d_{4} = -131072/41757
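The whole computation can be replayed in python with exact rationals; the valuation function v2 and the other names are mine.

```python
from fractions import Fraction

def v2(q):
    """The 2-adic valuation of a nonzero rational."""
    q = Fraction(q)
    n, num, den = 0, q.numerator, q.denominator
    while num % 2 == 0: num //= 2; n += 1
    while den % 2 == 0: den //= 2; n -= 1
    return n

def f(x):  return x * x + 7
def fp(x): return 2 * x

x = Fraction(1)                      # x_0 = 1: v(f) = 3 > 2*v(f') = 2
seq = [x]
for _ in range(4):
    x = x - f(x) / fp(x)             # the step d = -f(x)/f'(x)
    seq.append(x)

assert seq == [1, -3, Fraction(-1, 3), Fraction(31, 3), Fraction(449, 93)]
# v(f(x_i)) climbs 3, 4, 6, 10, 18 along the sequence.
assert [v2(f(x)) for x in seq] == [3, 4, 6, 10, 18]
```

Since the derivative always sits at valuation 1, the precision of x_{i} is v(f(x_{i})) - 1, namely 2, 3, 5, 9, 17, matching the digit counts listed above.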

Let H be the finite product of maximal ideals M_{1}*M_{2}*M_{3}*…M_{l}.
Actually, these ideals need not be maximal; they only need be pairwise coprime,
so that any two ideals generate R.
Let S be the completion of R through H.

On the other side,
let T_{1} be the completion of R through M_{1},
and so on,
up to T_{l}, which is the completion of R through M_{l}.
S is isomorphic, as a ring, to the direct product of T_{1} through T_{l}.

Apply the chinese remainder theorem,
and build an isomorphism between R/H and the product of R/M_{1} through R/M_{l}.
The isomorphism is not arbitrary; it is prescribed.
For each M_{i}, R/H maps onto R/M_{i} via a quotient map,
i.e. mod out by M_{i},
hence R/H maps into the direct product.
This is known to be an isomorphism.

The j^{th} power of two coprime ideals remains coprime,
hence the chinese remainder theorem can be applied at the j^{th} level,
giving an isomorphism between
R/H^{j} and the direct product of R/M_{i}^{j}.
Furthermore, the diagram commutes.
Start with x in R/H^{j+1}.
Follow the isomorphism across by taking the image mod M_{i}^{j+1},
then mod out by M_{i}^{j}.
Or mod out by H^{j}, then by M_{i}^{j}.
The image of x is the same.
As a ring system, the filtration of R through H is the direct product of the filtrations of R through M_{i}.
We are merely relabeling the elements.
Therefore, the completion of R through H,
denoted S,
is isomorphic to the direct product of the completions of R through M_{i},
denoted T_{1} through T_{l}.
This isomorphism is canonical,
as described above.

The completion of an integral domain, such as **Z** through 15,
need not be an integral domain.
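A brute-force illustration of both points, assuming H = (15) in **Z**: at each level j, reduction gives a bijection between **Z**/15^{j} and the product of **Z**/3^{j} and **Z**/5^{j}, and the completion through 15 inherits zero divisors from the factors.

```python
# Z/15^j is isomorphic to Z/3^j x Z/5^j:
# reduction mod (3^j, 5^j) is a bijection
for j in range(1, 4):
    m3, m5 = 3 ** j, 5 ** j
    images = {(x % m3, x % m5) for x in range(m3 * m5)}
    assert len(images) == m3 * m5

# zero divisors at every level: x = (1,0) and y = (0,1) under the isomorphism
j = 2
m3, m5, m = 3 ** j, 5 ** j, 15 ** j
x = m5 * pow(m5, -1, m3)   # 1 mod 3^j, 0 mod 5^j
y = m3 * pow(m3, -1, m5)   # 0 mod 3^j, 1 mod 5^j
assert x % m != 0 and y % m != 0 and (x * y) % m == 0
```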

Give S the usual valuation topology,
and let T_{1} through T_{l} combine to build a multidimensional metric space,
with the usual product topology.
We will see that our isomorphism turns into a homeomorphism, hence the spaces are equivalent.

In fact, the map is uniformly bicontinuous.
Let x be an element of S with valuation d.
It maps into H^{d}, which maps into M_{i}^{d}.
Each component in the product space T has distance c^{d} or less, and these components are orthogonal,
so a distance in S is multiplied by at most sqrt(l) to give the corresponding distance in T.
Conversely, for x ∈ T, let d be the log of |x| base c.
Each component has valuation d or greater.
Their product lives in H^{d},
and has valuation at least d in S.
Distance is no greater when we move from T back to S.
That completes the homeomorphism.

If R is noetherian and H is any proper ideal, the completion of R through H is noetherian.

If the powers of H intersect in a nonzero ideal Z, mod out by Z. This leaves R noetherian, and does not change the completion.

Suppose there is an infinite strictly ascending chain V of ideals in the completion R′.
Move from one ideal up to the next, and the new elements have to make an appearance in R/H^{j} for some j.
In other words, V_{2} properly contains V_{1} in R/H^{j}.
Pull this back to W_{2} properly containing W_{1} in R.
Now bring in V_{3}.
Its image in R/H^{j} is at least V_{2}.
Advance j as necessary, until V_{3} is larger than V_{2} in R/H^{j}.
This lifts to W_{3} containing W_{2} containing W_{1} in R.
Continue this process forever,
building an infinite ascending chain of ideals in R.
This is a contradiction, hence R′ is noetherian.

We saw above how an ideal properly containing another ideal in R′ implies proper containment in some R/H^{j},
which pulls back to proper containment in R.
By correspondence,
the same holds for prime ideals.
A chain of prime ideals in R′ implies a chain of prime ideals in R.
The dimension of a ring is the length of the longest chain of prime ideals, minus 1.
Thus the dimension of R′ cannot exceed the dimension of R.

When R is dedekind, its dimension is 1. The dimension of R′ is at most 1. Either R′ is a field, or its nonzero prime ideals are maximal.

Let R be dedekind, and let P be a prime/maximal ideal. Let K be the residue field R/P. Let R′ be the completion of R through P. Assume P is principal, so that P is generated by t in R.

The sequences in R′ are formal power series in t with coefficients in K.
Addition and multiplication follow the usual polynomial rules,
though there may be carry operations, as with the p-adic integers.
Since K is an integral domain,
R′ is an integral domain.
Since K is a field, anything with a nonzero constant term is a unit.
Use synthetic division to find its inverse.
Bring in the reciprocals of the powers of t to build F, the fraction field of R′.
Thus F is the formal laurent series of t, with coefficients in K.
Conversely, given such a series, let the least exponent on t be its valuation.
Verify that this is a
valuation group,
hence R′ is a valuation ring.
The valuation group is **Z**, hence R′ is a dvr.
Its only nonzero prime ideal is generated by t,
giving the same residue field K.
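The synthetic division mentioned above can be made concrete. This is a sketch (series_inverse is a hypothetical helper, with K taken to be **Z**/p) that inverts a power series with nonzero constant term, one coefficient at a time, mod t^{n}.

```python
def series_inverse(a, p, n):
    """Invert the power series a[0] + a[1]t + a[2]t^2 + ... over Z/p,
    where a[0] is nonzero mod p; return the first n coefficients."""
    inv0 = pow(a[0], -1, p)
    b = [inv0]
    for k in range(1, n):
        # the coefficient of t^k in a*b must vanish; solve for b[k]
        s = sum(a[i] * b[k - i] for i in range(1, min(k, len(a) - 1) + 1))
        b.append((-inv0 * s) % p)
    return b

# (1 + t)^{-1} = 1 - t + t^2 - t^3 + ... over Z/5
assert series_inverse([1, 1], 5, 4) == [1, 4, 1, 4]
```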

We've seen this before, when R was a dvr, but the same is true when R is dedekind, and the maximal ideal in question is principal. The completion becomes a cdvr. Therefore, if R is a pid, the completion through any maximal ideal gives a cdvr.

Since the powers of P intersect in 0, R embeds in R′. Extend P by embedding P into R′. The image of t gives the sequence {0,1,0,0,0,0…}, or simply t. This generates the maximal ideal in R′. Thus the extension of P into R′, or the completion of P, sometimes denoted P′, is the maximal ideal generated by t.

Consider the completion of the ideal P^{m}.
This is generated by t^{m}, which embeds into R′ as t^{m}.
Therefore the completion of P^{m} is P′^{m}.

What about the contraction of the ideals of R′ back to R?
Let t^{m} generate an ideal in R′, and restrict to R,
whence t^{m} generates the corresponding ideal in R.
The powers of P in R and the ideals of R′ correspond one for one.

What about some other ideal H that is coprime to P? Since H is not contained in P, it joins P to span 1. Write x+y = 1 and embed this in R′. Since y is generated by t, x has to start out with 1. Thus x is a unit. The extension of H into R′, or the completion of H, is the entire ring. This confirms the fact that R′ is not integral over R. If it were, and H was a prime other than P, there would be a prime lying over H. Of course, if R is already a dvr, there may be no other primes. And if R is a cdvr, then R′ = R, which is integral over R.
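For instance, take R = **Z** localized at 5, P = (5), and H generated by 3. In the completion, the 5-adic integers, 3 becomes a unit, since it is invertible mod every power of 5; a minimal check:

```python
# 3 generates an ideal coprime to (5); its extension into the completion
# is the whole ring, because 3 is a unit mod every power of 5
for j in range(1, 8):
    m = 5 ** j
    inv = pow(3, -1, m)        # inverse of 3 mod 5^j
    assert (3 * inv) % m == 1
```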

If H is coprime to P, the composite ideal H*P^{m} extends into R′
via the extension of H times the extension of P^{m}.
This is because product and extension commute.
The result is R′*P′^{m}.
The "other" prime ideals of R don't matter.
The ideals of R′ come from, and contract back to, the powers of P.

Let S and R be dedekind, where S is a ring extension of R, and a finite R module. This means S is integral over R.

Let P be a prime ideal in R, and let U be the extension P*S in S. As you know, the splitting problem asks for the unique factorization of U in S. The primes of U all lie over P, and their ramification and residue degrees are constrained by the degree equation. This relationship does not change under localization, but does it change under completion?
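As a concrete instance of the splitting problem, take R = **Z** and S = **Z**[i], the gaussian integers; an odd prime p splits in S exactly when x^{2}+1 has a root mod p. This sketch (split_type is a hypothetical helper) classifies a few primes.

```python
def split_type(p):
    """How the rational prime p splits in the gaussian integers Z[i]."""
    if p == 2:
        return "ramified"                    # (2) = (1+i)^2, with e = 2
    roots = [x for x in range(p) if (x * x + 1) % p == 0]
    return "split" if roots else "inert"     # two primes, or one with f = 2

assert split_type(5) == "split"    # 5 = (2+i)(2-i)
assert split_type(7) == "inert"
assert split_type(13) == "split"   # 13 = (3+2i)(3-2i)
```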

The key is that product and extension commute.
The extension of P^{j} into S yields U^{j}.
For the converse,
contract U back into R and get at least P.
Anything else, and you get all of R, including 1.
Yet the extension of P is contained in a prime lying over P, which does not include 1.
Therefore U contracts back to precisely P.

Suppose U^{2} contracts to more than P^{2}.
As above, the contraction can't include any x outside of P, for then x
is a unit, or x combines with P^{2} to make a unit, whereas all of U^{2} is contained in a prime Q lying over P.
So if the contraction is not P^{2}, it is P.
This means P extends into U^{2}, not U, and U^{2} is properly smaller than U (because S is dedekind).
This contradiction tells us U^{2} contracts back to P^{2}.
Continue this all the way down the line.
The extension of P^{j} into S is U^{j}, and the contraction of U^{j} into R is P^{j}.

The cosets of P^{j+1} in P^{j}
lift uniquely into the cosets of U^{j+1} in U^{j}.
Select cosreps for the cosets of P^{j+1} in P^{j} first,
then select more cosreps, as needed, for the cosets of U^{j+1} in U^{j}.
Now the completion of R through P, which I will call R′,
is a subset of the completion of S through U, which I will call S′.

Consider the sum or product of two sequences in S′ that happen to live in R′.
At the j^{th} level,
the image is something in S/U^{j},
which is built using elements of R,
hence it is something in R/P^{j}.
This in turn represents something in S/U^{j}.
The operation is exactly the same when restricted to R′.
In other words, R′ is a subring of S′.

Let g_{1} through g_{n} generate S as an R module.
The powers of any ideal in a dedekind domain intersect in 0,
hence R embeds in R′, and S embeds in S′.
Each g_{i} embeds in S, building a sequence of cosets of U^{j+1} in U^{j},
or if you prefer, a consistent sequence of values in S/U^{j}.

For any j, g_{1} through g_{n} generate S/U^{j}, as an R/P^{j} module.
The generators span all of S, so just mod out by U^{j}, and you're there.

Let x be an element of S′,
producing a consistent sequence of values in S/U^{j}.
The first image of x, x/U if you will, is generated by the images of g_{1} through g_{n} in S/U.
Now step back to the image of x in S/U^{2}.
This is a linear combination of the images of g_{1} through g_{n} in S/U^{2}, with coefficients in R/P^{2}.
Furthermore, if we mod out by U, the equation in S/U and R/P reappears.
The coefficients on g_{1} through g_{n}, drawn from R/P^{2},
are consistent with the coefficients drawn from R/P in the previous step.
This continues all the way down the line,
until x itself is spanned by g_{1} through g_{n} with coefficients in R′.
Thus S′ is a finitely generated R′ module.

This proof breaks down if S/R is infinitely generated; the generators of S/R need not generate S′ over R′. x may require all of the generators, infinitely many, for its span.

As a corollary, S′ is integral over R′. It is reasonable to talk about primes over primes etc.

Don't assume S′ is dedekind; it need not even be an integral domain.
Apply the
chinese remainder theorem
as presented earlier,
and S′ becomes the direct product of several completions, one for each prime Q_{i} in the factorization of U.
Each prime lying over P creates its own completion,
and these rings, taken together, form S′.

If Q is a prime lying over P, such that Q^{e} is a factor of U,
it is sufficient to characterize the completion of S through Q^{e},
as a ring extension of the completion of R through P.
Such a ring is equivalent
to the completion of S through Q.

A prime ideal in the direct product S′ is prime in exactly one of its components. Thus it is sufficient to characterize the prime ideals in the completion of S through Q.

To make further progress we need a pid.
Since localization does not change the splitting problem,
localize about P, so that R is a dvr, and S is a pid.
Once this is done,
we can apply the results of the previous section.
The only prime in R′ is P′,
and the only prime in the i^{th} summand of S′ is
Q_{i}′, the extension of Q_{i} into the completion of S through Q_{i}.

Let Q be any of the primes over P, so I don't have to use subscripts all the time. Of course Q lies over P, and as shown above, R′ is a subring of S′ when completing through P and Q respectively. The residue fields, upstairs and downstairs, have not changed, thus the residue degree is preserved.

The extension of U into S′ can be calculated factor by factor,
then multiplied together.
We are extending each factor into a direct product of rings,
and the result is the direct product of the individual extensions.
When Q^{e} is pushed into a completion through some other prime, other than Q,
the result is the entire ring.
When Q^{e} is pushed into the completion of S through Q,
the result is Q′^{e}.
This was discussed in the previous section.
The ramification degree e is preserved.

The product of these powers of prime ideals is the direct product of the extension of U into each summand, which is the extension of U into S′. This in turn is the extension of P into S, into S′, which includes P*R′, hence it is the extension of P′ into S′. Thus P′*S′ splits in S′ just as P*S splits in S.

In summary, one can localize, and then complete, and the splitting problem has not changed. The primes over P, the ramification degree, and the residue degree, are preserved.

When R is a pid, S becomes a free R module. Let R be a dvr, which is also a pid. Let S have rank n as a free R module. We will show that S′ is a free R′ module of rank n.

Remember that S′ is a direct product of rings, and each ring is an R′ module. If each summand is free, then S′ is free.

Concentrate on a particular prime Q lying over P.
Let Q^{e} be a factor of U.
Let W be the completion of S through Q, hence W is a summand of S′.
The extension of P, or P′, into W, is Q′^{e}.

Let t generate P in R.
Move to R′,
the formal power series in t with coefficients in K.
Once again t generates P′ in R′.
Let z generate Q,
and note that z also generates Q′ in W.
Therefore z^{e} = t.
(Adjust t by a unit if necessary, to make this work.)

Both R′ and W are valuation rings, and the valuation of the former extends naturally up to the valuation of the latter. If the valuation of t is 1, set the valuation of z to 1/e.

To show W is free, build a basis as follows.
Start with the powers of z, from 1 to z^{e-1}.
Then cross this with a basis for the finite field extension of S/Q over R/P.
Does this span all of W?
Start with a series in W, and separate it into e subseries,
having exponents 0 mod e, 1 mod e, 2 mod e, and so on up to e-1 mod e.
Focus on one of these subseries.
Each coefficient is in S/Q; write it as a linear combination of basis elements with coefficients in K.
Separate the series again, by the basis of the field extension.
The result is a series in R′.
Thus an arbitrary series in W is spanned by our basis.
Furthermore, the representation is unique,
hence W is a free R′ module.
Put the summands together, and S′ is free over R′.

What about the rank? Within S, the sum of residue degree times ramification degree is n, which is the rank of S as an R module. The splitting problem does not change with completion. Residue degree times ramification degree becomes the rank of W over R′, and when these are added up, the rank of S′ is n. All is well.

If you enjoy tensor products, the tensor product equals the completion.
Assume rings have already been localized,
so that R is a dvr and S is a pid.
Let g_{1} through g_{n} be a basis for S as an R module.
Let T be the tensor product R′×S as R modules.
Multiplication in S′ is a bilinear map on R′ cross S,
hence T maps into S′.
Since g_{1} through g_{n} span S′, T maps onto S′.
If the map from T onto S′ is injective, we have an isomorphism,
and the completion equals the tensor product.

Suppose the map has a nontrivial kernel C, and write the following short exact sequence.

0 → C → T → S′ → 0

Remember that S′ is a free R′ module of rank n. Since S is a free R module of rank n, tensor with R′, and T is a free R′ module of rank n. Thus one module of rank n maps onto another.

Divide R′ by its maximal ideal P′ to get an R′ module better known as K. Tensor the above sequence with K and get this.

? → C×K → K^{n} → K^{n} → 0

These are all K vector spaces, hence C×K = 0. Apply the quotient formula, and P′*C = C.

Since R is noetherian, R′ is noetherian. Since T is a finitely generated R′ module, it too is noetherian. The submodule C is finitely generated. Apply nakayama's lemma, and C = 0. The map is injective, and that completes the proof. The completion of S equals S tensored with the completion of R.

Thanks to the above isomorphism, a generator x×y in S tensor R′, with x in S and y in R′, maps to the product x*y in S′. Restrict x to an ideal H in S. The tensor product becomes everything spanned by H times R′. Since H*S lies in H, this is everything spanned by H times S′, or the extension of H into S′. Therefore the completion of H is H tensored with the completion of R.

As a special case, set S = R. The completion of an ideal H in R is now H tensored with the completion of R.

Let R be a cdvr with fraction field F. Let E be a finite extension of F, of dimension n, with S the integral closure of R in E. Since R is a pid, S is free of rank n, and dedekind.

Let P be the prime ideal of R, and let U be the extension P*S in S. Now U is a product of primes lying over P.

The completion of S through U is the same as S tensored with the completion of R. But R is already complete, so we're talking about S×R, which is S. Therefore S is already complete.

If there are several primes lying over P, S is a direct product of rings. Yet S is an integral domain. Therefore there is but one prime Q lying over P. This makes S a cdvr.

Write P*S = Q^{d}, where d is the ramification degree.
The residue degree is the dimension of S/Q over R/P.
The product of residue degree and ramification degree is n.

As described in the previous section, one can extend the valuation from R up to S.

That's one small step - let's take a giant leap.
Assume there is a countable set of algebraic elements that we want to adjoin to R.
For instance, R could be the p-adic integers,
and we want to bring in all the algebraic elements over **Z**.
Let c be an algebraic element that is new to the ring.
Extend the fraction field by c, and take the ring of integers,
which includes c.
The result is another cdvr whose valuation is consistent with the prior ring.
Repeat this process, and let S be the union over all these rings.

If c is an element of S, it is assigned a valuation when it is first brought in, and its valuation remains the same in all subsequent rings, all the way up to S. Thus valuation is a well defined function from S into the rationals.

Given x and y in S,
find the first ring that includes both x and y.
This is a valuation ring,
hence v(xy) = v(x)+v(y), and v(x+y) is at least the lesser of v(x) and v(y).
The result is a valuation group on S,
and this makes S a valuation ring.
But it's not a dvr.
If p has valuation 1 in the p-adic integers,
then the j^{th} root of p has valuation 1/j.
These valuations approach 0.
In fact the valuation group is the rationals.

For each j, the elements satisfying v(x) > 1/j form an ideal; these ideals strictly ascend as j increases, hence S is not noetherian.

Let S and R be dvrs, with S finitely generated over R, and M the maximal ideal of S.
Let S_{1} be a ring between R and S, such that S_{1} has all the cosets of M in S.
Also let S_{1} contain w, a uniformizer of S.
In this case S_{1} = S.

Since S is a finite R module it is a finite S_{1} module,
giving an integral extension.
The maximal ideal M restricts to a maximal ideal M_{1} in S_{1}.
Any other M_{1} would lift to a different M in S, hence S_{1} is a local ring.
Units in S_{1} come from units in S,
and w is not a unit in S, hence w lies in M_{1}.
Since w generates M, M_{1}*S = M.
Think of M_{1} as an inert prime into S.

Consider the inclusion of S_{1} into S, as S_{1} modules, and let Q be the quotient module.

0 → S_{1} → S → Q → 0

Tensor this with the residue field S_{1}/M_{1}.
The middle term becomes S/(M_{1}*S), which is the same as S/M.
The first residue field maps into the second.
Since S_{1} represents all the cosets of M in S, this map is onto.
That means Q tensor S_{1}/M_{1} = 0, over a local ring.
Apply the quotient formula, and nakayama's lemma,
and Q = 0.
The embedding of S_{1} into S has no quotient,
hence S_{1} = S.

Let S and R be as above,
with residue fields K_{S} and K_{R}.
Assume the former field is a separable extension of the latter.
We know the residual degree divides n, so the extension is finite.

By the primitive element theorem, K_{S} is K_{R} adjoin some element u,
with a minimum polynomial p(u) over K_{R}.
For a trivial residue extension, set u = 1 and p = x-1.
Lift u in K_{S} to some element v in S, so that v/M = u.

Let S_{1} be the sub R algebra R[v] contained in S.
Now S_{1}/M contains K_{R}, and v, hence it contains all the cosets of M in S.
Find an element w with valuation 1, and we may use the above to assert S_{1} = S.

First try w = p(v), which reduces to p(u) = 0 in K_{S},
and hence lies in the maximal ideal M.
Note that p, lifted to R[x], is any polynomial that reduces, mod M,
to the minimum polynomial p(u) over K_{R}.
If w has valuation 1 in S we are done,
so let w have valuation > 1.

Since v is outside of M, it is a unit in S.
If t generates M,
set v = v+t, which doesn't change u = v/M.
What happens to w = p(v)?
By taylor expansion,
we get p(v) + t*p′(v) +
t^{2} times some stuff.
The first and third terms have valuations > 1.
Suppose p′(u) = 0.
Then u is a repeated root of p,
which is impossible, since the residue extension is separable
and p is the minimum polynomial of u.
Thus p′(v) mod M is nonzero in K_{S}.
In other words, p′(v) is a unit.
The middle term has valuation 1, and w has valuation 1.
That completes the proof.

In summary, S is a simple ring extension of R, using the adjoined element v.

Let R be a cdvr with fraction field F. Let E/F be a finite field extension, and let S be the integral closure of R in E. As described in the previous section, S is a cdvr. Assume the residual field extension is separable. Thus S = R[v] for some v. Let p be the minimum polynomial of v over F. Since R is integrally closed, p is also the monic polynomial that proves v is integral over R. The powers of v span S, as an R module, and E, as an F vector space. In other words, E = F(v).

If S/R is totally ramified, the residual degree is 1, so the residual extension is trivial, and separable, hence S/R is a simple ring extension. Start by setting v = 1 and p(x) = x-1. This gives p(v) a valuation of infinity, so let v = 1+t, where t is a uniformizer of S. Now w = p(v) = t is a uniformizer of S, hence 1+t generates the extension. In other words, S = R adjoin 1+t.

Let S/R be a simple extension of dedekind domains, with R a pid. The previous section presents some situations where S/R is known to be a simple ring extension, so this is not unusual. Let v be the adjoined element, so that R[v] = S. Map the polynomial ring R[x] onto S by mapping x to v. We are interested in the kernel W of this map.

If R = S, W is trivial, so assume R is a proper subring of S.

Now v is integral over R, satisfying a monic polynomial p(x) with coefficients in R; since R is integrally closed, p is also the minimum polynomial of v over the fraction field of R. Naturally p lies in W.

Since S is an integral domain, W is prime. W contains the prime 0, and thus has height at least one. It lives within R[x], which has dimension 2. The dimension of the range S is 1. Therefore W has height 1. There are no prime ideals between 0 and W.

With R[x] noetherian,
select a finite set of generators g_{1} through g_{m} for W.
If g_{i} is the product of two polynomials,
at least one of those polynomials lies in W, since W is prime.
Therefore we may assume each g_{i} is irreducible.

With R[x] a ufd, prime and irreducible are synonymous.
Each g_{i} is prime.
In particular, g_{1} is prime, and generates a prime ideal inside W.
This has to be W, hence W is principal.

Since p lies in W, p = g_{1} times some other polynomial h.
Yet p is minimal, and irreducible over F, and over R,
hence g_{1} = p (or p times a unit in R).
The kernel W is generated by the minimum polynomial p(x).
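For example, with R = **Z** and S = **Z**[i], v = i has minimum polynomial x^{2}+1, and the kernel of **Z**[x] onto **Z**[i] is generated by it. This sketch (to_gauss is a hypothetical helper) computes the image of a polynomial by substituting x = i, and confirms that multiples of x^{2}+1 map to 0.

```python
def to_gauss(coeffs):
    """Image of sum(coeffs[k] * x^k) under Z[x] -> Z[i], x -> i.
    Returns (a, b) representing a + b*i, using i^2 = -1."""
    a = b = 0
    for k, c in enumerate(coeffs):
        sign = -1 if (k // 2) % 2 else 1   # i^k cycles through 1, i, -1, -i
        if k % 2 == 0:
            a += sign * c
        else:
            b += sign * c
    return (a, b)

# x^2 + 1 lies in the kernel
assert to_gauss([1, 0, 1]) == (0, 0)
# and so does any multiple, e.g. (x^2 + 1)(x + 3) = x^3 + 3x^2 + x + 3
assert to_gauss([3, 1, 3, 1]) == (0, 0)
```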