The Spinor Norm and Zassenhaus's Theorem

by Darren L. Slider

A Research Paper Submitted in Partial Fulfillment of the Requirements for the Master of Science Degree

Department of Mathematics
Graduate School
Southern Illinois University at Carbondale

© 1991



ACKNOWLEDGEMENTS

I would like to express my deepest gratitude to my advisor, Professor Andrew G. Earnest, for his abundant help and guidance with this paper, and for making my two years of graduate school a stimulating and enjoyable experience. I would also like to thank the other members of my committee, Professors Robert Fitzgerald and Mary Wright, for their time and effort with respect to this paper and its presentation. Finally, I express my wholehearted indebtedness and praise to God and to my wife, for reasons which transcend verbal explanation.


THE SPINOR NORM AND ZASSENHAUS’S THEOREM

The proof of Theorem 1 given here is due to Geoffrey Mason; it is sketched in his article "Groups, Discriminants and the Spinor Norm" (see Bibliography). The examples following Zassenhaus's Theorem were taken from L. E. Dickson, Studies in the Theory of Numbers (see Bibliography).

We assume throughout that V is a vector space over a field F of characteristic not 2 (i.e., there exists a ∈ F such that 2a ≠ 0) and that dim(V) = n (where n ∈ ℕ is fixed).

Definition. A mapping B : V × V → F is a symmetric bilinear form on V if for all v, w, x ∈ V, a ∈ F, the following properties hold:

(1) B(v,w+x) = B(v,w) + B(v,x)
(2) B(av,w) = aB(v,w)
(3) B(v,w) = B(w,v).

Definition. Let B be a symmetric bilinear form on V. Let Q : V → F be given by Q(v) = B(v,v). Q is a quadratic form on V.

Definition. Let V have basis {v1,...,vn} and symmetric bilinear form B. The determinant det(B(vi,vj)) of the n × n matrix (B(vi,vj)) is called the discriminant of V and is written d(V). d(V) is invariant under change of basis up to nonzero square factors and is thus regarded as an element of the group F*/F*² (see O'Meara, pp. 85-87).
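
To make these definitions concrete, here is a minimal computational sketch (not part of the original paper): with a basis fixed, B is determined by its Gram matrix (B(vi,vj)), Q(v) = B(v,v), and d(V) is the determinant of that matrix read modulo nonzero squares; over Q this amounts to taking the squarefree part. The example form and all names (gram, square_class, and so on) are hypothetical.

```python
# Illustrative sketch only; assumes a fixed basis and exact rational arithmetic.
from fractions import Fraction
from sympy import Matrix, factorint

def bilinear(gram, v, w):
    """B(v, w) = v^T (Gram matrix) w for coordinate vectors v and w."""
    return (Matrix([v]) * gram * Matrix(w))[0, 0]

def quadratic(gram, v):
    """Q(v) = B(v, v)."""
    return bilinear(gram, v, v)

def square_class(q):
    """The squarefree part of a nonzero rational: its class in Q*/Q*^2."""
    q = Fraction(q)
    n = q.numerator * q.denominator        # same square class as q
    cls = -1 if n < 0 else 1
    for p, e in factorint(abs(n)).items():
        if e % 2:
            cls *= p
    return cls

# Example: the diagonal form x^2 + 3y^2 on Q^2.
gram = Matrix([[1, 0], [0, 3]])
print(quadratic(gram, [1, 1]))             # Q(v1 + v2) = 4
print(square_class(int(gram.det())))       # d(V) = 3, already squarefree
```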

Let n, t ∈ ℕ satisfy 1 ≤ n ≤ t. We introduce variables aij (1 ≤ i, j ≤ t) and the t × t matrices

$$D = (a_{ij}), \qquad
J_n = \begin{pmatrix} I_n & 0 \\ 0 & 0 \end{pmatrix}, \qquad
N = \begin{pmatrix}
0 & a_{12} & \cdots & a_{1t} \\
  & 0      & \ddots & \vdots \\
  &        & \ddots & a_{t-1,t} \\
  &        &        & 0
\end{pmatrix},$$

$$A_i = \begin{pmatrix}
0      & \cdots & 0 \\
\vdots &        & \vdots \\
a_{i1} & \cdots & a_{it} \\
\vdots &        & \vdots \\
0      & \cdots & 0
\end{pmatrix}
\quad\text{(the only nonzero row of } A_i \text{ is row } i\text{)},$$

$$E_n = (I + A_1)(I + A_2)\cdots(I + A_n).$$

Lemma 1. $J_{n-1}D + A_n = J_nD$ and $(J_{n-1}D - N)A_n = 0$.

Proof. Note that

$$J_kD = \begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix}(a_{ij}) =
\begin{pmatrix}
a_{11} & \cdots & a_{1t} \\
\vdots &        & \vdots \\
a_{k1} & \cdots & a_{kt} \\
0      & \cdots & 0      \\
\vdots &        & \vdots \\
0      & \cdots & 0
\end{pmatrix}.$$

It follows that

$$J_{n-1}D + A_n =
\begin{pmatrix}
a_{11}    & \cdots & a_{1t} \\
\vdots    &        & \vdots \\
a_{n-1,1} & \cdots & a_{n-1,t} \\
0         & \cdots & 0 \\
\vdots    &        & \vdots \\
0         & \cdots & 0
\end{pmatrix}
+
\begin{pmatrix}
0      & \cdots & 0 \\
\vdots &        & \vdots \\
a_{n1} & \cdots & a_{nt} \\
0      & \cdots & 0 \\
\vdots &        & \vdots \\
0      & \cdots & 0
\end{pmatrix}
=
\begin{pmatrix}
a_{11} & \cdots & a_{1t} \\
\vdots &        & \vdots \\
a_{n1} & \cdots & a_{nt} \\
0      & \cdots & 0 \\
\vdots &        & \vdots \\
0      & \cdots & 0
\end{pmatrix}
= J_nD.$$

Furthermore, the nth column of $J_{n-1}D$ is $(a_{1n},\ldots,a_{n-1,n},0,\ldots,0)^T$, which is exactly the nth column of N (since N is strictly upper triangular with entries $a_{ij}$ for $i < j$); hence the nth column of $J_{n-1}D - N$ is zero. Since the only nonzero row of $A_n$ is row n, every entry of $(J_{n-1}D - N)A_n$ is an entry from the nth column of $J_{n-1}D - N$ times an entry of row n of $A_n$, and therefore

$$(J_{n-1}D - N)A_n = 0.$$

Q.E.D.

Proposition 1. $(I - N)(E_n - I) = J_nD$.

Proof. By induction on n.

Note that E1 - I = I + A1 - I = A1. So

$$(I - N)(E_1 - I) = (I - N)A_1 =
\begin{pmatrix}
1 & -a_{12} & \cdots & -a_{1t} \\
  & \ddots  & \ddots & \vdots \\
  &         & \ddots & -a_{t-1,t} \\
0 &         &        & 1
\end{pmatrix}
\begin{pmatrix}
a_{11} & \cdots & a_{1t} \\
0      & \cdots & 0 \\
\vdots &        & \vdots \\
0      & \cdots & 0
\end{pmatrix}
=
\begin{pmatrix}
a_{11} & \cdots & a_{1t} \\
0      & \cdots & 0 \\
\vdots &        & \vdots \\
0      & \cdots & 0
\end{pmatrix}
= J_1D$$

by the first result of the proof of Lemma 1.

Assume that $(I - N)(E_{k-1} - I) = J_{k-1}D$ for some k with 2 ≤ k ≤ t. Then

$$\begin{aligned}
(I - N)(E_k - I) &= (I - N)(E_{k-1}(I + A_k) - I) \\
&= (I - N)(E_{k-1} + E_{k-1}A_k - I) \\
&= (I - N)(E_{k-1} - I) + (I - N)(E_{k-1}A_k) \\
&= J_{k-1}D + (I - N)(E_{k-1}A_k - A_k + A_k) \\
&= J_{k-1}D + (I - N)(E_{k-1} - I)A_k + (I - N)A_k \\
&= J_{k-1}D + J_{k-1}DA_k + A_k - NA_k \\
&= (J_{k-1}D + A_k) + (J_{k-1}D - N)A_k \\
&= J_kD + 0 \\
&= J_kD
\end{aligned}$$

by Lemma 1.

Q.E.D.
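
For readers who want to experiment, the following is a small symbolic check of Proposition 1 (not part of the original paper) in the particular case t = 3, n = 2, with the aij left as indeterminates; the setup and all names are hypothetical.

```python
# Symbolic verification of (I - N)(E_2 - I) = J_2 D for t = 3, using sympy.
from sympy import Matrix, symbols, eye, zeros

t = 3
a = [[symbols(f'a{i}{j}') for j in range(1, t + 1)] for i in range(1, t + 1)]
D = Matrix(a)                                           # D = (a_ij)
N = Matrix(t, t, lambda i, j: a[i][j] if j > i else 0)  # strictly upper triangular

def J(k):
    """J_k: identity in the first k diagonal entries, zero elsewhere."""
    M = zeros(t, t)
    for i in range(k):
        M[i, i] = 1
    return M

def A(i):
    """A_i: all rows zero except row i, which is (a_i1, ..., a_it)."""
    M = zeros(t, t)
    for j in range(t):
        M[i - 1, j] = a[i - 1][j]
    return M

E2 = (eye(t) + A(1)) * (eye(t) + A(2))
lhs = (eye(t) - N) * (E2 - eye(t))
print((lhs - J(2) * D).expand() == zeros(t, t))         # True
```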

From now on, let dim V = n = t, replace aij by -2aij/aii, and set

$$T(v_i) = I + A_i =
\begin{pmatrix}
1 &        &   &    &   &        &   \\
  & \ddots &   &    &   &        &   \\
  &        & 1 &    &   &        &   \\
-\frac{2a_{i1}}{a_{ii}} & \cdots & -\frac{2a_{i,i-1}}{a_{ii}} & -1 & -\frac{2a_{i,i+1}}{a_{ii}} & \cdots & -\frac{2a_{in}}{a_{ii}} \\
  &        &   &    & 1 &        &   \\
  &        &   &    &   & \ddots &   \\
  &        &   &    &   &        & 1
\end{pmatrix}.$$

Lemma 2. $\det(T(v_1)T(v_2)\cdots T(v_n) - I) = (-2)^n\det(a_{ij})/(a_{11}\cdots a_{nn})$.

Proof. Since n = t, we have JnD = JtD = ID = D. We may obtain D = (-2aij/aii) from (aij) by multiplying each row by -2 and (for 1 ≤ i ≤ n) the ith row by 1/aii; elementary linear algebra (see Anton, p. 65) tells us that this yields det(D) = (-2)^n det(aij)/(a11···ann). Also from elementary linear algebra, we get det(I - N) = 1, since I - N is upper triangular with 1's on the diagonal (see the proof of Proposition 1 and Anton, p. 64). Taking determinants on both sides of the result of Proposition 1 now yields

$$\begin{aligned}
\det(T(v_1)\cdots T(v_n) - I) &= \det(((I + A_1)\cdots(I + A_n)) - I) \\
&= \det(E_n - I) \\
&= \det(I - N)\det(E_n - I) \\
&= \det((I - N)(E_n - I)) \\
&= \det(J_nD) \\
&= \det(D) \\
&= (-2)^n\det(a_{ij})/(a_{11}\cdots a_{nn}).
\end{aligned}$$

Q.E.D.
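
As a sanity check, the determinant identity of Lemma 2 is easy to test numerically; the following sketch (not part of the original paper, all names hypothetical) builds the matrices T(vi) from a random integer matrix with nonzero diagonal and compares both sides in exact arithmetic.

```python
# Numerical check of Lemma 2 for a random 4 x 4 matrix (a_ij), using sympy.
import random
from sympy import Matrix, Rational, eye

n = 4
A = Matrix(n, n, lambda i, j: Rational(random.randint(-5, 5)))
for i in range(n):
    if A[i, i] == 0:
        A[i, i] = Rational(1)          # Lemma 2 needs a nonzero diagonal

def T(i):
    """The matrix I + A_i: the identity except row i, which is -2*a_ij/a_ii."""
    M = eye(n)
    for j in range(n):
        M[i, j] = -2 * A[i, j] / A[i, i]
    M[i, i] = -1                       # 1 + (-2*a_ii/a_ii) = -1
    return M

P = eye(n)
for i in range(n):
    P = P * T(i)

diag = Rational(1)
for i in range(n):
    diag *= A[i, i]

lhs = (P - eye(n)).det()
rhs = Rational(-2) ** n * A.det() / diag
print(lhs == rhs)                      # True
```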

Definitions. Let B be a symmetric bilinear form on V, and let W be a subspace of V. We define the orthogonal complement W* of W to be {v ∈ V : B(v,w) = 0 for all w ∈ W}. We call rad(W) = W ∩ W* the radical of W. It is easy to prove that W* and rad(W) are subspaces of V. We say that W is nondegenerate if rad(W) = {0}. In this case, V = W + W* (O'Meara, p. 102).
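
Computationally, once a Gram matrix is fixed, W* is just a kernel; the short sketch below (not part of the original paper, with a hypothetical example) finds the orthogonal complement of a one-dimensional subspace.

```python
# W* is the set of v with B(w, v) = 0 for every w in a basis of W, i.e. the
# kernel of the matrix whose rows are (basis vectors of W) times the Gram matrix.
from sympy import Matrix

G = Matrix([[2, 1, 1], [1, 3, 1], [1, 1, 3]])   # a sample Gram matrix
W_basis = Matrix([[1, -1, -1]])                 # W spanned by v1 - v2 - v3
perp = (W_basis * G).nullspace()
print([v.T for v in perp])                      # two vectors spanning W*
```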

We assume henceforth that V is nondegenerate and has a symmetric bilinear form B with associated quadratic form Q.

Let M : V → V be an injective linear transformation such that for all v, w ∈ V, B(M(v),M(w)) = B(v,w). M is called an isometry of V onto itself. For w ∈ V with Q(w) ≠ 0, define T(w) : V → V by T(w)(v) = v - (2B(v,w)/Q(w))w for v ∈ V. T(w) is the symmetry on V with respect to w.
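
In coordinates, T(w) is easy to write down from a Gram matrix; the following sketch (not part of the original paper; the Gram matrix and all names are merely illustrative) builds the matrix of T(w) and checks that it preserves B and squares to the identity.

```python
# The symmetry T(w): v |-> v - 2 B(v, w)/Q(w) * w, as a matrix in a fixed basis.
from sympy import Matrix, eye

def symmetry(G, w):
    """Matrix of T(w) for the bilinear form with Gram matrix G; needs Q(w) != 0."""
    w = Matrix(w)
    Qw = (w.T * G * w)[0, 0]
    assert Qw != 0
    cols = []
    for j in range(G.rows):
        e = Matrix([1 if i == j else 0 for i in range(G.rows)])
        cols.append(e - 2 * (e.T * G * w)[0, 0] / Qw * w)
    return Matrix.hstack(*cols)

G = Matrix([[2, 1, 1], [1, 3, 1], [1, 1, 3]])
S = symmetry(G, [1, 0, 0])
print(S.T * G * S == G)     # True: T(w) is an isometry
print(S * S == eye(3))      # True: T(w) is an involution
```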

Let {v1,...,vn} be a basis for V. Then if v ∈ V, v = a1v1 + ··· + anvn for some a1,...,an ∈ F. Note that, under the interpretation aij = B(vi,vj),

$$\begin{pmatrix}
1 &        &   &    &   &        &   \\
  & \ddots &   &    &   &        &   \\
  &        & 1 &    &   &        &   \\
-\frac{2a_{i1}}{a_{ii}} & \cdots & -\frac{2a_{i,i-1}}{a_{ii}} & -1 & -\frac{2a_{i,i+1}}{a_{ii}} & \cdots & -\frac{2a_{in}}{a_{ii}} \\
  &        &   &    & 1 &        &   \\
  &        &   &    &   & \ddots &   \\
  &        &   &    &   &        & 1
\end{pmatrix}
\begin{pmatrix} a_1 \\ \vdots \\ a_{i-1} \\ a_i \\ a_{i+1} \\ \vdots \\ a_n \end{pmatrix}
=
\begin{pmatrix} a_1 \\ \vdots \\ a_{i-1} \\ a_i - \frac{2B(v,v_i)}{Q(v_i)} \\ a_{i+1} \\ \vdots \\ a_n \end{pmatrix}.$$

Indeed, in the ith row we have

$$\begin{aligned}
\frac{-2a_1a_{i1}}{a_{ii}} + \cdots + \frac{-2a_{i-1}a_{i,i-1}}{a_{ii}} + (-a_i) + \frac{-2a_{i+1}a_{i,i+1}}{a_{ii}} + \cdots + \frac{-2a_na_{in}}{a_{ii}}
&= a_i + \frac{-2(a_1a_{i1} + \cdots + a_na_{in})}{a_{ii}} \\
&= a_i + \frac{-2(a_1B(v_i,v_1) + \cdots + a_nB(v_i,v_n))}{Q(v_i)} \\
&= a_i + \frac{-2(B(v_i,a_1v_1) + \cdots + B(v_i,a_nv_n))}{Q(v_i)} \\
&= a_i - \frac{2B(v_i,a_1v_1 + \cdots + a_nv_n)}{Q(v_i)} \\
&= a_i - \frac{2B(v_i,v)}{Q(v_i)} \\
&= a_i - \frac{2B(v,v_i)}{Q(v_i)},
\end{aligned}$$

and from this we can see that the choice of terminology for the symmetry on V with respect to vi was no accident: the latter, as we have just shown, has the matrix representation I + Ai shown above.

Let M be an isometry of V onto itself. By the theorem of Cartan and Dieudonné (O'Meara, p. 102) we know that M has a representation as a product of symmetries,

$$M = T(w_1)\cdots T(w_k) \tag{1}$$

with k ≤ n and Q(wi) ≠ 0 for i = 1,...,k.

Proposition 2. Let M be as in (1) with k minimal. Let W = ⟨w1,...,wk⟩. Then M has no eigenvalue equal to 1 if and only if V = W.

Proof. (⇒) For i = 1,...,k, let wi* = {v ∈ V : B(v,wi) = 0}. Since W is spanned by {w1,...,wk}, for w ∈ W we may find a1,...,ak ∈ F such that w = a1w1 + ··· + akwk. Assume that w′ ∈ ∩wi*; then for w ∈ W we have

B(w′,w) = B(w′,a1w1 + ··· + akwk)
  = B(w′,a1w1) + ··· + B(w′,akwk)
  = a1B(w′,w1) + ··· + akB(w′,wk)
  = a1(0) + ··· + ak(0)
  = 0

so that w′ ∈ W*. Therefore, ∩wi* ⊆ W*. On the other hand, if w ∈ W*, then B(w,wi) = 0 for i = 1,...,k since wi ∈ W; thus W* ⊆ ∩wi*. So we have W* = ∩wi*.

Let w ∈ W* = ∩wi*; then for i = 1,...,k, B(w,wi) = 0 and

$$T(w_i)(w) = w - \frac{2B(w,w_i)}{Q(w_i)}w_i = w,$$

and thus M(w) = T(w1)···T(wk)(w) = w. Therefore (M - I)(w) = 0 and if w ≠ 0, w is an eigenvector corresponding to the eigenvalue 1 of M. But by hypothesis, M has no eigenvalue equal to 1, so we must have w = 0. It follows that W* = {0}. In particular, rad(W) = W ∩ W* = {0} so that W is nondegenerate. We conclude that V = W + W* = W + {0} = W.

(⇐) Assume now that V = W. Then V = ⟨w1,...,wk⟩; since k ≤ n and the wi span the n-dimensional space V, we must have k = n, and {w1,...,wn} is a basis for V. Under the interpretation vi = wi, aij = B(wi,wj), Lemma 2 yields

$$\det(M - I) = (-2)^n\det(B(w_i,w_j))/(Q(w_1)\cdots Q(w_n)). \tag{2}$$

Since V is nondegenerate, d(V) = det(B(wi,wj)) ≠ 0, so det(M - I) ≠ 0. We conclude that M has no eigenvalue equal to 1.

Q.E.D.

Definition. Let M be as above, and let U be a subspace of V. We say that M acts as a unipotent operator on U if for each u ∈ U, there exists a t ∈ ℕ such that (M - I)^t(u) = 0.

There is a largest subspace of V on which M acts as a unipotent operator, and it contains all other such subspaces. Indeed, let U be a subspace on which M acts as a unipotent operator and let u′ be a vector not in U such that (M - I)^m(u′) = 0 for some m ∈ ℕ. Let u ∈ U be arbitrary; then for some n ∈ ℕ, (M - I)^n(u) = 0. Let t = max{m,n}; then for a ∈ F,

(M - I)^t(u + au′) = (M - I)^t(u) + (M - I)^t(au′)
  = (M - I)^{t-n}((M - I)^n(u)) + a(M - I)^{t-m}((M - I)^m(u′))
  = (M - I)^{t-n}(0) + a(M - I)^{t-m}(0)
  = 0,

so that M acts as a unipotent operator on U + ⟨u′⟩. Continuing in this way, we can (since V is finite-dimensional) obtain a subspace upon which M acts as a unipotent operator which contains all other such subspaces.
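
Concretely, for a specific isometry this largest subspace is the generalized eigenspace of M for the eigenvalue 1, that is, ker (M - I)^n. The sketch below (not part of the original paper) computes it for a hypothetical example: a 90-degree rotation of the (v1,v2)-plane with respect to the standard dot product, fixing v3.

```python
# U = ker (M - I)^n: the generalized eigenspace of M for the eigenvalue 1.
from sympy import Matrix, eye

M = Matrix([[0, -1, 0],
            [1,  0, 0],
            [0,  0, 1]])                    # rotate the (v1, v2)-plane, fix v3
n = M.rows
U_basis = ((M - eye(n)) ** n).nullspace()
print([u.T for u in U_basis])               # one vector, spanning the line <v3>
```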

Lemma 3. Let U be the largest subspace of V on which M acts as a unipotent operator, and let W be as in Proposition 2. Then V = U + W.

Proof. Let v ΠV. Then

M(v) = T(w1)×××T(wk)(v)
= T(w1)×××T(wk-1)(v -
2B(v,wk)

Q(wk)
wk)
= T(w1)×××T(wk-1)(v - wk’) for wkÎW
= T(w1)×××T(wk-2)(v -
wk’ -2B(v-wk’,wk-1)

Q(wk-1)
wk-1)
= T(w1)×××T(wk-2)(v - wk-1’) for wk-1ÎW
= ×××××××××××××××
= ×××××××××××××××
= v - w1’ for w1ÎW

so that (M - I)(v) = v - w1’ - v = -w1Î W. It follows that im(M - I) Í W.

Now consider the operators M - I, (M - I)², (M - I)³, . . . on V. Note that ker(M - I) ⊆ ker(M - I)² ⊆ . . . are subspaces of V. Since V is finite-dimensional, there must exist k ∈ ℕ such that ker(M - I)^k = ker(M - I)^{k+j} for all j ∈ ℕ. Let V0 = ker(M - I)^k and let V1 = im(M - I)^k.

Let x ∈ V0 ∩ V1. Since x ∈ V1, there must exist some y ∈ V such that x = (M - I)^k(y). Since x ∈ V0, we must also have (M - I)^k(x) = 0. Thus 0 = (M - I)^k(x) = (M - I)^k((M - I)^k(y)) = (M - I)^{2k}(y). But ker(M - I)^{2k} = ker(M - I)^k by the choice of k. It follows that y ∈ ker(M - I)^k, so that we have x = (M - I)^k(y) = 0. Therefore V0 ∩ V1 = {0}.

Since W ⊇ im(M - I) ⊇ im(M - I)² ⊇ . . . , we have V1 ⊆ W; and V0 ⊆ U, since M acts as a unipotent operator on V0. So

dim(U + W) ≥ dim(V0 + V1)
  = dim(V0) + dim(V1) - dim(V0 ∩ V1)
  = nullity(M - I)^k + rank(M - I)^k - 0
  = dim(V),

and of course dim(V) ≥ dim(U + W); thus dim(V) = dim(U + W). It follows that V = U + W, as desired.

Q.E.D.

Lemma 4. Let W be as in Proposition 2, and let M′ be defined on W/rad(W) by M′(w + rad(W)) = M(w) + rad(W). Then M′ is a well-defined isometry on W/rad(W). Furthermore, let X/rad(W) be the largest subspace of W/rad(W) on which M′ acts as a unipotent operator, and let U be as in Lemma 3. If X/rad(W) is nondegenerate, then X ⊆ U and W ⊆ X + X*.

Proof. Given the symmetric bilinear form B on V, define B’ on W/rad(W) by B’(w+rad(W),w’+rad(W)) = B(w,w’). It is trivial to verify that B’ is a symmetric bilinear form on W/rad(W). To see that B’ is well-defined, note that if w1 + rad(W) = w + rad(W) and w1’ + rad(W) = w’ + rad(W), it follows that w  -  w1,w’  -  w1’  Î  rad(W)  so that B(w - w1,w’) = 0, whence B(w,w’) = B(w1,w’). By a similar argument, B(w1,w’) = B(w1,w1’). Therefore B(w,w’) = B(w1,w’) = B(w1,w1’).

Given this symmetric bilinear form B’ on W/rad(W), it is easy to prove that M’ is an isometry, for it is clear from the proof of Lemma 3 that M maps W into W. If, say, w and w’ belong to the same coset of W/rad(W), then w-wÎ rad(W), and so M(w - w’) = w - wÎ rad(W) by the proof of Proposition 2, since rad(W) Í W*. Therefore

(M(w) + rad(W)) - (M(w’) + rad(W)) = (M(w) - M(w’)) + rad(W)
= M(w - w’) + rad(W)
= (w - w’) + rad(W)
= 0 + rad(W)

so that M(w) + rad(W) = M(w’) + rad(W). Hence M’ is well-defined.

Now suppose that X/rad(W) as defined above is nondegenerate; let x ∈ X. Certainly x + rad(W) ∈ X/rad(W); since M′ acts as a unipotent operator on X/rad(W), there is t ∈ ℕ such that (M - I)^t(x) + rad(W) = (M′ - I)^t(x + rad(W)) = 0 + rad(W); in particular, (M - I)^t(x) ∈ rad(W). Now rad(W) ⊆ W*, so that (M - I)^t(x) ∈ W*; by the proof of Proposition 2, we have M((M - I)^t(x)) = (M - I)^t(x). Therefore, (M - I)^{t+1}(x) = 0. It follows that x ∈ U. So X ⊆ U.

We shall now prove that (X/rad(W))* = X*/rad(W). Let w + rad(W) ∈ (X/rad(W))*. Then if x + rad(W) ∈ X/rad(W), we have B(w,x) = B′(w + rad(W), x + rad(W)) = 0. Then w ∈ X*, so that w + rad(W) ∈ X*/rad(W). Therefore (X/rad(W))* ⊆ X*/rad(W). On the other hand, let x + rad(W) ∈ X*/rad(W); then x ∈ X*, and if x′ + rad(W) ∈ X/rad(W), then B′(x + rad(W), x′ + rad(W)) = B(x,x′) = 0. Therefore x + rad(W) ∈ (X/rad(W))* and X*/rad(W) ⊆ (X/rad(W))*. It now follows that (X/rad(W))* = X*/rad(W).

Since X/rad(W) is assumed to be nondegenerate, we have

W/rad(W) = X/rad(W) + (X/rad(W))*
= X/rad(W) + X*/rad(W).

Consider w ∈ W. For some x ∈ X, x′ ∈ X*, we have

w + rad(W) = (x + rad(W)) + (x′ + rad(W))
  = (x + x′) + rad(W),

so that w - (x + x′) ∈ rad(W); say w′ = w - (x + x′). Then w = x + x′ + w′. To prove that W ⊆ X + X*, it suffices to show that w′ ∈ X*.

Since w′ ∈ rad(W), then in particular w′ ∈ W*. Since X ⊆ W, for any y ∈ X we have y ∈ W, so that B(w′,y) = 0. It follows that w′ ∈ X*, which concludes the proof.

Q.E.D.

Proposition 3. The largest subspace U of V on which M acts as a unipotent operator is nondegenerate.

Proof. By induction on dim(V) = n. Let W be as in Proposition 2. Then by Lemma 3 we have V = U + W. We set R = rad(U) = U ∩ U* and try to show that R = {0}.

If n = 1, then either V = U or V = W. If V = U, then U is nondegenerate because V is assumed to be so. If V = W, then by Proposition 2, M has no eigenvalue equal to 1. It follows that if (M - I)(v) = 0 for some v ∈ V, then v = 0. Likewise, if (M - I)²(v) = 0, then we have 0 = (M - I)²(v) = (M - I)((M - I)(v)), so that (M - I)(v) = 0, whence v = 0. Proceeding inductively in this way, we may conclude that (M - I)^t(v) = 0 ⇒ v = 0 for any t ∈ ℕ. It follows that U = {0}; in this case R = U ∩ U* = {0} ∩ {0}* = {0} and we are done.

Now assume that the largest subspace of any nondegenerate vector space of dimension less than n upon which any given isometry acts as a unipotent operator is nondegenerate.

Now consider dim(V) = n. By the above argument, we are done if V = W, so assume that V ≠ W. Note that W* ⊆ U (since M acts as a unipotent operator on W* by the proof of Proposition 2); therefore (see O'Meara, p. 92)

R = U ∩ U* ⊆ U* ⊆ (W*)* = W.

Let X/rad(W) be as in Lemma 4. Since W ≠ V, we have dim(W) < dim(V). Clearly dim(W/rad(W)) ≤ dim(W), so that dim(W/rad(W)) ≤ dim(W) < dim(V) = n. Since W/rad(W) is nondegenerate and M′ is an isometry of it, it now follows from the induction assumption that X/rad(W) is nondegenerate.

Since U* ⊆ W, we get R = U ∩ U* ⊆ U ∩ W. We shall show that U ∩ W ⊆ X, so that R ⊆ X. Indeed, let w ∈ U ∩ W; then w ∈ U, so that (M - I)^t(w) = 0 for some t ∈ ℕ. Letting M′ be as in Lemma 4, we have

(M′ - I)^t(w + rad(W)) = (M - I)^t(w) + rad(W)
  = 0 + rad(W),

whence by the maximality of X/rad(W), w + rad(W) ∈ X/rad(W), so that w ∈ X.

Since R = rad(U), R is orthogonal to U; since R ⊆ X = (X*)*, R is orthogonal to X*. Hence R is orthogonal to U + X*. By Lemma 4 we have X ⊆ U and W ⊆ X + X*, since X/rad(W) is nondegenerate. Therefore

V = U + W ⊆ U + X + X* ⊆ U + X*,

and certainly U + X* ⊆ V, so V = U + X*. Now V is nondegenerate, so we have V* = (U + X*)* = {0}, whence R ⊆ (U + X*)* = {0} gives us R = {0}, as desired.

Q.E.D.

Definition. Let M be an isometry on the quadratic space V with associated quadratic form Q. If M = T(w1)T(w2)···T(wk), we define the spinor norm of M to be Θ(M) = Q(w1)Q(w2)···Q(wk) ∈ F*/F*². This is a well-defined invariant (i.e., independent of the choice of product of reflections) in F*/F*² (see O'Meara, pp. 131-137). Furthermore, it is easily seen that Θ is a homomorphism on the group (under the operation of composition) of isometries of V onto itself.
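
Given any one factorization of M into symmetries, the spinor norm is therefore immediate to compute. The sketch below (not part of the original paper; the two-dimensional example, its factorization, and all names are supplied for illustration only) multiplies the Q(wi) and reduces modulo squares, checking in passing that -I = T(x1)T(x2) for an orthogonal basis {x1, x2}, so that Θ(-I) = d(V).

```python
# Spinor norm from a hand-supplied factorization M = T(w1)...T(wk).
from fractions import Fraction
from sympy import Matrix, eye, factorint

def symmetry(G, w):
    """Matrix of T(w) for the form with Gram matrix G (requires Q(w) != 0)."""
    w = Matrix(w)
    Qw = (w.T * G * w)[0, 0]
    cols = [Matrix([1 if i == j else 0 for i in range(G.rows)]) for j in range(G.rows)]
    return Matrix.hstack(*[e - 2 * (e.T * G * w)[0, 0] / Qw * w for e in cols])

def square_class(q):
    q = Fraction(q)
    n = q.numerator * q.denominator
    cls = -1 if n < 0 else 1
    for p, e in factorint(abs(n)).items():
        if e % 2:
            cls *= p
    return cls

def spinor_norm(G, ws):
    """Theta(M) = Q(w1)...Q(wk) modulo squares, for M = T(w1)...T(wk)."""
    prod = Fraction(1)
    for w in ws:
        w = Matrix(w)
        prod *= Fraction(str((w.T * G * w)[0, 0]))
    return square_class(prod)

G = Matrix([[1, 0], [0, 3]])        # the diagonal form x^2 + 3y^2
ws = [[1, 0], [0, 1]]               # orthogonal basis vectors x1, x2
M = symmetry(G, ws[0]) * symmetry(G, ws[1])
print(M == -eye(2))                 # True: -I = T(x1)T(x2)
print(spinor_norm(G, ws))           # 3 = d(V) modulo squares
```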

Proposition 4. If M acts as a unipotent operator on V, then Θ(M) = 1.

Proof. By induction on dim(V) = n.

Let dim(V) = 1, say V = ⟨v⟩. Since M acts as a unipotent operator on V, there must exist some minimal t ∈ ℕ such that (M - I)((M - I)^{t-1}(v)) = (M - I)^t(v) = 0. If t = 1, let v′ = v; if t > 1, let v′ = (M - I)^{t-1}(v). Since v′ ≠ 0 (otherwise v = 0 or t is not minimal, respectively), v′ = αv for some 0 ≠ α ∈ F, and we have 0 = (M - I)(v′) = (M - I)(αv) = α(M - I)(v). It follows that (M - I)(v) = 0, whence M(v) = v. So for any w ∈ V, say w = βv, M(w) = M(βv) = βM(v) = βv = w. Therefore M = I.

Since M may be represented as a product of reflections, say M = T(w1)···T(wk) with Q(wi) ≠ 0, then in particular Q(w1) ≠ 0 and w1 ≠ 0, so that V = ⟨w1⟩. Hence for w ∈ V, say w = βw1,

T(w1)T(w1)(w) = T(w1)T(w1)(βw1)
  = βT(w1)T(w1)(w1)
  = βT(w1)(-w1)
  = βw1
  = w
  = I(w)
  = M(w),

so that M = T(w1)T(w1). Therefore Θ(M) = Q(w1)Q(w1) = (Q(w1))² = 1 in F*/F*².

Now assume that the spinor norm of any isometry acting as a unipotent operator on any vector space of dimension < n is 1.

Let W be as in Proposition 2. Let 0 ≠ v ∈ V; then we have (M - I)^t(v) = 0 for some minimal t ∈ ℕ. If t = 1, then 1 is an eigenvalue of M corresponding to v. If t > 1, then 1 is an eigenvalue of M corresponding to (M - I)^{t-1}(v) (which is in this case an eigenvector of M, as (M - I)((M - I)^{t-1}(v)) = (M - I)^t(v) = 0). In either case, M has an eigenvalue equal to 1, and by Proposition 2, V ≠ W. Then there is 0 ≠ w ∈ W* (for otherwise W* = {0} implies that V = W by the proof of Proposition 2). Consider the space Y/⟨w⟩, where Y = ⟨w⟩ + Z and Z = {v ∈ V : B(v,w) = 0}.

Given the symmetric bilinear form B on V, define B" on the space Y / áwñ by B"(aw + z + áwñ, a’ w + z’ + áwñ) = B(z,z’) where z, zÎ Z. It is trivial to verify that B" is a symmetrical bilinear form on V/áwñ. To see that B" is well-defined, note that if a1w + z1 + áwñ = aw + z + áwñ, a’1w + z1 + áwñ = a’w + z’ + áwñ, then z - z1, z’ - z1Î áwñ so that (in particular) z - z1 = βw for some β Î F and

B(z,z’) - B(z1,z’) = B(z - z1,z’)
= Bw,z’)
= βB(w,z’)
= 0

since z’  Î  Z. Hence B(z,z’)  =  B(z1,z’). By a similar argument, B(z1,z’)  =  B(z1,z1’) so that B(z,z’)  =  B(z1,z’)   = B(z1,z1’).

Define M" on the space Y/áwñ by M"(y + áwñ) = M(y) + áwñ. Given the symmetric bilinear form B", it is fairly easy to prove that M" is an isometry; to see that M" is well-defined, suppose that y + áwñ = y’ + áwñ. Then y - yÎ áwñ so that y - y’ = αw for some α Î F. Therefore M(y) - M(y’) = M(y - y’) = Mw) = αM(w) = αw since M acts trivially on W* by the proof of Proposition 2. Thus M(y) - M(y’) Î áwñ so that M(y) + áwñ = M(y’) + áwñ.

Furthermore, M" acts as a unipotent operator on Y/áwñ. To see this, let y + áwñ be any element of Y + áwñ. Since in particular y Î V, and since M acts as a unipotent operator on V, there exists n Î N with (M - I)n(y) = 0. Thus we have (M” - I)(y + áwñ) = M"(y + áwñ) - I(y + áwñ) = M(y) + áwñ - (y + áwñ) = (M(y) - y) + áwñ = (M - I)(y) + áwñ so that(M” - I)n(y + áwñ) = (M - I)n(y) + áwñ = áwñ. It follows that M" acts as a unipotent operator on Y/áwñ.

Note that for i = 1 to k, wi ∈ Z (since w ∈ W* and wi ∈ W), so Q″(wi + ⟨w⟩) = B″(wi + ⟨w⟩, wi + ⟨w⟩) = B(wi,wi) = Q(wi) ≠ 0. Now let T″(wi + ⟨w⟩) be the symmetry on Y/⟨w⟩ with respect to wi + ⟨w⟩. For arbitrary y + ⟨w⟩ ∈ Y/⟨w⟩, we have

T"(wi + áwñ)(y + áwñ) = y + áwñ -
2B"(wi + áwñ,y + áwñ)

Q"(wi + áwñ)
(wi + áwñ)
= y + áwñ -
2B(wi,y)

Q(wi)
(wi + áwñ)
= (y -
2B(wi,y)

Q(wi)
wi) + áwñ
= T(wi)(y) + áwñ

so that

[T"(w1 + áwñ)×××T"(wk + áwñ)](y + áwñ) = [T"(w1 + áwñ)×××T"(wk -1 + áwñ)](T"(wk + áwñ)(y + áwñ))
= [T"(w1 + áwñ)×××T"(wk -1 + áwñ)](T(wk)(y) + áwñ)
= [T"(w1 + áwñ)×××T"(wk -2 + áwñ)](T(wk -1)T(wk)(y) + áwñ)
= ×××××××××××××××
= ×××××××××××××××
= [T"(w1 + áwñ)](T(w2)×××T(wk)(y) + áwñ)
= T(w1)×××T(wk)(y) + áwñ
= M(y) + áwñ
= M"(y + áwñ).

It follows that M" = T"(w1 + áwñ)×××T"(wk + áwñ) is a representation of M" as a product of reflections such that Q"(wi + áwñ) ¹ 0 for i = 1 to k.

Note that dim(Y/⟨w⟩) < n since w ≠ 0. Therefore

1 = Θ(M″)
  = Q″(w1 + ⟨w⟩)···Q″(wk + ⟨w⟩)
  = Q(w1)···Q(wk)
  = Θ(M)

by the induction hypothesis.

Q.E.D.

Lemma 5. M|U and M|U* are well-defined isometries. Furthermore, let the characteristic polynomial of M be given by χ(z). Set χ(z) = (z - 1)^f χ0(z), where (z - 1) does not divide χ0(z). Then the characteristic polynomials of M|U, M|U* are given by (z - 1)^f, χ0(z) respectively.

Proof. Let u ∈ U; then there is t ∈ ℕ such that (M - I)^t(u) = 0. Then (M - I)^t(M(u)) = M((M - I)^t(u)) = M(0) = 0. It follows that M(u) ∈ U for all u ∈ U. Therefore M|U is well-defined and is an isometry because M is an isometry.

Since M|U is an isometry on U, M|U is injective; since U is finite-dimensional, M|U must also be surjective. In particular, if we let u ∈ U, there exists x ∈ U such that M(x) = M|U(x) = u. Now let u′ ∈ U* be arbitrary. Since M is an isometry, we have B(u,M(u′)) = B(M(x),M(u′)) = B(x,u′) = 0. It follows (since u was also arbitrary) that M(u′) ∈ U*. Therefore M|U* is well-defined and is an isometry because M is an isometry.

We claim that charpoly(M) = charpoly(M|U)·charpoly(M|U*). Indeed, since V = U + U*, there exists a basis {v1, . . . ,vm, vm+1, . . . ,vn} for V with v1, . . . ,vm ∈ U, vm+1, . . . ,vn ∈ U*, and the corresponding matrix for M is in block-diagonal form.

Let α be an eigenvalue of M|U, and let u be the corresponding eigenvector. Since u ∈ U, there exists n ∈ ℕ such that (M - I)^n(u) = 0. Since α is an eigenvalue, M(u) = M|U(u) = αu and (M - I)(u) = (α - 1)u, so that

(M - I)²(u) = (M - I)((M - I)(u))
  = (M - I)((α - 1)u)
  = (α - 1)(M - I)(u)
  = (α - 1)²u.

Proceeding inductively in this way, we get 0 = (M - I)^n(u) = (α - 1)^n u, whence it follows that α = 1. Therefore 1 is the only eigenvalue of M|U. Furthermore, if 1 is an eigenvalue of M, M(x) = x for some 0 ≠ x ∈ V, whence we get (M - I)(x) = 0 and x ∈ U. Thus 1 is not an eigenvalue of M|U*. We may conclude that (z - 1)^f | charpoly(M|U) in F[z].

Let {v1, . . . ,vn} be the basis for V mentioned above, and let E be a splitting field for charpoly(M|U). Define V# = {α1v1 + ··· + αnvn : αi ∈ E} and U# = {α1v1 + ··· + αmvm : αi ∈ E}; it is not hard to see that these are vector spaces over E. Furthermore, define

$$B^{\#}\Bigl(\sum_{i=1}^{n}\alpha_i v_i,\ \sum_{j=1}^{n}\beta_j v_j\Bigr) = \sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\beta_j B(v_i,v_j)$$

on V#; the fact that B# is a symmetric bilinear form on V# then follows from the fact that B is a symmetric bilinear form on V. Similarly, define M#(α1v1 + ··· + αnvn) = α1M(v1) + ··· + αnM(vn) on V#; it follows routinely that M# is an isometry on V# with respect to B#. Furthermore, it is routinely verifiable that M# acts as a unipotent operator on U#.

Since M# and M have the same matrix representations for the given basis by definition of M#, it makes sense to write "M#|U#". Indeed, M#|U# and M|U will have the same matrix representations and hence the same characteristic polynomials. But by an analogue to the above argument, since M# acts as a unipotent operator on U#, 1 is the only eigenvalue of M#|U#. Since E is a splitting field for charpoly(M|U) = charpoly(M#|U#), charpoly(M|U) factors completely in E[z] as an integer power of z - 1. Since 1 ∈ F, charpoly(M|U) must also factor completely in F[z] as an integer power of z - 1. Therefore charpoly(M|U) = (z - 1)^f, as desired.

Finally, since (z - 1)^f χ0(z) = χ(z) = charpoly(M) = charpoly(M|U)·charpoly(M|U*) = (z - 1)^f charpoly(M|U*), we must have charpoly(M|U*) = χ0(z), which completes the proof.

Q.E.D.

The spinor norm is usually defined (as above) in terms of a factorization of the given isometry into symmetries. Theorem 1, which we are now ready to prove, gives an intrinsic characterization of the spinor norm, independent of any such factorization.

Theorem 1. Let M, U, χ(z) be as in Lemma 5, let V, F be as above, and set m = dim(U*), so that χ0(z) has degree m. In F*/F*², Θ(M)χ0(1) = 2^m d(U*).

Proof. Since U is nondegenerate by Proposition 3, V = U + U* and thus Θ(M) = Θ(M|U)Θ(M|U*) (see O’Meara, p. 139). Indeed, Θ(M) = Θ(M|U*) since Θ(M|U) = 1 by Proposition 4. By Lemma 5, χ0(z) is the characteristic polynomial of M|U*. Finally, since the largest subspace of U* upon which M|U* acts as a unipotent operator is {0}, the orthogonal complement of that space is just U*. Hence it suffices to prove the theorem for U*.

Let V’ = U*, M’ = M|U* = M|V’. Let W and {w1, . . . wk} be as in Proposition 2. By Lemma 5, (z - 1) does not divide charpoly(M’) so that M’ has no eigenvalue equal to 1; it follows from Proposition 2 that V’ = W. Furthermore, by (2) in the proof of that result, we have

χ0(1) = (-1)ndet(M’ - I)
=
2ndet(B(wi,wj))

Q(w1)×××Q(wn)
=
2nd(U*)

Θ(M’)

which is what we wanted.

Q.E.D.
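
Before specializing to Zassenhaus's Theorem, here is a small numerical sanity check of Theorem 1 (not part of the original paper); the example, its factorization into symmetries, and all names are hypothetical. With the standard dot product on Q³, the isometry rotating the (v1,v2)-plane by 90 degrees and fixing v3 equals T(v1 + v2)T(v1), so its spinor norm is Q(v1 + v2)Q(v1) = 2; Theorem 1 then says Θ(M)χ0(1) and 2^m d(U*) agree modulo squares.

```python
# Sanity check of Theorem 1 on a 90-degree rotation of the (v1, v2)-plane.
from sympy import Matrix, eye, symbols

G = eye(3)                                    # Gram matrix: the dot product
M = Matrix([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
assert M.T * G * M == G                       # M is an isometry

z = symbols('z')
chi = (z * eye(3) - M).det()                  # (z - 1)(z**2 + 1)
chi0_at_1 = (chi / (z - 1)).cancel().subs(z, 1)   # chi0(1) = 2

U = ((M - eye(3)) ** 3).nullspace()           # U = <v3>
U_star = [Matrix([1, 0, 0]), Matrix([0, 1, 0])]   # basis of U*, the (v1, v2)-plane
gram_U_star = Matrix(2, 2, lambda i, j: (U_star[i].T * G * U_star[j])[0, 0])
d_U_star = gram_U_star.det()                  # 1
m = len(U_star)                               # 2

theta = 2                                     # Theta(M) = Q(v1 + v2) * Q(v1) = 2 * 1
print(theta * chi0_at_1, 2**m * d_U_star)     # 4 and 4: equal, hence equal mod squares
```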

As a special case of Theorem 1, we obtain a formula for the computation of the spinor norm in terms of a simple determinant. This formula was originally obtained by H. Zassenhaus as a by-product of his proof that the spinor norm is well-defined (i.e., independent of the choice of symmetries of which the given isometry is expressed as a product).

Theorem 2. (Zassenhaus) Let M be an isometry on V such that det(M + I) ≠ 0. Then Θ(M) = det((M + I)/2) in F*/F*².

Proof. det(M + I) ≠ 0, so M has no eigenvalue equal to -1. It follows that -M has no eigenvalue equal to 1. As above, take charpoly(-M) = χ(z) = (z - 1)^f χ0(z).

We assert that U = {0}, where U is now the largest subspace on which -M acts as a unipotent operator. Indeed, assume (-M - I)^t(v) = 0 for some v ∈ V. Let t be the least positive integer for which this is true for v in particular. If t = 1, then either v is an eigenvector corresponding to an eigenvalue of 1 or v = 0. Since there is no such eigenvalue, in this case v = 0. If, however, t > 1, then (-M - I)((-M - I)^{t-1}(v)) = 0, so that (-M - I)^{t-1}(v) is either an eigenvector corresponding to the eigenvalue 1 or is 0. Both are impossible, because there is no such eigenvalue and because t was chosen to be minimal; hence this case yields a contradiction. It follows that v = 0, and we have proved our assertion.

Hence V = U + U* = U*, so that by Lemma 5, χ(z) = χ0(z). It follows that χ0(1) = det(M + I).

Let {x1, . . . ,xn} be an orthogonal basis for V. It is almost trivial to verify that -I = T(x1)···T(xn). It now follows that Θ(-I) = Q(x1)···Q(xn) = d(V). Furthermore, Θ(-M) = Θ(-I)Θ(M) = d(V)Θ(M).

Hence by Theorem 1, applied to -M (here m = dim(U*) = n), d(V)Θ(M)det(M + I) = Θ(-M)χ0(1) = 2^n d(U*) = 2^n d(V), so that Θ(M)det(M + I) = 2^n. Hence in F*/F*², multiplying both sides by 2^{-2n}det(M + I) yields Θ(M) = 2^{-n}det(M + I). By elementary linear algebra, 2^{-n}det(M + I) = det((M + I)/2), and we are done.

Q.E.D.

Examples. (1) Let F = Q, V any vector space over Q with dim(V) = 3, {v1,v2,v3} a basis for V over Q. Define

$$(B(v_i,v_j)) = \begin{pmatrix} 4 & 2 & 2 \\ 2 & 1 & 1 \\ 2 & 1 & 1 \end{pmatrix},
\qquad
\begin{aligned}
M(v_1) &= v_1 \\
M(v_2) &= v_3 \\
M(v_3) &= v_1 - v_2.
\end{aligned}$$

Then B is a symmetric bilinear form on V and M is an isometry from V onto itself. Since det(M + I) = 4 ≠ 0, Zassenhaus's Theorem applies and Θ(M) = det((M + I)/2) = 1/2 = (1/4)(2) = (1/2)²(2) = 2 in Q*/Q*².
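
The determinant in Example (1) can be checked in a few lines; the sketch below (not part of the original paper) builds the matrix of M in the given basis and reduces det((M + I)/2) modulo squares.

```python
# Check of Example (1): det(M + I) = 4 and det((M + I)/2) is 2 modulo squares.
from fractions import Fraction
from sympy import Matrix, eye, factorint

def square_class(q):
    q = Fraction(q)
    n = q.numerator * q.denominator
    cls = -1 if n < 0 else 1
    for p, e in factorint(abs(n)).items():
        if e % 2:
            cls *= p
    return cls

M = Matrix([[1, 0, 1],
            [0, 0, -1],
            [0, 1, 0]])                  # columns are M(v1), M(v2), M(v3)
det_MI = (M + eye(3)).det()
print(det_MI)                            # 4
print(square_class(Fraction(int(det_MI), 2**3)))   # det((M+I)/2) = 1/2, i.e. 2
```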

(2) Let F = Q, V any vector space over Q with dim(V) = 3, {v1,v2,v3} a basis for V over Q. Define

$$(B(v_i,v_j)) = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 3 & 1 \\ 1 & 1 & 3 \end{pmatrix},
\qquad
\begin{aligned}
M(v_1) &= -v_1 \\
M(v_2) &= -v_1 + v_3 \\
M(v_3) &= -v_1 + v_2.
\end{aligned}$$

Then B is a symmetric bilinear form on V and M is an isometry of V onto itself. Unfortunately, det(M + I) = 0, so Zassenhaus’s Theorem does not apply directly. We shall use Theorem 1.

We find that χ(z) = det(zI - M) = (z + 1)²(z - 1), so that χ0(1) = 4 = 1 in Q*/Q*². Since

$$M - I = \begin{pmatrix} -2 & -1 & -1 \\ 0 & -1 & 1 \\ 0 & 1 & -1 \end{pmatrix},$$

it follows by induction that (M - I)^n = (-2)^{n-1}(M - I) for all n ∈ ℕ. Therefore u ∈ U if and only if (M - I)(u) = 0, that is, U = ker(M - I). By solving (M - I)u = 0 we find that U is generated by the vector v1 - v2 - v3. Therefore U* = {v ∈ V : B(v, v1 - v2 - v3) = 0}. If v = α1v1 + α2v2 + α3v3 ∈ U*, we have 0 = B(v, v1 - v2 - v3) = -3(α2 + α3). In this case α2 = -α3, and it follows that {v1, v2 - v3} is a basis for U*. By direct computation, d(U*) = 8 = 2 in Q*/Q*².

Applying Theorem 1 (with m = dim(U*) = 2), we now have Θ(M) = 2^m d(U*)/χ0(1) = (4)(8)/(4) = 8 = 2 in Q*/Q*².
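
The value Θ(M) = 2 in Example (2) can also be confirmed directly from the definition: one checks that M = T(v1)T(v2 - v3), so Θ(M) = Q(v1)Q(v2 - v3) = 2 · 4 = 8, which is 2 modulo squares. The sketch below (not part of the original paper; the explicit factorization is supplied by hand, not taken from the text) carries out this check together with the quantities used in Theorem 1.

```python
# Check of Example (2): chi(z), d(U*), and a direct factorization of M.
from sympy import Matrix, eye, symbols, factor

G = Matrix([[2, 1, 1], [1, 3, 1], [1, 1, 3]])
M = Matrix([[-1, -1, -1],
            [ 0,  0,  1],
            [ 0,  1,  0]])                  # columns are M(v1), M(v2), M(v3)
assert M.T * G * M == G                     # M is an isometry

z = symbols('z')
print(factor((z * eye(3) - M).det()))       # (z - 1)*(z + 1)**2, so chi0(1) = 4

def symmetry(w):
    """Matrix of T(w) with respect to the basis {v1, v2, v3}."""
    w = Matrix(w)
    Qw = (w.T * G * w)[0, 0]
    cols = [Matrix([1 if i == j else 0 for i in range(3)]) for j in range(3)]
    return Matrix.hstack(*[e - 2 * (e.T * G * w)[0, 0] / Qw * w for e in cols])

w1, w2 = [1, 0, 0], [0, 1, -1]              # v1 and v2 - v3, a basis of U*
assert symmetry(w1) * symmetry(w2) == M     # M = T(v1) T(v2 - v3)

basis = Matrix([w1, w2])                    # rows span U*
gram_U_star = Matrix(2, 2, lambda i, j: (basis[i, :] * G * basis[j, :].T)[0, 0])
print(gram_U_star.det())                    # d(U*) = 8, i.e. 2 modulo squares
print((Matrix(w1).T * G * Matrix(w1))[0, 0] *
      (Matrix(w2).T * G * Matrix(w2))[0, 0])   # Q(v1)*Q(v2 - v3) = 8: Theta(M) = 2
```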

BIBLIOGRAPHY

Anton, Howard. Elementary Linear Algebra. John Wiley & Sons, New York, 1973.

Dickson, L. E. Studies in the Theory of Numbers. University of Chicago Press, 1930.

Mason, Geoffrey. “Groups, Discriminants and the Spinor Norm.” Bulletin of the London Mathematical Society 21 (1989) 51-56.

O'Meara, O. T. Introduction to Quadratic Forms. Springer-Verlag, Berlin, 1963.

Zassenhaus, H. “On the Spinor Norm.” Archiv der Mathematik, Basel, 13 (1962) 434-451.


Author’s Note: I wrote this paper in the spring of 1991 as the research paper for a Master of Science degree in Mathematics, with which degree I graduated that May from Southern Illinois University at Carbondale. It is in the subspecialty of abstract linear algebra.