| Publication Type | honors thesis |
| School or College | College of Science |
| Department | Mathematics |
| Faculty Mentor | Dragan Milicic |
| Creator | Garzella, Jack J. |
| Title | Algebraic theory of D-Modules |
| Date | 2019 |
| Description | We give an undergraduate-readable exposition of the theory of D-modules. D-modules play an important role in advanced mathematics, but they rarely get explained at the undergraduate level. We begin by briefly overviewing the necessary algebraic preliminaries that the reader is expected to be familiar with in Section 1. Next, in Section 2, we will develop in more detail some ideas from commutative algebra which are necessary for the study of D-modules. In Section 3 we will finally introduce the ring D of differential operators and ways of understanding D-modules. Lastly, in Section 4, we define a notion of dimension on D-modules, and show that this dimension is bounded above and below. |
| Type | Text |
| Publisher | University of Utah |
| Subject | d-modules; differential operators; commutative algebra |
| Language | eng |
| Rights Management | (c) Jack J. Garzella |
| Format Medium | application/pdf |
| ARK | ark:/87278/s6xdx07s |
| Setname | ir_htoa |
| ID | 2947097 |
| OCR Text | ABSTRACT

We give an undergraduate-readable exposition of the theory of D-modules. D-modules play an important role in advanced mathematics, but they rarely get explained at the undergraduate level. We begin by briefly overviewing the necessary algebraic preliminaries that the reader is expected to be familiar with in Section 1. Next, in Section 2, we will develop in more detail some ideas from commutative algebra which are necessary for the study of D-modules. In Section 3 we will finally introduce the ring D of differential operators and ways of understanding D-modules. Lastly, in Section 4, we define a notion of dimension on D-modules, and show that this dimension is bounded above and below.

TABLE OF CONTENTS

1 INTRODUCTION
2 REVIEW OF ALGEBRAIC STRUCTURES
  2.1 GROUPS
  2.2 RINGS
  2.3 IDEALS
  2.4 TYPES OF RINGS
  2.5 FIELDS
  2.6 MODULES
  2.7 ALGEBRAS
3 COMMUTATIVE ALGEBRA
  3.1 EXACT SEQUENCES
  3.2 GRADED AND FILTERED RINGS
4 RINGS AND MODULES OF DIFFERENTIAL OPERATORS
  4.1 RINGS OF DIFFERENTIAL OPERATORS
  4.2 FILTRATIONS ON THE RING OF D-MODULES
  4.3 MAKING A GRADED RING FROM A FILTERED RING
  4.4 D-MODULE FILTRATIONS
5 DIMENSION OF A MODULE
  5.1 HILBERT POLYNOMIALS
  5.2 BOUNDS ON THE DIMENSION OF D-MODULES
6 CONCLUSION

1. INTRODUCTION

An important concept in mathematical research is the idea of an invariant. Often, researchers want to understand a class of complicated mathematical objects. An invariant is some other (hopefully simpler) object which is associated to each complicated object, so that when the invariants are different, so are the complicated objects. Examples of invariants abound in mathematics: the characteristic of a field, cohomology groups, the Euler characteristic, etc. The best example of an invariant that undergraduate students see is the dimension of a vector space. Vector spaces are the complicated mathematical objects in this case, and the invariant is dimension. If two vector spaces have different dimensions, they are different vector spaces. This is the key piece of information that an invariant gives: it distinguishes one mathematical object from another. Just from knowing the invariant, we can tell apart two abstract entities which might otherwise be hard to reason about. Moreover, the dimension of a vector space tells us more: if two (finite-dimensional) vector spaces have the same dimension, they are isomorphic. This is a much stronger property, which many other mathematical invariants (including the ones we consider below) do not have. But this knowledge is extremely powerful: we know essentially everything we can about a (finite-dimensional) vector space just from its dimension. The niceness of this invariant is exactly why linear algebra is so widely used and taught to undergraduates.
Here, we attempt to define a similar type of invariant, which we will also call dimension, in a case that is not as well understood as that of vector spaces. We will consider modules over the ring of differential operators, which are similar in definition to vector spaces, but whose details are much more complicated. We cannot hope that two D-modules with the same dimension are isomorphic, but the existence of such an invariant is still a powerful tool. We first lay the groundwork for defining the ring of differential operators in Sections 2 and 3. Section 2 gives broad background in general algebra, and Section 3 details the specifics necessary for the study of D-modules. In Section 4, we define the ring of differential operators D(n), discuss modules over this ring, and describe structures which can be put on these modules. Finally, in Section 5 we define the dimension of a module over the ring of differential operators and prove the (surprising) fact that this dimension is bounded. Most of the information in the background sections is from [2] and [3]. The material on D-modules is almost exclusively from [5]. The style of writing is such that an undergraduate student who has taken two semesters of undergraduate abstract algebra should have no problem understanding the material.

2. REVIEW OF ALGEBRAIC STRUCTURES

We assume knowledge of linear algebra. We also assume knowledge of the basic concepts of quotients and homomorphisms. In this section, we state definitions and a few facts and examples, becoming much more precise starting in Section 3.

2.1. GROUPS

Definition 2.1.1. A group is a set G endowed with a binary operation ∗ : G × G → G which satisfies the following axioms:
1. G is associative under ∗.
2. There exists an identity element e ∈ G such that e ∗ g = g ∗ e = g for all g ∈ G.
3.
For every g ∈ G, there is an inverse element g⁻¹ such that g ∗ g⁻¹ = e and g⁻¹ ∗ g = e.

We will mostly be concerned with abelian groups, which satisfy the following additional axiom:
• for all g, h ∈ G, g ∗ h = h ∗ g.

We will almost always use additive notation for abelian groups, i.e. the identity element is 0, the operation g ∗ h is denoted g + h, and the inverse of g is −g. Examples of abelian groups include Z, the integers, and Z/p for p a prime number, the integers mod p.

A subgroup is a subset of a group which is closed under the operation (and under inverses), and is thus a group in its own right. A normal subgroup is one which is closed under conjugation; that is, H ⊂ G is normal if ghg⁻¹ ∈ H for all g ∈ G, h ∈ H. A map f : G → G′ between groups which preserves the structure is a group homomorphism. The kernel of a homomorphism, denoted ker f, is the set {g ∈ G | f(g) = 0}. Likewise, the image of a homomorphism, denoted im f, is the set {g′ ∈ G′ | there exists g ∈ G with f(g) = g′}. We remind the reader that the image of any group homomorphism is a subgroup, and its kernel is a normal subgroup. An isomorphism is a group homomorphism that is also a bijection.

2.2. RINGS

Definition 2.2.1. A ring is a set R with two operations, + and ∗, which satisfy the following axioms:
1. R is an abelian group under addition.
2. R is associative under ∗.
3. The distributive laws hold: a ∗ (b + c) = ab + ac, and likewise (b + c) ∗ a = ba + ca.

Moreover, a ring R is commutative if R is commutative under ∗. R has identity, or is a ring with identity, if there exists an element 1 ∈ R with 1 ∗ a = a ∗ 1 = a for all a ∈ R. Here, 1 is called the multiplicative identity. From now on, when we say something is a ring we will assume that it has identity. We will also often assume that it is commutative unless otherwise stated. As such, a (commutative) ring (with identity) can be thought of as a set which has suitable definitions for three of the four "basic" operations: +, −, and ∗.
However, it does not (necessarily) have multiplicative inverses, so we cannot intuitively define division. Moreover, we note that the only difference between such a ring and a field is the lack of multiplicative inverses in the ring. An element r of a ring R is a unit if it divides 1; that is, if there exists an element s ∈ R with rs = 1. The units of a ring R are exactly the elements with multiplicative inverses. Thus, one way of saying that a (commutative) ring is a field is to say that each nonzero element is a unit.

All of the following are rings:
1. Z is a ring with the usual operations + and ∗. The multiplicative inverses of the integers, like 1/2, 1/3, etc., are not in Z.
2. Any field F is a ring with its field operations + and ∗.
3. F[x], the ring of polynomials in x over a field F, is a ring with polynomial addition and multiplication as defined by the distributive law. Note that any polynomial of degree more than 0 has no multiplicative inverse, but every degree zero polynomial (other than 0 itself) is a unit. Likewise, every polynomial ring F[x₁, …, xₙ] is a ring.
4. R[x] is also a ring for any ring R, with addition and multiplication defined in the same way as in (3), yet here we don't necessarily have that every degree zero polynomial is a unit. Likewise, R[x₁, …, xₙ] is a ring.
5. The set of n × n real matrices, or equivalently the set of linear operators from Rⁿ to Rⁿ, is a ring with component-wise addition and matrix multiplication. Notice that this ring is not commutative.

The failure of commutativity can make reasoning about rings very hard. A main theme of Section 4 will be to take one particular non-commutative ring, the ring of differential operators, and reason about it by using related commutative rings. A ring homomorphism is defined as a map between rings which preserves both the addition and multiplication structure.
The kernel and image of a ring homomorphism, and the notion of an isomorphism, are defined in an entirely analogous way to group homomorphisms.

2.3. IDEALS

Definition 2.3.1. An ideal I is a subset of a ring R with the following properties:
1. I is an additive subgroup of R.
2. I is closed under multiplication by any ring element: if a ∈ I and r ∈ R, then ra ∈ I.

An ideal is somewhat analogous to a normal subgroup of a group. We notice that as (R, +) is abelian, any subgroup is normal, so I is a normal subgroup. We also recall that, much like normal subgroups, ideals are the kernels of ring homomorphisms. However, an ideal does not necessarily contain the multiplicative identity 1. Indeed, if an ideal I contains 1, it must be the whole ring: if 1 ∈ I, then for any r ∈ R, r ∗ 1 = r ∈ I. In this case, I is a trivial ideal.

Definition 2.3.2. An ideal I in a ring R is maximal if I ≠ R and I is not a proper subset of any other proper ideal of R.

A maximal ideal can be thought of as a "largest" ideal in some sense, in that there is no ideal A with I ⊊ A ⊊ R. If we quotient a ring R by a maximal ideal I, the resulting ring R/I is a field.

Definition 2.3.3. An ideal I is prime if the following condition holds: if ab ∈ I, then a ∈ I or b ∈ I.

Consider this example: given a ring R and x ∈ R, we can define the ideal generated by the element x, denoted (x), as the set {rx | r ∈ R}. We can extend this definition to more than one element: (x, y) is the set {rx + sy | r, s ∈ R}. Likewise, (x₁, …, xₙ, …) is the set {r₁x₁ + … + rₙxₙ + … | r₁, …, rₙ, … ∈ R}. Ideals of this form are important enough that we give them a specific definition:

Definition 2.3.4. An ideal I ⊂ R is finitely generated if there exists a finite set {xᵢ} ⊂ I such that any element of I can be written as a sum of multiples of these elements. In other words, every element of I must have the form Σᵢ cᵢxᵢ for cᵢ ∈ R.
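As a concrete check of Definition 2.3.4, membership in a finitely generated ideal can be tested by multivariate polynomial division. Below is a small sketch (not from the thesis) using sympy's `reduced`, which divides a polynomial by a list of generators and returns the coefficients cᵢ together with a remainder; for the ideal (x, y) ⊂ Q[x, y] the generating set {x, y} happens to be a Gröbner basis, so a zero remainder is exactly equivalent to membership.

```python
from sympy import symbols, reduced

x, y = symbols("x y")

# Is x^2 + 3xy + y^2 in the ideal (x, y)?  `reduced` performs multivariate
# division: it returns the quotients (the coefficients c_i of Definition 2.3.4)
# and a remainder; remainder 0 certifies membership.
quotients, remainder = reduced(x**2 + 3*x*y + y**2, [x, y], x, y)
print(quotients, remainder)   # remainder is 0, so the element lies in (x, y)

# The constant 1 is NOT in (x, y): its remainder is nonzero,
# reflecting the fact that (x, y) is a proper (in fact maximal) ideal.
_, r2 = reduced(1, [x, y], x, y)
print(r2)
```

The quotients returned in the first call are the coefficients c₁, c₂ with x² + 3xy + y² = c₁·x + c₂·y, exactly the form Σᵢ cᵢxᵢ from the definition.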
The notion of being finitely generated applies to more than just ideals; however, this is a natural setting in which to introduce it. The analogous notions for rings, modules, and algebras are left to the reader. As an example of an ideal which is not finitely generated, consider the ring R = k[x₁, x₂, …], the polynomial ring in infinitely many variables. Then the ideal (x₁, x₂, …) cannot be written with a finite generating set: if there were such a set, it would contain only finitely many of the xᵢ, and then we could take j = max{i} + 1, and xⱼ would not be generated by the set. It is worth noting that this example comes from a ring which is much more complicated than the other examples we have seen. This polynomial ring in infinitely many variables is "non-noetherian". We will shed some more light on the notion of a "noetherian" ring in the next section.

A few other examples of ideals:
1. The whole ring R and the set containing zero, {0}, are trivial ideals.
2. In the ring Z, the even numbers are an ideal. Moreover, for any a in Z, the set aZ of multiples of a is an ideal. If a is prime, then this ideal is maximal.
3. In F[x], the ideal (x) is maximal.

2.4. TYPES OF RINGS

We now list some properties that certain rings have, which make them nice to work with.

Definition 2.4.1. A ring R is an integral domain if it has the following property: for any a, b ∈ R both nonzero, ab ≠ 0.

The reader may notice that one way of restating this property is to say that the ideal (0) is a prime ideal. Equivalently, an integral domain is a ring with no zero-divisors, that is, no nonzero elements which divide zero. Integral domains are "nice" to work in because they have a cancellation property: if a, b, c ∈ R with c ≠ 0, then ac = bc ⇒ a = b. For rings in general, this is not true. For example, in the ring Z/16Z, 4 ∗ 4 = 8 ∗ 4 = 0, but 4 ≠ 8.

Definition 2.4.2. A ring R is noetherian if every strictly increasing sequence of ideals I₁ ⊂ I₂ ⊂ …
⊂ R must be finite.

This definition may seem a bit arbitrary and unmotivated; however, it is equivalent to something more enlightening:

Theorem 2.4.3. A ring R is noetherian if and only if every ideal of R is finitely generated.

The proof is slightly beyond the scope of this paper; see [3], pg. 75. However, we will prove the following:

Corollary 2.4.4. Any noetherian ring is finitely generated (as an ideal of itself).

Proof. The ring R is itself an ideal, generated by the element 1. Apply 2.4.3.

Noetherian rings also behave nicely with respect to many ring operations; for example:

Lemma 2.4.5. Let R be a noetherian ring. Then any quotient ring R/I is noetherian.

Proof. By the correspondence theorem for ideals, any ideal J̄ ⊂ R/I corresponds to an ideal J with I ⊂ J ⊂ R such that J̄ = J/I, so there is a surjective map J → J̄. Take any strictly increasing sequence of ideals in R/I; these correspond to a strictly increasing sequence of ideals in R, which must be finite.

One last reason why noetherian rings are useful: the condition applies to a wide class of rings. To be an integral domain, for example, a ring has to be very special: it cannot have any nonzero elements which divide zero. However, the noetherian condition is much less restrictive. For example, it can be shown that R[x] is a noetherian ring if R is noetherian (again, slightly beyond our scope; see [3]). The analogous statement is true for integral domains as well: if R is an integral domain, so is R[x]. However, if we take a quotient of a polynomial ring, for example R[x]/(x²), then this ring will not be an integral domain, even if R is. To see this, notice that x is mapped to an equivalence class in R[x]/(x²) that is not zero, as x is not in the ideal, but x ∗ x is mapped to zero. By our previous lemma, noetherian rings do not have this "problem".
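The zero-divisor phenomenon in R[x]/(x²) can be checked directly, representing an element of the quotient by its remainder modulo x². A small sketch with sympy:

```python
from sympy import symbols, rem

x = symbols("x")

# In the quotient R[x]/(x^2), an element is represented by its remainder
# after division by x^2.
class_of_x = rem(x, x**2)        # the class of x ...
class_of_x_sq = rem(x * x, x**2) # ... and the class of x*x

print(class_of_x)     # x: the class of x is nonzero in the quotient
print(class_of_x_sq)  # 0: but its square is zero, so x is a zero-divisor
```

This is exactly the failure of the integral-domain property described above: a nonzero class (x) whose product with itself is zero.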
The fact that polynomial rings are noetherian, combined with Lemma 2.4.5, shows why we resorted to a very exotic ring to find a counterexample in the previous section: any polynomial ring in finitely many variables, and any quotient of such a ring, is noetherian.

2.5. FIELDS

Definition 2.5.1. A field is a ring where each nonzero element has a multiplicative inverse. Equivalently, a field is a set F with two operations, + and ∗, where F is an abelian group under +, F \ {0} is an abelian group under ∗, and multiplication distributes over addition.

Of all the algebraic structures, fields are among the nicest, because the existence of multiplicative inverses puts many restrictions on the possibilities for fields. Examples:
1. Q, R, and C are fields.
2. F_p, the finite field with p elements, is isomorphic as a ring to Z/pZ.
3. F_{pⁿ}, the finite field with pⁿ elements, is not isomorphic to Z/pⁿZ for n > 1.

2.6. MODULES

Definition 2.6.1. An R-module (a module over a ring R) is a set M with two operations, addition + : M × M → M and scalar multiplication · : R × M → M, which satisfy the following axioms:
1. M is an abelian group under +.
2. r(m + n) = rm + rn.
3. (r + s)m = rm + sm.
4. (rs)m = r(sm).
5. 1_R m = m, where 1_R is the identity element of R.

A module is a structure analogous to a vector space over a field, in the way it is defined. However, the lack of multiplicative inverses makes modules much less well-behaved than vector spaces. Now we give a few examples of modules. In each example it is important to keep in mind what exactly scalar multiplication is. First, any ring R is a module over itself, with addition as in the ring and scalar multiplication given by multiplication in R. Similarly, any vector space over a field F is an F-module, with addition and scalar multiplication as in the vector space. In fact, one definition of a vector space is a module over a field. For a non-commutative ring R regarded as a module over itself, we must choose between right multiplication and left multiplication as our scalar multiplication.
This choice can be non-trivial: the underlying set can be the same for the ring and the module, but the multiplication can differ. Any ideal of a ring is a module, with scalar multiplication again being multiplication in the ring. Here the set is different but the multiplication is the same between the module and the ring. For a specific example, the ideal (x) is an F[x]-module, where F is a field. Given an abelian group G, we can consider G as a Z-module by defining scalar multiplication in the following way: for g ∈ G and n ∈ Z, ng = g + … + g (n times). In fact, an abelian group is precisely the same thing as a Z-module. If we take a product Rⁿ = R × … × R (n times), then this is an R-module, where scaling by r ∈ R multiplies each entry by r. This is called a free R-module.

2.7. ALGEBRAS

Definition 2.7.1. Given a commutative ring R, an associative algebra over R (often just algebra over R) is a ring A that is also an R-module, with the following axioms:
1. Addition is the same in the module and the ring.
2. Zero is the same in the module and the ring.
3. The ring multiplication is compatible with scalar multiplication:

a(xy) = (ax)y = x(ay)

for x, y ∈ A, a ∈ R.

Often, the rings and modules that we consider can naturally be made into algebras. For example:
1. The ring Z/nZ is an abelian group, and thus a Z-module. It is also a ring, and as it inherits its quotient multiplication from Z, these structures are compatible.
2. The ring of n × n matrices over the real numbers is also a vector space over R of dimension n². Thus, it is an algebra over R. Here, scalar multiplication scales each component of the matrix, and multiplication is matrix multiplication.
3. The ring of polynomials F[x] is an infinite-dimensional F-vector space, and thus it is also an F-algebra. Moreover, this algebra is commutative.
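The compatibility axiom a(xy) = (ax)y = x(ay) can be verified numerically for example (2), the algebra of n × n real matrices. A minimal numpy sketch (the particular matrices and scalar are arbitrary choices, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(-5, 5, size=(3, 3)).astype(float)  # x in A
Y = rng.integers(-5, 5, size=(3, 3)).astype(float)  # y in A
a = 2.5                                             # a in R

# Axiom 3 of Definition 2.7.1: scalar multiplication (componentwise scaling)
# is compatible with the ring multiplication (matrix product).
lhs = a * (X @ Y)
assert np.allclose(lhs, (a * X) @ Y)   # a(xy) = (ax)y
assert np.allclose(lhs, X @ (a * Y))   # a(xy) = x(ay)
print("algebra compatibility axiom holds")
```

The same check fails for no choice of matrices, since it is just bilinearity of the matrix product; the point of the sketch is only to make the axiom concrete.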
In cases like this, whether we call such a structure an algebra, a ring, or a module will depend on which aspect we want to emphasize, or which claims we need to finish a proof.

3. COMMUTATIVE ALGEBRA

Now we will develop the additional preliminaries which we specifically need to understand D-modules.

3.1. EXACT SEQUENCES

First of all, a sequence in our context is a collection of spaces {Mᵢ} (these may be vector spaces, groups, rings, etc.) with a collection of homomorphisms of the appropriate type mapping Mᵢ → Mᵢ₊₁ for all i. We can express this in the following diagram:

M₁ → … → Mₙ₋₁ → Mₙ → …

Here, each arrow is a map from Mᵢ to Mᵢ₊₁. We could explicitly name each map; however, this would become very cumbersome for what we want to do later. Therefore, whenever it is possible to infer what a map is from context, we will do so. For example, if M is a module, consider the sequence

0 → M → 0

Here we can infer that the arrow on the left is the trivial injective homomorphism, as it is the only possible module homomorphism from the zero module to M. Likewise, the arrow on the right is the trivial surjection. It is worth noting that we can trivially extend any sequence by adding a zero on either end. In fact, we can add as many zeros as we want; the homomorphisms between them must be trivial. A sequence is exact at Mᵢ if the image of the map Mᵢ₋₁ → Mᵢ is equal to the kernel of the map Mᵢ → Mᵢ₊₁. If a sequence is exact at every step, we call it an exact sequence. For an example of an exact sequence, take modules M and N and a map f between them. Then we can form

0 → ker f → M → N → N/im f → 0

where the map M → N is f. Notice that for every arrow except f we can tell what the map is simply from context; from left to right, we have the zero map; the canonical injection of the kernel into M; the map f, which we could not tell from context; the canonical surjection from N onto the quotient; and the zero map.
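For finite-dimensional real vector spaces, the exactness of 0 → ker f → M → N → N/im f → 0 can be checked numerically: the rank of a matrix gives dim im f, and rank-nullity gives the remaining dimensions. A sketch with numpy (the specific matrix is an arbitrary example, not from the thesis):

```python
import numpy as np

# A linear map f : R^5 -> R^4 given by a matrix whose rows are dependent,
# so that both the kernel and the cokernel N/im f are nonzero.
f = np.array([[1., 0., 2., 0., 1.],
              [0., 1., 0., 1., 0.],
              [1., 1., 2., 1., 1.],   # row 3 = row 1 + row 2
              [0., 0., 0., 0., 0.]])  # zero row

dim_M = f.shape[1]                  # dim M = 5 (domain)
dim_N = f.shape[0]                  # dim N = 4 (codomain)
rank = np.linalg.matrix_rank(f)     # dim im f
dim_ker = dim_M - rank              # rank-nullity theorem
dim_coker = dim_N - rank            # dim N/im f

# Alternating sum of dimensions along the exact sequence
# 0 -> ker f -> M -> N -> N/im f -> 0 vanishes:
assert dim_ker - dim_M + dim_N - dim_coker == 0
print(dim_ker, dim_M, dim_N, dim_coker)
```

This vanishing alternating sum is an instance of the additivity of the dimension function on exact sequences of vector spaces.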
Notice that an exact sequence with only two nonzero terms forces the map between those terms to be an isomorphism. This case is not too interesting, so we rarely consider such sequences. When there are three nonzero terms, however, there is a definite structure. Sequences of this type are called short exact sequences:

0 → M₁ → M₂ → M₃ → 0

We observe that in a short exact sequence, the first nontrivial map must be injective and the second must be surjective. In this context, that means the third module must be (isomorphic to) the quotient M₂/M₁. If we have an exact sequence of finite-dimensional vector spaces over some field k, we can consider the dimension function dim_k from the set (category) of all finite-dimensional vector spaces to Z⁺. Observe that for any short exact sequence of vector spaces

0 → V₁ → V₂ → V₃ → 0

we have that dim_k(V₂) = dim_k(V₁) + dim_k(V₃), a result from linear algebra. We say that a function is additive when it behaves this way on short exact sequences.

Lemma 3.1.1. For any additive function λ (such as dim_k) and any exact sequence

0 → V₁ → V₂ → … → Vₙ → 0

we have the following identity:

Σᵢ₌₁ⁿ (−1)ⁱ λ(Vᵢ) = 0

Proof. We use induction on n. For a short exact sequence (n = 3), additivity gives λ(V₂) = λ(V₁) + λ(V₃), i.e. −λ(V₁) + λ(V₂) − λ(V₃) = 0, which is the base case. Now consider the exact sequence

0 → V₁ → V₂ → … → Vₙ₋₁ → Vₙ → 0

In this sequence there is a surjective map Vₙ₋₁ → Vₙ; call it φ. Considering ker φ, we can construct another sequence

0 → V₁ → V₂ → … → Vₙ₋₂ → ker φ → 0

By exactness at Vₙ₋₁, the map Vₙ₋₂ → ker φ is surjective and this sequence is exact. Also, we can construct a short exact sequence

0 → ker φ → Vₙ₋₁ → Vₙ → 0

By additivity on short exact sequences, we have λ(ker φ) = λ(Vₙ₋₁) − λ(Vₙ). By the induction hypothesis,

0 = Σᵢ₌₁ⁿ⁻² (−1)ⁱ λ(Vᵢ) + (−1)ⁿ⁻¹ λ(ker φ)
  = Σᵢ₌₁ⁿ⁻² (−1)ⁱ λ(Vᵢ) + (−1)ⁿ⁻¹ λ(Vₙ₋₁) + (−1)ⁿ λ(Vₙ)
  = Σᵢ₌₁ⁿ (−1)ⁱ λ(Vᵢ)

3.2.
GRADED AND FILTERED RINGS

We will now describe two additional structures which can be put on a ring to give more information about it. The idea is to help understand "big" rings and algebras which are perhaps infinite-dimensional as vector spaces over a field k. For example, the polynomial ring k[x] in one variable is infinite-dimensional as a vector space over k. A ring A is graded if it can be "broken up" as a direct sum of additive subgroups; that is, if

A = ⊕ₙ₌₀^∞ Aₙ

where the Aₙ are additive subgroups of A and Aₙ · Aₘ ⊂ Aₙ₊ₘ for all m, n ∈ Z⁺. We can see that A₀ is thus a subring of A, and that each Aₙ is an A₀-module under multiplication in A. For example, the polynomial ring A = k[x] over a field is a graded ring, where each additive subgroup Aₙ is the set of terms of degree n. In fact, the analogous grading by total degree makes k[x₁, …, xₙ] a graded ring. In a completely analogous way, we can define the notions of a graded algebra and a graded module. For example, k[x] is a graded algebra over the field k, and a graded module over itself. An element of a graded ring is homogeneous if it is contained in a single Aₙ. For example, take the ring A = k[x, y], where k is a field. In this case, x² + y is not a homogeneous element, but x², y³, and x⁴ + y⁴ are homogeneous. Any element can be written as a sum of homogeneous elements.

Lemma 3.2.1. If A = ⊕ₙ₌₀^∞ Aₙ is a graded ring that is finitely generated (as an A₀-algebra), then every Aₙ is finitely generated as a module over A₀. Moreover, if a graded module M = ⊕ₙ₌₀^∞ Mₙ is a finitely generated A-module, then each Mₙ is a finitely generated A₀-module.

Proof. The first statement is a special case of the second, with M = A as a module over itself. Take a set of homogeneous generators for M (given a set of non-homogeneous generators, we can simply split them into their homogeneous parts). Call this generating set {mᵢ}. Then, for any n, pick m ∈ Mₙ.
Because M is generated by the mᵢ, we can write m = Σᵢ yᵢmᵢ with yᵢ ∈ A_{n − deg mᵢ}. Now, A is finitely generated as an A₀-algebra; in particular, each yᵢ can be written as an A₀-linear combination yᵢ = Σⱼ aᵢⱼzⱼ of finitely many homogeneous products zⱼ of the algebra generators, with aᵢⱼ ∈ A₀. Since yᵢ lies in A_{n − deg mᵢ}, each zⱼ appearing may be taken in A_{n − deg mᵢ} by the graded multiplication property. Finally, we can combine the two formulas to write m = Σᵢ,ⱼ aᵢⱼzⱼmᵢ, a finite linear combination with coefficients in A₀, as desired.

Definition 3.2.2. An increasing filtration of a ring A is an increasing chain of subgroups (of the underlying abelian group structure)

A₀ ⊂ A₁ ⊂ … ⊂ Aₘ ⊂ … ⊂ A

where ∪ᵢ Aᵢ = A. We say a ring is filtered if it admits such a filtration.

Again, the easiest example is the polynomial ring, where Aₘ is the subgroup of polynomials of degree less than or equal to m. Both gradings and filtrations serve a similar purpose. Giving a ring a grading breaks it into smaller, disjoint parts, which may be easier to study. On the other hand, each of the intermediate abelian subgroups of a filtration contains all of the information from every smaller part, which can be useful in certain contexts. We will need both tools in the following discussion.

4. RINGS AND MODULES OF DIFFERENTIAL OPERATORS

4.1. RINGS OF DIFFERENTIAL OPERATORS

The concept of differential operators is a way of describing differentiation algebraically. From the first definitions in basic calculus, we know that the derivative is a construction built mostly from analysis; it involves limits, continuity, and the Euclidean metric. However, we would like to forget about these details and talk about the derivative from a purely algebraic perspective. We do this by considering the action of the derivative on some set of functions. Consider, for example, the algebra A of C^∞ functions from R to R.
Then we can talk about how the map d/dx : A → A acts on these functions. For example:

x² ↦ 2x
x⁴ + 4x³ + 2 ↦ 4x³ + 12x²
eˣ ↦ eˣ

We can further describe the map d/dx (which I will refer to as ∂ from now on for convenience) by our derivative rules from calculus. For example, ∂ is linear with respect to addition and constant multiplication, making it an R-linear endomorphism of A. We will call the space End(A) the space of operators on A. End(A) is an algebra, where the multiplication operation is function composition. In addition to linearity, the derivative must obey Leibniz's rule:

∂(fg) = (∂f)g + f(∂g)    (4)

In order to interpret Leibniz's rule in the context of the algebra End(A), we must introduce multiplication endomorphisms. For any function f ∈ C^∞(R), we define the element f ∈ End(A) to be multiplication by f: f applied to g is fg. This is not to be confused with function composition, which is the "multiplication" of the ring End(A). Thus, C^∞(R) is a subset of End(A) as a set, but it is not a subalgebra, because of the different multiplications. With these conventions in place, (4) becomes

(∂f)g = ∂(fg) − f(∂g) = [∂, f]g

where [·, ·] is the standard commutator. Notice that End(A) is non-commutative, unlike A. As (∂f) is just a function, which (as a multiplication operator) commutes with every other multiplication operator, we have

[[∂, f], g] = 0    (5)

In fact, this rule is exactly equivalent to Leibniz's rule, and it is the purely algebraic description we sought earlier. We could attempt to encode other properties into our algebraic description, such as the quotient rule or the derivatives of transcendental functions, but it turns out that this will be enough for our applications. In our case, we are concerned primarily with polynomial functions, i.e. A will be the algebra of polynomials over a field, not the algebra of C^∞ functions. Thus, we use equation (5) to define our notion:

Definition 4.1.1.
A differential operator of order less than or equal to 1 on an algebra A over a field k is a member of End(A) which satisfies equation (5) for all multiplication operators f, g.

The phrase "order less than or equal to 1" in the definition suggests that we must be careful about orders. Intuitively, it would be nice if d/dx = ∂ had order 1, d²/dx² = ∂² had order 2, ∂³ had order 3, etc. First, let's consider ∂² for a moment. This operator does NOT satisfy equation (5), as can easily be checked, but its commutator with a multiplication operator does:

[∂², f]g = ∂²(fg) − f∂²g
        = ∂((∂f)g + f∂g) − f∂²g
        = (∂²f)g + (∂f)(∂g) + ∂(f∂g) − f∂²g
        = (∂²f)g + (∂f)(∂g) + (∂f)(∂g) + f∂²g − f∂²g
        = (∂²f)g + 2(∂f)(∂g)

Here, both the second and the fourth steps use Leibniz's rule (4). Inspecting the result, we see that the terms in which ∂² is applied to g have cancelled; [∂², f] is in fact a differential operator of order at most 1. Thus, we can write an analogue of equation (5) for ∂²:

[[[∂², f], g], h] = 0

for multiplication operators f, g, h. This will be our definition of a differential operator of order less than or equal to 2. Note that any differential operator of order at most 1 satisfies this property as well, because for such an operator the inner commutator with f and g is already zero. Now we are ready to define differential operators in general.

Definition 4.1.2. An element D ∈ End(A) which satisfies

[…[[D, f₀], f₁], …, fₙ] = 0    (6)

for any f₀, …, fₙ ∈ A (acting as multiplication operators) is a differential operator of order less than or equal to n. If an operator ψ ∈ End(A) is a differential operator of order at most n but not of order at most n − 1, then we say ψ has order n.

The care taken with orders is necessitated by the fact that an operator which satisfies Definition 4.1.2 for some n₀ does so for every n greater than n₀. Definition 4.1.2 also recovers our intuition that dⁿ/dxⁿ = ∂ⁿ is a differential operator of order n.
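The commutator computations above can be checked mechanically. The sketch below (not from the thesis) uses sympy and represents operators as Python functions acting on polynomial expressions; `mult` and `commutator` are helper names introduced here for illustration.

```python
import sympy as sp

x = sp.symbols("x")

d = lambda g: sp.diff(g, x)        # the derivation ∂
d2 = lambda g: sp.diff(g, x, 2)    # ∂² = ∂ composed with itself

def mult(f):
    """The multiplication-by-f operator in End(A)."""
    return lambda g: f * g

def commutator(P, Q):
    """[P, Q] = P∘Q − Q∘P in End(A)."""
    return lambda g: P(Q(g)) - Q(P(g))

f, g, h = x**2, x**3, x + 1        # sample elements, used as multiplication operators
test_fn = x**4                     # an element of A to apply the operators to

# Equation (5): [[∂, f], g] = 0, so ∂ has order at most 1.
op1 = commutator(commutator(d, mult(f)), mult(g))
assert sp.expand(op1(test_fn)) == 0

# ∂² does NOT satisfy (5): [[∂², f], g] is nonzero ...
op2 = commutator(commutator(d2, mult(f)), mult(g))
assert sp.expand(op2(test_fn)) != 0

# ... but one more commutator kills it: ∂² has order at most 2.
op3 = commutator(op2, mult(h))
assert sp.expand(op3(test_fn)) == 0
print("order checks pass")
```

Here [[∂², f], g] acts as multiplication by 2(∂f)(∂g), matching the displayed computation: it is an order-0 operator, so it commutes with the multiplication operator h.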
In addition, note that every multiplication map is a differential operator of order 0. It's important to note that an element like x∂ (differentiate and then multiply by x) is also a differential operator of order 1 by this definition. The subalgebra of End(A) generated by the differential operators is denoted Diff(A) and is called the ring of all differential operators on A. In this case, Diff(A) is generated by ∂ and all multiplication operators.

Having defined differential operators for a general algebra of functions A, for the remaining chapters we will restrict ourselves to the case when A is a ring of polynomials. Other spaces, like C∞ functions, don't necessarily have the algebraic properties we're looking for; for example, it is hard to write down a generating set. However, if we consider the algebra A = k[x₁, . . . , xₙ] for some field k, then Diff(A) is generated by x₁, . . . , xₙ and ∂₁, . . . , ∂ₙ, where ∂ᵢ = d/dxᵢ. We call this ring the ring of differential operators with polynomial coefficients over the field k in n variables. For brevity, we write D(n) = Diff(k[x₁, . . . , xₙ]).

We can also write down the relations which define D(n). By (5) and the fact that the derivative of x is 1, we have that [∂ᵢ, xⱼ] = δᵢⱼ, where δᵢⱼ is 1 when i = j and 0 otherwise. This encapsulates the fact that ∂ᵢ and xⱼ commute when i ≠ j, as they should, and that ∂ᵢ and xᵢ obey Leibniz' rule.

Lemma 4.1.3. The center of D(n) is equal to k.

Proof. By the commutator relations, we can write any differential operator D as a sum with multiplication operators on the left and ∂'s on the right:

D = Σ_I P_I(x₁, . . . , xₙ) ∂^I

Here, P_I is a polynomial in the x's, and I = (i₁, . . . , iₙ) is a multi-index (i.e. ∂^I = ∂₁^{i₁} · · · ∂ₙ^{iₙ}). Say D is in the center of D(n). Then, if D contains any nonconstant xᵢ term, it does not commute with ∂ᵢ. Likewise, if D has a nonconstant ∂ᵢ term, it does not commute with xᵢ.
Therefore, each P_I must be constant, and the only nonzero term in the sum is the one with I = 0. Thus, only constant multiplication operators are in the center, and these are isomorphic to k.

Why would differential operators be useful? We provide a simple, slightly tangential illustration: we will use the algebraic formulation of differential operators to better understand differential equations. Given two rings A and B and a ring homomorphism Φ : B → A, we can think of any A-module M as a B-module. We do this by defining addition in the same way and scalar multiplication in the following way:

b ∗ m = Φ(b)m

as defined by the A-module multiplication. We see that this is a B-module, as axioms 1-2 (2.6.1) are inherited from the A-module structure and 3-5 follow from the fact that Φ is a homomorphism.

Now, if we let A be D(1) =: D and B be k[t], then we can define the homomorphism Φ : B → A which takes the variable t to the operator d/dx. Then any D-module can be realized as a k[t]-module. Let M be the D-module of C∞ functions in one real variable, with the action of operators as scalar multiplication. As an example, we evaluate the scalar multiplication by (t² + t + 1):

(t² + t + 1)f = d²f/dx² + df/dx + f

Now, we see that the set of elements of M annihilated by t² + t + 1 is the set of solutions to the differential equation d²f/dx² + df/dx + f = 0.

Likewise, for D(n), we can consider k[t₁, . . . , tₙ], define the homomorphism tₖ ↦ d/dxₖ, and think of any D(n)-module as a k[t₁, . . . , tₙ]-module. If M is the D(n)-module of C∞ functions in n real variables, then the set of elements of M annihilated by an element of the ring gives the solutions to a differential equation. For example, in D(3), the elements annihilated by t₁² + t₂² + t₃² form the solution set of the Laplace equation.

4.2. FILTRATIONS ON THE RING OF D-MODULES

Our goal is to learn more about the ring D(n); in particular, we would like to define the notion of the dimension of a module over D(n).
Our first step towards doing this is to put a filtration (see definition 3.2.2) on the ring. In order for a filtration to fit the structure of D(n) better than an arbitrary filtration would, we require the following extra properties:

1. 1 ∈ D₀
2. Dₙ · Dₘ ⊂ D_{n+m} for any n, m ∈ Z
3. [Dₙ, Dₘ] ⊂ D_{n+m−1} for any n, m ∈ Z

We will call such a filtration a structured filtration to denote its extra structure. There are two examples of structured filtrations that we will consider. First, in the ring D(1) we can write any element as Σ aᵢⱼ xⁱ ∂ʲ for 0 ≤ i, j < ∞, where only finitely many aᵢⱼ are nonzero; here ∂ is derivation with respect to x. Then, we can define a filtration in the following way:

Dₘ = { Σ aᵢⱼ xⁱ ∂ʲ | j ≤ m }    (7)

That is, Dₘ is the set of differential operators of order at most m. We call this the characteristic filtration. We can also define another filtration,

Dₘ = { Σ aᵢⱼ xⁱ ∂ʲ | i + j ≤ m }    (8)

This also puts a restriction on the degree of the x term, which is useful in certain situations. This filtration is called the Bernstein filtration.

For the more general ring D(n), we can define x^I = x₁^{i₁} · · · xₙ^{iₙ} and ∂^J = ∂₁^{j₁} · · · ∂ₙ^{jₙ} for I, J ∈ Z₊ⁿ, and write any element of D(n) as Σ a_{IJ} x^I ∂^J. Set |I| = i₁ + . . . + iₙ and |J| = j₁ + . . . + jₙ. Now, we have the characteristic filtration

Dₘ = { Σ a_{IJ} x^I ∂^J | |J| ≤ m }    (9)

And we also have the corresponding Bernstein filtration:

Dₘ = { Σ a_{IJ} x^I ∂^J | |I| + |J| ≤ m }

Both of these filtrations are structured filtrations. Because 1 is a differential operator of order 0, it is in D₀ for the characteristic filtration; for the Bernstein filtration we see that I and J are both zero for the element 1, so we also have 1 ∈ D₀ and (1) is satisfied.
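These two filtrations are easy to experiment with on a computer. The sketch below (in Python; the dictionary encoding of operators and all function names are ours, not the thesis's) stores an element Σ aᵢⱼ xⁱ∂ʲ of D(1) in normal form, multiplies using the standard normal-ordering identity ∂ʲ xᵏ = Σᵣ C(j, r) · k!/(k−r)! · x^{k−r} ∂^{j−r}, and computes the degree of an element in each filtration:

```python
from math import comb, perm
from collections import defaultdict

# An element Σ a_ij x^i ∂^j of D(1), stored in normal form as {(i, j): a_ij}.
def mul(S, T):
    """Product in D(1) via ∂^j x^k = Σ_r C(j,r) · k!/(k−r)! · x^(k−r) ∂^(j−r)."""
    out = defaultdict(int)
    for (i, j), a in S.items():
        for (k, l), b in T.items():
            for r in range(min(j, k) + 1):
                out[(i + k - r, j + l - r)] += a * b * comb(j, r) * perm(k, r)
    return {m: c for m, c in out.items() if c != 0}

def char_deg(S):
    """Characteristic degree: the order of S as a differential operator, max j."""
    return max(j for (i, j) in S)

def bern_deg(S):
    """Bernstein degree: max i + j."""
    return max(i + j for (i, j) in S)

T = {(5, 1): 1, (1, 2): 3}         # x^5 ∂ + 3x∂²
assert (char_deg(T), bern_deg(T)) == (2, 6)

# Degrees add on the highest-order terms, as in equation (10):
d2, x2 = {(0, 2): 1}, {(2, 0): 1}  # ∂² and x²
assert mul(d2, x2) == {(2, 2): 1, (1, 1): 4, (0, 0): 2}   # ∂²x² = x²∂² + 4x∂ + 2
assert bern_deg(mul(d2, x2)) == bern_deg(d2) + bern_deg(x2)
```

The last computation also previews Lemma 4.2.1: the commutator [∂², x²] = 4x∂ + 2 has Bernstein degree 2, two less than the sum 2 + 2 of the degrees of its factors.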
Now, for two arbitrary differential operators,

(Σ a_{I₁J₁} x^{I₁} ∂^{J₁})(Σ a_{I₂J₂} x^{I₂} ∂^{J₂}) = Σ a_{I₁J₁} a_{I₂J₂} x^{I₁+I₂} ∂^{J₁+J₂} + (lower degree terms)    (10)

which has degree |J₁| + |J₂| for the characteristic filtration and degree |I₁| + |I₂| + |J₁| + |J₂| for the Bernstein filtration. This wraps up (2). We will push the proof of (3) for the characteristic filtration to the next section, where the ideas developed there will make the proof easy. To show that the Bernstein filtration satisfies (3), we will prove a slightly stronger statement which we will use later.

Lemma 4.2.1. The Bernstein filtration for D := D(n) satisfies the following property, which is stronger than (3):

[Dₙ, Dₘ] ⊂ D_{n+m−2} for any n, m ∈ Z

Proof. It is enough to prove this for a basis. Take two basis elements x^I ∂^J and x^{I′} ∂^{J′}. Then, when we compute the commutator, we get

[x^I ∂^J, x^{I′} ∂^{J′}] = x^I ∂^J x^{I′} ∂^{J′} − x^{I′} ∂^{J′} x^I ∂^J
= x^I (∂^J x^{I′} − x^{I′} ∂^J + x^{I′} ∂^J) ∂^{J′} − x^{I′} (∂^{J′} x^I − x^I ∂^{J′} + x^I ∂^{J′}) ∂^J
= x^I [∂^J, x^{I′}] ∂^{J′} + x^I x^{I′} ∂^J ∂^{J′} − x^{I′} [∂^{J′}, x^I] ∂^J − x^{I′} x^I ∂^{J′} ∂^J
= x^I [∂^J, x^{I′}] ∂^{J′} − x^{I′} [∂^{J′}, x^I] ∂^J

where the last equality follows from the commutativity of the ∂'s with each other, and of the x's with each other. This computation shows that it suffices to prove the claim for the simpler case [∂^J, x^{I′}], since the factors on the outside cannot "add" any more to the degree than the sizes of their multi-indices. We will drop the primes and write the elements in the simpler case as ∂^I and x^J.

Proceed by induction on |I|. If |I| = 1, then ∂^I = ∂ᵢ for some i, and [∂ᵢ, x^J] is nonzero if and only if jᵢ ≥ 1. In that case [∂ᵢ, x^J] = jᵢ x^{J−eᵢ}, where eᵢ is the i-th standard multi-index; this lies in D_{|J|−1}(n), proving the base case. If 1 < |I|, we can write ∂^I = ∂^{I′} ∂ᵢ for some i and some I′ with |I′| = |I| − 1. Then,

[∂^{I′} ∂ᵢ, x^J] = ∂^{I′} ∂ᵢ x^J − x^J ∂^{I′} ∂ᵢ
= ∂^{I′} ∂ᵢ x^J − ∂^{I′} x^J ∂ᵢ + ∂^{I′} x^J ∂ᵢ − x^J ∂^{I′} ∂ᵢ
= ∂^{I′} [∂ᵢ, x^J] + [∂^{I′}, x^J] ∂ᵢ
= ∂^{I′} [∂ᵢ, x^J] − [∂ᵢ, x^J] ∂^{I′} + [∂ᵢ, x^J] ∂^{I′} + [∂^{I′}, x^J] ∂ᵢ
= [∂^{I′}, [∂ᵢ, x^J]] + [∂ᵢ, x^J] ∂^{I′} + [∂^{I′}, x^J] ∂ᵢ

And we now observe that each of these terms, by the induction hypothesis, must be in D_{|I|+|J|−2}(n), as desired.

Corollary 4.2.2. The Bernstein filtration for D(n) satisfies (3).

4.3. MAKING A GRADED RING FROM A FILTERED RING

The structured filtration allows for another construction that will be useful to us: we can make a graded ring from the filtration. If D is any such ring (with D(n) being the primary example, of course), then we define Gr D, the graded ring from D, as

Gr D = ⊕_{n∈Z} Dₙ/D_{n−1}    (11)

Notice that the structured filtration properties are exactly the right properties to make this construction make sense. First, (1) shows that the identity element is in Gr D, and (2) shows that the grading property is satisfied. Property (3) precisely shows that this ring is commutative. That's the whole point of the construction: the ring D(n) is not commutative at all (see 4.1.3), but Gr D is a commutative ring, which is much nicer. Here's a concrete example: take D(1) with the characteristic filtration; the elements x∂ and ∂x = x∂ + 1 are both in D₁, and in the quotient D₁/D₀ these elements are equivalent. Also, we have a canonical map from D to Gr D, which we can think of as "keeping only the terms of highest order". The identity is in D₀.

We can think of Gr D as a module over D₀, and as it is also a ring, it is an algebra over D₀. We write Grⁿ D = Dₙ/D_{n−1}, so that D₀ = Gr⁰ D and Gr D = ⊕_{n∈Z} Grⁿ D. Now, we will assume yet another set of conditions on D:

1. Gr D is noetherian as a ring
2. Gr¹ D generates Gr D as a D₀-algebra

Then, we see that D₀ is a noetherian ring, because it is the quotient of Gr D by the ideal ⊕_{n=1}^∞ Grⁿ D (see lemma 2.4.5).
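The passage from D(1) to the commutative ring Gr D(1) can be made computable with a small sketch (Python with sympy; the dictionary encoding and the name `symbol` are ours): an element of D(1) is stored as {(i, j): a_ij} for Σ a_ij xⁱ∂ʲ, and its class in Grᵐ D(1) for the characteristic filtration keeps only the order-m terms, with ∂ replaced by a commuting variable ξ.

```python
import sympy as sp

x, xi = sp.symbols('x xi')

def symbol(T, m=None):
    """Class in Gr^m D(1) (characteristic filtration) of T = {(i, j): a_ij},
    representing Σ a_ij x^i ∂^j: keep the terms with j = m, replacing ∂ by ξ.
    By default m is the order of T, giving the principal symbol."""
    if m is None:
        m = max(j for (i, j) in T)
    return sp.expand(sum(a * x**i * xi**j for (i, j), a in T.items() if j == m))

x_d = {(1, 1): 1}             # x∂
d_x = {(1, 1): 1, (0, 0): 1}  # ∂x = x∂ + 1

# In D(1) these differ, but their classes in Gr^1 D(1) = D_1/D_0 agree:
assert symbol(x_d, 1) == symbol(d_x, 1) == x * xi

# Lower-order terms are quotiented away:
assert symbol({(0, 2): 1, (1, 1): 1}) == xi**2
```

The resulting expressions live in the commutative polynomial ring in x and ξ, a one-variable instance of the isomorphism Gr D(n) ≅ k[x₁, . . . , xₙ, ξ₁, . . . , ξₙ] described next.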
Also, because Gr D is noetherian it is finitely generated (corollary 2.4.4), and thus every Grⁿ D is finitely generated (lemma 3.2.1). In particular Gr¹ D is finitely generated. Thus:

Remark 4.3.1. We can choose finitely many ξ₁, . . . , ξ_s ∈ Gr¹ D which generate Gr D as a D₀-algebra.

This "fixes" the other side of the inclusion in the graded ring property, and we can write

Gr^{n+1} D = Gr¹ D · Grⁿ D    (12)

and also,

D_{n+1} = D₁ · Dₙ    (13)

As a concrete example, if the ring of differential operators D(n) is equipped with the characteristic filtration, we can explicitly describe the ring Gr D(n). We consider the map from D(n) to k[x₁, . . . , xₙ, ξ₁, . . . , ξₙ] defined in the following way: a differential operator of order m can be written as Σ_{|J|≤m} a_{IJ} x^I ∂^J, and we define the map by

Σ_{|J|≤m} a_{IJ} x^I ∂^J ↦ Σ_{|J|=m} a_{IJ} x^I ξ^J

where ξ^J = ξ₁^{j₁} · · · ξₙ^{jₙ}, in accordance with the previous notation. As stated before, we can think of this as "picking out the highest order terms"; this is the effect of taking the quotient of each filtered piece. Any failure of an x to commute with a ∂ produces terms of strictly lower order, which are quotiented away, so the images of the xᵢ and the ξⱼ commute with each other. From this, we see that Gr D(n) is isomorphic to the polynomial ring k[x₁, . . . , xₙ, ξ₁, . . . , ξₙ] in 2n variables. Therefore, it is noetherian and generated by Gr¹ D(n), and satisfies all of the properties of the previous section. In particular, as polynomial rings are commutative, this implies that for any two differential operators T ∈ Dₙ and S ∈ Dₘ, the commutator [T, S] is in D_{n+m−1} (that is, its image in the quotient ring is zero). This gives us the following lemma, finishing the proof that the characteristic filtration is a structured filtration.

Lemma 4.3.2. The characteristic filtration for D := D(n) satisfies (3) from the definition of a structured filtration.

4.4. D-MODULE FILTRATIONS

Earlier, we discussed filtrations of rings.
We can define filtrations for modules in an analogous way:

Definition 4.4.1. Let M be a D-module for a filtered ring D. An increasing filtration F M is a sequence of submodules . . . ⊂ F₀M ⊂ F₁M ⊂ . . . ⊂ FₘM ⊂ . . . ⊂ M. Furthermore, F M is a D-module filtration if Dₙ · FₘM ⊂ F_{m+n}M for all n ∈ Z₊ and m ∈ Z.

Remark 4.4.2. If n = 0, the D-module filtration condition becomes D₀ · FₘM ⊂ FₘM, making each FₘM a D₀-module.

Notice that we allow negative indices m. This is mostly for aesthetic reasons, so that index shifts are not necessary. For most examples, F₋ₙM = 0 for all n ∈ N. As in the case of rings, there are a few extra properties that we will want our D-module filtrations to have:

A D-module filtration is hausdorff if ∩_{n∈Z} FₙM = 0.

A D-module filtration is exhaustive if ∪_{n∈Z} FₙM = M.

A D-module filtration is stable if there exists an m₀ ∈ Z for which Dₙ · FₘM = F_{m+n}M for all n ∈ Z₊ and m ≥ m₀. Note that this requirement is stronger than the definition of a D-module filtration, which only requires inclusion. However, a stable filtration need only have equality for large m.

Put colloquially, exhaustive is a condition that tells us the filtration gets big enough, while stable tells us that the filtration does not get too big, too fast. As a non-example of exhaustive, for a ring D with a proper ideal I, consider the module M = D with the filtration given by F₀M = 0 and FₘM = I for all m ≥ 1. Then ∪_{n∈Z} FₙM = I, a strict subset of M, so this filtration is not exhaustive.

For our considerations, we will want filtrations which satisfy both colloquial conditions; they must grow fast enough to exhaust the module, but slowly enough that they give us information about the module. Let us make this statement more precise. A D-module filtration is good if

1. FₙM = 0 for n less than or equal to some n₀
2. F M is exhaustive
3. the FₙM are finitely generated as D₀-modules
4. F M is stable

Notice, first, that by property (1) any good filtration is also hausdorff.
Condition (2) ensures that F M grows "fast enough", and conditions (3) and (4) ensure that it grows "slowly enough". We will primarily be concerned with good filtrations for the purposes of defining dimension. Again, put colloquially, we will find a way to "measure the growth" of a filtration, and this will be what we call the dimension of a filtered module.

Let F be the Bernstein filtration of M = D(n) as a module over itself. Then F is good. By its definition, F is exhaustive, and we define F₋ₘM := 0 for m ∈ N, so that F is hausdorff and satisfies property (1). Stability follows from remark 4.3.1, and (3) follows from the definition of F, i.e. there are only finitely many x^I ∂^J with |I| + |J| ≤ m.

We will now endeavor to describe what it means for a filtration to be equivalent to another filtration. First, however, we need a preliminary theorem:

Theorem 4.4.3. Let M be a D-module. M is finitely generated if and only if there exists a good D-module filtration on M.

The "only if" direction of this proof will turn out to be straightforward from the definitions. However, the "if" direction requires a lemma:

Lemma 4.4.4. Let F M be an exhaustive, hausdorff D-module filtration of M. The following are equivalent:

1. F M is a good filtration
2. Gr M is finitely generated as a Gr D-module

Proof. (i) ⟹ (ii) Because F M is stable, there exists an m₀ ∈ Z with Dₙ · F_{m₀}M = F_{m₀+n}M for all n ∈ Z₊. This implies that Grⁿ D · Gr^{m₀} M = Gr^{n+m₀} M for all n ∈ Z₊ (inclusion from left to right is by definition; from right to left, we see that if any member of F_{m₀+n}M can be written as a combination of the other two, then so can its image under the quotient maps). Thus, ⊕_{p≤m₀} Grᵖ M generates Gr M as a Gr D-module, since for any p ≥ m₀ we can generate Grᵖ M by plugging in p − m₀ for n in the previous equation. As the FₘM are finitely generated D₀-modules by property (3) of good filtrations, so are the quotient spaces Grᵐ M.
And, as FₘM = 0 for all m ≤ n₀, the sum ⊕_{p≤m₀} Grᵖ M has only finitely many nonzero summands, so it is a finitely generated D₀-module. As D₀ ⊂ Gr D, it is certainly finitely generated as a Gr D-module as well.

(ii) ⟹ (i) We must show that any filtration with (ii) has properties 1-4 of a good filtration. We already know by assumption that the filtration is exhaustive. By 3.2.1, each Grⁿ M is a finitely generated Gr D-module, and hence each Grⁿ M is also a finitely generated D₀-module. For property 1, note that the finitely many generators of Gr M lie in finitely many degrees, so there is an n₀ with Grⁿ M = {0} for all n ≤ n₀. By definition Grⁿ M = FₙM/F_{n−1}M, so FₙM = F_{n−1}M for these n, and by induction FₙM = F_{n₀}M for all n ≤ n₀. Thus ∩_{n∈Z} FₙM = F_{n₀}M. By assumption the filtration is hausdorff, so F_{n₀}M = {0}. We can see, by induction on n and the fact that each Grⁿ M is finitely generated, that the FₙM are finitely generated D₀-modules, proving 3.

Now, we must prove that the filtration is stable. Choose an m₀ so that ⊕_{n≤m₀} Grⁿ M generates Gr M as a Gr D-module, which exists as shown above. Now, we know, because F M is a D-module filtration, that Grⁿ D · Grᵐ M ⊂ Gr^{n+m} M. Combining these two facts, we can, for any m ≥ m₀, write

Gr^{m+1} M = Σ_{k≤m₀} Gr^{m+1−k} D · Grᵏ M

and, applying equation (12), this is contained in

Σ_{k≤m₀} Gr¹ D · Gr^{m−k} D · Grᵏ M

Now, if we apply the property of D-module filtrations twice more, we obtain

Gr^{m+1} M ⊂ Gr¹ D · Grᵐ M ⊂ Gr^{m+1} M

which implies equality of these two sets. Because of this,

F_{m+1}M = D₁ · FₘM + FₘM = D₁ · FₘM

since 1 ∈ D₀ ⊂ D₁. By induction on n, we can write

F_{m+n}M = D₁ · . . . · D₁ · FₘM ⊂ Dₙ · FₘM

with the last inclusion being true by property (2) of the filtration on D. And because F M is a D-module filtration, the other direction of containment is also satisfied, giving F_{m+n}M = Dₙ · FₘM for all n; hence F M is stable and thus is a good filtration.

Proof. (of Theorem 4.4.3) First, we prove the "only if" direction: let F M be a good filtration.
By definition, ∪_{n∈Z} FₙM = M and F_{m₀+n}M = Dₙ · F_{m₀}M for all n ∈ Z₊ and some m₀ ∈ Z (conditions 2 and 4). Thus, F_{m₀}M generates M as a D-module. By condition 3, F_{m₀}M is a finitely generated D₀-module, so its finitely many generators certainly generate M as a D-module.

Now, the "if" direction: let U be the D₀-module generated by a finite generating set of M as a D-module. Define F M in the following way: FₙM = 0 for all n < 0, and FₙM = Dₙ · U for all n ≥ 0. Then we have, of course, that U = Gr⁰ M and

Grⁿ M = FₙM/F_{n−1}M = (Dₙ · U)/(D_{n−1} · U) ⊂ Grⁿ D · Gr⁰ M

where we can conclude that Grⁿ M = Grⁿ D · Gr⁰ M, as the other inclusion is trivial. This works for all n, and as U = Gr⁰ M generates M, Gr M is finitely generated as a Gr D-module. Now, apply the lemma.

Now, we can move on to the definition of equivalence of filtrations.

Definition 4.4.5. Given two filtrations F M and F′ M, we say that F M is finer than F′ M if there exists a k ∈ Z₊ such that FₙM ⊂ F′_{n+k}M for all n ∈ Z. If F M is finer than F′ M and F′ M is finer than F M, then the two filtrations are equivalent.

The term finer is a precise definition of what it means for a filtration to "grow faster": a filtration F M is finer than F′ M if we can fit (up to a shift in index) each piece of the finer filtration into the "looser" one. As this relationship is symmetric for equivalent filtrations, we can in some sense say that they "grow at the same rate".

Lemma 4.4.6. If F M is a good filtration on a finitely generated D-module M, then F M is finer than any other exhaustive filtration on M.

Proof. Assume, by a shift in indices, that FₙM = 0 for all n < 0. As F M is good, choose m₀ ∈ Z₊ such that Dₙ · F_{m₀}M = F_{n+m₀}M for all n ∈ Z₊. Also, F_{m₀}M is finitely generated as a D₀-module. Now, if F′ M is another exhaustive filtration, then there exists some k where F_{m₀}M ⊂ F′ₖM. Now, we must check that FₘM ⊂ F′_{m+k}M for all m. If m < 0, then FₘM = 0 ⊂ F′_{m+k}M and there is nothing to prove.
If 0 ≤ m ≤ m₀, then k ≤ m + k implies that

FₘM ⊂ F_{m₀}M ⊂ F′ₖM ⊂ F′_{m+k}M

Lastly, for m₀ < m, by the definition of D-module filtrations and the stability of good filtrations, we have

FₘM = D_{m−m₀} · F_{m₀}M ⊂ Dₘ · F_{m₀}M ⊂ Dₘ · F′ₖM ⊂ F′_{m+k}M

Corollary 4.4.7. Any two good filtrations on a finitely generated D-module are equivalent.

Thus, to get the "most fine-grained" growth of a module possible, we need only take a good filtration of the module. Corollary 4.4.7 tells us that fineness is an invariant among good filtrations, and theorem 4.4.3 characterizes exactly when we can find such a filtration. Armed with these preliminaries, we can carry out the rest of our program: first, find a way to "measure" the growth of a filtration, then apply this to any good filtration. For a finitely generated module, we will always have such a filtration.

5. DIMENSION OF A MODULE

5.1. HILBERT POLYNOMIALS

A vector space has a very well-defined notion of dimension. However, the same is not the case for D-modules, or even for modules in general. We would still like to have some notion of dimension, but we cannot define dimension in terms of a "basis", so we go a different route.

Definition 5.1.1. Let D be a graded ring, and M a finitely generated graded module over D. The Poincaré series P(M, t) is

P(M, t) = Σ_{n∈Z} λ(Mⁿ) tⁿ    (14)

where λ is a function which is additive on exact sequences (see 3.1.1).

In our situation, we have 1 ∈ D⁰, and ⊕_{n=1}^∞ Dⁿ is an ideal. We will call this ideal D₊. As D is noetherian, D₊ is finitely generated, so we can write a homogeneous generating set {x₁, . . . , x_s} for D₊. Each xᵢ has an associated degree dᵢ, as it is a homogeneous element of the ring: if xᵢ lies in Dᵏ, then dᵢ := k.

Theorem 5.1.2. For any finitely generated D-module M,

P(M, t) = f(t) / ∏_{i=1}^{s} (1 − t^{dᵢ})    (15)

where f(t) is in Z[t, t⁻¹].

Proof. Induction on s. If s = 0, then D₊ is zero and therefore D = D⁰.
Then M is finitely generated over D⁰, so its homogeneous generators lie in finitely many degrees and Mⁿ = 0 for large n. In this case, λ(Mⁿ) is eventually zero and P(M, t) lies in Z[t, t⁻¹].

Now, suppose D₊ has generators x₁, . . . , x_s. We want to consider the x_s-multiplication map φ : M → M. This is a D-module homomorphism, and it induces homomorphisms Mⁿ → M^{n+d_s}. We can construct an exact sequence from φ by simply taking kernels and images:

0 → ker φ → M → M → M/(im φ) → 0

For convenience, we define K = ker φ and L = M/(im φ), noting that both of these are also graded modules. Then, the induced homomorphism Mⁿ → M^{n+d_s} gives the exact sequence

0 → Kⁿ → Mⁿ → M^{n+d_s} → L^{n+d_s} → 0

And by 3.1.1 we have

λ(Kⁿ) − λ(Mⁿ) + λ(M^{n+d_s}) − λ(L^{n+d_s}) = 0

Now, we compute (1 − t^{d_s}) P(M, t):

(1 − t^{d_s}) P(M, t) = Σ_{n∈Z} λ(Mⁿ) tⁿ − Σ_{n∈Z} λ(Mⁿ) t^{n+d_s}
= Σ_{n∈Z} (λ(M^{n+d_s}) − λ(Mⁿ)) t^{n+d_s}
= Σ_{n∈Z} (λ(L^{n+d_s}) − λ(Kⁿ)) t^{n+d_s}
= P(L, t) − t^{d_s} P(K, t)

The index shift in the second line relies on the fact that Mⁿ = 0 for n ≪ 0. Note that x_s acts as zero on both K and L by definition: K consists of the elements of M which are zero when multiplied by x_s, and in L we have quotiented out all elements with a factor of x_s. Therefore, both K and L are D/(x_s)-modules, and the induction hypothesis applies to them. By combining denominators and dividing by (1 − t^{d_s}), we get the theorem.

It should be slightly surprising that the Poincaré series has such a distinct form. Note that the proof relies heavily on the assumption that M is finitely generated, both in the base case and the inductive step. This should give us an appreciation for how "nice" finitely generated D-modules are. In order to continue, we will need a result from combinatorics:

Lemma 5.1.3.

Σ_{n=0}^∞ \binom{s+n−1}{s−1} tⁿ = 1/(1−t)ˢ

Proof. By the geometric series identity,

Σ_{n=0}^∞ tⁿ = 1/(1−t)

Taking s − 1 derivatives of both sides, we obtain

Σ_{n=s−1}^∞ (n!/(n−s+1)!) t^{n−s+1} = (s−1)!/(1−t)ˢ

Simplifying and changing variables to k = n − s + 1, we get

1/(1−t)ˢ = Σ_{n=s−1}^∞ \binom{n}{s−1} t^{n−s+1} = Σ_{k=0}^∞ \binom{s−1+k}{s−1} tᵏ

We will need a few other results from combinatorics, and unless the explanations are short, as in the previous lemma, those arguments will be cited and left to the reader to explore. Now, based on what we know about the Poincaré series, we can deduce information about the individual λ(Mⁿ).

Theorem 5.1.4. If dᵢ = 1 for all i, then the function n ↦ λ(Mⁿ) is a polynomial in n with rational coefficients for large n ∈ Z.

Proof. If all dᵢ are 1, Theorem 5.1.2 becomes P(M, t) = f(t)/(1−t)ˢ. Let p₀ be the order of the zero of f at 1, and write f(t) = (1−t)^{p₀} g(t), where g(1) ≠ 0. Then P(M, t) = g(t)/(1−t)ᵖ, where p := s − p₀. Now, we write P(M, t) in terms of sums, using 5.1.3 and the fact that g ∈ Z[t, t⁻¹], say g(t) = Σ_{k=−N}^{N} aₖ tᵏ:

P(M, t) = ( Σ_{k=−N}^{N} aₖ tᵏ )( Σ_{k=0}^∞ \binom{p+k−1}{p−1} tᵏ )

We are concerned only with large n, so pick an n bigger than N. The kth term of the first sum pairs with the (n−k)th term of the second, and these are all the terms of degree n. Equating coefficients of tⁿ, we get

λ(Mⁿ) = Σ_{k=−N}^{N} aₖ \binom{p+n−k−1}{p−1}

which is a polynomial in n, as each binomial coefficient expands to a product of p − 1 linear factors in n divided by (p−1)!.

When the conditions of Theorem 5.1.4 hold, the polynomial n ↦ λ(Mⁿ) is called the Hilbert polynomial with respect to λ.

Corollary 5.1.5. The degree of the Hilbert polynomial is p − 1, and its leading term has the form e·n^{p−1}/(p−1)! for some integer e, called the multiplicity. The quantity p − 1 is called the dimension with respect to λ.

Proof. From the binomial coefficients, we have

λ(Mⁿ) = Σ_{k=−N}^{N} aₖ (n−k+1)(n−k+2) · · · (n−k+p−1)/(p−1)!

and the expansion of each product yields the leading term n^{p−1}/(p−1)!, giving

(n^{p−1}/(p−1)!) Σ_{k=−N}^{N} aₖ

We notice that the sum on the right is exactly g(1), so we define e := g(1), giving exactly the form we want.

Thus far, we have been operating in a relatively general context, with a general λ which is additive on exact sequences. Now, consider these results in our specific situation of the ring D(n) over some field k. We want to understand a given D(n)-module M. Picking a filtration on M, by Lemma 3.2.1 we can conclude that every graded piece of Gr M is a finitely generated k-module, that is, a finite-dimensional vector space.

Thus, we have a well-defined notion of dimension on the graded pieces of Gr M. So we would like to take λ = dim_k, and thus define a Hilbert polynomial for M which depends only on the filtration. However, we cannot apply the results of this section to a D-module M directly, because this D is not a commutative ring. Therefore, we must use our tricks from the previous sections. Let M be a finitely generated D(n)-module. Then M admits a good filtration by Theorem 4.4.3, and by Lemma 4.4.4, Gr M is a finitely generated Gr D(n)-module. As the ∂ᵢ and xᵢ all have degree 1 in the Bernstein filtration, the conditions of Theorem 5.1.4 are satisfied. In this case, Theorem 5.1.4 is telling us that the dimension of the graded pieces of Gr M grows as a polynomial for n ≫ 0.

Now that we have a handle on how the module Gr M grows, we can translate this back into the filtered pieces of M. We use the exact sequence

0 → F_{m−1}M → FₘM → Grᵐ M → 0

This sequence makes sense because each FₖM is a finite extension of the graded pieces Grᵏ M, so these are all finite-dimensional vector spaces over the base field k. Thus, our function λ = dim_k is additive, and

dim_k(FₘM) − dim_k(F_{m−1}M) = dim_k(Grᵐ M)    (16)

Thus, we have related the dimensions of the filtered pieces of M to the graded pieces of Gr M. Now, we would like to consider the growth of the filtered pieces as n gets big.

Theorem 5.1.6. Let F be a function on Z such that n ↦ F(n) − F(n−1) is a polynomial of degree d − 1 for large values of n ∈ Z.
Then F is equal to a polynomial in n of degree d for large values of n ∈ Z.

This theorem can be thought of as a discrete analogue of integration: if the derivative of some function f is a polynomial of degree d − 1, then f is a polynomial of degree d. Likewise, Theorem 5.1.6 says that if F(n) − F(n−1) is a polynomial of degree d − 1, then F is a polynomial of degree d. To prove this theorem, we will use the following lemma:

Lemma 5.1.7. If the polynomial

P(x) = c₀ \binom{x}{d} + c₁ \binom{x}{d−1} + . . . + c_d \binom{x}{0}

takes integral values for large integers x, then each cᵢ is an integer.

Note that \binom{x}{d} can be defined for non-integer values of x; however, we are not presently concerned with this issue, and the interested reader can see [6].

Proof. We will use induction on d. First, if d = 0, then P(x) is a constant polynomial. For a constant polynomial to take integral values at all, c₀ must be an integer. To prove the inductive step, we will use a similar "discrete differentiation" idea and cite a result from combinatorics. Consider

P(x+1) − P(x) = Σ_{i=0}^{d} cᵢ \binom{x+1}{d−i} − Σ_{i=0}^{d} cᵢ \binom{x}{d−i} = Σ_{i=0}^{d} cᵢ ( \binom{x+1}{d−i} − \binom{x}{d−i} )

We first remark that we can change the summation range from d to d − 1, because the constant terms (i = d) cancel. At this point, we use the combinatorial result known as Pascal's rule [8], which states that

\binom{x+1}{k} = \binom{x}{k} + \binom{x}{k−1}

and we get

P(x+1) − P(x) = Σ_{i=0}^{d−1} cᵢ ( \binom{x}{d−i} + \binom{x}{d−i−1} − \binom{x}{d−i} ) = Σ_{i=0}^{d−1} cᵢ \binom{x}{d−i−1}

Now, we can apply the induction hypothesis and conclude that c₀, . . . , c_{d−1} are integers. But then, for P to take integral values, the constant term c_d must also be an integer, as in the base case.

Proof. (of Theorem 5.1.6) First, define the polynomial P(x) by P(x) = F(x+1) − F(x), so that P(x−1) = F(x) − F(x−1) is the quantity we are interested in for large integral values. By assumption, P is a polynomial of degree d − 1. Now, we will write the term xˢ in a weird way:

xˢ = s! \binom{x}{s} + lower order terms
This allows us to write an arbitrary polynomial Q of degree s as

Q(x) = c₀ \binom{x}{s} + lower order terms

where of course c₀ = s! a₀, with a₀ the leading coefficient of Q. We can continue this process inductively, getting

Q(x) = c₀ \binom{x}{s} + c₁ \binom{x}{s−1} + . . . + c_s

Now, apply this general fact to our polynomial P(x). For large integers n greater than some N, we get by the previous lemma that each cᵢ is integral, and that

P(n) = Σ_{i=0}^{d−1} cᵢ \binom{n}{d−i−1}

For any n strictly bigger than N, we can write F(n) in terms of P by using a telescoping sum:

F(n) = F(N) + Σ_{k=N+1}^{n} (F(k) − F(k−1)) = F(N) + Σ_{k=N+1}^{n} P(k−1) = C + Σ_{k=d}^{n} P(k−1)

where C is some constant. We extended the sum down to k = d because we will need this in our argument later; the extra terms contribute only a constant. We simply need to show that Σ_{k=d}^{n} P(k−1) is a polynomial of degree d in n (we can ignore the constant C). The rest of the proof is a computation that shows precisely this claim. We shall need the following identity:

\binom{n}{s} = Σ_{j=s}^{n} \binom{j−1}{s−1}

This follows quickly from Pascal's rule, and detailed proofs can be found in [7]. And now,

Σ_{k=d}^{n} P(k−1) = Σ_{k=d}^{n} Σ_{i=0}^{d−1} cᵢ \binom{k−1}{d−i−1}
= Σ_{i=0}^{d−1} cᵢ Σ_{k=d}^{n} \binom{k−1}{d−i−1}
= Σ_{i=0}^{d−1} cᵢ ( Σ_{k=d−i}^{n} \binom{k−1}{d−i−1} − Σ_{k=d−i}^{d−1} \binom{k−1}{d−i−1} )
= Σ_{i=0}^{d−1} cᵢ \binom{n}{d−i} − Σ_{i=0}^{d−1} cᵢ Σ_{k=d−i}^{d−1} \binom{k−1}{d−i−1}

At this point, we notice that the first sum is a polynomial in n of degree d, and the second term has no dependence on n, and so is constant. Thus, F(n) is indeed a polynomial of degree d.

When we combine this theorem with equation (16), we can take F(n) = dim_k(FₙM). Then, by our previous work, F(n) − F(n−1) = dim_k(Grⁿ M) is a polynomial of degree p − 1 (where p comes from the proof of Theorem 5.1.4). By Theorem 5.1.6, F(n) is a polynomial of degree p. This p is the definition of the dimension of the D(n)-module M.
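As a sanity check of this definition, take M = D(1) with the Bernstein filtration F of section 4.2, so that dim_k FₙM counts the monomials xⁱ∂ʲ with i + j ≤ n. A brute-force sketch (Python; the function name is ours):

```python
from math import comb

def dim_F(n: int) -> int:
    """dim_k F_n D(1) for the Bernstein filtration:
    the number of monomials x^i ∂^j with i + j ≤ n."""
    return sum(1 for i in range(n + 1) for j in range(n + 1 - i))

for n in range(1, 12):
    # F(n) agrees with the degree-2 polynomial binom(n+2, 2) = (n+1)(n+2)/2 ...
    assert dim_F(n) == comb(n + 2, 2)
    # ... so F(n) − F(n−1) = dim_k Gr^n is a polynomial of degree 1:
    assert dim_F(n) - dim_F(n - 1) == n + 1
```

Since dim_k FₙD(1) is a polynomial of degree 2, the dimension of D(1) as a module over itself is 2.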
Thus, the dimension of a module describes how its filtered pieces grow as n gets large.

Theorem 5.1.8. The dimension of a module M over D(n) does not depend on the choice of good filtration.

Proof. Let F M and G M be good filtrations. Because all good filtrations are equivalent, there is some natural number k for which

FₙM ⊂ G_{n+k}M ⊂ F_{n+2k}M

By the additivity of dim_k (really, this works for any λ), this means that

dim_k(FₙM) ≤ dim_k(G_{n+k}M) ≤ dim_k(F_{n+2k}M)    (17)

To see this, notice that any inclusion A ⊂ B of modules gives an exact sequence 0 → A → B → B/A → 0, and additivity on exact sequences implies equation (17). Now, we know that P(n) := dim_k(FₙM) is a polynomial for large n. Notice that the rightmost side of equation (17) is P(n + 2k), also a polynomial in n. Whatever polynomial P(n) is, we can expand P(n + 2k) using binomial coefficients, and both polynomials have the same highest-degree term. Likewise, Q(n) := dim_k(GₙM) and Q(n + k) have the same leading term. We can now write equation (17) as

P(n) ≤ Q(n + k) ≤ P(n + 2k)

and all we need to do is show that P(n) and Q(n) have the same leading term. In particular, the inequality means that Q(n+k)/P(n) ≥ 1, and so lim_{n→∞} Q(n+k)/P(n) ≥ 1; this limit is the same as the ratio of the two leading terms. Similarly, lim_{n→∞} Q(n+k)/P(n+2k) ≤ 1. But as the leading terms of P(n + 2k) and P(n) are the same, these two limits are equal, so they must both equal 1, showing that P(n) and Q(n + k) have the same leading term. By the above comment about Q(n) and Q(n + k) having the same leading term, we have our claim.

When we use D(n) and the Bernstein filtration, our definition of dimension is often called the Bernstein dimension. By the above argument, the Bernstein dimension is the same as the dimension with respect to any other good filtration. Thus we have accomplished our goal: to define a notion of dimension for this class of modules over a (particular) non-commutative ring.

5.2.
BOUNDS ON THE DIMENSION OF D-MODULES

Now that we have defined the dimension of a $D(n)$-module, we can observe some important properties. First, we notice that the dimension of a $D(n)$-module is NOT additive on exact sequences.

Theorem 5.2.1. For any exact sequence
$$0 \to N \to M \to P \to 0$$
of filtered $D(n)$-modules, we have
$$\dim(M) = \max\{\dim(N), \dim(P)\}.$$

Proof. As $\lambda = \dim_k$ is additive on exact sequences, the Hilbert polynomial for $M$ (given any good filtration) will be the sum of the Hilbert polynomials for $N$ and $P$ (using the same filtration). Since Hilbert polynomials have positive leading coefficients, the degree of the sum of two of them is the maximum of their two degrees, and as the dimension is the degree of the Hilbert polynomial by definition, we have the claim.

Though the condition on Bernstein dimension is not additivity, we can still use exact sequences to draw conclusions about the dimension of modules: notably, given the exact sequence in the theorem above, $\dim(P) \le \dim(M)$ and $\dim(N) \le \dim(M)$. In particular, if we consider the exact sequence
$$0 \to M \to M \oplus M \to M \to 0,$$
we notice that $M \oplus M$ (and by induction $\bigoplus_{i=1}^{n} M$) has the same dimension as $M$.

Next, let us calculate the dimension of the ring $D(n)$ as a module over itself. As seen in Section 4.3, the graded ring $\mathrm{Gr}\,D(n)$ is isomorphic to a polynomial ring in $2n$ variables. For any polynomial ring $P_r := k[x_1, \dots, x_r]$, we can count the number of monomials to get
$$\dim_k(\mathrm{Gr}_n P_r) = \binom{n+r-1}{r-1},$$
where details of this count can be found in [9]; when we expand out the binomial coefficient, we see that this is a polynomial in $n$ of degree $r-1$. This shows that $\dim_k(\mathrm{Gr}_m D(n))$ is a polynomial of degree $2n-1$. By the work we did in the previous section, the dimension of $D(n)$ as a module over itself is $2n$. Now, considering that we can always surject a free module $\bigoplus_{i=1}^{p} D(n)$ onto an arbitrary finitely generated $D(n)$-module $M$, we have the following:

Corollary 5.2.2. A finitely generated $D(n)$-module has dimension less than or equal to $2n$.
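The stars-and-bars count used above can be verified by brute force; in the sketch below, the function name and test ranges are mine, chosen just for illustration:

```python
from math import comb
from itertools import product

def monomial_count(n, r):
    """Count the degree-n monomials in r variables directly."""
    return sum(1 for e in product(range(n + 1), repeat=r) if sum(e) == n)

# Stars and bars: dim_k Gr_n P_r = C(n + r - 1, r - 1).
for r in range(1, 5):
    for n in range(0, 8):
        assert monomial_count(n, r) == comb(n + r - 1, r - 1)

# For Gr D(1), a polynomial ring in 2n = 2 variables, the count is
# C(n + 1, 1) = n + 1, a polynomial of degree 2n - 1 = 1 in n.
assert [monomial_count(n, 2) for n in range(5)] == [1, 2, 3, 4, 5]
```

The final check illustrates the $n = 1$ case: the graded pieces of $D(1)$ grow linearly, so summing them (as in the previous section) gives quadratic growth, i.e. dimension $2n = 2$.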
Corollary 5.2.2 is justified by taking the exact sequence $\bigoplus_{i=1}^{p} D(n) \to M \to 0$, extending it to a short exact sequence, and using Theorem 5.2.1.

This gives an upper bound on the dimension of D-modules, but we can do even better. It turns out, there exists a lower bound on the dimension of D-modules.

Theorem 5.2.3. (Bernstein) Let $M$ be a finitely generated $D(n)$-module and $M \ne 0$. Then $n \le d(M)$.

This is surprising initially. Remember that the definition of dimension of a D-module is not the same as that of a vector space; $D(n)$ itself is an infinite-dimensional vector space. The Bernstein dimension somehow measures "how infinite-dimensional" a vector space is by considering the growth of the dimension as the size of the filtration grows.

To prove this theorem, we will need the following lemma.

Lemma 5.2.4. The map $D_p(n) \to \mathrm{Hom}_k(F_p M, F_{2p} M)$ which takes $T \mapsto (m \mapsto Tm)$ is injective.

Proof. This is a map of abelian groups, so we only need to prove that if $T$ maps to the zero map, then $T$ is the zero operator. Without loss of generality, we may shift indices to assume that $F_0 M \ne 0$. Now, for $p = 0$, we have $D_0(n) = k$, and $a \in k$ maps to the linear map $m \mapsto am$; since $F_0 M \ne 0$ and $k$ is a field, this linear map is zero only when $a = 0$, so the base case holds. Proceeding by induction, assume that $T \in D_p(n)$ maps to the zero map, that is, $Tm = 0$ for all $m \in F_p M$. Let $v$ be any element of $F_{p-1} M$, and fix any $i$ with $1 \le i \le n$. We have that $Tv = 0$. Moreover, we have that $T(x_i v) = T(\partial_i v) = 0$, because both $x_i v$ and $\partial_i v$ are in $F_p M$ by the filtration condition. Now, consider the commutators $[x_i, T]$ and $[\partial_i, T]$. If we evaluate them at $v$, we get the following identities:
$$[x_i, T]v = x_i T v - T x_i v = 0$$
$$[\partial_i, T]v = \partial_i T v - T \partial_i v = 0$$
We conclude that both commutators vanish at $v$ because of the assumption that $Tm = 0$ for all $m \in F_p M$. As $v$ was an arbitrary element of $F_{p-1} M$, both $[x_i, T]$ and $[\partial_i, T]$, which lie in $D_{p-1}(n)$, satisfy the induction assumption, and we have $[x_i, T] = [\partial_i, T] = 0$ for all $i$.
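The commutator bookkeeping here ultimately rests on the Weyl-algebra relation $[\partial_i, x_i] = 1$. As a concrete aside (the coefficient-list representation below is my own toy model, not part of the text), one can check this relation acting on $k[x]$:

```python
# Polynomials in k[x] as coefficient lists [a0, a1, ...] for a0 + a1*x + ...
def X(p):
    """The operator 'multiplication by x'."""
    return [0] + p

def D(p):
    """The operator d/dx (formal derivative)."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def sub(p, q):
    """Difference of two coefficient lists, padding to equal length."""
    m = max(len(p), len(q))
    p, q = p + [0] * (m - len(p)), q + [0] * (m - len(q))
    return [a - b for a, b in zip(p, q)]

p = [1, 2, 0, 5]                  # 1 + 2x + 5x^3
bracket = sub(D(X(p)), X(D(p)))   # [d/dx, x] applied to p
assert bracket == p               # [d/dx, x] acts as the identity operator
```

This identity is also why $x_i v$ and $\partial_i v$ move at most one step up the filtration: both operators have degree one in the Bernstein filtration.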
Therefore, $T$ commutes with every $x_i$ and $\partial_i$, so $T$ is in the center of $D(n)$, and by Lemma 4.1.3 it is in $k$. Then, the assumption that $Tm = 0$ for all $m$ implies that $T = 0$, and the map is injective.

Proof. (of Bernstein's Theorem) By Lemma 5.2.4, we have the following inequality from linear algebra:
$$\dim_k(D_p(n)) \le \dim_k(\mathrm{Hom}_k(F_p M, F_{2p} M)) = \dim_k(F_p M) \cdot \dim_k(F_{2p} M) \quad (18)$$
By the above discussion, the left hand side is a polynomial of degree $2n$ for large $p$. By Theorem 5.1.6, both factors on the right hand side are equal to polynomials of degree $d(M)$ for large $p$, and their product thus has degree $2d(M)$. Therefore, $2n \le 2d(M)$, which implies the theorem.

6. CONCLUSION

We have now completed our goal of defining a notion of dimension on a finitely generated D-module, and proving that this dimension is bounded above and below. While it doesn't correspond precisely to what "dimension" means for vector spaces, our notion is based on the growth of vector-space dimension along the filtration of the D-module, which is similar enough to warrant the use of the same word.

There are many other aspects of D-module theory which could be further directions. One can consider special classes of D-modules and try to understand more complicated D-modules in terms of these classes. A $D(n)$-module of minimal dimension $n$ is called holonomic. This is an important class of modules which is expanded upon in [5].

Another possible further direction is to reformulate the theory of D-modules in terms of sheaves. A sheaf is a concept that is foundational to the field of Algebraic Geometry. The theory of differential operators can be reformulated in terms of sheaves, and consequently used as a tool in Algebraic Geometry. [4] is a standard introduction to the language of sheaves, and [5] expands on how D-modules can be used in this context.

REFERENCES

[1] Paolo Aluffi. Algebra: Chapter 0. Graduate Studies in Mathematics, vol. 104. American Mathematical Society, 2009.

[2] Michael Artin. Algebra.
PHI Learning Private Limited, Delhi, 2nd edition, 2014.

[3] Michael Francis Atiyah and Ian G. Macdonald. Introduction to Commutative Algebra. Addison-Wesley Series in Mathematics. Addison-Wesley Publishing Company, 1969.

[4] Robin Hartshorne. Algebraic Geometry. Graduate Texts in Mathematics, vol. 52. Springer, 1977.

[5] Dragan Milicic. Lectures on the Algebraic Theory of D-modules. Electronic notes, http://www.math.utah.edu/~milicic/Eprints/dmodules.pdf.

[6] Wikipedia contributors. Binomial coefficient — Wikipedia, the free encyclopedia, 2019. [Online; accessed 6-May-2019].

[7] Wikipedia contributors. Hockey-stick identity — Wikipedia, the free encyclopedia, 2019. [Online; accessed 6-May-2019].

[8] Wikipedia contributors. Pascal's rule — Wikipedia, the free encyclopedia, 2019. [Online; accessed 6-May-2019].

[9] Wikipedia contributors. Stars and bars (combinatorics) — Wikipedia, the free encyclopedia, 2019. [Online; accessed 10-May-2019].
| Reference URL | https://collections.lib.utah.edu/ark:/87278/s6xdx07s |



