This is not far off, I suppose, but it is not entirely true. Take, for instance, the question: is there a cardinal number strictly between aleph-0 and the cardinality of the continuum? This is not answerable in the canonical axiomatic structure of mathematics; in math lingo, it is said to be independent of ZFC. Moreover, the implications of mathematics can often be disputed. If you had told me that Gromov-Witten invariants would be important for science and presented me with the paper introducing them but nothing else, I probably would have said that they would not be, at least not for a long time. But almost immediately they were put to use in physics!

And while it is true that proof is absolute in the sense that, given the axioms as premises, the conclusion (theorem) must follow by the validity of the proof, many mathematicians and philosophers still argue about what our premises should be. Are our axioms even consistent? What about completeness? Furthermore, what is interesting is very subjective, and this drives mathematics. Few people still study classical constructions via algebra these days because it is not deemed all that important, except perhaps for historical reasons, but there is still plenty to be done in that field if one wished to pursue it. Also, what if mathematicians cannot follow a peer’s work? This has been an issue lately with Shinichi Mochizuki’s work on arithmetic deformation theory (“Inter-Universal Teichmüller theory”).

Finally, mathematicians are people, and people make mistakes, and sometimes those mistakes are missed. Nerdfighteria’s resident mathematician Daniel Biss is actually best known in the math community for being an incredibly promising, bright, and hard-working student and young mathematician who published several massively influential papers which were later found to be false due to human error. (This is not to say he was a bad mathematician or anything of that sort. I chose him only because he is probably the best-known modern example.)

Similar gripes could be made about calling any discipline so decisive. In a way, math shows us that such decisiveness can never occur.

# Monthly Archives: February 2016

# Math Makes Physics Easy

When freshman undergraduates have to take introductory physics, perhaps the most common complaint, aside from the incredibly creative “this is hard,” is that the material is too mathematically involved. Classical mechanics, electrodynamics, and the other topics covered in introductory physics almost exclusively involve calculus (differential equations, really), elementary algebra, and some basic geometry and trigonometry. Traditionally, calculus is the mathematics course undergraduates at large struggle with. So, it appears these students think the calculus is what makes physics hard. While students who struggle with calculus will certainly struggle with physics, those who are more mathematically inclined will probably argue quite the opposite: the math is the easy part! Quickly and accurately assessing physical situations, irrespective of mathematical abstraction, is where the difficulty lies.

Of course, before Newton and others applied mathematics to physics, the subject was very limited. But it is also true that the more math has been introduced to physics, the easier physics has become. The somewhat unfortunate part about freshman physics is that oftentimes there is little additional mathematical machinery available that would both be tame enough to be teachable and be beneficial for solving actual physics problems. Sure, we could all learn Lagrangian and Hamiltonian mechanics from the beginning, but only the former would make *some* problems easier, and both are clearly much more mathematically complicated than the traditional approach. This is why no freshman course I have yet seen has even attempted that approach; it is a horrible idea.

Inexperienced students often have the misconception that quantum mechanics or statistical mechanics, because of their now quite popular weirdness, will be the most difficult physics courses they will take. But for many, the mathematically inclined in particular, these courses will actually be quite *easy*! The same could be said for, say, relativity as well; although, the mathematical background required for general relativity is pretty substantial in and of itself. In much of the well-understood “modern physics” the physical situation is complicated and very weird, but we have enough mathematics at this point to do quite well in solving problems.

Now, quantum mechanics is not a generically easy course, but the point I am trying to make is that the math is easy while the physics is hard. Moreover, even potentially great physicists are quite susceptible to struggling in freshman physics, because it is hard, period.

# Quotients and Homomorphisms for Beginners: Part 3, Nomenclature and the Integers Modulo n

The last post culminated in the definition of the quotient group. We saw that this group is constructed by essentially “dividing” the group into cosets and working with those. This is one of the reasons we call it the *quotient group*. One should note, however, that some authors use the alternative terminology *factor group*; *e.g.*, Lang’s famous *Algebra* uses that term, though he often uses odd conventions; for instance, he introduces but never names the “isomorphism theorems.”

There is another reason for the name “quotient,” which is probably where the term comes from in the first place. Before explaining the name, however, we should introduce the quintessential example of a quotient group: the integers modulo $n$. When working in general algebra, the integers are almost always the example we care about, because algebra was developed in no small part to do number theory. Let us quickly prove a theorem about the group of integers under addition, $(\mathbb{Z}, +)$, before proceeding.

**Theorem.** The subgroups of $\mathbb{Z}$ are those of the form $n\mathbb{Z} = \{nk : k \in \mathbb{Z}\}$ for $n$ a nonnegative integer.

*Proof*. We will only prove that the subsets of this form are indeed subgroups. The proof that these are the only such subgroups will be left as an exercise (*Hint:* recall the Euclidean algorithm).

First, we show closure under addition. If $na, nb \in n\mathbb{Z}$, then $na + nb = n(a + b)$, which is in $n\mathbb{Z}$. Clearly, the identity is $0 = n \cdot 0$, and the inverse of $na$ is $-na = n(-a)$.
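The exercise’s hint can also be explored numerically. Below is a small sketch (the function and the window bound are my own illustration, not part of the proof): the set of integer combinations of 12 and 18, truncated to a finite window, coincides with the multiples of $\gcd(12, 18) = 6$ in that window.

```python
from math import gcd

def generated_subgroup(generators, bound=50):
    """All sums/differences of the generators landing in [-bound, bound]."""
    elements = {0}
    for _ in range(bound):
        new = {e + g for e in elements for g in generators}
        new |= {e - g for e in elements for g in generators}
        elements |= {x for x in new if abs(x) <= bound}
    return elements

gens = [12, 18]
d = gcd(12, 18)                      # the Euclidean algorithm gives 6
subgroup = generated_subgroup(gens)
# The subgroup generated by 12 and 18 is exactly 6Z (within the window):
assert subgroup == {x for x in range(-50, 51) if x % d == 0}
```

Of course, this only checks a finite window; the Euclidean algorithm is what proves the general statement.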

It is easy to see that the subgroups $n\mathbb{Z}$ are normal, because $\mathbb{Z}$ is more than just a group: it is *abelian*, and every subgroup of an abelian group is normal. Therefore, we have the quotient group $\mathbb{Z}/n\mathbb{Z}$, also known as the integers mod $n$ and sometimes denoted $\mathbb{Z}_n$ (not to be confused with the $p$-adic numbers $\mathbb{Z}_p$). In my opinion, it is actually a little weird to see how this group fits in with quotient groups in general. Understanding the group is easy, but the notation and how we can figure out what it is just based on the general definition of a quotient group were not instant for me when I was first introduced to the subject. So, I will take a bit of care to show how the two coincide, in case I am not the only person susceptible to this little struggle. Either way, it will take some effort to truly internalize all of this.

Generally, we say $\mathbb{Z}/n\mathbb{Z} = \{0, 1, \ldots, n-1\}$ with addition mod $n$, but this is a bit misleading. Let us start from the beginning. The subgroup is $n\mathbb{Z} = \{\ldots, -2n, -n, 0, n, 2n, \ldots\}$, and so the cosets are

$0 + n\mathbb{Z}, \quad 1 + n\mathbb{Z}, \quad \ldots, \quad (n-1) + n\mathbb{Z}.$
We claim that these are all the cosets, because if $m \in \mathbb{Z}$, then by the division algorithm $m = qn + r$ with $0 \le r < n$, and so

$m + n\mathbb{Z} = (qn + r) + n\mathbb{Z} = r + n\mathbb{Z}.$
But, this coset is already in the list, because of the restriction on $r$.

So, for simplicity we generally identify the coset $k + n\mathbb{Z}$ with its representative $k$.

And, so we can compute the Cayley table (operation table) of $\mathbb{Z}/n\mathbb{Z}$ for small values of $n$ quite easily. For instance, the Cayley table of $\mathbb{Z}/3\mathbb{Z}$ is:

| $+$ | $0$ | $1$ | $2$ |
|---|---|---|---|
| $0$ | $0$ | $1$ | $2$ |
| $1$ | $1$ | $2$ | $0$ |
| $2$ | $2$ | $0$ | $1$ |
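As a quick computational sketch (my own code, not part of the original post), the table can be generated for any $n$ by identifying cosets with the representatives $0, 1, \ldots, n-1$ and reducing sums mod $n$:

```python
def cayley_table(n):
    """Cayley table of Z/nZ with representatives 0, 1, ..., n-1."""
    return [[(a + b) % n for b in range(n)] for a in range(n)]

for row in cayley_table(3):
    print(row)
# [0, 1, 2]
# [1, 2, 0]
# [2, 0, 1]
```

Note that each row and each column is a permutation of the representatives, as must hold in any group’s Cayley table.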

We can now see where division of integers comes into the naming of this group. When one divides, for instance, 15 by 5, one obtains 3, because one can regroup 15 objects into 3 groups (in the colloquial sense) of 5 objects each. In the quotient group, we do the same thing but with additional structure, namely the group structure of the integers.

Next time, we will digress a bit before continuing with the theory of quotient groups. Eventually, we will make our way to the isomorphism theorems and more.

# Contemporary Research in Algebra: A Note

Mathematics students across the country take courses such as “abstract algebra” and “modern algebra.” The content of these courses is incredibly standardized at this point. Generally, students cover group theory with emphasis on finite groups, perhaps “up to” Sylow’s theorems; then they move on to some linear algebra over a general, fixed field (rather than $\mathbb{R}$ or $\mathbb{C}$); then they cover some elementary ring and ideal theory; and finally some courses will delve into field theory and Galois theory, culminating in such classic results as the insolubility of the general quintic. As with most mathematics courses, after having taken elementary algebra students may get the feeling that the subject has been mostly exhausted, but this is simply not true. In fact, even some basic questions in finite group theory are still unresolved.

However, this is not to say that the primary topic of current research interest is finite group theory, for instance. Truthfully, we do have an excellent understanding of finite groups. This is why, in so many related subjects, we seek to translate hard questions into group-theoretic terms, rendering them simple by comparison. There are many topics which use algebra to study other domains, *e.g.* algebraic topology, algebraic geometry, and algebraic number theory. In fact, it seems few topics in modern mathematics have gone untouched by algebraic influence.

But, what of “pure algebra,” as dubious as the idea of such a classification may be? Well, one major research area I feel most would say fits this description is the theory of Lie groups (and Lie algebras), which are groups that carry additional structure, namely that of a smooth manifold. Many of the groups you are familiar with *are* Lie groups, for instance the general linear group $GL_n(\mathbb{R})$, the special orthogonal group $SO(3)$, and the circle group $U(1)$. This document will give you some citations of fairly recent developments. And, if you are really interested, it is frankly not difficult to pick up the basics of the subject if you have a decent background in algebra and differential geometry.

A fairly recent development in *finite* group theory was the classification of finite simple groups. This was a massive collaborative effort spanning decades, which ended in 2004 and which took tens of thousands of pages contained in hundreds of articles produced by over a hundred authors. While one needs to be an expert (with a *lot* of time) to understand the classification theorem’s proof, its statement is very simple.

**Theorem.** Every finite simple group is isomorphic to one of the following: a cyclic group of prime order, an alternating group of degree at least $5$, a simple group of Lie type, or one of the 26 sporadic simple groups.

But, this was clearly a monumental task. Are there smaller problems in “classical” group theory which have only recently been solved? Why, yes, there are! There is still some work to be done for finite groups, but frankly the number of mathematicians working in that area has been dropping quite quickly and will likely continue to do so. If we turn to groups of infinite order, however, we can find several interesting, difficult problems. For instance, there have been several recent developments with respect to the so-called “inverse Galois problem,” though none that I know of have been particularly groundbreaking (full disclosure: I am *not* a group theorist). A good note on the subject was written by Zywina, who in 2012 also authored a proof resolving the inverse Galois problem for a particular family of groups.

But, what of ring, ideal, and module theory? Is this still an area of research? Certainly, commutative algebra and related subjects are now used by many, many mathematicians, and there have been too many developments from too many areas to even begin to discuss any in detail here. Nonetheless, here is a link to a somewhat recent MSRI workshop on the subject, where you may be able to find some guidance.

Even linear algebra, which many see as now becoming more computational and applied, is still an area of current research in pure mathematics. Moreover, derivatives of linear algebra such as representation theory remain very active. In fact, representation theory has played a major role in the famous Langlands program.

And, indeed, Galois theory too is an active area of research, though it is becoming increasingly uncommon for that terminology to be used. A good sampling of what mathematicians care about related to Galois theory can be found at this page for a program held at the University of Pennsylvania in 2006.

So, while it seems everyone’s research involves some algebra these days, from geometry to number theory to combinatorics, there are still plenty of “pure” algebraists out there. Hopefully, this post has given the reader some idea as to what it is these people do.

**Note:** My work involves a lot of algebra, but I am *not* an algebraist.

# Quotients and Homomorphisms for Beginners: Part 2, Introducing the Quotient Group

Consider the group $(G, *)$, where $G$ is the underlying set and $*$ is the group operation. Now, suppose we partition $G$ into subsets $S_1, S_2, \ldots, S_k$. Can we make a group with the sets $S_i$ as elements? In other words, can we divide $G$ into parts and make a group out of those parts?

Recall the definition of the (left) coset from the last post. We claim that the cosets of a group represent a partition of the group into disjoint subsets. This shall be proved presently with little more than the introduction of an alternative definition. But, first we must review the basic theory of equivalence classes.

**Definition.** An *equivalence relation* is a binary relation $\sim$ on a set $X$ such that $x \sim x$ for all $x \in X$ (reflexive); for any $x, y \in X$, if $x \sim y$, then $y \sim x$ (symmetric); and for any $x, y, z \in X$, if $x \sim y$ and $y \sim z$, then $x \sim z$ (transitive).

The equivalence class of an element $x \in X$ is defined as $[x] = \{y \in X : x \sim y\}$. The set of all equivalence classes in $X$ (for a given relation $\sim$) is generally denoted $X/\sim$ (read: $X$ modulo $\sim$).

A more intuitive way of thinking about equivalence relations is to think of them as different ways of partitioning a set, *i.e.* separating a set into disjoint subsets.

**Theorem.** An equivalence relation $\sim$ on a set $X$ partitions $X$.

*Proof.* Clearly, every $x \in X$ belongs to the class $[x]$. Suppose $x \sim y$. If $z \in [y]$, then $y \sim z$, so $x \sim z$ by transitivity. Hence, $[y] \subseteq [x]$. The same argument can be applied to show $[x] \subseteq [y]$ (after applying symmetry to $x \sim y$). Therefore, $[x] = [y]$. Moreover, suppose we have that $x$ and $y$ are *not* equivalent under $\sim$; then if $z \in [x] \cap [y]$, then $z$ is in both equivalence classes, hence $x \sim z$ and $y \sim z$. By symmetry, $z \sim y$, and so by transitivity $x \sim y$. Thus, we have a contradiction, and so $[x] \cap [y] = \emptyset$. We have therefore shown that distinct equivalence classes are disjoint subsets. To finally prove this is a partition, we must show that the union of all the classes is equal to the original set. This is clear because every $x$ lies in $[x]$, so $\bigcup_{x \in X} [x] = X$.
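The theorem is easy to check by machine on a small example. Here is a sketch (the set and relation are my own choices), using congruence mod 3 on $\{0, \ldots, 8\}$:

```python
import itertools

X = set(range(9))

def related(x, y):
    """The relation ~ : congruence mod 3."""
    return x % 3 == y % 3

# The equivalence class [x] for each x, collected into a set of classes.
classes = {frozenset(y for y in X if related(x, y)) for x in X}

# Distinct classes are pairwise disjoint...
for C, D in itertools.combinations(classes, 2):
    assert C.isdisjoint(D)
# ...and their union recovers all of X, so they form a partition.
assert set().union(*classes) == X

print(sorted(sorted(c) for c in classes))
# [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```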

It turns out that the converse is also true: every partition of a set is due to an equivalence relation! A formal proof of this fact is left to the reader as an exercise.

Now, let us define the coset using different language.

**Definition.** The *left cosets* of $H$ in $G$ are the equivalence classes under the relation $\sim$ on $G$ given by $a \sim b$ if $a^{-1}b \in H$, or equivalently $a \sim b$ if $b = ah$ for some $h \in H$.

To ensure the reader stays involved, we leave as an exercise the verification that the relations given above are equivalence relations and are indeed equivalent to one another. It therefore follows that the cosets of $H$ in $G$ partition $G$.

Now, can we use cosets to make a group? Let us try. Suppose we have a group $G$ and a subgroup $H$; the candidate elements are the cosets $aH$ for $a \in G$. Naturally, the operation should be $(aH)(bH) = abH$. The issue with this is that it is not consistent across coset representatives! If $aH = a'H$ and $bH = b'H$, then we cannot guarantee that $abH = a'b'H$! We need the operation to be independent of the choice of representatives for it to be well-defined.
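To see this failure concretely, here is a minimal sketch (the choice of group and the encoding are mine): in $S_3$, with $H$ the non-normal subgroup generated by a transposition, two representatives of the *same* coset can produce products lying in *different* cosets.

```python
# Permutations of {0, 1, 2} encoded as tuples: p sends i to p[i].
def compose(p, q):
    """Composition of permutations: (p∘q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

e, t = (0, 1, 2), (1, 0, 2)          # identity and the transposition (0 1)
H = {e, t}                           # a NON-normal subgroup of S3

def left_coset(a):
    return frozenset(compose(a, h) for h in H)

b = (2, 0, 1)                        # a 3-cycle
# e and t represent the SAME coset: eH = tH = H ...
assert left_coset(e) == left_coset(t)
# ... yet the naive products e*b and t*b land in DIFFERENT cosets,
# so (aH)(bH) = abH is not well-defined for this H.
assert left_coset(compose(e, b)) != left_coset(compose(t, b))
```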

It turns out that normality is a necessary and sufficient condition for this to occur. For, if left and right cosets coincide, i.e. $aH = Ha$ for all $a \in G$, then

$(aH)(bH) = a(Hb)H = a(bH)H = ab(HH) = abH.$
Associativity is easily verified, the identity is simply $H = eH$, and the inverse of $aH$ is $a^{-1}H$.

We call this group $G/H$ the *quotient group*, and it is one of the most important constructions in mathematics.

# An Accessible Introduction to Sheaves

This has been reproduced from my Personal Blog. It will remain there as well, but since breaking my one blog into three, it is now best suited here.

Below is reproduced a post that will be featured on the blog cozilikethinking at https://cozilikethinking.wordpress.com/ as part of a series of posts on introductory algebraic geometry. This reproduction is partly for the sake of ensuring there are no errors and partly because I would like to soon begin blogging again, for the first time here. The aforementioned blog, cozilikethinking, is run by Ayush Khaitan, with whom I have had great pleasure communicating and who writes exceptionally about a wide variety of mathematical topics, typically within the realm of undergraduate interest. Without further ado, below is the post.

Let me just preface this by saying that I look forward to writing, in an accessible way, about the realm of algebraic geometry with Ayush on this blog.

In the study of algebraic geometry, one often hopes to delve into abstract notions, research tools, and topics such as cohomology, schemes, orbifolds, and stacks (or maybe you just want to prove that there are 27 lines on a cubic surface). But all of these things probably seem very far away, because algebraic geometry is a very rich, very technical field. It is also one that was inaccessible to most until recently, despite often being concerned with rather simple ideas and objects. Luckily, today we need not all read the SGA and EGA. While I think it unwise to jump ahead to something as modern as schemes just yet, it should be possible to study one of algebraic geometry’s most important and deceptively intuitive tools: sheaves.

In not-so-technical terms, a sheaf is a device used to track locally defined information attached to the open sets of some (topological) space. Basically, this is a nice tool to organize information. To get a bit more formal, we’ll need to first define what a *presheaf* is.

*Presheaf:* A presheaf $\mathcal{F}$ on a topological space $X$ is a functor with values in some category $\mathcal{C}$ defined as follows:

- For each open set $U \subseteq X$, there is a corresponding object $\mathcal{F}(U)$ in $\mathcal{C}$
- For each inclusion of open sets $V \subseteq U$, there exists a corresponding morphism $\mathrm{res}_{V,U} \colon \mathcal{F}(U) \to \mathcal{F}(V)$, called the restriction morphism, in $\mathcal{C}$
- The restriction morphisms must satisfy the following:
- For all open $U$, $\mathrm{res}_{U,U}$ is the identity morphism on $\mathcal{F}(U)$
- For open sets $W \subseteq V \subseteq U$, $\mathrm{res}_{W,V} \circ \mathrm{res}_{V,U} = \mathrm{res}_{W,U}$, i.e. restriction can be done all at once or in steps.
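As a toy illustration (the encoding here is entirely my own, not standard machinery), one can model the presheaf of $\{0,1\}$-valued functions on a finite space in a few lines and check both functoriality axioms directly:

```python
from itertools import product

def F(U):
    """Sections over U: all functions U -> {0, 1}, encoded as dicts."""
    pts = sorted(U)
    return [dict(zip(pts, vals)) for vals in product((0, 1), repeat=len(pts))]

def res(V, s):
    """Restriction morphism res_{V,U}: F(U) -> F(V), for V a subset of U."""
    return {x: s[x] for x in V}

U, V, W = {0, 1, 2}, {0, 1}, {0}
for s in F(U):
    assert res(U, s) == s                      # res_{U,U} is the identity
    assert res(W, res(V, s)) == res(W, s)      # restriction in steps = all at once
```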

Presheaves are certainly important, but I will stay focused on our goal to understand sheaves instead of dealing with details about presheaves. With that having been said, recall the property of locality from our loose definition of a sheaf, as it is also our first axiom for sheaves, with the other being concatenation or gluing. These two axioms may also be thought of as ensuring existence and uniqueness.

*Sheaf*: A presheaf $\mathcal{F}$ satisfying the following:

- (Locality) If $\{U_i\}$ is an open covering of the open set $U$, and if $s, t \in \mathcal{F}(U)$ are such that $s|_{U_i} = t|_{U_i}$ for each $U_i$, then $s = t$.
- (Gluing) Suppose $\{U_i\}$ is an open cover of the open set $U$. Further suppose that for each $i$ a section $s_i \in \mathcal{F}(U_i)$ is given such that $s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j}$ for each pairing of the covering sets $U_i, U_j$. Then there exists $s \in \mathcal{F}(U)$ where $s|_{U_i} = s_i$ for all $i$.

In other words, a presheaf is a sheaf if we can uniquely “glue” pieces together.
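Here is a minimal sketch of the gluing axiom in action for functions on a finite “space” (the sets and sections below are my own toy choices): compatible sections over a cover merge into a section on the union, and locality forces it to be unique.

```python
U1, U2 = {0, 1, 2}, {2, 3, 4}        # an open cover of U = U1 ∪ U2
s1 = {x: x * x for x in U1}          # a section over U1 (a function as a dict)
s2 = {x: x * x for x in U2}          # a section over U2

# Compatibility: the restrictions to the overlap agree.
overlap = U1 & U2
assert all(s1[x] == s2[x] for x in overlap)

# The glued section over U1 ∪ U2; its value at each point is forced by
# whichever cover element contains that point, hence uniqueness.
s = {**s1, **s2}
assert all(s[x] == s1[x] for x in U1)
assert all(s[x] == s2[x] for x in U2)
```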

For the sake of brevity and simplicity, we will ignore some technical details henceforth. We will also focus on functions and let the reader work out the details for the same reason.

For our first example, consider topological spaces $X$ and $Y$ with a “rule” $\mathcal{F}$ such that each open $U \subseteq X$ is associated with

$\mathcal{F}(U) = \{ f \colon U \to Y \mid f \text{ is continuous} \}.$
This is a sheaf. It is actually a pretty well-known example too. (*Hint*: In justifying that this is a sheaf, it may be a good strategy to begin by considering restriction maps.)

For our second and final example, we shall consider the sheaf of infinitely differentiable ($C^\infty$, or smooth) functions. The sheaf of infinitely differentiable functions of a differentiable manifold $M$ has two properties:

- For all open sets $U \subseteq M$, the ring $C^\infty(U)$ of smooth functions $f \colon U \to \mathbb{R}$ is associated to $U$
- The restriction map is ordinary restriction of functions

Again, it’s worth playing around with this sheaf a bit.

So, to review: a sheaf is a tool used in algebraic geometry and other fields, and it serves as a sort of data-collection method, gathering the locally defined data attached to the open sets around each point into one place.

**Notes**

- It’s worth noting that there are *tons* of different definitions of presheaves and sheaves, but I think the one provided is the most intuitive and works well for many instances.
- An alternate notation for $\mathcal{F}(U)$ is $\Gamma(U, \mathcal{F})$; keep this in mind if you plan to read more on this topic.
- Be sure to keep functions in mind while you’re developing an understanding of sheaves. Another example to consider is the sheaf of regular functions.