# Category: Lectures

• ## Deck Transformations

Deck transformations are important because we want to study the fundamental group of the base space (X) by looking at the subgroups of pi_1(X) induced by a covering map (q: E —> X), i.e. the images of the homomorphisms induced by q.

(I don’t like the way I phrased the first sentence so I will rewrite this lecture with pictures and come back to it later.

Later: Literally why did I write any of this? This doesn’t explain anything to me. It describes the mechanics but it doesn’t explain or give a purpose behind the construction. I bring this up now because I want to know exactly why we need covering spaces and what they are good for. Does anyone have a concrete example? Particularly from topological neuroscience.)

There are really nice circumstances where the calculation of these subgroups is easier:

1a. When the covering of X is the universal cover (C), the subgroup of pi_1(X) that it induces is the trivial (identity) group. (The automorphism group of a cover is also called its group of deck transformations or covering isomorphisms.) [The identity is a subgroup of every group, but that alone does not guarantee a universal cover for the space. To overcome this, we work with “nice spaces” where everything works out, so that a universal cover for our space exists.]

1b. The group of deck transformations of a universal cover (C) of the space (X) is isomorphic to the fundamental group of X. So to calculate the fundamental group of a space that is kind of difficult, you go to its universal cover to see if you can calculate the fundamental group that way. A great example of this is why we take the universal cover of S1, which is R, and study the deck transformations from R to R to see if we can calculate the fundamental group. Another example is taking the universal cover of the torus (T), which is the plane R2. You find Aut_q(R2) and will come to the answer ZxZ. Something else that’s useful here is to study the product topology and products of covering spaces.
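The S1 example above can be sanity-checked numerically. This is a minimal sketch, not part of the lecture, and the helper names `q` and `deck` are mine: the deck transformations of the covering q(t) = e^(2 pi i t) are exactly the integer translations, and composing them adds the integers, which is why Aut_q(R) is isomorphic to Z, i.e. to pi_1(S1).

```python
import cmath

# Covering map q: R -> S^1 (inside C), q(t) = exp(2*pi*i*t).
def q(t):
    return cmath.exp(2j * cmath.pi * t)

# A candidate deck transformation: translation by an integer n.
def deck(n):
    return lambda t: t + n

# A deck transformation phi must satisfy q(phi(t)) == q(t) for all t.
phi = deck(3)
for t in [0.0, 0.25, -1.7]:
    assert abs(q(phi(t)) - q(t)) < 1e-9

# Composing translations adds the integers, so Aut_q(R) behaves like Z,
# matching pi_1(S^1) = Z.
assert abs(deck(3)(deck(2)(0.4)) - deck(5)(0.4)) < 1e-12
```

(A non-integer translation fails the check `q(phi(t)) == q(t)`, which is exactly the statement that the deck group is Z and not all of R.)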

2. Given a “nice space” X, the subgroups of the fundamental group of the base space (X) correspond to coverings of X. This is good for a couple of reasons: calculating subgroups of a group that you know may give you more information on finding a universal cover. Also, if you want to know more about the fundamental group of X, you can look at coverings of X and use a bit more group theory to make deductions about the group structure of pi_1(X).

3. When the covering space (E) is simply connected, the automorphism group is isomorphic to the fundamental group of X.

4a. A covering map defines a conjugacy class of subgroups of the fundamental group of X. Equivalent covers of X define the same conjugacy class of subgroups. By this we mean isomorphic covers will have induced subgroups that only differ by conjugation. Conjugation in group theory is a “change of perspective.” It’s basically looking at what happens within the group when you “stand from a different view” (sending x to gxg^-1 would be like moving from one side of the room to the other). If you’ve studied linear algebra, this is analogous to a change of basis. (This is the part where the group action stuff we do in class is important. p. 287-end of chapter 11 is important for chapter 12.)

4b. For any point x in X, the subgroups induced at the various points of the fiber over x fill out exactly one conjugacy class of subgroups of the fundamental group of X (this is because moving through different points in the fiber changes the basepoint). Conjugacy is again like “shining the light” on a different point on the stage. This time our lights point towards different elements in the fiber and the stage is the covering space.

4c. Normal subgroups are subgroups that equal all of their conjugates. In other words, if the induced subgroups at any two respective points in the fiber are the same subgroup, the covering map is normal.
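To make 4a–4c concrete on the group-theory side, here is a small computation in S3 (my own illustration, not from the lecture): the subgroup generated by one transposition gets moved around by conjugation, while the alternating group A3 is sent to itself by every conjugation, i.e. it is normal.

```python
from itertools import permutations

# Elements of S3 as tuples: p[i] is the image of i.
def compose(p, q):          # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

def conjugate_subgroup(H, g):
    """The conjugate subgroup g H g^-1."""
    return {compose(compose(g, h), inverse(g)) for h in H}

# H = <(0 1)> is NOT normal: conjugation moves it to other subgroups.
H = {(0, 1, 2), (1, 0, 2)}
conjs = {frozenset(conjugate_subgroup(H, g)) for g in S3}
print(len(conjs))  # -> 3: the three transposition subgroups form one conjugacy class

# A3 (the even permutations) IS normal: every conjugate is A3 itself.
A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}
assert all(conjugate_subgroup(A3, g) == A3 for g in S3)
```

In the covering-space picture, H’s conjugacy class is like the family of subgroups induced at the different points of a fiber, while A3 plays the role of the subgroup induced by a normal covering.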

5. Normal covering maps make computing fundamental groups easier for this reason [equivalent statements]:

5a. The subgroup induced under the image of q is normal at some point e in E.

5b. For some point x in X, the subgroups are the same for every point in the fiber over x.

5c. For all points x in X, the subgroups are the same for every point in the fiber over x.

5d. The subgroup induced under the image of q is normal at every point e in E.

In particular, if the subgroup is normal at every point in the fiber, then no matter which point we select as a basepoint, our information about this particular induced subgroup will be the same. This means computing the induced subgroup at one point in the fiber is enough to tell us what it is at every point (this will be important later).

6a. Given basepoints in two covering spaces of X whose covering maps send them to the same point of X, there exists a (necessarily unique) covering isomorphism between the coverings taking one basepoint to the other if and only if the induced subgroups at those basepoints are the same. [[The covering isomorphism criterion is so important because we can use this when taking covering automorphisms.]]

6b. Two coverings are isomorphic if and only if for some x in X, the conjugacy classes of the induced subgroups of the fundamental group of X based at x are the same (i.e. both of the subgroups are sitting in the same conjugacy class). Recall: conjugacy classes form a partition. Not to mention, conjugacy corresponds to orbits and normality corresponds to stabilizers: if two subgroups are in the same conjugacy class, that means they’re in the same orbit under the action of conjugation.

The idea is something like this (assume all of this is happening at specific point):

covering map —> conjugacy class of subgroups of pi_1(X,x)

normal covering map —> normal subgroup of pi_1(X,x)

covering map (over points in fiber) —> conjugate subgroups of pi_1(X,x)

normal covering map (over points in fiber) —> normal subgroup of pi_1(X,x)

Two coverings are the same if their subgroups are in the same conjugacy class. A normal covering gives a normal subgroup (invariant under conjugation). Covering maps over points in the fiber give conjugate subgroups and if you have a normal covering you have exactly one subgroup corresponding to every point in the fiber.

Yes, there are two levels of conjugacy here. For a covering map, the conjugacy class is the set of induced subgroups that vary as the basepoint changes. For a normal covering map, no matter where the basepoint is chosen, the subgroup is still the same. For a covering map over the fiber, the conjugacy classes are the sets of induced subgroups that vary over taking different points in the fiber (conjugation is an inner automorphism). For a normal covering map over the fiber, no matter which point we pick (in the fiber), the induced subgroup will be the same, i.e., the induced subgroup is completely determined by what happens at one point in the fiber.

7. If we take two points in the same fiber, then there exists a covering automorphism if and only if the induced subgroups are the same. This makes sense: it is a special case of 6a.

8. Normal coverings have transitive automorphism groups. This means that for every pair of points in the fiber, there exists a covering automorphism sending one point to another point. The subgroups are the same for every point in fiber over x.

9a. If q is a normal covering, then for any point x in X and any point in the fiber over x, the group of deck transformations is isomorphic to pi_1(X,x) [mod] (the normal subgroup induced by q).

9b. If E is simply connected, Aut(E) is isomorphic to pi_1(X,x).
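As a worked instance of 9a/9b (my own summary of the standard S1 computation, not from the lecture): take

```latex
q\colon \mathbb{R}\to S^1,\qquad q(t)=e^{2\pi i t}.
```

Since R is simply connected, the induced subgroup q_* pi_1(R) is trivial, so 9a gives

```latex
\mathrm{Aut}_q(\mathbb{R})\;\cong\;\pi_1(S^1,1)/\{1\}\;\cong\;\pi_1(S^1,1)\;\cong\;\mathbb{Z},
```

and indeed the deck transformations are exactly the integer translations t -> t + n.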


• ## Intractable Integrals (AMATH 112)


• ## Introduction to Julia Programming (MATH 173)

Prerequisites: Calculus 1-3, Linear Algebra, Mathematical Foundations


• ## Koch Curve, Fractal Geometry, and Measures (MATH 334)

I ended up back here coincidentally, and I think my intuition about this was on the right track. I just need to review it a bit. This is entirely a note to self.


• ## Introduction to Algebraic Cycles (MATH 568)

Required Reading: Chapter 1 of Hartshorne, Algebraic Geometry

Grothendieck, A. (1969), “Standard Conjectures on Algebraic Cycles”, Algebraic Geometry (Internat. Colloq., Tata Inst. Fund. Res., Bombay, 1968) (PDF), Oxford University Press, pp. 193–199, MR 0268189.

Grothendieck, A. (1958), “Sur une note de Mattuck-Tate”, J. Reine Angew. Math. 1958 (200): 208–215, doi:10.1515/crll.1958.200.208, MR 0136607, S2CID 115548848.

Background

In mathematics, an algebraic cycle on an algebraic variety V is a formal linear combination of subvarieties of V. These are the part of the algebraic topology of V that is directly accessible by algebraic methods. Understanding the algebraic cycles on a variety can give profound insights into the structure of the variety.

The most trivial case is codimension zero cycles, which are linear combinations of the irreducible components of the variety. The first non-trivial case is of codimension one subvarieties, called divisors. The earliest work on algebraic cycles focused on the case of divisors, particularly divisors on algebraic curves. Divisors on algebraic curves are formal linear combinations of points on the curve. Classical work on algebraic curves related these to intrinsic data, such as the regular differentials on a compact Riemann surface, and to extrinsic properties, such as embeddings of the curve into projective space.
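The “formal linear combination of points” bookkeeping for divisors on a curve can be sketched in a few lines (a toy illustration; the point labels and helper names are hypothetical):

```python
from collections import Counter

# A divisor on a curve: a formal Z-linear combination of points,
# stored as a mapping point -> integer coefficient.
def add_divisors(D1, D2):
    out = Counter(D1)
    out.update(D2)                       # adds coefficients pointwise
    return {p: c for p, c in out.items() if c != 0}

def degree(D):
    """Sum of coefficients: the degree of the divisor."""
    return sum(D.values())

D1 = {"P": 2, "Q": -1}        # the divisor 2P - Q
D2 = {"Q": 1, "R": 3}         # the divisor Q + 3R
D = add_divisors(D1, D2)      # Q cancels: 2P + 3R
assert D == {"P": 2, "R": 3}
assert degree(D) == 5
```

The degree map sketched here is the simplest invariant of a divisor; the classical theory (e.g. on a compact Riemann surface) studies much finer equivalences between divisors of the same degree.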


• ## Polar Coordinates and Coxeter Groups (MATH 127)

Course Textbook: Reflection Groups and Coxeter Groups, James E. Humphreys.
Preparing for this Section (Review):
Rectangular Coordinates
Definition of the Trigonometric Functions
The Distance Formula
Inverse Tangent Function
Completing the Square


• ## Persistent Homology (MATH 528)

Course Website: http://graphics.stanford.edu/courses/cs468-09-fall/

Persistent homology is a method for computing topological features of a space at different spatial resolutions. More persistent features are detected over a wide range of spatial scales and are deemed more likely to represent true features of the underlying space rather than artifacts of sampling, noise, or particular choice of parameters.[1]

To find the persistent homology of a space, the space must first be represented as a simplicial complex. A distance function on the underlying space corresponds to a filtration of the simplicial complex, that is, a nested sequence of increasing subcomplexes.
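A minimal sketch of how a filtration produces persistence, restricted to dimension 0 (connected components) and to a filtered graph. The function name, input format, and the elder-rule tie-breaking are my assumptions, not from the text; real libraries handle all dimensions, but the 0-dimensional case already shows birth and death of features:

```python
# 0-dimensional persistence for a filtered graph via union-find:
# vertices are born at their filtration value; when an edge merges two
# components, the younger component dies (the "elder rule").

def zero_dim_barcode(vertex_births, edges):
    """vertex_births: dict vertex -> birth value; edges: [(t, u, v), ...]."""
    parent = {v: v for v in vertex_births}
    birth = dict(vertex_births)
    bars = []

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for t, u, v in sorted(edges):           # process edges in filtration order
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                        # edge creates a cycle, no merge
        if birth[ru] > birth[rv]:           # elder rule: younger root dies
            ru, rv = rv, ru
        bars.append((birth[rv], t))
        parent[rv] = ru
    # components that survive the whole filtration never die
    roots = {find(v) for v in vertex_births}
    bars.extend((birth[r], float("inf")) for r in roots)
    return sorted(bars)

births = {"a": 0.0, "b": 0.0, "c": 1.0}
edges = [(2.0, "a", "b"), (3.0, "b", "c")]
print(zero_dim_barcode(births, edges))
# -> [(0.0, 2.0), (0.0, inf), (1.0, 3.0)]
```

Each output pair is a bar (birth, death); the infinite bar is the component of the whole space, and short bars are the features most likely to be sampling artifacts.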

## Definition

Formally, consider a real-valued function f on a simplicial complex K that is non-decreasing on increasing sequences of faces, so f(σ) ≤ f(τ) whenever σ is a face of τ in K. Then for every a ∈ R the sublevel set K_a = f^(−1)((−∞, a]) is a subcomplex of K, and the ordering of the values of f on the simplices in K (which is in practice always finite) induces an ordering on the sublevel complexes that defines a filtration

∅ = K_0 ⊆ K_1 ⊆ … ⊆ K_n = K

When i ≤ j, the inclusion K_i ↪ K_j induces a homomorphism f_p^(i,j): H_p(K_i) → H_p(K_j) on the simplicial homology groups for each dimension p. The p-th persistent homology groups are the images of these homomorphisms, and the p-th persistent Betti numbers β_p^(i,j) are the ranks of those groups.[2] Persistent Betti numbers for p = 0 coincide with the size function, a predecessor of persistent homology.[3]

Any filtered complex over a field F can be brought by a linear transformation preserving the filtration to so-called canonical form, a canonically defined direct sum of filtered complexes of two types: one-dimensional complexes with trivial differential (d(e_{t_i}) = 0) and two-dimensional complexes with trivial homology (d(e_{s_j + r_j}) = e_{r_j}).[4]

A persistence module over a partially ordered set P is a set of vector spaces U_t indexed by P, with a linear map u_t^s: U_s → U_t whenever s ≤ t, with u_t^t equal to the identity and u_t^s ∘ u_s^r = u_t^r for r ≤ s ≤ t. Equivalently, we may consider it as a functor from P considered as a category to the category of vector spaces (or R-modules). There is a classification of persistence modules over a field F indexed by N:

U ≅ ⊕_i x^{t_i} · F[x] ⊕ ( ⊕_j x^{r_j} · (F[x] / (x^{s_j} · F[x])) )

Multiplication by x corresponds to moving forward one step in the persistence module. Intuitively, the free parts on the right side correspond to the homology generators that appear at filtration level t_i and never disappear, while the torsion parts correspond to those that appear at filtration level r_j and last for s_j steps of the filtration (or equivalently, disappear at filtration level r_j + s_j).[5][4]

Each of these two theorems allows us to uniquely represent the persistent homology of a filtered simplicial complex with a barcode or persistence diagram. A barcode represents each persistent generator with a horizontal line beginning at the first filtration level where it appears, and ending at the filtration level where it disappears, while a persistence diagram plots a point for each generator with its x-coordinate the birth time and its y-coordinate the death time. Equivalently, the same data is represented by Barannikov’s canonical form,[4] where each generator is represented by a segment connecting the birth and the death values plotted on separate lines for each homology dimension p.


• ## An Automorphism of de Moivre’s Theorem (Math 347)

Introduction to de Moivre’s Theorem:
https://brilliant.org/wiki/de-moivres-theorem/

Practice Problem:

Books: PreCalculus text, Dummit and Foote

Abraham de Moivre (French pronunciation: ​[abʁaam də mwavʁ]; 26 May 1667 – 27 November 1754) was a French mathematician known for de Moivre’s formula, a formula that links complex numbers and trigonometry, and for his work on the normal distribution and probability theory.

In mathematics, de Moivre’s formula (also known as de Moivre’s theorem and de Moivre’s identity) states that for any real number x and integer n it holds that

(cos x + i sin x)^n = cos(nx) + i sin(nx),

where i is the imaginary unit (i^2 = −1). The formula is named after Abraham de Moivre, although he never stated it in his works.[1] The expression cos x + i sin x is sometimes abbreviated to cis x.

The formula is important because it connects complex numbers and trigonometry. By expanding the left hand side and then comparing the real and imaginary parts under the assumption that x is real, it is possible to derive useful expressions for cos nx and sin nx in terms of cos x and sin x.

As written, the formula is not valid for non-integer powers n. However, there are generalizations of this formula valid for other exponents. These can be used to give explicit expressions for the nth roots of unity, that is, complex numbers z such that zn = 1.
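Both claims are easy to sanity-check numerically (a quick sketch, not part of the article; the sample values of x and n are arbitrary):

```python
import cmath

# Numerical check of de Moivre: (cos x + i sin x)^n == cos(nx) + i sin(nx)
x, n = 0.7, 5
lhs = (cmath.cos(x) + 1j * cmath.sin(x)) ** n
rhs = cmath.cos(n * x) + 1j * cmath.sin(n * x)
assert abs(lhs - rhs) < 1e-12

# nth roots of unity: z_k = cos(2*pi*k/n) + i sin(2*pi*k/n), k = 0..n-1
n = 6
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
assert all(abs(z ** n - 1) < 1e-12 for z in roots)
```

The roots-of-unity check is exactly the generalization mentioned above: each z_k satisfies z_k^n = 1.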


• ## Newton’s minimal resistance problem (Phys 540)

Newton’s Minimal Resistance Problem is the problem of finding a solid of revolution that experiences minimum resistance when it moves through a homogeneous fluid with constant velocity in the direction of the axis of revolution. It is named after Isaac Newton, who studied the problem in 1685 and published it in 1687 in his Principia Mathematica. This is the first example of a problem solved in what is now called the calculus of variations, appearing a decade before the brachistochrone problem.[2] Newton published the solution in Principia Mathematica without his derivation; David Gregory was the first to approach Newton and persuade him to write out an analysis, and Gregory then shared the derivation with his students and peers.[3]

According to I Bernard Cohen, in his Guide to Newton’s Principia, “The key to Newton’s reasoning was found in the 1880s, when the earl of Portsmouth gave his family’s vast collection of Newton’s scientific and mathematical papers to Cambridge University. Among Newton’s manuscripts they found the draft text of a letter, … in which Newton elaborated his mathematical argument. [This] was never fully understood, however, until the publication of the major manuscript documents by D. T. Whiteside [1974], whose analytical and historical commentary has enabled students of Newton not only to follow fully Newton’s path to discovery and proof, but also Newton’s later (1694) recomputation of the surface of least resistance”.[4][5]

Even though Newton’s model for the fluid was wrong as per our current understanding, the fluid he had considered finds its application in hypersonic flow theory as a limiting case.[6]

Homework: https://assets.cambridge.org/97805210/45858/frontmatter/9780521045858_frontmatter.pdf

How to Prepare for Part III: https://www.maths.cam.ac.uk/postgrad/part-iii/prospective/preparation/resources

Past Examination Papers (DO ALL): https://www.maths.cam.ac.uk/undergrad/pastpapers/past-ia-ib-and-ii-examination-papers


• ## Hilbert’s twenty-third problem (Math 540)

Hilbert’s twenty-third problem is the last of the Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. In contrast with Hilbert’s other 22 problems, his 23rd is not so much a specific “problem” as an encouragement towards further development of the calculus of variations. His statement of the problem is a summary of the state of the art (in 1900) of the theory of the calculus of variations, with some introductory comments decrying the lack of work that had been done on the theory in the mid to late 19th century.

The first part of solving this problem is stating it correctly.

So far, I have generally mentioned problems as definite and special as possible…. Nevertheless, I should like to close with a general problem, namely with the indication of a branch of mathematics repeatedly mentioned in this lecture-which, in spite of the considerable advancement lately given it by Weierstrass, does not receive the general appreciation which, in my opinion, is its due—I mean the calculus of variations.

In mathematics, the Weierstrass function is an example of a real-valued function that is continuous everywhere but differentiable nowhere. It is an example of a fractal curve. It is named after its discoverer Karl Weierstrass.

The calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers.[a] Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations.

The Weierstrass function has historically served the role of a pathological function, being the first published example (1872) specifically concocted to challenge the notion that every continuous function is differentiable except on a set of isolated points.[1] Weierstrass’s demonstration that continuity did not imply almost-everywhere differentiability upended mathematics, overturning several proofs that relied on geometric intuition and vague definitions of smoothness. These types of functions were denounced by contemporaries: Henri Poincaré famously described them as “monsters” and called Weierstrass’ work “an outrage against common sense”, while Charles Hermite wrote that they were a “lamentable scourge”. The functions were impossible to visualize until the arrival of computers in the next century, and the results did not gain wide acceptance until practical applications such as models of Brownian motion necessitated infinitely jagged functions (nowadays known as fractal curves).[2]
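The article never writes the function down; Weierstrass’s standard example (my addition, not from the text) is W(x) = Σ_{n≥0} a^n cos(b^n π x), with 0 < a < 1 and, in his original condition, b an odd integer with ab > 1 + 3π/2. Its partial sums are easy to compute:

```python
import math

# Partial sums of the Weierstrass function
# W(x) = sum_{n>=0} a^n cos(b^n * pi * x).
# Defaults a=0.5, b=13 satisfy ab = 6.5 > 1 + 3*pi/2 ~ 5.71.
def weierstrass(x, a=0.5, b=13, terms=20):
    # Note: for large n the argument b^n * pi * x exceeds float precision,
    # so only modest term counts are numerically meaningful (plots use few).
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(terms))

# The series is dominated by sum a^n = 1/(1-a), so it converges uniformly
# and the truncation error after N terms is at most a^N / (1 - a).
print(weierstrass(0.0))   # close to 1/(1 - 0.5) = 2, since cos(0) = 1 in every term
```

Uniform convergence is what makes W continuous everywhere; the nowhere-differentiability is the hard part of Weierstrass’s proof and is not visible from partial sums alone.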

A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat’s principle: light follows the path of shortest optical length connecting two points, where the optical length depends upon the material of the medium. One corresponding concept in mechanics is the principle of least/stationary action.

Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy Dirichlet’s principle. Plateau’s problem requires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in a solution of soap suds. Although such experiments are relatively easy to perform, their mathematical interpretation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology.

Real analysis, as we now understand it, acquired a different emphasis from classical analysis in the second half of the nineteenth century. In his famous remark to Thomas Stieltjes, Charles Hermite wrote: “I turn with terror and horror from this lamentable scourge of continuous functions with no derivatives”. Nowadays, non-regular functions and their generalizations are of fundamental importance in mathematics and physics. The study of non-regular functions requires a better theory of integration; historically, that was the main motivation for the development of measure theory.

The course will cover the prescribed syllabus:

Abstract measure theory, outer measure, Lebesgue measure on the real line, measurable functions, Egorov and Lusin theorems, the Lebesgue integral, differentiation, L^p spaces, elementary Hilbert space theory and trigonometric series. Other topics will be included as time permits.

Texts (required): H.L. Royden and Patrick Fitzpatrick, Real Analysis (4th edition), Pearson, 2010.
G. B. Folland, Real Analysis, John Wiley & Sons.
W. Rudin, Real and Complex Analysis, McGraw-Hill.
Princeton Lecture Series in Analysis

HOMEWORK: https://math.berkeley.edu/~brent/files/104_weierstrass.pdf
