2017-04-11

Solitons: a gateway drug

Alright so you absolutely must go read this little book:

[Search for it on Library Genesis, but support the author if at all possible; given the amazing book he produced, he really deserves it.]

I will now proceed to just unapologetically fawn over this goddamned masterpiece until it gets relatively awkward, uhhh, continue doing that for a bit longer, and then finally quickly talk about what this crazy story involves.


The book

It's been forever since I had so much trouble putting down a math book, or for that matter any kind of technical book. Soliton theory is, today, a very sophisticated branch of mathematical physics (belonging to the theory of integrable systems) that draws heavily on all sorts of high-powered machinery, mostly from algebraic geometry: moduli spaces, theta functions, loop groups, and so on.

Despite all this insane stuff lurking just beneath the surface, the book somehow manages to read like a novel. Like many books in the AMS STML series, the tone is very conversational, and an undergrad could read this fresh out of a basic calculus/linear algebra sequence and understand just about all of it. Kasman's exposition is a cathartic and utterly masterful tour de force. Highlights include:
  • plenty of nice diagrams to help you see what's going on,
  • Mathematica code you can run to play around with this stuff to your heart's content, and
  • historical interludes that seem to be injected extremely judiciously so as to give the reader's mind the occasional break from the mathematical development.
To top it all off, the book includes pointers to research monographs and papers on these topics, making it very easy to go as far as you like down the rabbit hole (which, here, really is endless).

Solitons




There are so many worlds colliding here that frankly it's a little hard to even decide where to start, but the concept of a soliton is probably as good a place as any. Long story short: back in the 19th century, a guy named John Scott Russell was watching a boat being towed down a very narrow canal by a pair of horses, and noticed that when the boat suddenly stopped, a very well-defined hump of water formed, which he followed on horseback as it propagated, seemingly undisturbed, down the canal. He found this fascinating; most people at the time seem to have responded along the lines of "cool, but like, whatever".

About sixty years later, in 1895, the Korteweg–de Vries ("KdV") paper appeared, in which they came up with the following equation as a model for shallow water waves: \[ \frac{\partial u}{\partial t} - 6u \frac{\partial u}{\partial x} + \frac{\partial^3 u}{\partial x^3} = 0. \] Okay, it wasn't exactly like that in their paper; they had a bunch of physical constants in there, but that's what it boils down to mathematically. It's called the KdV equation. When I first saw this I was immediately annoyed by the seemingly random "6", but it turns out it's only there for conventional reasons; in fact, by applying simple transformations you can make whatever coefficients you want appear in front of the three terms. Anyway, the upshot is that once you fix an "initial profile" \( u_0(x) = u(x,0) \), solving the KdV equation tells you what happens as time passes, i.e. it tells you \( u(x,t) \).
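None of this is in the 1895 paper, of course, but it's easy to check by machine that the classic travelling-wave solution really does solve the equation above. Here is a small Python sketch of mine (using sympy; the speed parameter \( c \) and the sech-squared profile are the standard textbook one-soliton, written for the sign convention above):

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)

# One-soliton ansatz for u_t - 6 u u_x + u_xxx = 0 (the sign convention above):
# a sech^2-shaped trough of depth c/2, travelling to the right at speed c.
# Taller solitons are narrower and faster -- all three features are tied to c.
u = -(c / 2) * sp.sech(sp.sqrt(c) / 2 * (x - c * t)) ** 2

# Substitute the ansatz into the left-hand side of the KdV equation.
residual = sp.diff(u, t) - 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)

# The residual vanishes identically; spot-check it numerically at a few points.
for vals in [{x: 0.0, t: 0.1, c: 1.0}, {x: 0.3, t: 0.1, c: 2.0}, {x: -1.0, t: 0.5, c: 4.0}]:
    assert abs(residual.subs(vals).evalf()) < 1e-10
print("one-soliton check passed")
```

Playing with \( c \) here makes the depth/width/speed relationship tangible, which is also exactly why a faster soliton can overtake a slower one in the first place.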

The KdV equation is a partial differential equation (PDE). Using the subscript notation for partial derivatives, we can rewrite it as \( u_t - 6uu_x + u_{xxx} = 0 \). Due to the second term, which involves \( uu_x \), it is nonlinear; as a consequence, the superposition \( u+v \) of two solutions \( u \) and \( v \) will in general not be another solution. On the other hand, due to the third term \( u_{xxx} \), it is dispersive. To understand what this means, suppose for a moment that the nonlinear term were absent, and watch how some nice, localized "hump" evolves: typically, it will quickly fall apart into a horrendous mess, because its various Fourier components travel at different speeds. That's dispersion. The miracle of KdV is that the nonlinear and dispersive terms "compete" with each other in some sense, and this delicate balance allows for solitary waves that simply propagate without changing shape (and even pass through each other virtually unaffected)! Hence, the term soliton was coined.
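To see the dispersion mechanism explicitly (a standard computation, not something specific to this story): dropping the nonlinear term leaves the linear equation \( u_t + u_{xxx} = 0 \), and plugging in a plane wave \( u(x,t) = e^{i(kx - \omega t)} \) forces \[ -i\omega + (ik)^3 = 0, \qquad \text{i.e.} \qquad \omega(k) = -k^3, \] so the phase velocity \( \omega / k = -k^2 \) depends on the wavenumber \( k \). A localized hump is a superposition of many such waves, each travelling at its own speed, so the hump inevitably spreads out.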

Cool! So how do we solve it? Well, that's the thing, people found one relatively simple solution, and then pretty much had no idea how to proceed. Even though there is only one spatial dimension, it's a nonlinear PDE, and solving those typically ranges from "sort of tricky", to "Japanese hard mode", to "E8 ROOT SYSTEM SCRAWLED IN CHALK ON FLOOR BURNING SPRINGER BOOKS ON ALTAR WHILE HISSING IN TONGUES".

So, for over 50 years, the KdV equation lay more or less dormant, nearly forgotten, even. People moved on with their lives. However, as we will see, the story didn't end there. Things were destined to take an absolutely unexpected turn, leading us on a wild chase through a mysterious jungle, to a singular insight that would revolutionize the area forever.

Integrability

Integrability is a property that certain dynamical systems possess. Very roughly, it means they evolve in a "nice", rather than a chaotic, way. Their solutions can also be explicitly written down, for varying degrees of "explicit". For Hamiltonian systems, there is a single, all-encompassing definition of integrability, and the theory here is very well-developed. Passing from here to the setting of PDEs is basically passing from classical mechanics (e.g. studying the motion of some finite number of particles), where we have only finitely many degrees of freedom, to field theory (e.g. studying the motion of a vibrating string), where we have infinitely many degrees of freedom. This complicates matters considerably: so far there seems to be no "perfect" definition of integrability for PDEs. We do have some ideas, though. Three in particular stand out: 
  • the existence of a so-called Lax representation, which roughly speaking is a way of expressing the PDE in question as the compatibility condition of an overdetermined linear system;
  • the so-called Painlevé property, which roughly requires that the only movable singularities of the equation's solutions (those whose location depends on the initial conditions) are poles; and
  • perhaps most promisingly, expressibility as a dimensional reduction of the anti-self-dual Yang–Mills equations (almost all famous examples in 2 and 3 dimensions fall under this umbrella). 
However, even this last concept isn't perfect: as Hitchin put it, the KP equation (see below) must be "ruthlessly hacked and stretched to fit the Procrustean bed of self-duality".
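To make the first bullet concrete in the case at hand: for KdV (with the sign convention used above), one standard choice of Lax pair, modulo normalization conventions, is \[ L = -\partial_x^2 + u, \qquad P = -4\partial_x^3 + 6u\,\partial_x + 3u_x, \] and a direct (if slightly tedious) commutator computation shows that the operator equation \( \frac{\partial L}{\partial t} = [P, L] := PL - LP \) holds if and only if \( u_t - 6uu_x + u_{xxx} = 0 \). Notice that \( L \) is precisely a Schrödinger operator with potential \( u \); this is the link exploited in the next section.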

The GGKM paper: the rise of "inverse scattering"

In 1967, there appeared the paper titled Method for Solving the Korteweg–de Vries Equation by Gardner, Greene, Kruskal and Miura, describing at last an ingenious method of solving the KdV equation. Surprisingly enough, it turns out that if you take a solution of the KdV equation and interpret it as the potential of a Schrödinger operator (wait, whaaa?!), then all kinds of magic happens. As you let the potential evolve according to KdV, the spectrum of the corresponding Schrödinger operator does not change!

Now, if you know the "scattering data" of the Schrödinger operator at a given time (which has to do with how an incoming disturbance is reflected off, or transmitted through, the potential), it turns out that you can in fact recover the potential at that time (thus we say you are "solving the inverse scattering problem"; this is essentially how physicists draw conclusions about elementary particles from particle accelerator data). Then, since the scattering data turns out to evolve in a nice way that we can understand, by solving an integral equation, we can recover the evolution of the potential, in other words, we can solve the KdV equation for the given initial data.
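To spell out the "nice way" (in one common normalization; the constants vary from source to source): writing the discrete eigenvalues of the Schrödinger operator as \( -\kappa_n^2 \), with norming constants \( c_n \), and letting \( R(k) \) denote the reflection coefficient, the KdV flow acts on the scattering data by \[ \kappa_n(t) = \kappa_n(0), \qquad c_n(t) = c_n(0)\, e^{4 \kappa_n^3 t}, \qquad R(k,t) = R(k,0)\, e^{8 i k^3 t}. \] The hopelessly nonlinear evolution of \( u \) thus becomes a trivial linear evolution of the scattering data; the integral equation that recovers the potential from this data is the Gel'fand–Levitan–Marchenko equation.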

Like most good ideas, this inverse scattering method turned out to be much broader in scope than initially expected. In fact it can be used to attack a wide array of other nonlinear PDEs, such as the Kadomtsev–Petviashvili ("KP") equation (a generalization of KdV to two spatial dimensions), the nonlinear Schrödinger equation, the Ernst equation (equivalent to the vacuum Einstein field equations for a stationary, axisymmetric spacetime), and the sine-Gordon equation. These equations have been derived over and over from a host of natural problems: some coming from physics, some coming from pure geometry. The key, really, is the Lax representation: in the case of KdV, it's what provides the link to the Schrödinger equation (which is linear!).
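For reference, the KP equation (again in one common normalization; as with KdV, the coefficients can be massaged by rescaling) reads \[ \frac{\partial}{\partial x} \left( u_t + 6uu_x + u_{xxx} \right) + 3\sigma^2 u_{yy} = 0, \qquad \sigma^2 = \pm 1, \] where the two signs of \( \sigma^2 \) give the physically distinct KP I and KP II equations, and solutions independent of \( y \) reduce to (a sign variant of) KdV.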

In fact, if you read Kasman's book, you will see there is a sense in which the KP equation falls right out of the Plücker relations that cut out the Grassmann cone. Let that sink in for a moment: mathematical physics on the one hand (KP is a nonlinear PDE modelling WATER WAVES!), pure algebraic geometry on the other. One just can't help but marvel at this! In fact, there is a whole KP hierarchy which arises just as naturally; see Segal–Wilson.

Schottky problem

A Riemann surface is a one-dimensional complex manifold: a geometric object that locally looks like a patch of the complex plane. These surfaces historically arose from scissors-and-glue constructions (pasting several complex planes together along "branch cuts") that were used to understand the behaviour of multi-valued functions of a complex variable, such as the square root "function" \( z \mapsto \sqrt{z} \), which is really two-valued, or the logarithm, which is actually infinite-valued (and therefore properly defined on a kind of "infinite parking lot"). Every compact Riemann surface \( \Sigma \) is topologically either a sphere, or just a \( g \)-holed "donut" for \( g \geq 1 \). We can then choose a family of curves \( a_1, \ldots, a_g, b_1, \ldots, b_g \) on the surface, that look like this:

[figure: the canonical cycles \( a_1, \ldots, a_g \) and \( b_1, \ldots, b_g \) on a genus-\( g \) surface]

It turns out that the holomorphic differentials (1-forms) on \( \Sigma \) form a vector space of dimension precisely \( g \). We can choose a basis \( \{ \omega_1, \ldots, \omega_g \} \) for this space which is "adapted" to the system of curves above, in the sense that \( \oint_{a_k} \omega_j = \delta_{jk} \). The "interesting information" is then contained in the integrals \( \pi_{jk} := \oint_{b_k} \omega_j \). Thus we have extracted a matrix \( \Pi = (\pi_{jk}) \) of complex numbers called the period matrix (there is a sense in which the choices we made above didn't really matter), and one can show this matrix is symmetric and has positive-definite imaginary part. The set of all \( g \times g \) complex matrices with these two properties is called the Siegel upper half-space \( \mathfrak{S}_g \) of genus \( g \); for the number theorists in the audience, this is incidentally the object on which the fascinating Siegel modular forms live. Anyway, one then uses the columns of \( \Pi \) to form a lattice \( \Lambda \) in \( \mathbf{C}^g \cong \mathbf{R}^{2g} \), defines the Jacobian \( J(\Sigma) := \mathbf{C}^g / \Lambda \) of the curve, determines conditions under which meromorphic functions with prescribed behaviour exist on the surface, and so on. This stuff is all very classical, worked out in the 19th and early 20th centuries, and it beautifully encapsulates all the (previously somewhat scattered) knowledge about elliptic integrals (and more generally, abelian integrals) in an abstract, geometric framework.

There is a problem in algebraic geometry that asks for a characterization of Jacobian varieties among all abelian varieties. Stated differently: can we characterize the locus of period matrices of Riemann surfaces in the Siegel upper half-space \( \mathfrak{S}_g \)?

This question is a very classical one, and has been studied extensively. To this day, there is a sense in which it still hasn't been solved in a completely satisfactory way, but the answer we have so far is already rather striking: to formulate it, note that for any \( \tau \) in \( \mathfrak{S}_g \), we can define a function of several complex variables called its Riemann theta-function \( \Theta_\tau : \mathbf{C}^g \to \mathbf{C} \): \[ \Theta_\tau(\mathbf{z}) := \sum_{\mathbf{m} \in \mathbf{Z}^g} \exp \left( 2\pi i \left( \frac{1}{2} \mathbf{m}^\top \tau \mathbf{m} + \mathbf{m}^\top \mathbf{z} \right) \right). \] Then the period locus consists precisely of those matrices \( \tau \) in \( \mathfrak{S}_g \) whose corresponding Riemann theta-function \( \Theta_\tau \) gives rise (after a suitable substitution) to solutions of the KP equation!
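Since the terms of the theta series decay like a Gaussian in \( \mathbf{m} \), it is very easy to play with numerically. Here's a toy Python sketch of mine (genus 1 only; the function name and truncation parameter are my own inventions), checked against the classical special value \( \theta(0 \mid \tau = i) = \pi^{1/4} / \Gamma(3/4) \):

```python
import cmath
import math

def theta_g1(z: complex, tau: complex, N: int = 20) -> complex:
    """Genus-1 Riemann theta: sum over m in Z of exp(2 pi i (m^2 tau / 2 + m z)),
    truncated to |m| <= N.  Convergence requires Im(tau) > 0."""
    if complex(tau).imag <= 0:
        raise ValueError("tau must lie in the upper half-plane")
    return sum(
        cmath.exp(2j * cmath.pi * (0.5 * m * m * tau + m * z))
        for m in range(-N, N + 1)
    )

# Sanity check against the classical value theta(0 | tau = i) = pi^(1/4) / Gamma(3/4);
# even this crude truncation nails it to near machine precision.
approx = theta_g1(0.0, 1j)
exact = math.pi ** 0.25 / math.gamma(0.75)
assert abs(approx - exact) < 1e-12
print(abs(approx - exact))
```

(The real thing in genus \( g \) is a \( g \)-fold sum over \( \mathbf{Z}^g \), but the one-dimensional case already shows how tame the series is.)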

...

Alright, I'm tired. There are many more ingredients to discuss, even at this fundamental level, such as \( \tau \)-functions, and the Wronskian determinant that allows us to "combine" solutions despite the nonlinearity (!), and so on, but that's enough for today.

Go read Kasman's book, and then once you're starving for more, you can move on for example to Hitchin–Segal–Ward's book Integrable Systems: Twistors, Loop Groups, and Riemann Surfaces.

2017-03-31

Slow

I recently met with my advisor to tell him about the solution to a problem he had given me. The crux of the problem turned out to be determining a property of a somewhat messy integral (nonetheless, it was really just a single-variable integral; something perhaps like the most challenging integral a first-year calculus student might see). So naturally, I walked into his office, went up to the board, wrote the integral down and immediately began doing the necessary calculus. After a few minutes, I had a result that was clearly slightly incorrect. Perhaps I had dropped a term or missed a factor of 2 somewhere. Feeling suddenly apprehensive, my eyes began darting around the expressions on the board, as I desperately sought the error, but I couldn't think straight. The anxiety foiled me. I was too busy thinking about how I looked from my advisor's perspective. "Surely to someone so EXPERIENCED," a voice in my head boomed, singlehandedly drowning out the adorably innocent yelps and cries of exponents falling and factors cancelling, "watching me trying to fix a silly error in a first-year calculus computation must be at best profoundly boring and at worst downright irritating." I stood there in this state, transfixed and all but helpless, for a painfully long minute or so, until deliverance finally came: "Anyway, I think it is clear your solution works. Just be sure to look over the calculation one last time, for your own sake."

Phew.

He then told me about how after enough years, one begins making mistakes like this more and more frequently while lecturing to a class, but also said that at my age, I should rarely be making them. I completely agree; I'm sure it would be wise for me to bite down and grind out computations more often.

In fact, a few weeks ago while I was teaching a course on arithmetic sequences, the class corrected an arithmetic error I made, twice in a row, and the numbers involved were only two digits. It's odd how often I tend to make arithmetic and algebraic mistakes, and end up having to rewrite things, while on the other hand I virtually never make conceptual errors (for example, yes, even after years and years of experience with them, I am absolutely, sweating-like-a-Saturday-morning-cartoon-character biting-my-nails-on-both-hands rigid with terror whenever manipulating infinite series).

I understand what I've learned pretty deeply, but I've never been fast with the answers. I've never focussed much on learning, for example, the sorts of mnemonics (SOH-CAH-TOA! All-Students-Take-Calculus! wheeee!) my students seem to love so dearly, or rules of thumb for quick estimation, or anything like that. I am slow, sometimes very much so, but it's something I own completely: I would never, ever claim to someone that I'd solved a problem that I hadn't, or try to bullshit an answer to an assignment question in a desperate attempt to salvage the marks (I've marked a lot of papers that did, though). I likely don't work as quickly as most of my classmates, and won't try to deny that or apologize for it; it's just the way it is (my time management skills are also admittedly lacking, yet on the other hand, I regularly work for days straight, barely sleeping, on things that deeply enthrall me). Mathematics is hard. It's something we're not naturally good at, something our experience with the real world doesn't really help us with. Take Lev Pontryagin, for example: the guy was blinded by a primus stove explosion as a teenager, and then, with the help of his mother, went on to an illustrious career as a geometer/topologist! Clearly, his eyesight was merely shackling him to the lowly trenches of our pitiful three-dimensional reality. Praise to that stove, oh that most honourable of stoves, that set him free, right? Elevateth thee to the most glorious heaven of stoves! Too far? Okay, too far.

I'd like to refer to the following quote from here, surely a sort of "consolation" for mathematics graduate students the world over who had to endure the dismal experience of not growing up as exalted, IMO-gold-medal-toting prodigies (slight irony here, as I believe the person referred to did in fact fit this profile):

"Grisha was different. He thought deeply. His answers were always correct. He always checked very, very carefully." Burago added, "He was not fast. Speed means nothing. Math doesn’t depend on speed. It is about deep." 

Now OK, I'm clearly not comparing myself to Perelman; even as an insufferable high schooler I was orders of magnitude away from such arrogance. In fact the only part of this quote I really wanted was the ending, but I figured the context was interesting. In any case, Burago's observation is highly accurate. Math really is all about deep, maybe even more so than any other human endeavour. Though I have to admit (or as wiser men say, I must confess...), art in its various forms can come pretty damn close.

So, you know, I may be slow but I'm dangerous. I'm exactly the kind of dog you gotta watch out for, because we just stay silent for years and years, you know it's like Snoop said man, "still waters run deep", so we out here acting all benign and shiz, and next thing you know it's like BAM, PAPER IN THE ANNALS WHAT UP, and they all gon' be lyk "Daaayum! Where the hell did that come from? Who is this guy?!"

Nah, I'm just playing. You know, in case that somehow wasn't obvious.

C'mon, a guy can dream though. ;)

2016-10-28

Institute of Holistic Nutrition

Today, by chance, Facebook showed me a post from the Institute of Holistic Nutrition's page, advertising a lecture they were hosting called "Quantum Human Biology" (yeah, you already know where this is going) given by a certain Brian Clement of the so-called "Hippocrates Health Institute", a noted quack. See for example this article, from which I quote:

"It’s horrible," Gollin says. "I could have printed him a degree on a laser printer and it would be … just as indicative of training and skills. What I think is terrible is that he’s using this, as I understand it, to treat patients who are desperately sick children."

Now, I know this kind of pseudoscientific garbage is, well, strewn all over the internet and other forms of media, and we'll never manage to defeat it completely, but this alone makes me quite confident that the Institute of Holistic Nutrition (IHN) is a threat to the public good. Since they don't even have a Wikipedia article (my elementary school has a Wikipedia article), and a Google search for their name returns virtually no results apart from their own homepage (a red flag if there ever was one), I figure my blog may as well be one of the few places on record about them. So let me just record here my disappointment: I left a perfectly civil and by no means vulgar comment on the aforementioned post, and promptly (and I mean promptly -- within a minute!) found that not only had it been deleted by the page's administrators, but they had also completely blocked my ability to comment on any of their posts.

No remotely reputable educational institution conducts their business in this way. A place of learning should encourage open discussion, skepticism and critical thought, not censor "unwanted" opinions. How is it actually legal for places like this to operate, charging people thousands of dollars in "tuition", and (if their choice of speakers is any indication of their scientific standards) indoctrinating them with witch-doctor hocus-pocus disguised as an education?

Clearly, they are striving to maintain a carefully curated echo chamber. Considering their status as a decidedly for-profit entity, this comes as no surprise. This kind of behaviour, no doubt par for the course in the world of medical quacks and shysters, is at its core fundamentally the same as what's going on in North Korea, and also the same thing that allowed Heaven's Gate and similar nonsense to happen. Normally I wouldn't waste my time, but since this kind of idiocy is responsible for thousands of innocent people dying or having their lives ruined (Google "L1$4 MCPHERS0N", after making the obvious substitutions), just so someone out there can turn a profit, it makes me very angry (similar story with those "Faith Healers" that have the nerve to go on TV and say "God wants you to send us all your money NOW! Do it... in the name of Jesus!" and then they go out and buy a hyper-opulent private jet or something). These people are unapologetic and dangerous criminals - the worst of the worst. Not to mention that all of these people make sure to write "PhD" after their name, so that more people will be prone to blindly accepting their lies as facts (their PhDs are almost always from random unaccredited degree mills, although notably, in at least one example I've read, it was from an Ivy League institution -- anyone can become a crackpot if they choose to discard intellectual rigour).

It's cute how people like this always whine that academics have "hidden agendas" and are part of some big bad elaborate conspiracy that's somehow rigged against them, and yet when you start digging into the various organizations involved in all this holistic stuff, they can typically all be traced back to the same group of questionable individuals. It's all just quack schools being accredited by quack accreditation bodies being accredited by quack accreditation-accreditation bodies, ad infinitum. What a SCAM.

To prospective students: caveat emptor.