Tag Archives: mathematics

The Way to a Mathematician’s Heart Is through a Pi

Want to win over a mathematician? Bake them a pi.

Of course, presentation counts. You can’t just pour a spew of digits.

[Image: runs of repeated digits in the decimal expansions of π and τ]

If you have to, at least season it with 9’s

Ideally, you’ve baked your pi at home, in a comfortable physical theory. You lay out a graph to give it structure, then wrap it in algebraic curves before baking under an integration.

(Sometimes you can skip this part. My mathematician will happily eat graphs and ignore the pi.)

At this point, if your motives are pure (or at least mixed Tate), you have your pi. To make it more interesting, be sure to pair with a well-aged Riemann zeta value. With the right preparation, you can achieve a truly cosmic pi.


Fine, that last joke was a bit of a stretch. Hope you had a fun pi day!

Have You Given Your Kids “The Talk”?

If you haven’t seen it yet, I recommend reading this delightful collaboration between Scott Aaronson (of Shtetl-Optimized) and Zach Weinersmith (of Saturday Morning Breakfast Cereal). As explanations of a concept beyond the standard popular accounts go, this one is pretty high quality, correcting some common misconceptions about quantum computing.

I especially liked the following exchange:

[Comic panel: an exchange about whether physicists should be “putting complex numbers in our ontologies”]

I’ve complained before about people trying to apply ontology to physics, and I think this gets at the root of one of my objections.

People tend to think that the world should be describable with words. From that perspective, mathematics is just a particular tool, a system we’ve created. If you look at the world in that way, mathematics looks unreasonably effective: its ability to describe the real world seems like a miraculous coincidence.

Mathematics isn’t just one tool though, or just one system. It’s all of them: not just numbers and equations, but knots and logic and everything else. Deep down, mathematics is just a collection of all the ways we’ve found to state things precisely.

Because of that, it shouldn’t surprise you that we “put complex numbers in our ontologies”. Complex numbers are just one way we’ve found to make precise statements about the world, one that comes in handy when talking about quantum mechanics. There doesn’t need to be a “correct” description in words: the math is already stating things as precisely as we know how.

That doesn’t mean that ontology is a useless project. It’s worthwhile to develop new ways of talking about things. I can understand the goal of building up a philosophical language powerful enough to describe the world in terms of words, and if such a language was successful it might well inspire us to ask new scientific questions.

But it’s crucial to remember that there’s real work to be done there. There’s no guarantee that the project will work, that words will end up sufficient. When you put aside our best tools for making precise statements, you’re handicapping yourself, making the problem harder than it needs to be. It’s your responsibility to make sure you’re getting something worthwhile out of it.

Congratulations to Thouless, Haldane, and Kosterlitz!

I’m traveling this week in sunny California, so I don’t have time for a long post, but I thought I should mention that the 2016 Nobel Prize in Physics has been announced. Instead of going to LIGO, as many had expected, it went to David Thouless, Duncan Haldane, and Michael Kosterlitz. LIGO will have to wait for next year.

Thouless, Haldane, and Kosterlitz are condensed matter theorists. While particle physics studies the world at the smallest scales and astrophysics at the largest, condensed matter physics lives in between, explaining the properties of materials on an everyday scale. This can involve inventing new materials, or unusual states of matter, with superconductors being probably the most well-known to the public. Condensed matter gets a lot less press than particle physics, but it’s a much bigger field: overall, the majority of physicists study something under the condensed matter umbrella.

This year’s Nobel isn’t for a single discovery. Rather, it’s for methods developed over the years that introduced topology into condensed matter physics.

Topology often gets described in terms of coffee cups and donuts. In topology, two shapes are the same if you can smoothly change one into another, so a coffee cup and a donut are really the same shape.

Most explanations stop there, which makes it hard to see how topology could be useful for physics. The missing part is that topology studies not just which shapes can smoothly change into each other, but which things, in general, can change smoothly into each other.

That’s important, because in physics most changes are smooth. If two things can’t change smoothly into each other, something special needs to happen to bridge the gap between them.

There are a lot of different sorts of implications this can have. Topology means that some materials can be described by a number that’s conserved no matter what (smooth) changes occur, leading to experiments that see specific “levels” rather than a continuous range of outcomes. It means that certain physical setups can’t change smoothly into other ones, which protects those setups from changing: an idea people are investigating in the quest to build a quantum computer, where extremely delicate quantum states can be disrupted by even the slightest change.
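To make “a number that’s conserved no matter what (smooth) changes occur” concrete, here’s a toy sketch of my own in Python (nothing to do with the laureates’ actual models): the winding number of a loop around a point is an integer, so no smooth deformation can nudge it from 2 to 2.1, and it can only change if the loop is dragged across the point itself.

import numpy as np

def winding_number(points):
    # Count how many times a closed loop of 2D points wraps around the origin.
    angles = np.arctan2(points[:, 1], points[:, 0])
    # Step-by-step changes in angle around the loop, each wrapped into (-pi, pi)
    steps = np.diff(np.concatenate([angles, angles[:1]]))
    steps = (steps + np.pi) % (2 * np.pi) - np.pi
    return int(round(steps.sum() / (2 * np.pi)))

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
# A loop that circles the origin twice, with an arbitrary smooth wobble on top
loop = np.stack([2 * np.cos(2 * t) + 0.4 * np.cos(7 * t),
                 2 * np.sin(2 * t) + 0.4 * np.sin(5 * t)], axis=1)
print(winding_number(loop))  # -> 2, and it stays 2 under any smooth deformation that avoids the origin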

Overall, topology has been enormously important in physics, and Thouless, Haldane, and Kosterlitz deserve a significant chunk of the credit for bringing it into the spotlight.

Thought Experiments, Minus the Thought

My second-favorite Newton fact is that, despite inventing calculus, he refused to use it for his most famous work of physics, the Principia. Instead, he used geometrical proofs, tweaked to smuggle in calculus without admitting it.

Essentially, these proofs were thought experiments. Newton would start with a standard geometry argument, one that would have been acceptable to mathematicians centuries earlier. Then, he’d imagine taking it further, pushing a line or angle to some infinite point. He’d argue that, if the proof worked for every finite choice, then it should work in the infinite limit as well.

These thought experiments let Newton argue on the basis of something that looked more rigorous than calculus. However, they also held science back. At the time, only a few people in the world could understand what Newton was doing. It was only later, when Newton’s laws were reformulated in calculus terms, that a wider group of researchers could start doing serious physics.

What changed? If Newton could describe his physics with geometrical thought experiments, why couldn’t everyone else?

The trouble with thought experiments is that they require careful setup, setup that has to be thought through for each new thought experiment. Calculus took Newton’s geometrical thought experiments, and took out the need for thought: the setup was automatically a part of calculus, and each new researcher could build on their predecessors without having to set everything up again.

This sort of thing happens a lot in science. An example from my field is the scattering matrix, or S-matrix.

The S-matrix, deep down, is a thought experiment. Take some particles, and put them infinitely far away from each other, off in the infinite past. Then, let them approach, close enough to collide. If they do, new particles can form, and these new particles will travel out again, infinitely far away in the infinite future. The S-matrix, then, is a metaphorical matrix that tells you, for each possible set of incoming particles, what the probability is to get each possible set of outgoing particles.

In a real collider, the particles don’t come from infinitely far away, and they don’t travel infinitely far before they’re stopped. But the distances are long enough, compared to the sizes relevant for particle physics, that the S-matrix is the right idea for the job.
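For anyone who likes to see this written out, here’s a toy two-channel example of my own (a real S-matrix is vastly larger and has to be computed, not written down): a unitary matrix whose squared entries are the probabilities of each outgoing set given each incoming set.

import numpy as np

theta = 0.3  # a made-up mixing angle, just for illustration
S = np.array([[np.cos(theta), 1j * np.sin(theta)],
              [1j * np.sin(theta), np.cos(theta)]])

# Unitarity (S†S = 1) is what guarantees the probabilities add up to one
assert np.allclose(S.conj().T @ S, np.eye(2))

# The probability for incoming state i to come out as state f is |S_fi|^2
probabilities = np.abs(S) ** 2
print(probabilities)              # a 2x2 table of outcome probabilities
print(probabilities.sum(axis=0))  # -> [1. 1.]: each incoming state ends up somewhere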

Like calculus, the S-matrix is a thought experiment minus the thought. When we want to calculate the probability of particles scattering, we don’t need to set up the whole thought experiment all over again. Instead, we can start by calculating, and over time we’ve gotten very good at it.

In general, sub-fields in physics can be divided into those that have found their S-matrices, their thought experiments minus thought, and those that have not. When a topic has to rely on thought experiments, progress is much slower: people argue over the details of each setup, and it’s difficult to build something that can last. It’s only when a field turns the corner, removing the thought from its thought experiments, that people can start making real collaborative progress.

What Does It Mean to Know the Answer?

My sub-field isn’t big on philosophical debates. We don’t tend to get hung up on how to measure an infinite universe, or on how to interpret quantum mechanics. Instead, we develop new calculation techniques, which tends to nicely sidestep all of that.

If there’s anything we do get philosophical about, though, any question with a little bit of ambiguity, it’s this: What counts as an analytic result?

“Analytic” here is in contrast to “numerical”. If all we need is a number and we don’t care if it’s slightly off, we can use numerical methods. We have a computer use some estimation trick, repeating steps over and over again until we have approximately the right answer.

“Analytic”, then, refers to everything else. When you want an analytic result, you want something exact. Most of the time, you don’t just want a single number: you want a function, one that can give you numbers for whichever situation you’re interested in.

It might sound like there’s no ambiguity there. If it’s a function, with sines and cosines and the like, then it’s clearly analytic. If you can only get numbers out through some approximation, it’s numerical. But as the following example shows, things can get a bit more complicated.

Suppose you’re trying to calculate something, and you find the answer is some messy integral. Still, you’ve simplified the integral enough that you can do numerical integration and get some approximate numbers out. What’s more, you can express the integral as an infinite series, so that any finite number of terms will get close to the correct result. Maybe you even know a few special cases, situations where you plug specific numbers in and you do get an exact answer.

It might sound like you only know the answer numerically. As it turns out, though, this is roughly how your computer handles sines and cosines.

When your computer tries to calculate a sine or a cosine, it doesn’t have access to the exact solution all of the time. It does have some special cases, but the rest of the time it’s using an infinite series, or some other numerical trick. Type a random sine into your calculator and it will be just as approximate as if you did a numerical integration.
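As a rough illustration (real math libraries use cleverer tricks, but the spirit is the same), here is a truncated series doing the same job as the built-in sine:

import math

def sin_series(x, terms=10):
    # Truncated Taylor series for sin(x): sum of (-1)^n x^(2n+1) / (2n+1)!
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

x = 1.2345
print(sin_series(x), math.sin(x))  # both are approximations; they agree to many digits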

So what’s the real difference?

Rather than how we get numbers out, think about what else we know. We know how to take derivatives of sines, and how to integrate them. We know how to take limits, and series expansions. And we know their relations to other functions, including how to express them in terms of other things.
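Here’s a quick sketch of that sort of knowledge in action, using sympy (my choice of tool, nothing field-specific) on a sine:

import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)

print(sp.diff(f, x))                  # cos(x): we know its derivative
print(sp.integrate(f, x))             # -cos(x): and its integral
print(sp.limit(f / x, x, 0))          # 1: and its limits
print(sp.series(f, x, 0, 6))          # x - x**3/6 + x**5/120 + O(x**6): and its expansions
print(sp.expand_trig(sp.sin(2 * x)))  # 2*sin(x)*cos(x): and its relations to other functions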

If you can do that with your integral, then you’ve probably got an analytic result. If you can’t, then you don’t.

What if you have only some of the requirements, but not the others? What if you can take derivatives, but don’t know all of the identities between your functions? What if you can do series expansions, but only in some limits? What if you can do all the above, but can’t get numbers out without a supercomputer?

That’s where the ambiguity sets in.

In the end, whether or not we have the full analytic answer is a matter of degree. The closer we can get to functions that mathematicians have studied and understood, the better grasp we have of our answer and the more “analytic” it is. In practice, we end up with a very pragmatic approach to knowledge: whether we know the answer depends entirely on what we can do with it.

Source Your Common Sense

When I wrote that post on crackpots, one of my inspirations was a particularly annoying Twitter conversation. The guy I was talking to had convinced himself that general relativity was a mistake. He was especially pissed off by the fact that, in GR, energy is not always conserved. Screw Einstein, energy conservation is just common sense! Right?

Think a little bit about why you believe in energy conservation. Is it because you run into a lot of energy in your day-to-day life, and it’s always been conserved? Did you grow up around something that was obviously energy? Or maybe someone had to explain it to you?


Maybe you learned about it…from a physics teacher?

A lot of the time, things that seem obvious only got that way because you were taught them. “Energy” isn’t an intuitive concept, however much it’s misused that way. It’s something defined by physicists because it serves a particular role, a consequence of symmetries in nature. When you learn about energy conservation in school, that’s because it’s one of the simpler ways to explain a much bigger concept, so you shouldn’t be surprised if there are some inaccuracies. If you know where your “common sense” is coming from, you can anticipate when and how it might go awry.
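For the curious, here’s the textbook version of that symmetry statement (a standard derivation, not something from the physics class above): for a system with one coordinate q and Lagrangian L(q,\dot{q}), the energy E=\dot{q}\,\partial L/\partial\dot{q}-L satisfies

\frac{dE}{dt}=\dot{q}\left(\frac{d}{dt}\frac{\partial L}{\partial \dot{q}}-\frac{\partial L}{\partial q}\right)-\frac{\partial L}{\partial t}=0

The first bracket vanishes by the equations of motion, and the last term vanishes only if nothing in the setup singles out a particular moment in time. In general relativity spacetime itself can change with time, which is exactly where that assumption, and with it the schoolbook version of energy conservation, gives way.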

Similarly, if, like one of the commenters on my crackpot post, you’re uncomfortable with countable and uncountable infinities, remember that infinity isn’t “common sense” either. It’s something you learned about in a math class, from a math teacher. And just like energy conservation, it’s a simplification of a more precise concept, with epsilons and deltas and all that jazz.

It’s not possible to teach all the nuances of every topic, so naturally most people will hear a partial story. What’s important is to recognize that you heard a partial story, and not enshrine it as “common sense” when the real story comes knocking.

Don’t physicists use common sense, though? What about “physical intuition”?

Physical intuition has a lot of mystique behind it, and is often described as what separates us from the mathematicians. As such, different people mean different things by it…but under no circumstances should it be confused with pure “common sense”. Physical intuition uses analogy and experience. It involves seeing a system and anticipating the sorts of things you can do with it, like playing a game and assuming there’ll be a save button. For these sorts of analogies to work, they generally can’t be built around everyday objects or experiences. Instead, they use physical systems that are “similar” to the one under scrutiny in important ways, while being better understood in others. Crucially, physical intuition involves working in context. It’s not just uncritical acceptance of what one would naively expect.

So when your common sense is tingling, see if you can provide a source. Is that source relevant, experience with a similar situation? Or is it in fact a half-remembered class from high school?

Symbology 101

I work with functions called polylogarithms. There’s a whole field of techniques out there for manipulating these functions, and for better or worse people often refer to them as symbology.

My plan for this post is to give a general feel for how symbology works: what we know how to do, and why. It’s going to be a lot more technical than my usual posts, so the lay reader may want to skip this one. At the same time, I’m not planning to go through anything rigorously. If you want that sort of thing there are plenty of good papers on the subject, here’s one of mine that covers the basics. Rather, I’m going to draw what I hope is an illuminating sketch of what it is we do.

Still here? Let’s start with an easy question.

What’s a log?

[Image: a hollow log]

Ok, besides one of these.

For our purposes, a log is what happens when you integrate dx/x.

\log x=\int \frac{dx}{x}

 Schematically, a polylog is then what happens when you iterate these integrations:

G=\int \frac{dx_1}{x_1} \int \frac{dx_2}{x_2}\ldots

The simplest thing you can get from this is of course just a product of logs. The next most simple thing is one of the classical polylogarithms. But in general, this is a much wider class of functions, known as multiple, or Goncharov, polylogarithms.
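To pin the schematic down a little (these are the standard definitions, nothing specific to this post): the classical polylogarithms iterate the log’s integral,

\mathrm{Li}_1(x)=-\log(1-x),\qquad \mathrm{Li}_n(x)=\int_0^x \mathrm{Li}_{n-1}(t)\,\frac{dt}{t}

while the multiple (Goncharov) polylogarithms let each integration have its own reference point:

G(a_1,a_2,\ldots,a_n;z)=\int_0^z \frac{dt}{t-a_1}\,G(a_2,\ldots,a_n;t),\qquad G(;z)=1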

The number of integrations is the transcendental weight. Naively, you’d expect an L-loop Feynman integral in four dimensions to give you something with transcendental weight 4L. In practice, that’s not the case: some of the momentum integrations end up just giving delta functions, so in the end an L-loop amplitude has transcendental weight 2L.
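For orientation, the standard weight bookkeeping (a convention, not something new here) assigns

\text{weight}(\log x)=\text{weight}(\pi)=1,\qquad \text{weight}(\mathrm{Li}_n(x))=\text{weight}(\zeta(n))=n

with weights adding under multiplication, so a one-loop (weight 2) amplitude involves things like squared logs, dilogarithms and \pi^2.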

In most theories, you get a mix of functions: some with weight 2L, some with weight 2L-1, etc., all the way down to rational functions. N=4 super Yang-Mills is special: there, everything is at the maximum transcendental weight. In either case, though, being able to manipulate transcendental functions is very useful, and the symbol is one of the simplest ways to do so.

The core idea of the symbol is pretty easy to state, though it takes a bit more technology to state it rigorously. Essentially, we take our schematic polylog from above, and just list the logs:

\mathcal{S}(G)=\ldots\otimes x_2\otimes x_1

(Here I have switched the order in order to agree with standard conventions.)

What does that do? Well, it reminds us that these aren’t just some weird functions we don’t understand: they’re collections of logs, and we can treat them like collections of logs.
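Two standard examples make this less abstract (textbook cases, not anything new): a product of logs just lists its entries in every order, and the dilogarithm picks up a 1-x from the \log(1-t) in its defining integral,

\mathcal{S}(\log x\,\log y)=x\otimes y+y\otimes x,\qquad \mathcal{S}(\mathrm{Li}_2(x))=-(1-x)\otimes x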

In particular, we can do this with logs,

\log (x y)=\log x+\log y

so we can do it with symbols as well:

x_1\otimes x y\otimes x_3=x_1\otimes x \otimes x_3+x_1\otimes y\otimes x_3

Similarly, we can always get rid of unwelcome exponents, like so:

\log (x^n)=n\log x

x_1\otimes x^n\otimes x_3=n( x_1\otimes x \otimes x_3)

This means that, in general, we can always factorize any polynomial or rational function that appears in a symbol. As such, we often express symbols in terms of some fixed symbol alphabet, a basis of rational functions that can be multiplied to get any symbol entry in the function we’re working with. In general, it’s a lot easier to calculate amplitudes when we know the symbol alphabet beforehand. For six-particle amplitudes in N=4 super Yang-Mills, the symbol alphabet contains just nine “letters”, which makes it particularly easy to work with.
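As a rough illustration of what expressing a symbol in an alphabet involves, here’s a toy sympy sketch of my own (real amplitudes code is far more elaborate, and the alphabet below is made up): it factors a single symbol entry into alphabet letters with integer powers, ready to be expanded out with the log rules above.

import sympy as sp

x, y = sp.symbols('x y', positive=True)
alphabet = [x, y, 1 - x, 1 - x * y]  # an invented alphabet, purely for illustration

def expand_entry(entry):
    # Write one symbol entry as (letter, integer power) pairs over the alphabet.
    # Overall constants are dropped, as they would be at symbol level anyway.
    numerator, denominator = sp.fraction(sp.together(entry))
    terms = []
    for poly, sign in [(numerator, 1), (denominator, -1)]:
        _, factors = sp.factor_list(poly)
        for base, power in factors:
            letter = next((a for a in alphabet
                           if sp.simplify(base - a) == 0 or sp.simplify(base + a) == 0),
                          None)
            if letter is None:
                raise ValueError(f"{base} is not in the alphabet (up to sign)")
            terms.append((letter, sign * int(power)))
    return terms

print(expand_entry(x**2 * (1 - x) / y))  # e.g. [(x, 2), (1 - x, 1), (y, -1)] (order may vary)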

That’s arguably the core of symbol methods. It’s how Spradlin and Volovich managed to get a seventeen-page expression down to two lines. Express a symbol in the right alphabet, and it tends to look a lot more simple. And once you know the right alphabet, it’s pretty straightforward to build an ansatz with it and constrain it until you get a candidate function for whatever you’re interested in.

There’s more technical detail I could give here: how to tell whether a symbol actually corresponds to a function, how to take limits and do series expansions and take derivatives and discontinuities…but I’m not sure whether anyone reading this would be interested.

As-is, I’ll just mention that the symbol is only part of the story. In particular, it’s a special case of something called a coproduct, which breaks up polylogarithms into various chunks. Break them down fully until each chunk is just an individual log, and you get the symbol. Break them into larger chunks, and you get other components of the coproduct, consisting of tensor products of polylogarithms with lower transcendental weight. These larger chunks mean we can capture as much of a function’s behavior as we like, while still taking advantage of these sorts of tricks. While in older papers you might have seen mention of “beyond-the-symbol” terms that the symbol couldn’t capture, this doesn’t tend to be a problem these days.