
KITP Conference Retrospective

I’m back from the conference in Santa Barbara, and I thought I’d share a few things I found interesting. (For my non-physicist readers: I know it’s been a bit more technical than usual recently, I promise I’ll get back to some general audience stuff soon!)

James Drummond talked about efforts to extend the hexagon function method I work on to amplitudes with seven (or more) particles. In general, the method involves starting with a guess for what an amplitude should look like, and honing that guess based on behavior in special cases where it’s easier to calculate. In one of those special cases (called the multi-Regge limit), I had thought it would be quite difficult to calculate for more than six particles, but James clarified for me that there’s really only one additional piece needed, and they’re pretty close to having a complete understanding of it.

There were a few talks about ways to think about amplitudes in quantum field theory as the output of a string theory-like setup. There’s been progress pushing to higher quantum-ness, and in understanding the weird web of interconnected theories this setup gives rise to. In the comments, Thoglu asked about one part of this web of theories called Z theory.

Z theory is weird. Most of the theories that come out of this “web” come from a consistent sort of logic: just like you can “square” Yang-Mills to get gravity, you can “square” other theories to get more unusual things. In possibly the oldest known example, you can “square” the part of string theory that looks like Yang-Mills at low energy (open strings) to get the part that looks like gravity (closed strings). Z theory asks: could the open string also come from “multiplying” two theories together? Weirdly enough, the answer is yes: it comes from “multiplying” normal Yang-Mills with a part that takes care of the “stringiness”, a part which Oliver Schlotterer is calling “Z theory”. It’s not clear whether this Z theory makes sense as a theory on its own (for the experts: it may not even be unitary) but it is somewhat surprising that you can isolate a “building block” that just takes care of stringiness.
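
To make the “squaring” concrete: the oldest version is the Kawai–Lewellen–Tye (KLT) relation, which in its simplest, four-particle case writes the gravity amplitude as a product of two Yang-Mills amplitudes weighted by kinematics (up to overall conventions, which vary between references):

$$M_4(1,2,3,4) = -i\, s_{12}\, A_4(1,2,3,4)\, \tilde{A}_4(1,2,4,3), \qquad s_{12} = (p_1+p_2)^2.$$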

Peter Young in the comments asked about the Correlahedron. Scattering amplitudes ask a specific sort of question: if some particles come in from very far away, what’s the chance they scatter off each other and some other particles end up very far away? Correlators ask a more general question, about the relationships of quantum fields at different places and times, of which amplitudes are a special case. Just as the Amplituhedron is a geometrical object that specifies scattering amplitudes (in a particular theory), the Correlahedron is supposed to represent correlators (in the same theory). In some sense (different from the sense above) it’s the “square” of the Amplituhedron, and the process that gets you from it to the Amplituhedron is a geometrical version of the process that gets you from the correlator to the amplitude.

For the Amplituhedron, there’s a reasonably smooth story of how to get the amplitude. News articles tended to say the amplitude was the “volume” of the Amplituhedron, but that’s not quite correct. In fact, to find the amplitude you need to add up, not the inside of the Amplituhedron, but something that becomes infinite at the Amplituhedron’s boundaries. Finding this “something” can be done on a case by case basis, but it gets tricky in more complicated cases.
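
A one-dimensional toy example gives the flavor (this is the standard warm-up from what’s come to be called the “positive geometry” literature, not the Amplituhedron itself): for the interval from $a$ to $b$, the thing you add up is not the length $b-a$ but the canonical form

$$\Omega_{[a,b]} = \frac{dx}{x-a} - \frac{dx}{x-b} = \frac{(b-a)\,dx}{(x-a)(b-x)},$$

which blows up (with residue $\pm 1$) exactly at the boundaries $x=a$ and $x=b$.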

For the Correlahedron, this part of the story is missing: they don’t know how to define this “something”, because the old recipe doesn’t work. Oddly enough, this actually makes me optimistic. This part of the story is something that people working on the Amplituhedron have been trying to avoid for a while, hoping to find a shape where they can more honestly just take the volume. The fact that the old story doesn’t work for the Correlahedron suggests that it might provide some insight into how to build the Amplituhedron in a different way, one that bypasses this problem.

There were several more talks by mathematicians trying to understand various aspects of the Amplituhedron. One of them was by Hugh Thomas, who as a fun coincidence actually went to high school with Nima Arkani-Hamed, one of the Amplituhedron’s inventors. He’s now teamed up with Nima and Jaroslav Trnka to try to understand what it means to be inside the Amplituhedron. In the original setup, they had a recipe to generate points inside the Amplituhedron, but they didn’t have a fully geometrical picture of what put them “inside”. Unlike with a normal shape, with the Amplituhedron you can’t just check which side of the wall you’re on. Instead, they can flatten the Amplituhedron, and observe that, for points “inside”, it winds around them a specific number of times (hence “Unwinding the Amplituhedron”). Flatten it down to a line and you can read this off from a list of flips over your point, an on-off sequence like binary. If you’ve ever heard the buzzword “scattering amplitudes as binary code”, this is where that comes from.
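
The winding number itself is ordinary geometry. As a toy illustration (just the textbook notion, nothing Amplituhedron-specific), here is a short Python sketch that counts how many times a closed curve winds around a point:

```python
import numpy as np

def winding_number(curve, point):
    """Winding number of a closed 2D curve (an array of points)
    around a given point, from the total signed angle swept out."""
    v = curve - point                      # vectors from the point to the curve
    angles = np.arctan2(v[:, 1], v[:, 0])  # direction of each vector
    steps = np.diff(np.append(angles, angles[0]))  # angle steps, closing the loop
    steps = (steps + np.pi) % (2 * np.pi) - np.pi  # wrap each step into [-pi, pi)
    return int(round(steps.sum() / (2 * np.pi)))

# a circle traversed twice winds twice around its center,
# and zero times around any point outside it
t = np.linspace(0, 4 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
print(winding_number(circle, np.array([0.0, 0.0])))  # 2
print(winding_number(circle, np.array([3.0, 0.0])))  # 0
```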

They also have a better understanding of how supersymmetry shows up in the Amplituhedron, which Song He described in his talk. Previously, supersymmetry looked to be quite central, part of the basic geometric shape. Now, they can instead understand it in a different way, with the supersymmetric part coming from derivatives (for the specialists: differential forms) of the part in normal space and time. The encouraging thing is that you can include these sorts of derivatives even if your theory isn’t supersymmetric, to keep track of the various types of particles, and Song provided a few examples. This is important, because it opens up the possibility that something Amplituhedron-like could be found for a non-supersymmetric theory. Along those lines, Nima talked about ways that aspects of the “nice” description of space and time we use for the Amplituhedron can be generalized to other, messier theories.

While he didn’t talk about it at the conference, Jake Bourjaily has a new paper out about a refinement of the generalized unitarity technique I talked about a few weeks back. Generalized unitarity involves matching a “cut up” version of an amplitude to a guess. What Jake is proposing is that in at least some cases you can start with a guess that’s as easy to work with as possible, where each piece of the guess matches up to just one of the “cuts” that you’re checking. Think about it like a game of twenty questions where you’ve divided all possible answers into twenty individual boxes: for each box, you can just ask “is it in this box?”
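
In linear-algebra terms (a toy analogy of my own, not Jake’s actual construction), the cuts act like constraints on the coefficients of your guess. With a generic basis you have to solve a coupled system; with a basis where each piece has exactly one nonzero cut, the coefficients can be read off directly:

```python
import numpy as np

# toy setup: M[j, i] = value of cut j on piece i of the guess
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
cuts = np.array([2.0, 3.0, 1.0])  # the cuts of the amplitude itself

# generic basis: matching every cut means solving a coupled system
c = np.linalg.solve(M, cuts)

# "one box per question": change basis so each piece has a single unit cut
B = np.linalg.inv(M)                  # columns define the new basis pieces
assert np.allclose(M @ B, np.eye(3))  # each new piece matches exactly one cut
# now the coefficients are just the cuts themselves, no system to solve,
# and the amplitude comes out the same either way
assert np.allclose(B @ cuts, c)
```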

Finally, I’ve already talked about the highlight of the conference, so I can direct you to that post for more details. I’ll just mention here that there’s still a fair bit of work to do for Zvi Bern and collaborators to get their result into a form they can check, since the initial output of their setup is quite messy. It’s led to worries about whether they’ll have enough computer power at higher loops, but I’m confident that they still have a few tricks up their sleeves.

The Way to a Mathematician’s Heart Is through a Pi

Want to win over a mathematician? Bake them a pi.

Of course, presentation counts. You can’t just pour a spew of digits.

[Image: runs of repeated digits in the decimal expansions of π and τ]

If you have to, at least season it with 9’s

Ideally, you’ve baked your pi at home, in a comfortable physical theory. You lay out a graph to give it structure, then wrap it in algebraic curves before baking under an integration.

(Sometimes you can skip this part. My mathematician will happily eat graphs and ignore the pi.)

At this point, if your motives are pure (or at least mixed Tate), you have your pi. To make it more interesting, be sure to pair with a well-aged Riemann zeta value. With the right preparation, you can achieve a truly cosmic pi.

[Image: “whirled pies”]

Fine, that last joke was a bit of a stretch. Hope you had a fun pi day!

Have You Given Your Kids “The Talk”?

If you haven’t seen it yet, I recommend reading this delightful collaboration between Scott Aaronson (of Shtetl-Optimized) and Zach Weinersmith (of Saturday Morning Breakfast Cereal). As explanations of a concept beyond the standard popular accounts go, this one is pretty high quality, correcting some common misconceptions about quantum computing.

I especially liked the following exchange:

[Comic panel: the “ontology” exchange]

I’ve complained before about people trying to apply ontology to physics, and I think this gets at the root of one of my objections.

People tend to think that the world should be describable with words. From that perspective, mathematics is just a particular tool, a system we’ve created. If you look at the world in that way, mathematics looks unreasonably effective: its ability to describe the real world seems like a miraculous coincidence.

Mathematics isn’t just one tool though, or just one system. It’s all of them: not just numbers and equations, but knots and logic and everything else. Deep down, mathematics is just a collection of all the ways we’ve found to state things precisely.

Because of that, it shouldn’t surprise you that we “put complex numbers in our ontologies”. Complex numbers are just one way we’ve found to make precise statements about the world, one that comes in handy when talking about quantum mechanics. There doesn’t need to be a “correct” description in words: the math is already stating things as precisely as we know how.

That doesn’t mean that ontology is a useless project. It’s worthwhile to develop new ways of talking about things. I can understand the goal of building up a philosophical language powerful enough to describe the world in terms of words, and if such a language were successful it might well inspire us to ask new scientific questions.

But it’s crucial to remember that there’s real work to be done there. There’s no guarantee that the project will work, that words will end up sufficient. When you put aside our best tools for making precise statements, you’re handicapping yourself, making the problem harder than it needs to be. It’s your responsibility to make sure you’re getting something worthwhile out of it.

Congratulations to Thouless, Haldane, and Kosterlitz!

I’m traveling this week in sunny California, so I don’t have time for a long post, but I thought I should mention that the 2016 Nobel Prize in Physics has been announced. Instead of going to LIGO, as many had expected, it went to David Thouless, Duncan Haldane, and Michael Kosterlitz. LIGO will have to wait for next year.

Thouless, Haldane, and Kosterlitz are condensed matter theorists. While particle physics studies the world at the smallest scales and astrophysics at the largest, condensed matter physics lives in between, explaining the properties of materials on an everyday scale. This can involve inventing new materials, or unusual states of matter, with superconductors probably the best known to the public. Condensed matter gets a lot less press than particle physics, but it’s a much bigger field: overall, the majority of physicists study something under the condensed matter umbrella.

This year’s Nobel isn’t for a single discovery. Rather, it’s for methods developed over the years that introduced topology into condensed matter physics.

Topology often gets described in terms of coffee cups and donuts. In topology, two shapes are the same if you can smoothly change one into another, so a coffee cup and a donut are really the same shape.

[Animation: a coffee mug smoothly deforming into a torus]

Most explanations stop there, which makes it hard to see how topology could be useful for physics. The missing part is that topology studies not just which shapes can smoothly change into each other, but which things, in general, can change smoothly into each other.

That’s important, because in physics most changes are smooth. If two things can’t change smoothly into each other, something special needs to happen to bridge the gap between them.

This has a lot of different implications. Topology means that some materials can be described by a number that’s conserved no matter what (smooth) changes occur, leading to experiments that see specific “levels” rather than a continuous range of outcomes. It means that certain physical setups can’t change smoothly into other ones, which protects those setups from changing: an idea people are investigating in the quest to build a quantum computer, where extremely delicate quantum states can be disrupted by even the slightest change.
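
For a concrete example of such a conserved number, here is a short Python sketch (using the SSH chain, the usual textbook model of a one-dimensional topological material, as my example) computing a winding number that stays locked at an integer under any smooth change of the parameters, jumping only at the special point t1 = t2 where the smoothness assumption breaks down:

```python
import numpy as np

def ssh_winding(t1, t2, n=2001):
    """Winding number of h(k) = t1 + t2*exp(ik) around the origin,
    the topological invariant of the SSH chain."""
    k = np.linspace(-np.pi, np.pi, n)
    h = t1 + t2 * np.exp(1j * k)
    phase = np.unwrap(np.angle(h))  # total phase wound up across the zone
    return int(round((phase[-1] - phase[0]) / (2 * np.pi)))

print(ssh_winding(t1=1.5, t2=1.0))  # 0: the "ordinary" phase
print(ssh_winding(t1=0.5, t2=1.0))  # 1: the "topological" phase
```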

Overall, topology has been enormously important in physics, and Thouless, Haldane, and Kosterlitz deserve a significant chunk of the credit for bringing it into the spotlight.

Thought Experiments, Minus the Thought

My second-favorite Newton fact is that, despite inventing calculus, he refused to use it for his most famous work of physics, the Principia. Instead, he used geometrical proofs, tweaked to smuggle in calculus without admitting it.

Essentially, these proofs were thought experiments. Newton would start with a standard geometry argument, one that would have been acceptable to mathematicians centuries earlier. Then, he’d imagine taking it further, pushing a line or angle to some infinite point. He’d argue that, if the proof worked for every finite choice, then it should work in the infinite limit as well.
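
The modern version of that limiting argument is the definition of the derivative, and you can watch the limit converge numerically. A minimal Python sketch (my illustration, not Newton’s):

```python
# slope of f between x and x + h: one of Newton's finite geometry steps
def slope(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x**2
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, slope(f, 1.0, h))
# prints 3.0, 2.1, 2.01, 2.001: the slopes approach the exact derivative, 2
```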

These thought experiments let Newton argue on the basis of something that looked more rigorous than calculus. However, they also held science back. At the time, only a few people in the world could understand what Newton was doing. It was only later, when Newton’s laws were reformulated in calculus terms, that a wider group of researchers could start doing serious physics.

What changed? If Newton could describe his physics with geometrical thought experiments, why couldn’t everyone else?

The trouble with thought experiments is that they require careful setup, setup that has to be thought through for each new thought experiment. Calculus took Newton’s geometrical thought experiments, and took out the need for thought: the setup was automatically a part of calculus, and each new researcher could build on their predecessors without having to set everything up again.

This sort of thing happens a lot in science. An example from my field is the scattering matrix, or S-matrix.

The S-matrix, deep down, is a thought experiment. Take some particles, and put them infinitely far away from each other, off in the infinite past. Then, let them approach, close enough to collide. If they do, new particles can form, and these new particles will travel out again, infinitely far away in the infinite future. The S-matrix, then, is a metaphorical matrix that tells you, for each possible set of incoming particles, what the probability is to get each possible set of outgoing particles.
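
As a cartoon of the structure (a made-up two-state example, not a real scattering process): the S-matrix is a unitary matrix, and the squared magnitudes of its entries are the probabilities, which sum to one for each incoming state:

```python
import numpy as np

theta = 0.3  # an arbitrary "interaction strength" for this toy model
S = np.array([[np.cos(theta), 1j * np.sin(theta)],
              [1j * np.sin(theta), np.cos(theta)]])
assert np.allclose(S.conj().T @ S, np.eye(2))  # unitarity

# probability of out-state j given in-state i is |S[j, i]|**2
probs = np.abs(S) ** 2
print(probs.sum(axis=0))  # [1. 1.]: each in-state scatters somewhere
```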

In a real collider, the particles don’t come from infinitely far away, and they don’t travel infinitely far before they’re stopped. But the distances are long enough, compared to the sizes relevant for particle physics, that the S-matrix is the right idea for the job.

Like calculus, the S-matrix is a thought experiment minus the thought. When we want to calculate the probability of particles scattering, we don’t need to set up the whole thought experiment all over again. Instead, we can start by calculating, and over time we’ve gotten very good at it.

In general, sub-fields in physics can be divided into those that have found their S-matrices, their thought experiments minus thought, and those that have not. When a topic has to rely on thought experiments, progress is much slower: people argue over the details of each setup, and it’s difficult to build something that can last. It’s only when a field turns the corner, removing the thought from its thought experiments, that people can start making real collaborative progress.

What Does It Mean to Know the Answer?

My sub-field isn’t big on philosophical debates. We don’t tend to get hung up on how to measure an infinite universe, or on how to interpret quantum mechanics. Instead, we develop new calculation techniques, which tends to nicely sidestep all of that.

If there’s anything we do get philosophical about, though, any question with a little bit of ambiguity, it’s this: What counts as an analytic result?

“Analytic” here is in contrast to “numerical”. If all we need is a number and we don’t care if it’s slightly off, we can use numerical methods. We have a computer use some estimation trick, repeating steps over and over again until we have approximately the right answer.

“Analytic”, then, refers to everything else. When you want an analytic result, you want something exact. Most of the time, you don’t just want a single number: you want a function, one that can give you numbers for whichever situation you’re interested in.

It might sound like there’s no ambiguity there. If it’s a function, with sines and cosines and the like, then it’s clearly analytic. If you can only get numbers out through some approximation, it’s numerical. But as the following example shows, things can get a bit more complicated.

Suppose you’re trying to calculate something, and you find the answer is some messy integral. Still, you’ve simplified the integral enough that you can do numerical integration and get some approximate numbers out. What’s more, you can express the integral as an infinite series, so that any finite number of terms will get close to the correct result. Maybe you even know a few special cases, situations where you plug specific numbers in and you do get an exact answer.

It might sound like you only know the answer numerically. As it turns out, though, this is roughly how your computer handles sines and cosines.

When your computer tries to calculate a sine or a cosine, it doesn’t have access to the exact solution all of the time. It does have some special cases, but the rest of the time it’s using an infinite series, or some other numerical trick. Type a random sine into your calculator and it will be just as approximate as if you did a numerical integration.
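
Here is the comparison in miniature, in Python: a truncated Taylor series for sine next to the library function (which is itself computed by some approximation under the hood):

```python
import math

def sin_series(x, terms=10):
    """Approximate sin(x) by the first `terms` terms of its Taylor series."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))  # next odd-power term
    return total

x = 1.2345
print(sin_series(x), math.sin(x))  # agree to around machine precision
```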

So what’s the real difference?

Rather than how we get numbers out, think about what else we know. We know how to take derivatives of sines, and how to integrate them. We know how to take limits, and series expansions. And we know their relations to other functions, including how to express them in terms of other things.
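
That list of operations is exactly what a computer algebra system packages up. In Python with sympy, for instance:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.sin(x), x))          # cos(x): the derivative
print(sp.integrate(sp.sin(x), x))     # -cos(x): the integral
print(sp.limit(sp.sin(x) / x, x, 0))  # 1: a limit
print(sp.series(sp.sin(x), x, 0, 8))  # x - x**3/6 + ...: a series expansion
```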

If you can do that with your integral, then you’ve probably got an analytic result. If you can’t, then you don’t.

What if you have only some of the requirements, but not the others? What if you can take derivatives, but don’t know all of the identities between your functions? What if you can do series expansions, but only in some limits? What if you can do all the above, but can’t get numbers out without a supercomputer?

That’s where the ambiguity sets in.

In the end, whether or not we have the full analytic answer is a matter of degree. The closer we can get to functions that mathematicians have studied and understood, the better grasp we have of our answer and the more “analytic” it is. In practice, we end up with a very pragmatic approach to knowledge: whether we know the answer depends entirely on what we can do with it.

Source Your Common Sense

When I wrote that post on crackpots, one of my inspirations was a particularly annoying Twitter conversation. The guy I was talking to had convinced himself that general relativity was a mistake. He was especially pissed off by the fact that, in GR, energy is not always conserved. Screw Einstein, energy conservation is just common sense! Right?

Think a little bit about why you believe in energy conservation. Is it because you run into a lot of energy in your day-to-day life, and it’s always been conserved? Did you grow up around something that was obviously energy? Or maybe someone had to explain it to you?

[Image: teacher pointing at a map of the world]

Maybe you learned about it…from a physics teacher?

A lot of the time, things that seem obvious only got that way because you were taught them. “Energy” isn’t an intuitive concept, however much it’s misused that way. It’s something defined by physicists because it fills a particular role, a consequence of symmetries in nature. When you learn about energy conservation in school, that’s because it’s one of the simpler ways to explain a much bigger concept, so you shouldn’t be surprised if there are some inaccuracies. If you know where your “common sense” is coming from, you can anticipate when and how it might go awry.
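
The precise statement behind “a consequence of symmetries” is Noether’s theorem. For a system whose Lagrangian $L(q,\dot q)$ has no explicit time dependence (the symmetry: physics works the same today as tomorrow), the equations of motion imply a conserved quantity, and that quantity is what physicists call the energy:

$$E = \dot q\,\frac{\partial L}{\partial \dot q} - L, \qquad \frac{dE}{dt} = \dot q\left(\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q}\right) - \frac{\partial L}{\partial t} = 0,$$

where the bracket vanishes by the equations of motion and the last term by the symmetry.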

Similarly, if, like one of the commenters on my crackpot post, you’re uncomfortable with countable and uncountable infinities, remember that infinity isn’t “common sense” either. It’s something you learned about in a math class, from a math teacher. And just like energy conservation, it’s a simplification of a more precise concept, with epsilons and deltas and all that jazz.

It’s not possible to teach all the nuances of every topic, so naturally most people will hear a partial story. What’s important is to recognize that you heard a partial story, and not enshrine it as “common sense” when the real story comes knocking.

Don’t physicists use common sense, though? What about “physical intuition”?

Physical intuition has a lot of mystique behind it, and is often described as what separates us from the mathematicians. As such, different people mean different things by it…but under no circumstances should it be confused with pure “common sense”. Physical intuition uses analogy and experience. It involves seeing a system and anticipating the sorts of things you can do with it, like playing a game and assuming there’ll be a save button. For these sorts of analogies to work, they generally can’t be built around everyday objects or experiences. Instead, they use physical systems that are “similar” to the one under scrutiny in important ways, while being better understood in others. Crucially, physical intuition involves working in context. It’s not just uncritical acceptance of what one would naively expect.

So when your common sense is tingling, see if you can provide a source. Is that source relevant, experience with a similar situation? Or is it in fact a half-remembered class from high school?