The Many Worlds of Condensed Matter

Physics is the science of the very big and the very small. We study the smallest scales, the fundamental particles that make up the universe, and the largest, stars on up to the universe as a whole.

We also study the world in between, though.

That’s the domain of condensed matter, the study of solids, liquids, and other medium-sized arrangements of stuff. And while it doesn’t make the news as often, it’s arguably the biggest field in physics today.

(In case you’d like some numbers, the American Physical Society has divisions dedicated to different sub-fields. Condensed Matter Physics is almost twice the size of the next biggest division, Particles & Fields. Add in other sub-fields that focus on medium-sized stuff, like solid state physics, optics, or biophysics, and you get a majority of physicists focused on the middle of the distance scale.)

When I started grad school, I didn’t pay much attention to condensed matter and related fields. Beyond the courses in quantum field theory and string theory, my “breadth” courses were on astrophysics and particle physics. But over and over again, from people in every sub-field, I kept hearing the same recommendation:

“You should take Solid State Physics. It’s a really great course!”

At the time, I never understood why. It was only later, once I had some research under my belt, that I realized:

Condensed matter uses quantum field theory!

The same basic framework, describing the world in terms of rippling quantum fields, doesn’t just work for fundamental particles. It also works for materials. Rather than describing the material in terms of its fundamental parts, condensed matter physicists “zoom out” and talk about overall properties, like sound waves and electric currents, treating them as if they were the particles of quantum field theory.

This tends to confuse the heck out of journalists. Not used to covering condensed matter (and sometimes egged on by hype from the physicists), they mix up the metaphorical particles of these systems with the sort of particles made by the LHC, with predictably dumb results.

Once you get past the clumsy journalism, though, this kind of analogy has a lot of value.

Occasionally, you’ll see an article about string theory providing useful tools for condensed matter. This happens, but it’s less widespread than some of the articles make it out to be: condensed matter is a huge and varied field, and string theory applications tend to be of interest to only a small piece of it.

It doesn’t get talked about much, but the dominant trend is actually in the other direction: increasingly, string theorists need to have at least a basic background in condensed matter.

String theory’s curse/triumph is that it can give rise not just to one quantum field theory, but many: a vast array of different worlds obtained by twisting extra dimensions in different ways. Particle physicists tend to study a fairly small range of such theories, looking for worlds close enough to ours that they still fit the evidence.

Condensed matter, in contrast, creates its own worlds. Pick the right material, take the right slice, and you get quantum field theories of almost any sort you like. While you can’t go to higher dimensions than our usual four, you can certainly look at lower ones, at the behavior of currents on a sheet of metal or atoms arranged in a line. This has led some condensed matter theorists to examine a wide range of quantum field theories with one strange behavior or another, theories that wouldn’t have occurred to particle physicists but that, in many cases, are part of the cornucopia of theories you can get out of string theory.

So if you want to explore the many worlds of string theory, the many worlds of condensed matter offer a useful guide. Increasingly, tools from that community, like integrability and tensor networks, are migrating over to ours.

It’s gotten to the point where I genuinely regret ignoring condensed matter in grad school. Parts of it are ubiquitous enough, and useful enough, that some of it is an expected part of a string theorist’s background. The many worlds of condensed matter, as it turned out, were well worth a look.

Pop Goes the Universe and Other Cosmic Microwave Background Games

(With apologies to whoever came up with this “book”.)

Back in February, Ijjas, Steinhardt, and Loeb wrote an article for Scientific American titled “Pop Goes the Universe” criticizing cosmic inflation, the proposal that the universe underwent a period of rapid expansion early in its life, smoothing it out to achieve the (mostly) uniform universe we see today. Recently, Scientific American published a response by Guth, Kaiser, Linde, Nomura, and 29 co-signers. This was followed by a counter-response, bringing the exchange to the usual number of steps for this sort of thing before it dissipates harmlessly into the blogosphere.

In general, string theory, supersymmetry, and inflation tend to be criticized in very similar ways. Each gets accused of being unverifiable, able to be tuned to match any possible experimental result. Each has been claimed to be unfairly dominant, its position as “default answer” more due to the bandwagon effect than the idea’s merits. All three tend to get discussed in association with the multiverse, and blamed for dooming physics as a result. And all are frequently defended with one refrain: “If you have a better idea, what is it?”

It’s probably tempting (on both sides) to view this as just another example of that argument. In reality, though, string theory, supersymmetry, and inflation are all in very different situations. The details matter. And I worry that in this case both sides are too ready to assume the other is just making the “standard argument”, and end up talking past each other.

When people say that string theory makes no predictions, they’re correct in a sense, but off topic: the majority of string theorists aren’t making the sort of claims that require successful predictions. When people say that inflation makes no predictions, if you assume they mean the same thing that people mean when they accuse string theory of making no predictions, then they’re flat-out wrong. Unlike string theorists, most people who work on inflation care a lot about experiment. They write papers filled with predictions, consequences for this or that model if this or that telescope sees something in the near future.

I don’t think Ijjas, Steinhardt, and Loeb were making that kind of argument.

When people say that supersymmetry makes no predictions, there’s some confusion of scope. (Low-energy) supersymmetry isn’t one specific proposal that needs defending on its own. It’s a class of different models, each with its own predictions. Given a specific proposal, one can see if it’s been ruled out by experiment, and predict what future experiments might say about it. Ruling out one model doesn’t rule out supersymmetry as a whole, but it doesn’t need to, because any given researcher isn’t arguing for supersymmetry as a whole: they’re arguing for their particular setup. The right “scope” is between specific supersymmetric models and specific non-supersymmetric models, not both as general principles.

Guth, Kaiser, Linde, and Nomura’s response follows similar lines in defending inflation. They point out that the wide variety of models are subject to being ruled out in the face of observation, and compare to the construction of the Standard Model in particle physics, with many possible parameters under the overall framework of Quantum Field Theory.

Ijjas, Steinhardt, and Loeb’s article certainly looked like it was making this sort of mistake. But as they clarify in the FAQ of their counter-response, they’ve got a more serious objection. They’re arguing that, unlike in the case of supersymmetry or the Standard Model, specific inflation models do not lead to specific predictions. They’re arguing that, because inflation typically leads to a multiverse, any specific model will in fact lead to a wide variety of possible observations. In effect, they’re arguing that the multitude of people busily making predictions based on inflationary models are missing a step in their calculations, underestimating their errors by a huge margin.

This is where I really regret that these arguments usually end after three steps (article, response, counter-response). Here Ijjas, Steinhardt, and Loeb are making what is essentially a technical claim, one that Guth, Kaiser, Linde, and Nomura could presumably respond to with a technical response, after which the rest of us would actually learn something. As-is, I certainly don’t have the background in inflation to know whether or not this point makes sense, and I’d love to hear from someone who does.

One aspect of this exchange that baffled me was the “accusation” that Ijjas, Steinhardt, and Loeb were just promoting their own work on bouncing cosmologies. (I put “accusation” in quotes because while Ijjas, Steinhardt, and Loeb seem to treat it as if it were an accusation, Guth, Kaiser, Linde, and Nomura don’t obviously mean it as one.)

“Bouncing cosmology” is Ijjas, Steinhardt, and Loeb’s answer to the standard “If you have a better idea, what is it?” response. It wasn’t the focus of their article, but while they seem to think this speaks well of them (hence their treatment of “promoting their own work” as if it were an accusation), I don’t. I read a lot of Scientific American growing up, and the best articles focused on explaining a positive vision: some cool new idea, mainstream or not, that could capture the public’s interest. That kind of article could still have included criticism of inflation; you’d want it in there to justify the use of a bouncing cosmology. But by going beyond that, it would have avoided falling into the standard back-and-forth that these arguments tend to follow, and maybe we would have actually learned from the exchange.

What Makes Light Move?

Light always moves at the speed of light.

It’s not alone in this: anything that lacks mass moves at the speed of light. Gluons, if they weren’t constantly interacting with each other, would move at the speed of light. Neutrinos, back when we thought they were massless, were thought to move at the speed of light. Gravitational waves, and by extension gravitons, move at the speed of light.

This is, on the face of it, a weird thing to say. If I say a jet moves at the speed of sound, I don’t mean that it always moves at the speed of sound. Find it in its hangar and hopefully it won’t be moving at all.

And so, people occasionally ask me, why can’t we find light in its hangar? Why does light never stand still? What makes light move?

(For the record, you can make light “stand still” in a material, but that’s because the material is absorbing and reflecting it, so it’s not the “same” light traveling through. Compare the speed of a wave of hands in a stadium versus the speed you could run past the seats.)

This is surprisingly tricky to explain without math. Some people point out that if you want to see light at rest you need to speed up to catch it, but you can’t accelerate enough unless you too are massless. This probably sounds a bit circular. Some people talk about how, from light’s perspective, no time passes at all. This is true, but it seems to confuse more than it helps. Some people say that light is “made of energy”, but I don’t like that metaphor. Nothing is “made of energy”, nor is anything “made of mass”. Mass and energy are properties things can have.

I do like game metaphors though. So, imagine that each particle (including photons, particles of light) is a character in an RPG.


For bonus points, play Light in an RPG.

You can think of energy as the particle’s “character points”. When the particle builds its character it gets a number of points determined by its energy. It can spend those points increasing its “stats”: mass and momentum, via the lesser-known big brother of E=mc^2, E^2=p^2c^2+m^2c^4.

Maybe the particle chooses to play something heavy, like a Higgs boson. Then they spend a lot of points on mass, and don’t have as much to spend on momentum. If they picked something lighter, like an electron, they’d have more to spend, so they could go faster. And if they spent nothing at all on mass, like light does, they could use all of their energy “points” boosting their speed.

Now, it turns out that these “energy points” don’t boost speed one for one, which is why low-energy light isn’t any slower than high-energy light. Instead, speed is determined by the ratio between energy and momentum. When they’re proportional to each other, when E^2=p^2c^2, then a particle is moving at the speed of light.
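If you like, that ratio rule can be sketched in code. Nothing here is beyond the textbook relations E^2 = p^2 c^2 + m^2 c^4 and v = p c^2 / E; the particular energies and the electron mass are just illustrative numbers:

```python
import math

C = 299_792_458.0  # speed of light, in m/s

def speed(energy_j, mass_kg):
    """Speed of a particle, given its total energy and rest mass.

    Uses E^2 = p^2 c^2 + m^2 c^4 to find the momentum, then the
    ratio rule v = p c^2 / E for the speed."""
    rest_energy = mass_kg * C**2
    if energy_j < rest_energy:
        raise ValueError("total energy cannot be below rest energy")
    momentum = math.sqrt(energy_j**2 - rest_energy**2) / C
    return momentum * C**2 / energy_j

# Massless particles move at c regardless of how much energy they have:
print(math.isclose(speed(1e-19, 0.0), C))  # True: low-energy light
print(math.isclose(speed(1e-12, 0.0), C))  # True: high-energy light
# An electron (mass ~9.11e-31 kg) with that same energy is slower:
print(speed(1e-12, 9.11e-31) < C)          # True
```

The key point is visible in the last line of `speed`: only the energy-to-momentum ratio matters, so spending nothing on mass always lands you exactly at c.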

(Why this is so is trickier to explain. You’ll have to trust me or Wikipedia that the math works out.)

Some of you may be happy with this explanation, but others will accuse me of passing the buck. Ok, a photon with any energy will move at the speed of light. But why do photons have any energy at all? And even if they must move at the speed of light, what determines which direction?

Here I think part of the problem is an old physics metaphor, probably dating back to Newton, of a pool table.


A pool table is a decent metaphor for classical physics. You have moving objects following predictable paths, colliding off each other and the walls of the table.

Where people go wrong is in projecting this metaphor back to the beginning of the game. At the beginning of a game of pool, the balls are at rest, racked in the center. Then one of them is hit with the pool cue, and they’re set into motion.

In physics, we don’t tend to have such neat and tidy starting conditions. In particular, things don’t have to start at rest before something whacks them into motion.

A photon’s “start” might come from an unstable Higgs boson produced by the LHC. The Higgs decays, and turns into two photons. Since energy is conserved, these two each must have half of the energy of the original Higgs, including the energy that was “spent” on its mass. This process is quantum mechanical, and with no preferred direction the photons will emerge in a random one.
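For the curious, the bookkeeping in the Higgs’s rest frame is simple enough to sketch. The 125 GeV mass is the measured value; the random direction is just a stand-in for the quantum mechanics:

```python
import math
import random

HIGGS_MASS = 125.0  # in GeV, with c = 1

def higgs_to_two_photons(rng=random):
    """Kinematics of a Higgs decaying to two photons, in the Higgs rest frame.

    Each photon carries half the Higgs's energy (all of which the Higgs had
    "spent" on mass), and the two fly off back-to-back in a random direction."""
    energy = HIGGS_MASS / 2.0
    # Pick a uniformly random direction on the unit sphere:
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    direction = (s * math.cos(phi), s * math.sin(phi), z)
    photon1 = tuple(energy * d for d in direction)
    photon2 = tuple(-p for p in photon1)  # momentum conservation
    return energy, photon1, photon2

e, p1, p2 = higgs_to_two_photons()
print(e)                                # 62.5 GeV per photon
print([a + b for a, b in zip(p1, p2)])  # momenta cancel: [0.0, 0.0, 0.0]
```

Since a photon is massless, each one’s energy equals the size of its momentum, so conservation of energy and momentum together pin down everything except the random direction.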

Photons in the LHC may seem like an artificial example, but in general whenever light is produced it’s due to particles interacting, and conservation of energy and momentum will send the light off in one direction or another.

(For the experts, there is of course the possibility of very low energy soft photons, but that’s a story for another day.)

Not even the beginning of the universe resembles that racked set of billiard balls. The question of what “initial conditions” make sense for the whole universe is a tricky one, but there isn’t a way to set it up where you start with light at rest. It’s not just that it’s not the default option: it isn’t even an available option.

Light moves at the speed of light, no matter what. That isn’t because light started at rest, and something pushed it. It’s because light has energy, and a particle has to spend its “character points” on something.


KITP Conference Retrospective

I’m back from the conference in Santa Barbara, and I thought I’d share a few things I found interesting. (For my non-physicist readers: I know it’s been a bit more technical than usual recently, I promise I’ll get back to some general audience stuff soon!)

James Drummond talked about efforts to extend the hexagon function method I work on to amplitudes with seven (or more) particles. In general, the method involves starting with a guess for what an amplitude should look like, and honing that guess based on behavior in special cases where it’s easier to calculate. In one of those special cases (called the multi-Regge limit), I had thought it would be quite difficult to calculate for more than six particles, but James clarified for me that there’s really only one additional piece needed, and they’re pretty close to having a complete understanding of it.

There were a few talks about ways to think about amplitudes in quantum field theory as the output of a string theory-like setup. There’s been progress pushing to higher quantum-ness, and in understanding the weird web of interconnected theories this setup gives rise to. In the comments, Thoglu asked about one part of this web of theories called Z theory.

Z theory is weird. Most of the theories that come out of this “web” come from a consistent sort of logic: just like you can “square” Yang-Mills to get gravity, you can “square” other theories to get more unusual things. In possibly the oldest known example, you can “square” the part of string theory that looks like Yang-Mills at low energy (open strings) to get the part that looks like gravity (closed strings). Z theory asks: could the open string also come from “multiplying” two theories together? Weirdly enough, the answer is yes: it comes from “multiplying” normal Yang-Mills with a part that takes care of the “stringiness”, a part which Oliver Schlotterer is calling “Z theory”. It’s not clear whether this Z theory makes sense as a theory on its own (for the experts: it may not even be unitary) but it is somewhat surprising that you can isolate a “building block” that just takes care of stringiness.

Peter Young in the comments asked about the Correlahedron. Scattering amplitudes ask a specific sort of question: if some particles come in from very far away, what’s the chance they scatter off each other and some other particles end up very far away? Correlators ask a more general question, about the relationships of quantum fields at different places and times, of which amplitudes are a special case. Just as the Amplituhedron is a geometrical object that specifies scattering amplitudes (in a particular theory), the Correlahedron is supposed to represent correlators (in the same theory). In some sense (different from the sense above) it’s the “square” of the Amplituhedron, and the process that gets you from it to the Amplituhedron is a geometrical version of the process that gets you from the correlator to the amplitude.

For the Amplituhedron, there’s a reasonably smooth story of how to get the amplitude. News articles tended to say the amplitude was the “volume” of the Amplituhedron, but that’s not quite correct. In fact, to find the amplitude you need to add up, not the inside of the Amplituhedron, but something that goes infinite at the Amplituhedron’s boundaries. Finding this “something” can be done on a case by case basis, but it gets tricky in more complicated cases.

For the Correlahedron, this part of the story is missing: they don’t know how to define this “something”, the old recipe doesn’t work. Oddly enough, this actually makes me optimistic. This part of the story is something that people working on the Amplituhedron have been trying to avoid for a while, to find a shape where they can more honestly just take the volume. The fact that the old story doesn’t work for the Correlahedron suggests that it might provide some insight into how to build the Amplituhedron in a different way, that bypasses this problem.

There were several more talks by mathematicians trying to understand various aspects of the Amplituhedron. One of them was by Hugh Thomas, who as a fun coincidence actually went to high school with Nima Arkani-Hamed, one of the Amplituhedron’s inventors. He’s now teamed up with Nima and Jaroslav Trnka to try to understand what it means to be inside the Amplituhedron. In the original setup, they had a recipe to generate points inside the Amplituhedron, but they didn’t have a fully geometrical picture of what put them “inside”. Unlike with a normal shape, with the Amplituhedron you can’t just check which side of the wall you’re on. Instead, they can flatten the Amplituhedron, and observe that it winds around points “inside” it a specific number of times (hence “Unwinding the Amplituhedron”). Flatten it down to a line and you can read this off from the list of flips over your point, an on-off sequence like binary. If you’ve ever heard the buzzword “scattering amplitudes as binary code”, this is where that comes from.

They also have a better understanding of how supersymmetry shows up in the Amplituhedron, which Song He talked about in his talk. Previously, supersymmetry looked to be quite central, part of the basic geometric shape. Now, they can instead understand it in a different way, with the supersymmetric part coming from derivatives (for the specialists: differential forms) of the part in normal space and time. The encouraging thing is that you can include these sorts of derivatives even if your theory isn’t supersymmetric, to keep track of the various types of particles, and Song provided a few examples in his talk. This is important, because it opens up the possibility that something Amplituhedron-like could be found for a non-supersymmetric theory. Along those lines, Nima talked about ways that aspects of the “nice” description of space and time we use for the Amplituhedron can be generalized to other messier theories.

While he didn’t talk about it at the conference, Jake Bourjaily has a new paper out about a refinement of the generalized unitarity technique I talked about a few weeks back. Generalized unitarity involves matching a “cut up” version of an amplitude to a guess. What Jake is proposing is that in at least some cases you can start with a guess that’s as easy to work with as possible, where each piece of the guess matches up to just one of the “cuts” that you’re checking. Think about it like a game of twenty questions where you’ve divided all possible answers into twenty individual boxes: for each box, you can just ask “is it in this box?”

Finally, I’ve already talked about the highlight of the conference, so I can direct you to that post for more details. I’ll just mention here that there’s still a fair bit of work to do for Zvi Bern and collaborators to get their result into a form they can check, since the initial output of their setup is quite messy. It’s led to worries about whether they’ll have enough computer power at higher loops, but I’m confident that they still have a few tricks up their sleeves.

Scattering Amplitudes at KITP

I’ve been visiting the Kavli Institute for Theoretical Physics in Santa Barbara for a program on scattering amplitudes. This week they’re having a conference, so I don’t have time to say very much.


The conference logo, on the other hand, seems to be saying quite a lot

We’ve had talks from a variety of corners of amplitudes, with major themes including the web of theories that can sort of be described by string theory-esque models, the amplituhedron, and theories you can “square” to get other theories. I’m excited about Zvi Bern’s talk at the end of the conference, which will describe the progress I talked about last week. There’s also been recent progress on understanding the amplituhedron, which I will likely post about in the near future.

We also got an early look at Whispers of String Theory, a cute short documentary filmed at the IGST conference.

The Road to Seven-Loop Supergravity

There’s an obvious way to put together a theory of quantum gravity. And it doesn’t work.

Do the same thing you would with any other theory, and you get infinity. You get repeated infinities, an infinity of infinities. And while you could fix one or two infinities, fixing an infinite number requires giving up an infinity of possible predictions, so in the end your theory predicts nothing.

String theory fixes this with its own infinity, the infinite number of ways a string can vibrate. Because this infinity is organized and structured and well-understood, you’re left with a theory that is still at least capable of making predictions.

(Note that this is an independent question from whether string theory can make predictions for experiments in the real world. This is a much more “in-principle” statement: if we knew everything we might want to about physics, all the fields and particles and shapes of the extra dimensions, we could use string theory to make predictions. Even if we knew all of that, we still couldn’t make predictions from naive quantum gravity.)

Are there ways to fix the problem that don’t involve an infinity of vibrations? Or at least, to fix part of the problem?

That’s what Zvi Bern, John Joseph Carrasco, Henrik Johansson, and a growing cast of collaborators have been trying to find out.

They’re investigating N=8 supergravity, a theory that takes gravity and adds on a host of related particles. It’s one of the easiest theories to get from string theory, by curling up extra dimensions in a particularly simple way and ignoring higher-energy vibrations.

Bern, along with Lance Dixon and David Kosower, invented the generalized unitarity technique I talked about last week. Along with Carrasco and Johansson, he figured out another important trick: the idea that you can do calculations in gravity by squaring the appropriate part of calculations in Yang-Mills theory. For N=8 supergravity, the theory you need to square is my favorite theory, N=4 super Yang-Mills.

Using this, they started pushing forward, calculating approximations to greater and greater precision (more and more loops).

What they found, at each step, was that N=8 supergravity behaved better than expected. In fact, it behaved like N=4 super Yang-Mills.

N=4 super Yang-Mills is special, because in four dimensions (three space and one time, the dimensions we’re used to in daily life) there are no infinities to fix. In a world with more dimensions, though, you start getting infinities, and with more and more loops you need fewer and fewer dimensions to see them.

N=8 supergravity, unexpectedly, was giving infinities in the same dimensions that N=4 super Yang-Mills did (and no earlier). If it kept doing that, you might guess that it also had no infinities in four dimensions. You might wonder if, at least loop by loop, N=8 supergravity could be a way to fix quantum gravity without string theory.

Of course, you’d only really know if you could check in four dimensions.

If you want to check in four dimensions, though, you run into a problem. The fewer dimensions you’re looking at, the more loops you need before N=8 supergravity could possibly give infinity. In four dimensions, you need a forbidding seven loops of precision.

(To compare, the highest precision of things we’ve actually tested in the real world is four loops.)

Still, Bern, Carrasco, and Johansson were up to the challenge. Along with Lance Dixon, David Kosower, and Radu Roiban, they looked at three loops, calculating an interaction of four gravitons, and the pattern continued. Four loops, and it was still going strong.

At around this time, I had just started grad school. My first project was a cumbersome numerical calculation. To keep me motivated, my advisor mentioned that the work I was doing would be good preparation for a much grander project: the calculation of whether the four-graviton interaction in N=8 supergravity diverges at seven loops. All I’d have to do was wait for Bern and collaborators to get there.

I named this blog “4 gravitons and a grad student”, and hoped I would get a chance to contribute.

And then something unexpected happened. They got stuck at five loops.

The method they were using, generalized unitarity, is an ansatz-based method. You start with a guess, then refine it. As such, the method is ultimately only as good as your guess.

Their guesses, in general, were pretty good. The trick they were using, squaring N=4 to get N=8, requires a certain type of guess: one in which the pieces they square have similar relationships to the different types of charge in Yang-Mills theory. There’s still an infinite number of guesses that can obey this, so they applied more restrictions, expectations based on other calculations, to get something more manageable. This worked at three loops, and worked (with a little extra thought) at four loops.

But at five loops they were stuck. They couldn’t find anything, with their restrictions, that gave the correct answer when “cut up” by generalized unitarity. And while they could drop some restrictions, if they dropped too many they’d end up with far too general a guess, something that could take months of computer time to solve.

So they stopped.

They did quite a bit of interesting work in the meantime. They found more theories they could square to get gravity theories, of more and more unusual types. They calculated infinities in other theories, and found surprises there too, other cases where infinities didn’t show up when they were “supposed” to. But for some time, the N=8 supergravity calculation was stalled.

And in the meantime, I went off in another direction, which long-time readers of this blog already know about.

Recently, though, they’ve broken the stall.

What they realized is that the condition on their guess, that the parts they square be related like Yang-Mills charges, wasn’t entirely necessary. Instead, they could start with a “bad” guess, and modify it, using the failure of those relations to fill in the missing pieces.

It looks like this is going to work.

We’re all at an amplitudes program right now in Santa Barbara. Walking through the halls of the KITP, I overhear conversations about five loops. They’re paring things down, honing their code, getting rid of the last few bugs, and checking their results.

They’re almost there, and it’s exciting. It looks like finally things are moving again, like the train to seven loops has once again left the station.

Increasingly, they’re beginning to understand the absent infinities, to see that they really are due to something unexpected and new.

N=8 supergravity isn’t going to be the next theory of everything. (For one, you can’t get chiral fermions out of it.) But if it really has no infinities at any loop, that tells us something about what a theory of quantum gravity is allowed to be, about the minimum necessary to at least make sense on a loop-by-loop level.

And that, I think, is worth being excited about.

Generalized Unitarity: The Frankenstein Method for Amplitudes

This is going to be a bit more technical than my usual, but you were warned.

There are a few things you’ll need to know to understand this post.

First, you should know that when we calculate probabilities of things happening in particle physics, we can do it by drawing Feynman diagrams, pictures of particles traveling and interacting. These diagrams can have loops, and the particle in the loop can have any momentum, from zero on up to infinity: you have to add up all the possibilities to get whatever you’re trying to calculate.

Second, you should understand that the “particles” in these loops aren’t really particles. They’re “virtual particles”, better understood as disturbances in quantum fields. Matt Strassler has a very nice article about this. In particular, these “particles” don’t have to obey E=mc^2 (or rather, if we include kinetic energy, E^2=p^2 c^2+m^2 c^4, where p is the momentum).

You can imagine a space that the momentum and energy “live in”. It’s got three dimensions for the three directions momentum can have, and one more dimension for the energy. Virtual particles can live anywhere in this four-dimensional space, but real particles have to live on a “shell” of points that obey E^2=p^2 c^2+m^2 c^4. If you’ve heard physicists say “on-shell” or “off-shell”, they’re referring to whether a particle is virtual, a quantum mechanical disturbance (and thus lives anywhere in the space) or a real classical particle (living on this “shell”).
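That “shell” condition is easy to state in code. A minimal check, in units where c = 1 (the tolerance is just there to absorb floating-point error):

```python
import math

def on_shell(energy, px, py, pz, mass, tol=1e-9):
    """True if a four-momentum sits on the mass shell E^2 = p^2 + m^2 (c = 1)."""
    p_squared = px**2 + py**2 + pz**2
    return math.isclose(energy**2, p_squared + mass**2, rel_tol=0, abs_tol=tol)

# A real massless particle: its energy equals the size of its momentum.
print(on_shell(5.0, 3.0, 4.0, 0.0, mass=0.0))  # True: on the shell
# A virtual particle in a loop need not obey the relation at all:
print(on_shell(5.0, 1.0, 1.0, 1.0, mass=0.0))  # False: off-shell
```

In the four-dimensional energy-momentum space, the first point lies on the shell and the second floats freely off it, which is exactly the on-shell/off-shell distinction in the text.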

Third, you should appreciate that in quantum physics, in Scott Aaronson’s words, we put complex numbers in our ontologies. Often, quantum weirdness shows itself when we look at our calculations as functions of complex numbers.

Let’s say I’m calculating an amplitude with one loop, and I draw a diagram like this:


Unitarity is how particle physicists say “all probabilities have to add up to one”. Since we have complex numbers in our ontologies, this statement is more complicated than it looks. One thing it ends up implying is that if I calculate an amplitude from the one-loop diagram above, its imaginary part will be given by multiplying together two simpler amplitudes:


Here you can imagine that I took a pair of scissors and “cut” the diagram in two along the dashed line. Now that the diagram has been “cut”, the particles I cut through are no longer part of a loop, so they’re no longer virtual: they’re real, on-shell particles.

If I wanted, I could keep “cutting” the diagram, generalizing this implication of unitarity. (For those who know some complex analysis, this involves taking residues.) I could cut all of the lines in the loop, like this:


Now something interesting happens. Here I’ve forced all four of the particles in the loop to be “on-shell”, to obey E^2=p^2 c^2+m^2 c^4. Previously, the momentum and energy in the loop were entirely free, living in their four-dimensional space. Now, though, they must obey four equations. And for those who’ve seen some algebra, four independent equations and four unknowns give us one solution. By cutting all of these particles, we’ve killed all of the freedom that the loop momentum had. Instead of the living, quantum amplitude we had, we’ve cut it up into a bunch of dead, classical parts.

Why do this?

Well, suppose we have a guess for what the full amplitude should be. We’ve still got some uncertainty in our guess: it’s an ansatz.

If we wanted to check our guess, to fix the uncertainty in our ansatz, we could compare it to the full amplitude. But then we’d have to calculate the full quantum amplitude, and that’s hard.

It’s a lot easier, though, to calculate those “dead” classical amplitudes.

That’s the method we call “generalized unitarity”. We stitch together these easier-to-calculate, “dead” amplitudes. Enough different stitching patterns, and we can fix all the uncertainty in our ansatz, ending up with a unique correct answer without ever doing the full quantum calculation. Like Frankenstein, from dead parts we’ve assembled a living thing.
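To illustrate the logic (and only the logic: nothing below is a real amplitude), here is a deliberately tiny toy. The “ansatz” is a polynomial with three unknown coefficients, and the “cuts” are stand-ins for simpler quantities you can evaluate cheaply; matching the ansatz to enough cuts fixes it completely:

```python
import numpy as np

# Toy "full amplitude": pretend this is hard to compute in general,
# but easy to evaluate in a few special "cut" configurations.
def true_amplitude(x):
    return 2.0 + 3.0 * x - 1.0 * x**2

# Toy "ansatz": c0 + c1*x + c2*x^2, with unknown coefficients c0, c1, c2.
# Three unknowns, so three "cuts" (special evaluation points) suffice.
cut_points = [0.0, 1.0, 2.0]
matrix = np.array([[x**k for k in range(3)] for x in cut_points])
cut_data = np.array([true_amplitude(x) for x in cut_points])

# Stitch the cheap "cut" data together into the full answer:
coefficients = np.linalg.solve(matrix, cut_data)
print(coefficients)  # ~ [2, 3, -1]: the full answer, from cuts alone
```

The real method is vastly more involved, of course: the ansatz is a sum of Feynman-like integrals and the “cuts” are products of on-shell amplitudes. But the shape of the argument is the same, a linear system that pins down every free coefficient without ever doing the full quantum calculation.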


It’s off-shell!

How well does this work?

That depends on how good the ansatz is. The ansätze for one loop are very well understood, and for two loops the community is getting there. For higher loops, you have to be either smart or lucky. I happen to know some people who are both; I’ll be talking about them next week.