Tag Archives: quantum field theory

More Travel

I’m visiting the Niels Bohr Institute this week, on my way back from Amplitudes.


You might recognize the place from old conference photos.

Amplitudes itself was nice. There weren’t any surprising new developments, but a lot of little “aha” moments when one of the speakers explained something I’d heard vague rumors about. I figured I’d mention a few of the things that stood out. Be warned, this is going to be long and comparatively jargon-heavy.

The conference organizers were rather daring in scheduling Nima Arkani-Hamed for the first talk, as Nima has a tendency to arrive at the last minute and talk for twice as long as you ask him to. Miraculously, though, things worked out, if only barely: Nima arrived at the wrong campus and ran most of the way back, showing up within five minutes of the start of the conference. He also stuck to his allotted time, possibly out of courtesy to his student, Yuntao Bai, who was speaking next.

Between the two of them, Nima and Yuntao covered an interesting development, tying the Amplituhedron together with the string theory-esque picture of scattering amplitudes pioneered by Freddy Cachazo, Song He, and Ellis Ye Yuan (or CHY). There’s a simpler (and older) Amplituhedron-like object called the associahedron that can be thought of as what the Amplituhedron looks like on the surface of a string, and CHY’s setup can be thought of as a sophisticated map that takes this object and turns it into the Amplituhedron. It was nice to hear from both Nima and his student on this topic, because Nima’s talks are often high on motivation but low on detail, so it was great that Yuntao was up next to fill in the blanks.

Anastasia Volovich talked about Landau singularities, a topic I’ve mentioned before. What I hadn’t appreciated was how much they can do with them at this point. Originally, Juan Maldacena had suggested that these singularities, mathematical points first investigated by Landau in the 60’s that determine the behavior of amplitudes, might explain some of the simplicity we’ve observed in N=4 super Yang-Mills. They ended up not being enough by themselves, but what Volovich and collaborators are discovering is that with a bit of help from the Amplituhedron they explain quite a lot. In particular, if they start with the Amplituhedron and do a procedure similar to Landau’s, they can find the simpler set of singularities allowed by N=4 super Yang-Mills, at least for the examples they’ve calculated. It’s still a bit unclear how this links to their previous investigations of these things in terms of cluster algebras, but it sounds like they’re making progress.

Dmitry Chicherin gave me one of those minor “aha” moments. One big useful fact about scattering amplitudes in N=4 super Yang-Mills is that they’re “dual” to different mathematical objects called Wilson loops, a fact which allows us to compare to the “POPE” approach of Basso, Sever, and Vieira. Chicherin asked: what if you’re not calculating a scattering amplitude or a Wilson loop, but something halfway in between? Interestingly, this has an answer, with the “halfway between” objects having a similar duality among themselves.

Yorgos Papathanasiou talked about work I’ve been involved with. I’ll probably cover it in detail in another post, so for now I’ll just mention that we’re up to six loops!

Andy Strominger talked about soft theorems. It’s always interesting seeing people who don’t traditionally work on amplitudes giving talks at Amplitudes. There’s a range of responses, from integrability people (who are basically welcomed like family) to people working on fairly unrelated areas with some “amplitudes” connection (met with yawns except from the few people interested in the connection). The response to Strominger was neither welcome nor boredom, but lively debate. He’s clearly doing something interesting, but many specialists worried he was ignorant of important no-go results in the field that could hamstring some of his bolder conjectures.

The second day focused on methods for more practical calculations, and had the overall effect of making me really want to clean up my code. Tiziano Peraro’s finite field methods in particular look like they could be quite useful. There were also two competing bases of integrals on display: von Manteuffel’s finite integrals, and, later in the conference, Rutger Boels’s uniformly transcendental integrals. Both seem to have their own virtues, and I ended up asking Rob Schabinger if it was possible to combine the two, with the result that he’s apparently now looking into it.
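
For readers curious what “finite field methods” means in practice, here’s a minimal sketch (in Python) of one ingredient: you do the whole calculation with integers modulo a large prime, which is fast and exact, and only at the very end lift the answer back to ordinary fractions. The function and numbers below are purely illustrative, a generic textbook-style reconstruction rather than Peraro’s actual implementation.

```python
from fractions import Fraction
from math import gcd, isqrt

def rational_reconstruct(u, m):
    """Lift an integer u (mod m) back to a fraction r/s with |r|, s <= sqrt(m/2).

    Standard extended-Euclidean ("Wang") rational reconstruction; fails if no
    such fraction exists, which in practice means the prime m was too small.
    """
    bound = isqrt(m // 2)
    r0, r1 = m, u % m
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound or gcd(r1, abs(t1)) != 1:
        raise ValueError("reconstruction failed; try a larger prime")
    return Fraction(r1, t1) if t1 > 0 else Fraction(-r1, -t1)

# Example: the fraction 3/7 represented modulo the prime 101 is
# 3 * 7^(-1) = 3 * 29 = 87 (mod 101); reconstruction recovers 3/7.
print(rational_reconstruct(87, 101))  # -> 3/7
```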

The more practical talks that day had a clear focus on calculations with two loops, which are becoming increasingly viable for LHC-relevant calculations. From talking to people who work on this, I get the impression that the goal of these calculations isn’t so much to find new physics as to confirm and investigate new physics found via other methods. Things are complicated enough at two loops that for the moment it isn’t feasible to describe what all the possible new particles might do at that order, and instead the goal is to understand the standard model well enough that if new physics is noticed (likely based on one-loop calculations) then the details can be pinned down by two-loop data. But this picture could conceivably change as methods improve.

Wednesday was math-focused. We had a talk by Francis Brown on his conjecture of a cosmic Galois group. This is a topic I knew a bit about already, since it’s involved in something I’ve been working on. Brown’s talk cleared up some things, but also shed light on the vagueness of the proposal. As with Yorgos’s talk, I’ll probably cover more about this in a future post, so I’ll skip the details for now.

There was also a talk by Samuel Abreu on a much more physical picture of the “symbols” we calculate with. This is something I’ve seen presented before by Ruth Britto, and it’s a setup I haven’t looked into as much as I ought to. It does seem at the moment that they’re limited to one loop, which is a definite downside. Other talks discussed elliptic integrals, the bogeyman that we still can’t deal with by our favored means but that people are at least understanding better.

The last talk on Wednesday before the hike was by David Broadhurst, who’s quite a character in his own right. Broadhurst sat in the front row and asked a question after nearly every talk, usually bringing up papers at least fifty years old, if not one hundred and fifty. At the conference dinner he was exactly the right person to read the Address to the Haggis, resurrecting a thick Scottish accent from his youth. Broadhurst’s techniques for handling high-loop elliptic integrals are quite impressively powerful, leaving me wondering if the approach can be generalized.

Thursday focused on gravity. Radu Roiban gave a better idea of where he and his collaborators are on the road to seven-loop supergravity and what the next bottlenecks are along the way. Oliver Schlotterer’s talk was another one of those “aha” moments, helping me understand a key difference between two senses in which gravity is Yang-Mills squared (the Kawai-Lewellen-Tye relations and BCJ). In particular, the latter is much more dependent on specifics of how you write the scattering amplitude, so to the extent that you can prove something more like the former at higher loops (the original was only for trees, unlike BCJ) it’s quite valuable. Schlotterer has managed to do this at one loop, using the “Q-cut” method I’ve (briefly) mentioned before. The next day’s talk by Emil Bjerrum-Bohr focused more heavily on these Q-cuts, including a more detailed example at two loops than I’d seen that group present before.

There was also a talk by Walter Goldberger about using amplitudes methods for classical gravity, a subject I’ve looked into before. It was nice to see a more thorough presentation of those ideas, including a more honest appraisal of which amplitudes techniques are really helpful there.

There were other interesting topics, but I’m already way over my usual post length, so I’ll sign off for now. Videos from all but a few of the talks are now online, so if you’re interested you should watch them on the conference page.

Bootstrapping in the Real World

I’ll be at Amplitudes, my subfield’s big yearly conference, next week, so I don’t have a lot to talk about. That said, I wanted to give a shout-out to my collaborator and future colleague Andrew McLeod, who is a co-author (along with Øyvind Almelid, Claude Duhr, Einan Gardi, and Chris White) on a rather cool paper that went up on arXiv this week.

Andrew and I work on “bootstrapping” calculations in quantum field theory. In particular, we start with a guess for what the result will be based on a specific set of mathematical functions (in my case, “hexagon functions” involving interactions of six particles). We then narrow things down, using other calculations that by themselves only predict part of the result, until we know the right answer. The metaphor here is that we’re “pulling ourselves up by our own bootstraps”, skipping a long calculation by essentially just guessing the answer.
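
To make the “guess and constrain” idea concrete, here’s a tiny toy version in Python (using sympy). The basis functions, constraints, and numbers here are all made up for illustration; the real hexagon-function bootstrap works the same way in spirit, just with vastly more functions and far more sophisticated constraints.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
c0, c1, c2 = sp.symbols('c0 c1 c2')

# Step 1: the "guess" -- a linear combination of candidate functions
# with unknown coefficients.
ansatz = c0 * sp.log(x)**2 + c1 * sp.log(x) + c2

# Step 2: impose partial information the true answer must satisfy
# (here, its value and first two derivatives at x = 1).
constraints = [
    sp.Eq(ansatz.subs(x, 1), 0),
    sp.Eq(sp.diff(ansatz, x).subs(x, 1), 2),
    sp.Eq(sp.diff(ansatz, x, 2).subs(x, 1), 0),
]

# Step 3: solve for the coefficients -- pinning down the answer without
# ever doing the long direct calculation.
solution = sp.solve(constraints, [c0, c1, c2])
print(sp.expand(ansatz.subs(solution)))  # -> log(x)**2 + 2*log(x)
```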

This method has worked pretty well…in a toy model anyway. The calculations I’ve done with it use N=4 super Yang-Mills, a simpler cousin of the theories that describe the real world. There, fewer functions can show up, so our guess is much less unwieldy than it would be otherwise.

What’s impressive about Andrew and co.’s new paper is that they apply this method, not to N=4 super Yang-Mills, but to QCD, the theory that describes quarks and gluons in the real world. This is exactly the sort of thing I’ve been hoping to see more of: these methods built into something that can help with real, useful calculations.

Currently, what they can do is still fairly limited. For the particular problem they’re looking at, the functions required ended up being relatively simple, involving interactions between at most four particles. So far, they’ve just reproduced a calculation done by other means. Going further (more “loops”) would involve interactions between more particles, as well as mixing different types of functions (different “transcendental weight”), either of which make the problem much more complicated.

That said, the simplicity of their current calculation is also a reason to be optimistic.  Their starting “guess” had just thirteen parameters, while the one Andrew and I are working on right now (in N=4 super Yang-Mills) has over a thousand. Even if things get a lot more complicated for them at the next loop, we’ve shown that “a lot more complicated” can still be quite doable.

So overall, I’m excited. It looks like there are contexts in which one really can “bootstrap” up calculations in a realistic theory, and that’s a method that could end up really useful.

You Can’t Smooth the Big Bang

As a kid, I was fascinated by cosmology. I wanted to know how the universe began, possibly disproving gods along the way, and I gobbled up anything that hinted at the answer.

At the time, I had to be content with vague slogans. As I learned more, I could match the slogans to the physics, to see what phrases like “the Big Bang” actually meant. A large part of why I went into string theory was to figure out what all those documentaries are actually about.

In the end, I didn’t end up working on cosmology, due to my ignorance of a few key facts while in college (mostly, who Vilenkin was). Thus, while I could match some of the old popularization stories to the science, there were a few I never really understood. In particular, there were two claims I never quite saw fleshed out: “The universe emerged from nothing via quantum tunneling” and “According to Hawking, the big bang was not a singularity, but a smooth change with no true beginning.”

As a result, I’m delighted that I’ve recently learned the physics behind these claims, in the context of a spirited take-down of both by Perimeter’s Director Neil Turok.

[Photo of Neil Turok, credit Jens Langen]

My boss

Neil held a surprise string group meeting this week to discuss the paper I linked above, “No smooth beginning for spacetime” with Job Feldbrugge and Jean-Luc Lehners, as well as earlier work with Steffen Gielen. In it, he talked about problems in the two proposals I mentioned: Hawking’s suggestion that the big bang was smooth with no true beginning (really, the Hartle-Hawking no boundary proposal) and the idea that the universe emerged from nothing via quantum tunneling (really, Vilenkin’s tunneling from nothing proposal).

In popularization-speak, these two proposals sound completely different. In reality, though, they’re quite similar (and as Neil argues, they end up amounting to the same thing). I’ll steal a picture from his paper to illustrate:

[Figure from “No smooth beginning for spacetime”]

The picture on the left depicts the universe under the Hartle-Hawking proposal, with time increasing upwards on the page. As the universe gets older, it looks like the expanding (de Sitter) universe we live in. At the beginning, though, there’s a cap, one on which time ends up being treated not in the usual way (Lorentzian space) but on the same footing as the other dimensions (Euclidean space). This lets space be smooth, rather than bunching up in a big bang singularity. After treating time in this way the result is reinterpreted (via a quantum field theory trick called Wick rotation) as part of normal space-time.
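
(For the mathematically inclined: the substitution behind Wick rotation is $t \to -i\tau$, which turns the Lorentzian line element $ds^2 = -dt^2 + d\vec{x}^2$ into the Euclidean one $ds^2 = d\tau^2 + d\vec{x}^2$, so that on the “cap” time really is on the same footing as the other dimensions.)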

What’s the connection to Vilenkin’s tunneling picture? Well, when we talk about quantum tunneling, we also end up describing it with Euclidean space. Saying that the universe tunneled from nothing and saying it has a Euclidean “cap” then end up being closely related claims.

Before Neil’s work these two proposals weren’t thought of as the same because they were thought to give different results. What Neil is arguing is that this is due to a fundamental mistake on Hartle and Hawking’s part. Specifically, Neil is arguing that the Wick rotation trick that Hartle and Hawking used doesn’t work in this context, when you’re trying to calculate small quantum corrections for gravity. In normal quantum field theory, it’s often easier to go to Euclidean space and use Wick rotation, but for quantum gravity Neil is arguing that this technique stops being rigorous. Instead, you should stay in Lorentzian space, and use a more powerful mathematical technique called Picard-Lefschetz theory.

Using this technique, Neil found that Hartle and Hawking’s nicely behaved result was mistaken, and the real result of what Hartle and Hawking were proposing looks more like Vilenkin’s tunneling proposal.

Neil then tried to see what happens when there’s some small perturbation from a perfect de Sitter universe. In general in physics, if you want to trust a result it ought to be stable: small changes should stay small. Otherwise, you’re not really starting from the right point, and you should instead be looking at wherever the changes end up taking you. What Neil found was that the Hartle-Hawking and Vilenkin proposals weren’t stable. If you start with a small wiggle in your no-boundary universe you get, not the purple middle drawing with small wiggles, but the red one with wiggles that rapidly grow unstable. The implication is that the Hartle-Hawking and Vilenkin proposals aren’t just secretly the same: neither of them can describe a stable universe.

Neil argues that this problem is quite general, and happens under the following conditions:

  1. A universe that begins smoothly and semi-classically (where quantum corrections are small) with no sharp boundary,
  2. with a positive cosmological constant (the de Sitter universe mentioned earlier),
  3. under which the universe expands many times, allowing the small fluctuations to grow large.

If the universe avoids one of those conditions (maybe the cosmological constant changes in the future and the universe stops expanding, for example) then you might be able to avoid Neil’s argument. But if not, you can’t have a smooth semi-classical beginning and still have a stable universe.

Now, no debate in physics ends just like that. Hartle (and collaborators) don’t disagree with Neil’s insistence on Picard-Lefschetz theory, but they argue there’s still a way to make their proposal work. Neil mentioned at the group meeting that he thinks even the new version of Hartle’s proposal doesn’t solve the problem, and he’s been working out the calculation with his collaborators to make sure.

Often, one hears about an idea from science popularization and then it never gets mentioned again. The public hears about a zoo of proposals without ever knowing which ones worked out. I think child-me would appreciate hearing what happened to Hawking’s proposal for a universe with no boundary, and to Vilenkin’s proposal for a universe emerging from nothing. Adult-me certainly does. I hope you do too.

An Amplitudes Flurry

Now that we’re finally done with flurries of snow here in Canada, the arXiv has been hit with a flurry of amplitudes papers over the last week.


We’re also seeing a flurry of construction, but that’s less welcome.

Andrea Guerrieri, Yu-tin Huang, Zhizhong Li, and Congkao Wen have a paper on what are known as soft theorems. Most famously studied by Weinberg, soft theorems are proofs about what happens when a particle in an amplitude becomes “soft”, or when its momentum becomes very small. Recently, these theorems have gained renewed interest, as new amplitudes techniques have allowed researchers to go beyond Weinberg’s initial results (to “sub-leading” order) in a variety of theories.
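
Schematically, Weinberg’s leading soft graviton theorem says that as the graviton’s momentum $q$ goes to zero the amplitude factorizes,

$\mathcal{M}_{n+1} \to \frac{\kappa}{2}\sum_{i=1}^{n}\frac{\varepsilon_{\mu\nu}\,p_i^{\mu}p_i^{\nu}}{p_i\cdot q}\,\mathcal{M}_{n} + \dots$

(up to convention-dependent factors), and the “sub-leading” results mentioned above are the next terms in that expansion in $q$.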

Guerrieri, Huang, Li, and Wen’s contribution to the topic looks like it clarifies things quite a bit. Previously, most of the papers I’d seen about this had been isolated examples. This paper ties the various cases together in a very clean way, and does important work in making some older observations more rigorous.


Vittorio Del Duca, Claude Duhr, Robin Marzucca, and Bram Verbeek wrote about transcendental weight in something known as the multi-Regge limit. I’ve talked about transcendental weight before: loosely, it’s counting the power of pi that shows up in formulas. The multi-Regge limit concerns amplitudes with very high energies, in which we have a much better understanding of how the amplitudes should behave. I’ve used this limit before, to calculate amplitudes in N=4 super Yang-Mills.
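
To make “transcendental weight” a bit more concrete: $\log x$ has weight one, the dilogarithm $\mathrm{Li}_2(x)$ and $\zeta(2)=\pi^2/6$ have weight two, $\zeta(3)$ has weight three, and weights add under multiplication, so $\pi^2\log x$ has weight three. An $L$-loop amplitude in N=4 super Yang-Mills is expected to be built entirely out of functions of the maximal weight $2L$.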

One slogan I love to repeat is that N=4 super Yang-Mills isn’t just a toy model, it’s the most transcendental part of QCD. I’m usually fairly vague about this, because it’s not always true: while often a calculation in N=4 super Yang-Mills will give the part of the same calculation in QCD with the highest power of pi, this isn’t always the case, and it’s hard to propose a systematic principle for when it should happen. Del Duca, Duhr, Marzucca, and Verbeek’s work is a big step in that direction. While some descriptions of the multi-Regge limit obey this property, others don’t, and in looking at the ones that don’t the authors gain a better understanding of what sorts of theories only have a “maximally transcendental part”. What they find is that even when such theories aren’t restricted to N=4 super Yang-Mills, they have shared properties, like supersymmetry and conformal symmetry. Somehow these properties are tied to the transcendentality of functions in the amplitude, in a way that’s still not fully understood.


My colleagues at Perimeter released two papers over the last week: one, by Freddy Cachazo and Alfredo Guevara, uses amplitudes techniques to look at classical gravity, while the other, by Sebastian Mizera and Guojun Zhang, looks at one of the “pieces” inside string theory amplitudes.

I worked with Freddy and Alfredo on an early version of their result, back at the PSI Winter School. While I was off lazing about in Santa Barbara, they were hard at work trying to understand how the quantum-looking “loops” one can use to make predictions for potential energy in classical gravity are secretly classical. What they ended up finding was a trick to figure out whether a given amplitude was going to have a classical part or be purely quantum. So far, the trick works for amplitudes with one loop, and a few special cases at higher loops. It’s still not clear if it works for the general case, and there’s a lot of work still to do to understand what it means, but it definitely seems like an idea with potential. (Pun mostly not intended.)

I’ve talked before about “Z theory”, the weird thing you get when you isolate the “stringy” part of string theory amplitudes. What Sebastian and Guojun have carved out isn’t quite the same piece, but it’s related. I’m still not sure of the significance of cutting string amplitudes up in this way; I’ll have to read the paper more thoroughly (or chat with the authors) to find out.

The Many Worlds of Condensed Matter

Physics is the science of the very big and the very small. We study the smallest scales, the fundamental particles that make up the universe, and the largest, stars on up to the universe as a whole.

We also study the world in between, though.

That’s the domain of condensed matter, the study of solids, liquids, and other medium-sized arrangements of stuff. And while it doesn’t make the news as often, it’s arguably the biggest field in physics today.

(In case you’d like some numbers, the American Physical Society has divisions dedicated to different sub-fields. Condensed Matter Physics is almost twice the size of the next biggest division, Particles & Fields. Add in other sub-fields that focus on medium-sized-stuff, like those who work on solid state physics, optics, or biophysics, and you get a majority of physicists focused on the middle of the distance scale.)

When I started grad school, I didn’t pay much attention to condensed matter and related fields. Beyond the courses in quantum field theory and string theory, my “breadth” courses were on astrophysics and particle physics. But over and over again, from people in every sub-field, I kept hearing the same recommendation:

“You should take Solid State Physics. It’s a really great course!”

At the time, I never understood why. It was only later, once I had some research under my belt, that I realized:

Condensed matter uses quantum field theory!

The same basic framework, describing the world in terms of rippling quantum fields, doesn’t just work for fundamental particles. It also works for materials. Rather than describing the material in terms of its fundamental parts, condensed matter physicists “zoom out” and talk about overall properties, like sound waves and electric currents, treating them as if they were the particles of quantum field theory.

This tends to confuse the heck out of journalists. Not used to covering condensed matter (and sometimes egged on by hype from the physicists), they mix up the metaphorical particles of these systems with the sort of particles made by the LHC, with predictably dumb results.

Once you get past the clumsy journalism, though, this kind of analogy has a lot of value.

Occasionally, you’ll see an article about string theory providing useful tools for condensed matter. This happens, but it’s less widespread than some of the articles make it out to be: condensed matter is a huge and varied field, and string theory applications tend to be of interest to only a small piece of it.

It doesn’t get talked about much, but the dominant trend is actually in the other direction: increasingly, string theorists need to have at least a basic background in condensed matter.

String theory’s curse/triumph is that it can give rise not just to one quantum field theory, but many: a vast array of different worlds obtained by twisting extra dimensions in different ways. Particle physicists tend to study a fairly small range of such theories, looking for worlds close enough to ours that they still fit the evidence.

Condensed matter, in contrast, creates its own worlds. Pick the right material, take the right slice, and you get quantum field theories of almost any sort you like. While you can’t go to higher dimensions than our usual four, you can certainly look at lower ones, at the behavior of currents on a sheet of metal or atoms arranged in a line. This has led some condensed matter theorists to examine a wide range of quantum field theories with one strange behavior or another, theories that wouldn’t have occurred to particle physicists but that, in many cases, are part of the cornucopia of theories you can get out of string theory.

So if you want to explore the many worlds of string theory, the many worlds of condensed matter offer a useful guide. Increasingly, tools from that community, like integrability and tensor networks, are migrating over to ours.

It’s gotten to the point where I genuinely regret ignoring condensed matter in grad school. Parts of it are ubiquitous enough, and useful enough, that some of it is an expected part of a string theorist’s background. The many worlds of condensed matter, as it turned out, were well worth a look.

KITP Conference Retrospective

I’m back from the conference in Santa Barbara, and I thought I’d share a few things I found interesting. (For my non-physicist readers: I know it’s been a bit more technical than usual recently, I promise I’ll get back to some general audience stuff soon!)

James Drummond talked about efforts to extend the hexagon function method I work on to amplitudes with seven (or more) particles. In general, the method involves starting with a guess for what an amplitude should look like, and honing that guess based on behavior in special cases where it’s easier to calculate. In one of those special cases (called the multi-Regge limit), I had thought it would be quite difficult to calculate for more than six particles, but James clarified for me that there’s really only one additional piece needed, and they’re pretty close to having a complete understanding of it.

There were a few talks about ways to think about amplitudes in quantum field theory as the output of a string theory-like setup. There’s been progress pushing to higher quantum-ness, and in understanding the weird web of interconnected theories this setup gives rise to. In the comments, Thoglu asked about one part of this web of theories called Z theory.

Z theory is weird. Most of the theories that come out of this “web” come from a consistent sort of logic: just like you can “square” Yang-Mills to get gravity, you can “square” other theories to get more unusual things. In possibly the oldest known example, you can “square” the part of string theory that looks like Yang-Mills at low energy (open strings) to get the part that looks like gravity (closed strings). Z theory asks: could the open string also come from “multiplying” two theories together? Weirdly enough, the answer is yes: it comes from “multiplying” normal Yang-Mills with a part that takes care of the “stringiness”, a part which Oliver Schlotterer is calling “Z theory”. It’s not clear whether this Z theory makes sense as a theory on its own (for the experts: it may not even be unitary) but it is somewhat surprising that you can isolate a “building block” that just takes care of stringiness.

Peter Young in the comments asked about the Correlahedron. Scattering amplitudes ask a specific sort of question: if some particles come in from very far away, what’s the chance they scatter off each other and some other particles end up very far away? Correlators ask a more general question, about the relationships of quantum fields at different places and times, of which amplitudes are a special case. Just as the Amplituhedron is a geometrical object that specifies scattering amplitudes (in a particular theory), the Correlahedron is supposed to represent correlators (in the same theory). In some sense (different from the sense above) it’s the “square” of the Amplituhedron, and the process that gets you from it to the Amplituhedron is a geometrical version of the process that gets you from the correlator to the amplitude.

For the Amplituhedron, there’s a reasonably smooth story of how to get the amplitude. News articles tended to say the amplitude was the “volume” of the Amplituhedron, but that’s not quite correct. In fact, to find the amplitude you need to add up, not the inside of the Amplituhedron, but something that goes infinite at the Amplituhedron’s boundaries. Finding this “something” can be done on a case-by-case basis, but it gets tricky in more complicated cases.
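
A one-dimensional example gives the flavor of what that “something” is. For the simplest “positive geometry”, the interval $a \le x \le b$, the relevant object isn’t the interval’s length but the form

$\Omega = \left(\frac{1}{x-a}+\frac{1}{b-x}\right)dx = \frac{(b-a)\,dx}{(x-a)(b-x)},$

which blows up precisely at the boundary points $x=a$ and $x=b$. The amplitude is the analogue of this “canonical form” for the Amplituhedron.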

For the Correlahedron, this part of the story is missing: they don’t know how to define this “something”, the old recipe doesn’t work. Oddly enough, this actually makes me optimistic. This part of the story is something that people working on the Amplituhedron have been trying to avoid for a while, to find a shape where they can more honestly just take the volume. The fact that the old story doesn’t work for the Correlahedron suggests that it might provide some insight into how to build the Amplituhedron in a different way, that bypasses this problem.

There were several more talks by mathematicians trying to understand various aspects of the Amplituhedron. One of them was by Hugh Thomas, who as a fun coincidence actually went to high school with Nima Arkani-Hamed, one of the Amplituhedron’s inventors. He’s now teamed up with Nima and Jaroslav Trnka to try to understand what it means to be inside the Amplituhedron. In the original setup, they had a recipe to generate points inside the Amplituhedron, but they didn’t have a fully geometrical picture of what put them “inside”. Unlike with a normal shape, with the Amplituhedron you can’t just check which side of the wall you’re on. Instead, they can flatten the Amplituhedron, and observe that for points “inside” the Amplituhedron winds around them a specific number of times (hence “Unwinding the Amplituhedron“). Flatten it down to a line and you can read this off from the list of flips over your point, an on-off sequence like binary. If you’ve ever heard the buzzword “scattering amplitudes as binary code”, this is where that comes from.

They also have a better understanding of how supersymmetry shows up in the Amplituhedron, which Song He talked about in his talk. Previously, supersymmetry looked to be quite central, part of the basic geometric shape. Now, they can instead understand it in a different way, with the supersymmetric part coming from derivatives (for the specialists: differential forms) of the part in normal space and time. The encouraging thing is that you can include these sorts of derivatives even if your theory isn’t supersymmetric, to keep track of the various types of particles, and Song provided a few examples in his talk. This is important, because it opens up the possibility that something Amplituhedron-like could be found for a non-supersymmetric theory. Along those lines, Nima talked about ways that aspects of the “nice” description of space and time we use for the Amplituhedron can be generalized to other messier theories.

While he didn’t talk about it at the conference, Jake Bourjaily has a new paper out about a refinement of the generalized unitarity technique I talked about a few weeks back. Generalized unitarity involves matching a “cut up” version of an amplitude to a guess. What Jake is proposing is that in at least some cases you can start with a guess that’s as easy to work with as possible, where each piece of the guess matches up to just one of the “cuts” that you’re checking. Think about it like a game of twenty questions where you’ve divided all possible answers into twenty individual boxes: for each box, you can just ask, “is it in this box?”

Finally, I’ve already talked about the highlight of the conference, so I can direct you to that post for more details. I’ll just mention here that there’s still a fair bit of work to do for Zvi Bern and collaborators to get their result into a form they can check, since the initial output of their setup is quite messy. It’s led to worries about whether they’ll have enough computer power at higher loops, but I’m confident that they still have a few tricks up their sleeves.

Scattering Amplitudes at KITP

I’ve been visiting the Kavli Institute for Theoretical Physics in Santa Barbara for a program on scattering amplitudes. This week they’re having a conference, so I don’t have time to say very much.


The conference logo, on the other hand, seems to be saying quite a lot

We’ve had talks from a variety of corners of amplitudes, with major themes including the web of theories that can sort of be described by string theory-esque models, the amplituhedron, and theories you can “square” to get other theories. I’m excited about Zvi Bern’s talk at the end of the conference, which will describe the progress I talked about last week. There’s also been recent progress on understanding the amplituhedron, which I will likely post about in the near future.

We also got an early look at Whispers of String Theory, a cute short documentary filmed at the IGST conference.