
Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics at all!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t achieve this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.

[Image: portrait of Robert Hooke]

The Hunchback of Notre Science

It wasn’t always this way. Up until the nineteenth century, science was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a 1/r^2 force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As curator of experiments for the Royal Society, he was responsible for setting up demonstrations for each Royal Society meeting. Later, he also handled correspondence for the Royal Society Journal. These responsibilities took up much of his time, and as a result, even if he had been capable of following up on the consequences of 1/r^2, he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke, expectations have changed, and real original research is no longer something we have to fit into our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until they’ve either learned something productive or their old topic becomes useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just the number of papers and citations, if we manage all of that, then yes, we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.


A physicist lazing about unproductively under an apple tree

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object, isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time, there have been plenty of cases in which people have done that, where a number of fiddly hand-made models have been summarized in one parameter space. Once that happens, the rule of originality kicks in: now, no-one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that, they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.
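Just to make the idealized case concrete, here’s a toy sketch in Python of what such a released “recipe” might look like. Every detail here is invented for illustration (the exclusion criterion, the grid of masses and couplings, the function names); the point is only that if the work really were this mechanical, a short script could do it.

    # Toy sketch: if "tweak the model to stay just beyond current bounds" really
    # were a mechanical recipe, it might look like this. All numbers are made up.

    def excluded(mass_gev, coupling, bound):
        """Mock exclusion check; a real analysis would query actual experimental limits."""
        return coupling**2 / mass_gev**2 > bound

    def next_unfalsified_model(bound):
        """Scan a toy grid of hypothetical models, return the first not yet ruled out."""
        for mass_gev in range(100, 10001, 100):
            for coupling in (0.1, 0.3, 1.0):
                if not excluded(mass_gev, coupling, bound):
                    return {"mass_gev": mass_gev, "coupling": coupling}
        return None

    # Each time an experiment tightens the bound, just re-run the "recipe":
    print(next_unfalsified_model(bound=1e-6))

Releasing something like this really would settle the matter; the trouble, as above, is that actual model-building almost never compresses this cleanly.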

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

 

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend, focusing not on covering the “next big thing” but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms-race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

 

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.


Amplitudes Papers I Haven’t Had Time to Read

Interesting amplitudes papers seem to come in groups. Several interesting papers went up this week, and I’ve been too busy to read any of them!

Well, that’s not quite true, I did manage to read this paper, by James Drummond, Jack Foster, and Omer Gurdogan. At six pages long, it wasn’t hard to fit in, and the result could be quite useful. The way my collaborators and I calculate amplitudes involves building up a mathematical object called a symbol, described in terms of a string of “letters”. What James and collaborators have found is a restriction on which “letters” can appear next to each other, based on the properties of a mathematical object called a cluster algebra. Oddly, the restriction seems to have the same effect as a more physics-based condition we’d been using earlier. This suggests that the abstract mathematical restriction and the physics-based restriction are somehow connected, but we don’t yet understand how. It also could be useful for letting us calculate amplitudes with more particles: previously we thought the number of “letters” we’d have to consider there was going to be infinite, but with James’s restriction we’d only need to consider a finite number.
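For readers who haven’t seen a symbol before, the standard first example (not one of the amplitudes in the paper, just the textbook case) is the dilogarithm, whose symbol is a single two-letter “word”:

    S(Li_2(x)) = -(1-x) ⊗ x

Here the letters are 1-x and x. An amplitude’s symbol is a sum of much longer words of this kind, and the restriction above constrains which letters are allowed to appear next to each other within those words.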

I didn’t get a chance to read David Dunbar, John Godwin, Guy Jehu, and Warren Perkins’s paper. They’re computing amplitudes in QCD (which unlike N=4 super Yang-Mills actually describes the real world!) and doing so for fairly complicated arrangements of particles. They claim to get remarkably simple expressions: since that sort of claim was what jump-started our investigations into N=4, I should probably read this if only to see if there’s something there in the real world amenable to our technique.

I also haven’t read Rutger Boels and Hui Luo’s paper yet. From the abstract, I’m still not clear which parts of what they’re describing are new, or how much it improves on existing methods. It will probably take a more thorough reading to find out.

I really ought to read Burkhard Eden, Yunfeng Jiang, Dennis le Plat, and Alessandro Sfondrini’s paper. They’re working on a method referred to as the Hexagon Operator Product Expansion, or HOPE. It’s related to an older method, the Pentagon Operator Product Expansion (POPE), but applicable to trickier cases. I’ve been keeping an eye on the HOPE in part because my collaborators have found the POPE very useful, and the HOPE might enable something similar. It will be interesting to find out how Eden et al.’s paper modifies the HOPE story.

Finally, I’ll probably find the time to read my former colleague Sebastian Mizera’s paper. He’s found a connection between the string-theory-like CHY picture of scattering amplitudes and some unusual mathematical structures. I’m not sure what to make of it until I get a better idea of what those structures are.

One, Two, Infinity

Physicists and mathematicians count one, two, infinity.

We start with the simplest case, as a proof of principle. We take a stripped down toy model or simple calculation and show that our idea works. We count “one”, and we publish.

Next, we let things get a bit more complicated. In the next toy model, or the next calculation, new interactions can arise. We figure out how to deal with those new interactions, our count goes from “one” to “two”, and once again we publish.

By this point, hopefully, we understand the pattern. We know what happens in the simplest case, and we know what happens when the different pieces start to interact. If all goes well, that’s enough: we can extrapolate our knowledge to understand not just case “three”, but any case: any model, any calculation. We publish the general case, the general method. We’ve counted one, two, infinity.


Once we’ve counted “infinity”, we don’t have to do any more cases. And so “infinity” becomes the new “zero”, and the next type of calculation you don’t know how to do becomes “one”. It’s like going from addition to multiplication, from multiplication to exponentiation, from exponentials up into the wilds of up-arrow notation. Each time, once you understand the general rules you can jump ahead to an entirely new world with new capabilities…and repeat the same process again, on a new scale. You don’t need to count one, two, three, four, on and on and on.
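(For anyone who hasn’t met Knuth’s up-arrow notation, the ladder goes roughly like this:

    a ↑ b = a^b
    a ↑↑ b = a^(a^(…^a)), a tower of b copies of a
    a ↑↑↑ b = a ↑↑ (a ↑↑ (… a)), again with b copies of a

each new arrow iterating the operation below it, just as multiplication iterates addition.)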

Of course, research doesn’t always work out this way. My last few papers counted three, four, five, with six on the way. (One and two were already known.) Unlike the ideal cases that go one, two, infinity, here “two” doesn’t give all the pieces you need to keep going. You need to go a few numbers more to get novel insights. That said, we are thinking about “infinity” now, so look forward to a future post that says something about that.

A lot of frustration in physics comes from situations when “infinity” remains stubbornly out of reach. When people complain about all the models for supersymmetry, or inflation, in some sense they’re complaining about fields that haven’t taken that “infinity” step. One or two models of inflation are nice, but by the time the count reaches ten you start hoping that someone will describe all possible models of inflation in one paper, and see if they can make any predictions from that.

(In particle physics, there’s an extent to which people can actually do this. There are methods to describe all possible modifications of the Standard Model in terms of what sort of effects they can have on observations of known particles. There’s a group at NBI who work on this sort of thing.)
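(To sketch how such a description typically works: schematically, one writes an effective Lagrangian

    L_eff = L_SM + sum_i (c_i / Λ^2) O_i + …

where the O_i are higher-dimension operators built out of the known Standard Model fields, Λ is the scale of whatever new physics might be out there, and the coefficients c_i parametrize every possible modification at that order. Fitting the c_i to data then covers “all possible modifications” in one go. I won’t swear this is exactly the framework that group uses; it’s just the standard effective-field-theory form, to give the flavor.)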

The gold standard, though, is one, two, infinity. Our ability to step back, stop working case-by-case, and move on to the next level is not just a cute trick: it’s a foundation for exponential progress. If we can count one, two, infinity, then there’s nowhere we can’t reach.

Thoughts on Polchinski’s Memoir

I didn’t get a chance to meet Joseph Polchinski when I was visiting Santa Barbara last spring. At the time, I heard his health was a bit better, but he still wasn’t feeling well enough to come in to campus. Now that I’ve read his memoir, I almost feel like I have met him. There’s a sense of humor, a diffidence, and a passion for physics that shines through the pages.

The following are some scattered thoughts inspired by the memoir:

 

A friend of mine once complained to me that in her field grad students all brag about the colleges they went to. I mentioned that in my field your undergrad never comes up…unless it was Caltech. For some reason, everyone I’ve met who went to Caltech is full of stories about the place, and Polchinski is no exception. Speaking as someone who didn’t go there, it seems like Caltech has a profound effect on its students that other places don’t.

 

Polchinski mentions hearing stories about geniuses of the past, and how those stories helped temper some of his youthful arrogance. There’s an opposite effect that’s also valuable: hearing stories like Polchinski’s, his descriptions of struggling with anxiety and barely publishing and “not really accomplishing anything” till age 40, can be a major comfort to those of us who worry we’ve fallen behind in the academic race. That said, it’s important not to take these things too far: times have changed, you’re not Polchinski, and much like his door-stealing trick at Caltech, getting a postdoc without any publications is something you shouldn’t try at home. Even Witten’s students need at least one.

 

Last week I was a bit puzzled by nueww’s comment, a quote from Polchinski’s memoir which distinguishes “math of the equations” from “math of the solutions”, attributing the former to physicists and the latter to mathematicians. Reading the context in the memoir and the phrase’s origin in a remark by Susskind cleared things up a bit, but still left me uneasy. I only figured out why after Lubos Motl posted about it: it doesn’t match my experience of mathematicians at all!

If anything, I think physicists usually care more about the “solutions” than mathematicians do. In my field, often a mathematician will construct some handy basis of functions and then frustrate everyone by providing no examples of how to use them. In the wider math community I’ve met graph theorists who are happy to prove something is true for all graphs of size 10^{10^{10}} and larger, not worrying about the vast number of smaller graphs where it might fail because they’re just a finite number of special cases. And I don’t think this is just my experience: a common genre of jokes revolves around mathematicians proving a solution exists and then not bothering to do anything with it (for example, see the joke with the hotel fire here).

I do think there’s a meaningful sense in which mathematicians care about details that we’re happy to ignore, but “solutions” versus “equations” isn’t really the right axis. It’s something more like “rigor” versus “principles”. Mathematicians will often begin a talk by defining a series of maps between different spaces, carefully describing where they are and aren’t valid. A physicist might just write down a function. That sort of thing is dangerous in mathematics: there are always special, pathological cases that make careful definitions necessary. In physics, those cases rarely come up, and when they do there’s often a clear physical problem that brings them to the forefront. We have a pretty good sense of when we need rigor, and when we don’t we’re happy to lay things out without filling in the details, putting a higher priority on moving forward and figuring out the basic principles underlying reality.

 

Polchinski talks a fair bit about his role in the idea of the multiverse, from hearing about Weinberg’s anthropic argument to coming to terms with the string landscape. One thing his account makes clear is how horrifying the concept seemed at first: how the idea that the parameters of our universe might just be random could kill science and discourage experimentalists. This touches on something that I think gets lost in arguments about the multiverse: even the people most involved in promoting the multiverse in public aren’t happy about it.

It also sharpened my thinking about the multiverse a bit. I’ve talked before about how I don’t think the popularity of the multiverse is actually going to hurt theoretical physics as a field. Polchinski’s worries made me think about the experimental side of the equation: why do experiments if the world might just be random? I think I have a clearer answer to this now, but it’s a bit long, so I’ll save it for a future post.

 

One nice thing about these long-term accounts is you get to see how much people shift between fields over time. Polchinski didn’t start out working in string theory, and most of the big names in my field, like Lance Dixon and David Kosower, didn’t start out in scattering amplitudes. Academic careers are long, and however specialized we feel at any one time we can still get swept off in a new direction.

 

I’m grateful for this opportunity to “meet” Polchinski, if only through his writing. His is a window on the world of theoretical physics that is all too rare, and valuable as a result.

On the Care and Feeding of Ideas

I read Zen and the Art of Motorcycle Maintenance in high school. It’s got a reputation for being obnoxiously mystical, but one of its points seemed pretty reasonable: the claim that the hard part of science, and the part we understand the least, is coming up with hypotheses.

In some sense, theoretical physics is all about hypotheses. By this I don’t mean that we just say “what if?” all the time. I mean that in theoretical physics most of the work is figuring out the right way to ask a question. Phrase your question in the right way and the answer becomes obvious (or at least, obvious after a straightforward calculation). Because our questions are mathematical, the right question can logically imply its own solution.

From the point of view of “Zen and the Art”, as well as most non-scientists I’ve met, this part is utterly mysterious. The ideas you need here seem like they can’t come from hard work or careful observation. In order to ask the right questions, you just need to be “smart”.

In practice, I’ve noticed there’s more to it than that. We can’t just sit around and wait for an idea to show up. Instead, as physicists we develop a library of tricks, often unstated, that let us work towards the ideas we need.

Sometimes, this involves finding simpler cases, working with them until we understand the right questions to ask. Sometimes it involves doing numerics, or using crude guesses, not because either method will give the final answer but because it will show what the answer should look like. Sometimes we need to rephrase the problem many times, in many different contexts, before we happen on one that works. Most of this doesn’t end up in published papers, so in the end we usually have to pick it up from experience.

Along the way, we often find tricks to help us think better. Mostly this is straightforward stuff: reminders to keep us on-task, keeping our notes organized and our code commented so we have a good idea of what we were doing when we need to go back to it. Everyone has their own personal combination of these things in the background, and they’re rarely discussed.

The upshot is that coming up with ideas is hard work. We need to be smart, sure, but that’s not enough by itself: there are a lot of smart people who aren’t physicists after all.

With all that said, some geniuses really do seem to come up with ideas out of thin air. It’s not the majority of the field: we’re not the idiosyncratic Sheldon Coopers everyone seems to imagine. But for a few people, it really does feel like there’s something magical about where they get their ideas. I’ve had the privilege of working with a couple people like this, and the way they think sometimes seems qualitatively different from our usual way of building ideas. I can’t see any of the standard trappings, the legacy of partial results and tricks of thought, that would lead to where they end up. That doesn’t mean they don’t use tricks just like the rest of us, in the end. But I think genius, if it means anything at all, is thinking in a novel enough way that from the outside it looks like magic.

Most of the time, though, we just need to hone our craft. We build our methods and shape our minds as best we can, and we get better and better at the central mystery of science: asking the right questions.

We’re Weird

As I prepare to move to Denmark, it strikes me just how strange what I’m doing would seem to most people. I’m moving across the ocean to a place where I don’t know the language. (Or at least, don’t know more than half a Duolingo lesson.) I’m doing this just three years after another international move. And while I’m definitely nervous, this isn’t the big life-changing shift it would be for many people. It’s just how academic careers are expected to work.

At borders, I’m often asked why I am where I am. Why be an American working in Canada? Why move to Denmark? And in general, the answer is just that it’s where I need to be to do what I want to do, because it’s where the other people who do what I want to do are. A few people seed this process by managing to find faculty jobs in their home countries, and others sort themselves out by their interests. In the end, we end up with places like Perimeter, an institute in the middle of Canada with barely any Canadians.

This is more pronounced for smaller fields than for larger ones. A chemist or biologist might just manage to have their whole career in the same state of the US, or the same country in Europe. For a theoretical physicist, this is much less likely. I also suspect it’s more true of more “universal” fields: physics looks the same wherever you do it, while most professors of Portuguese literature, I’d guess, are in Portugal or Brazil.

For theoretical physics, the result is an essentially random mix of people around the world. This works, in part, because essentially everyone does science in English. Occasionally, a group of collaborators happens to speak the same non-English language, so you sometimes hear people talking science in Russian or Spanish or French. But even then there are times people will default to English anyway, because they’re used to it. We publish in English, we chat in English. And as a result, wherever we end up we can at least talk to our colleagues, even if the surrounding world is trickier.

Communities this international, with four different accents in every conversation, are rare, and I occasionally forget that. Before grad school, the closest I came to this was on the internet. On Dungeons and Dragons forums, much like in academia, everyone was drawn together by shared interests and expertise. We had Australians logging on in the middle of everyone else’s night to argue with the Germans, and Brazilians pointing out how the game’s errata was implemented differently in Portuguese.

It’s fun to be in that sort of community in the real world. There’s always something to learn from each other, even on completely mundane topics. Lunch often turns into a discussion of different countries’ cuisines. As someone who became an academic because I enjoy learning, it’s great to have the wheels constantly spinning like that. I should remember, though, that most of the world doesn’t live like this: we’re currently a pretty weird bunch.

More Travel

I’m visiting the Niels Bohr Institute this week, on my way back from Amplitudes.

[Photo of the Niels Bohr Institute]

You might recognize the place from old conference photos.

Amplitudes itself was nice. There weren’t any surprising new developments, but a lot of little “aha” moments when one of the speakers explained something I’d heard vague rumors about. I figured I’d mention a few of the things that stood out. Be warned, this is going to be long and comparatively jargon-heavy.

The conference organizers were rather daring in scheduling Nima Arkani-Hamed for the first talk, as Nima has a tendency to arrive at the last minute and talk for twice as long as you ask him to. Miraculously, though, things worked out, if only barely: Nima arrived at the wrong campus and ran most of the way back, showing up within five minutes of the start of the conference. He also stuck to his allotted time, possibly out of courtesy to his student, Yuntao Bai, who was speaking next.

Between the two of them, Nima and Yuntao covered an interesting development, tying the Amplituhedron together with the string theory-esque picture of scattering amplitudes pioneered by Freddy Cachazo, Song He, and Ellis Ye Yuan (or CHY). There’s a simpler (and older) Amplituhedron-like object called the associahedron that can be thought of as what the Amplituhedron looks like on the surface of a string, and CHY’s setup can be thought of as a sophisticated map that takes this object and turns it into the Amplituhedron. It was nice to hear from both Nima and his student on this topic, because Nima’s talks are often high on motivation but low on detail, so it was great that Yuntao was up next to fill in the blanks.

Anastasia Volovich talked about Landau singularities, a topic I’ve mentioned before. What I hadn’t appreciated was how much they can do with them at this point. Originally, Juan Maldacena had suggested that these singularities, mathematical points that determine the behavior of amplitudes, first investigated by Landau in the 60’s, might explain some of the simplicity we’ve observed in N=4 super Yang-Mills. They ended up not being enough by themselves, but what Volovich and collaborators are discovering is that with a bit of help from the Amplituhedron they explain quite a lot. In particular, if they start with the Amplituhedron and do a procedure similar to Landau’s, they can find the simpler set of singularities allowed by N=4 super Yang-Mills, at least for the examples they’ve calculated. It’s still a bit unclear how this links to their previous investigations of these things in terms of cluster algebras, but it sounds like they’re making progress.

Dmitry Chicherin gave me one of those minor “aha” moments. One big useful fact about scattering amplitudes in N=4 super Yang-Mills is that they’re “dual” to different mathematical objects called Wilson loops, a fact which allows us to compare to the “POPE” approach of Basso, Sever, and Vieira. Chicherin asks the question: “What if you’re not calculating a scattering amplitude or a Wilson loop, but something halfway in between?” Interestingly, this has an answer, with the “halfway between” objects having a similar duality among themselves.

Yorgos Papathanasiou talked about work I’ve been involved with. I’ll probably cover it in detail in another post, so for now I’ll just mention that we’re up to six loops!

Andy Strominger talked about soft theorems. It’s always interesting seeing people who don’t traditionally work on amplitudes giving talks at Amplitudes. There’s a range of responses, from integrability people (who are basically welcomed like family) to people working on fairly unrelated areas with some “amplitudes” connection (met with yawns except from the few people interested in the connection). The response to Strominger was neither welcome nor boredom, but lively debate. He’s clearly doing something interesting, but many specialists worried he was ignorant of important no-go results in the field that could hamstring some of his bolder conjectures.

The second day focused on methods for more practical calculations, and had the overall effect of making me really want to clean up my code. Tiziano Peraro’s finite field methods in particular look like they could be quite useful. There were two competing bases of integrals on display: von Manteuffel’s finite integrals, and, later in the conference, Rutger Boels’s uniform transcendental integrals. Both seem to have their own virtues, and I ended up asking Rob Schabinger if it was possible to combine the two, with the result that he’s apparently now looking into it.

The more practical talks that day had a clear focus on calculations with two loops, which are becoming increasingly viable for LHC-relevant calculations. From talking to people who work on this, I get the impression that the goal of these calculations isn’t so much to find new physics as to confirm and investigate new physics found via other methods. Things are complicated enough at two loops that for the moment it isn’t feasible to describe what all the possible new particles might do at that order, and instead the goal is to understand the standard model well enough that if new physics is noticed (likely based on one-loop calculations) then the details can be pinned down by two-loop data. But this picture could conceivably change as methods improve.

Wednesday was math-focused. We had a talk by Francis Brown on his conjecture of a cosmic Galois group. This is a topic I knew a bit about already, since it’s involved in something I’ve been working on. Brown’s talk cleared up some things, but also shed light on the vagueness of the proposal. As with Yorgos’s talk, I’ll probably cover more about this in a future post, so I’ll skip the details for now.

There was also a talk by Samuel Abreu on a much more physical picture of the “symbols” we calculate with. This is something I’ve seen presented before by Ruth Britto, and it’s a setup I haven’t looked into as much as I ought to. It does seem at the moment that they’re limited to one loop, which is a definite downside. Other talks discussed elliptic integrals, the bogeyman that we still can’t deal with by our favored means but that people are at least understanding better.

The last talk on Wednesday before the hike was by David Broadhurst, who’s quite a character in his own right. Broadhurst sat in the front row and asked a question after nearly every talk, usually bringing up papers at least fifty years old, if not one hundred and fifty. At the conference dinner he was exactly the right person to read the Address to the Haggis, resurrecting a thick Scottish accent from his youth. Broadhurst’s techniques for handling high-loop elliptic integrals are quite impressively powerful, leaving me wondering if the approach can be generalized.

Thursday focused on gravity. Radu Roiban gave a better idea of where he and his collaborators are on the road to seven-loop supergravity and what the next bottlenecks are along the way. Oliver Schlotterer’s talk was another one of those “aha” moments, helping me understand a key difference between two senses in which gravity is Yang-Mills squared (the Kawai-Lewellen-Tye relations and BCJ). In particular, the latter is much more dependent on specifics of how you write the scattering amplitude, so to the extent that you can prove something more like the former at higher loops (the original was only for trees, unlike BCJ), it’s quite valuable. Schlotterer has managed to do this at one loop, using the “Q-cut” method I’ve (briefly) mentioned before. The next day’s talk by Emil Bjerrum-Bohr focused more heavily on these Q-cuts, including a more detailed example at two loops than I’d seen that group present before.

There was also a talk by Walter Goldberger about using amplitudes methods for classical gravity, a subject I’ve looked into before. It was nice to see a more thorough presentation of those ideas, including a more honest appraisal of which amplitudes techniques are really helpful there.

There were other interesting topics, but I’m already way over my usual post length, so I’ll sign off for now. Videos from all but a few of the talks are now online, so if you’re interested you should watch them on the conference page.