Assumptions for Naturalness

Why did physicists expect to see something new at the LHC, more than just the Higgs boson? Mostly, because of something called naturalness.

Naturalness, broadly speaking, is the idea that there shouldn’t be coincidences in physics. If two numbers that appear in your theory cancel out almost perfectly, there should be a reason that they cancel. Put another way, if your theory has a dimensionless constant in it, that constant should be close to one.

(To see why these two concepts are the same, think about a theory where two large numbers miraculously almost cancel, leaving just a small difference. Take the ratio of one of those large numbers to the difference, and you get a very large dimensionless number.)

You might have heard it said that the mass of the Higgs boson is “unnatural”. There are many different physical processes that affect what we measure as the mass of the Higgs. We don’t know exactly how big these effects are, but we do know that they grow with the scale of “new physics” (aka the mass of any new particles we might have discovered), and that they have to cancel to give the Higgs mass we observe. If we don’t see any new particles, the Higgs mass starts looking more and more unnatural, driving some physicists to the idea of a “multiverse”.
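To make the coincidence concrete, here is the schematic form of the problem (a sketch only; the coefficient c depends on what the new physics actually is):

m_H^2 (observed) = m_0^2 + c Λ^2

Here m_0 is a “bare” parameter of the theory, Λ is the scale of new physics, and the observed Higgs mass m_H is about 125 GeV. If there is nothing new all the way up to the Planck scale, Λ is around 10^19 GeV, and the two terms on the right have to cancel to roughly one part in (Λ/m_H)^2, about 10^34: exactly the kind of enormous dimensionless number described above.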

If you find parts of this argument hokey, you’re not alone. Critics of naturalness point out that we don’t really have a good reason to favor “numbers close to one”, nor do we have any way to quantify how “bad” a number far from one is (we don’t know the probability distribution, in other words). They critique theories that do preserve naturalness, like supersymmetry, for being increasingly complicated and unwieldy, violating Occam’s razor. And in some cases they act baffled by the assumption that there should be any “new physics” at all.

Some of these criticisms are reasonable, but some are distracting and off the mark. The problem is that the popular argument for naturalness leaves out some important assumptions. These assumptions are usually kept in mind by the people arguing for naturalness (at least the more careful people), but aren’t often made explicit. I’d like to state some of these assumptions. I’ll be framing the naturalness argument in a bit of an unusual (if not unprecedented) way. My goal is to show that some criticisms of naturalness don’t really work, while others still make sense.

I’d like to state the naturalness argument as follows:

  1. The universe should be ultimately described by a theory with no free dimensionless parameters at all. (For the experts: the theory should also be UV-finite.)
  2. We are reasonably familiar with theories of the sort described in 1.; we know roughly what they can look like.
  3. If we look at such a theory at low energies, it will appear to have dimensionless parameters again, based on the energy where we “cut off” our description. We understand this process well enough to know what kinds of values these parameters can take, starting from 2.
  4. Point 3. can only be consistent with the observed mass of the Higgs if there is some “new physics” at around the scales the LHC can measure. That is, there is no known way to start with a theory like those of 2. and get the observed Higgs mass without new particles.

Point 1. is often not explicitly stated. It’s an assumption, one that sits in the back of a lot of physicists’ minds and guides their reasoning. I’m really not sure if I can fully justify it; it seems like it should be a consequence of what a final theory is.

(For the experts: you’re probably wondering why I’m insisting on a theory with no free parameters, when usually this argument just demands UV-finiteness. I demand this here because I think this is the core reason why we worry about coincidences: free parameters of any intermediate theory must eventually be explained in a theory where those parameters are fixed, and “unnatural” coincidences are those we don’t expect to be able to fix in this way.)

Point 2. may sound like a stretch, but it’s less of one than you might think. We do know of a number of theories that have few or no dimensionless parameters (and that are UV-finite); they just don’t describe the real world. Treating these theories as toy models, we can hopefully get some idea of how theories like this should look. We also have a candidate theory of this kind that could potentially describe the real world, M theory, but it’s not fleshed out enough to answer these kinds of questions definitively at this point. At best it’s another source of toy models.

Point 3. is where most of the technical arguments show up. If someone talking about naturalness starts talking about effective field theory and the renormalization group, they’re probably hashing out the details of point 3. Parts of this point are quite solid, but once again there are some assumptions that go into it, and I don’t think we can say that this point is entirely certain.

Once you’ve accepted the arguments behind points 1.-3., point 4. follows. The Higgs is unnatural, and you end up expecting new physics.

Framed in this way, arguments about the probability distribution of parameters are missing the point, as are arguments from Occam’s razor.

The point is not that the Standard Model has unlikely parameters, or that some in-between theory has unlikely parameters. The point is that there is no known way to start with the kind of theory that could be an ultimate description of the universe and end up with something like the observed Higgs and no detectable new physics. Such a theory isn’t merely unlikely: if you take this argument seriously, it’s impossible. If your theory gets around this argument, it can be as cumbersome and Occam’s razor-violating as it wants; it’s still a better shot than no possible theory at all.

In general, the smarter critics of naturalness are aware of this kind of argument, and don’t just talk probabilities. Instead, they reject some combination of point 2. and point 3.

This is more reasonable, because point 2. and point 3. are, on some level, arguments from ignorance. We don’t know of a theory with no dimensionless parameters that can give something like the Higgs with no detectable new physics, but maybe we’re just not trying hard enough. Given how murky our understanding of M theory is, maybe we just don’t know enough to make this kind of argument yet, and the whole thing is premature. This is where probability can sneak back in, not as some sort of probability distribution on the parameters of physics but just as an estimate of our own ability to come up with new theories. We have to guess what kinds of theories can make sense, and we may well just not know enough to make that guess.

One thing I’d like to know is how many critics of naturalness reject point 1. Because point 1. isn’t usually stated explicitly, it isn’t often responded to explicitly either. The way some critics of naturalness talk makes me suspect that they reject point 1., that they honestly believe that the final theory might simply have some unexplained dimensionless numbers in it that we can only fix through measurement. I’m curious whether they actually think this, or whether I’m misreading them.

There’s a general point to be made here about framing. Suppose that tomorrow someone figures out a way to start with a theory with no dimensionless parameters and plausibly end up with a theory that describes our world, matching all existing experiments. (People have certainly been trying.) Does this mean naturalness was never a problem after all? Or does it mean that this person solved the naturalness problem?

Those sound like very different statements, but it should be clear at this point that they’re not. In principle, nothing distinguishes them. In practice, people will probably frame the result one way or another based on how interesting the solution is.

If it turns out we were missing something obvious, or if we were extremely premature in our argument, then in some sense naturalness was never a real problem. But if we were missing something subtle, something deep that teaches us something important about the world, then it should be fair to describe it as a real solution to a real problem, to cite “solving naturalness” as one of the advantages of the new theory.

If you ask for my opinion? You probably shouldn’t, I’m quite far from an expert in this corner of physics, not being a phenomenologist. But if you insist on asking anyway, I suspect there probably is something wrong with the naturalness argument. That said, I expect that whatever we’re missing, it will be something subtle and interesting, that naturalness is a real problem that needs to really be solved.

How to Get a “Minimum Scale” Without Pixels

Zoom in, and the world gets stranger. Down past atoms, past protons and neutrons, far past the smallest scales we can probe at the Large Hadron Collider, we get to the scale at which quantum gravity matters: the Planck scale.

Weird things happen at the Planck scale. Space and time stop making sense. Read certain pop science articles, and they’ll tell you the Planck scale is the smallest scale, the scale where space and time are quantized, the “pixels of the universe”.

That last sentence, by the way, is not actually how the Planck scale works. In fact, there’s pretty good evidence that the universe doesn’t have “pixels”, that space and time are not quantized in that way. Even very tiny pixels would change the speed of light, making it different for different colors. Tiny effects like that add up, and astronomers would almost certainly have noticed an effect from even Planck-scale pixels. Unless your idea of “pixels” is fairly unusual, it’s already been ruled out.
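If you want a rough sense of the numbers, here’s the usual back-of-the-envelope estimate (a sketch, assuming the simplest scenario, where the correction to the photon’s speed is linear in its energy):

v(E) ≈ c (1 − ξ E/E_Planck), with E_Planck ≈ 10^19 GeV

Two photons emitted together by a gamma-ray burst a distance d away then arrive separated by a time of roughly ξ (ΔE/E_Planck)(d/c). For a burst a few billion light-years away (d/c around 10^17 seconds) and photon energies differing by a GeV, that’s on the order of ξ times ten milliseconds, which is measurable. Seeing no such delay rules out the simplest “pixel” scenarios, pushing ξ below order one.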

If the Planck scale isn’t the scale of the “pixels of the universe”, why do people keep saying it is?

Part of the problem is that the real story is vaguer. We don’t know what happens at the Planck scale. It’s not just that we don’t know which theory of quantum gravity is right: we don’t even know what different quantum gravity proposals predict. People are trying to figure it out, and there are some more or less viable ideas, but ultimately all we know is that at the Planck scale our description of space-time should break down.

“Our description breaks down” is unfortunately not very catchy. Certainly, it’s less catchy than “pixels of the universe”. Part of the problem is that most people don’t know what “our description breaks down” actually means.

So if that’s the part that’s puzzling you, maybe an example would help. This won’t be the full answer, though it could be part of the story. What it will be is an example of what “our description breaks down” can actually mean, how there can be a scale beyond which space-time stops making sense without there being “pixels”.

The example comes from string theory, from a concept called “T duality”. In string theory, “extra” dimensions beyond our usual three space and one time are curled up small, so that traveling along them just gets you back where you started. Instead of particles, there are strings, with length close to the Planck length.

Picture a loop of string in a small extra dimension. What can it do?

[Image: a loop of string in a curled-up extra dimension. Image credit: someone who’s done a lot more work explaining string theory than I have]

One thing it can do is move along the extra dimension. Since it has to end up back where it started, it can’t just move at any speed it wants. It turns out that the smaller the extra dimension, the more energy the string has when it moves around it.

The other thing it can do is wrap around the extra dimension. If it wraps around, the string has more energy if the dimension is larger, like a rubber band stretched around a pipe.

The string can do either or both of these multiple times. It can wrap many times around the extra dimension, or move in a quicker circle around it, or both at once. And if you calculate the energy of these combinations, you notice something: a string wound around a big circle has the same energy as a string moving around a small circle. In particular, you get the same energy on a circle of radius R, and a circle of radius l^2/R, where l is the length of the string.

It turns out it’s not just the energy that’s the same: for everything that happens on a circle of radius R, there’s a matching description with a circle of radius l^2/R, with wrapping and moving swapped. We say that the two descriptions are dual: two seemingly different pictures that turn out to be completely physically indistinguishable.
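In equations, the two effects look like this (schematically; I’m keeping only the moving and wrapping contributions to the energy, suppressing the string’s vibrations and numerical factors):

E^2 ⊃ (n/R)^2 + (w R/l^2)^2

Here n counts how many times the string circles the dimension and w counts how many times it wraps around it. Swap n with w and replace R with l^2/R, and you get exactly the same spectrum: that’s the duality.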

Since the two pictures are indistinguishable, it doesn’t actually make sense to talk about dimensions smaller than the length of the string. It’s not that they can’t exist, or that they’re smaller than the “pixels of the universe”: it’s just that any description you write down of such a small dimension could just as easily have been of a larger, dual dimension. It’s that your picture, with its one obvious size for the curled-up dimension, broke down and stopped making sense.

As I mentioned, this isn’t the whole picture of what happens at the Planck scale, even in string theory. It is an example of a broader idea that string theorists are investigating, that in order to understand space-time at the smallest scales you need to understand many different dual descriptions. And hopefully, it’s something you can hold in your mind, a specific example of what “our description breaks down” can actually mean in practice, without pixels.

A Micrographia of Beastly Feynman Diagrams

Earlier this year, I had a paper about the weird multi-dimensional curves you get when you try to compute trickier and trickier Feynman diagrams. These curves were “Calabi-Yau”, a type of curve string theorists have studied as a way to curl up extra dimensions to preserve something called supersymmetry. At the time, string theorists asked me why Calabi-Yau curves showed up in these Feynman diagrams. Do they also have something to do with supersymmetry?

I still don’t know the general answer. I don’t know if all Feynman diagrams have Calabi-Yau curves hidden in them, or if only some do. But for a specific class of diagrams, I now know the reason. In this week’s paper, with Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, we prove it.

We just needed to look at some more exotic beasts to figure it out.

[Image: a tardigrade]

Like this guy!

Meet the tardigrade. In biology, they’re incredibly tenacious microscopic animals, able to withstand the most extreme of temperatures and the radiation of outer space. In physics, we’re using their name for a class of Feynman diagrams.

[Image: the “tardigrade” Feynman diagrams]

A clear resemblance!

There is a long history of physicists using whimsical animal names for Feynman diagrams, from the penguin to the seagull (no relation). We chose to stick with microscopic organisms: in addition to the tardigrades, we have paramecia and amoebas, even a rogue coccolithophore.

The diagrams we look at have one thing in common, which is key to our proof: the number of lines on the inside of the diagram (“propagators”, which represent “virtual particles”) is related to the number of “loops” in the diagram, as well as the dimension. When these three numbers are related in the right way, it becomes relatively simple to show that any curves we find when computing the Feynman diagram have to be Calabi-Yau.

This includes the most well-known case of Calabi-Yaus showing up in Feynman diagrams, in so-called “banana” or “sunrise” graphs. It’s closely related to some of the cases examined by mathematicians, and our argument ended up pretty close to one made back in 2009 by the mathematician Francis Brown for a different class of diagrams. Oddly enough, neither argument works for the “traintrack” diagrams from our last paper. The tardigrades, paramecia, and amoebas are “more beastly” than those traintracks: their Calabi-Yau curves have more dimensions. In fact, we can show they have the most dimensions possible at each loop, provided all of our particles are massless. In some sense, tardigrades are “as beastly as you can get”.

We still don’t know whether all Feynman diagrams have Calabi-Yau curves, or just these. We’re not even sure how much it matters: it could be that the Calabi-Yau property is a red herring here, noticed because it’s interesting to string theorists but not so informative for us. We don’t understand Calabi-Yaus all that well yet ourselves, so we’ve been looking around at textbooks to try to figure out what people know. One of those textbooks was our inspiration for the “bestiary” in our title, an author whose whimsy we heartily approve of.

Like the classical bestiary, we hope that ours conveys a wholesome moral. There are much stranger beasts in the world of Feynman diagrams than anyone suspected.

IGST 2018

Conference season in Copenhagen continues this week, with Integrability in Gauge and String Theory 2018. Integrability here refers to integrable theories, theories where physicists can calculate things exactly, without the perturbative approximations we typically use. Integrable theories come up in a wide variety of situations, but this conference was focused on the “high-energy” side of the field, on gauge theories (roughly, theories of fundamental forces like Yang-Mills) and string theory.

Integrability is one of the bigger sub-fields in my corner of physics, about the same size as amplitudes. It’s big enough that we can’t host the conference in the old Niels Bohr Institute auditorium.

[Image: the conference venue]

Instead, they herded us into the old agriculture school

I don’t normally go to integrability conferences, but when the only cost is bus fare there’s not much to lose. Integrability is arguably amplitudes’ nearest neighbor. The two fields have a history of sharing ideas, and they have similar reputations in the wider community, seen as alternately deep and overly technical. Many of the talks still went over my head, but it was worth getting a chance to see how the neighbors are doing.

Adversarial Collaborations for Physics

Sometimes physics debates get ugly. For the scientists reading this, imagine your worst opponents. Think of the people who always misinterpret your work while using shoddy arguments to prop up their own, where every question at a talk becomes a screaming match until you just stop going to the same conferences at all.

Now, imagine writing a paper with those people.

Adversarial collaborations, subject of a recent contest on the blog Slate Star Codex, are a proposed method for resolving scientific debates. Two scientists on opposite sides of an argument commit to writing a paper together, describing the overall state of knowledge on the topic. For the paper to get published, both sides have to sign off on it: they both have to agree that everything in the paper is true. This prevents either side from cheating, or from coming back later with made-up objections: if a point in the paper is wrong, one side or the other is bound to catch it.

This won’t work for the most vicious debates, when one (or both) sides isn’t interested in common ground. But for some ongoing debates in physics, I think this approach could actually help.

One advantage of adversarial collaborations is in preventing accusations of bias. The debate between dark matter and MOND-like proposals is filled with these kinds of accusations: claims that one group or another is ignoring important data, being dishonest about the parameters they need to fit, or applying standards of proof they would never require of their own pet theory. Adversarial collaboration prevents these kinds of accusations: whatever comes out of an adversarial collaboration, both sides would make sure the other side didn’t bias it.

Another advantage of adversarial collaborations is that they make it much harder for one side to move the goalposts, or to accuse the other side of moving the goalposts. From the sidelines, one thing that frustrates me watching string theorists debate whether the theory can describe de Sitter space is that they rarely articulate what it would take to decisively show that a particular model gives rise to de Sitter. Any conclusion of an adversarial collaboration between de Sitter skeptics and optimists would at least guarantee that both parties agreed on the criteria. Similarly, I get the impression that many debates about interpretations of quantum mechanics are bogged down by one side claiming they’ve closed off a loophole with a new experiment, only for the other to claim it wasn’t the loophole they were actually using, something that could be avoided if both sides were involved in the experiment from the beginning.

It’s possible, even likely, that no-one will try adversarial collaboration for these debates. Even if they did, it’s quite possible the collaborations wouldn’t be able to agree on anything! Still, I have to hope that someone takes the plunge and tries writing a paper with their enemies. At minimum, it’ll be an interesting read!

Strings 2018

I’m at Strings this week, in tropical Okinawa. Opening the conference, organizer Hirosi Ooguri joked that they had carefully scheduled things for a sunny time of year, and since the rainy season had just ended “who says that string theorists don’t make predictions?”

[Image: the conference venue in Okinawa]

There was then a rainstorm during lunch, falsifying string theory

This is the first time I’ve been to Strings. There are almost 500 people here, which might seem small for folks in other fields, but for me this is the biggest conference I’ve attended. The size is noticeable in the little things: this is the first conference I’ve been to with a diaper changing room, the first managed by a tour company, the first with a dedicated “Cultural Evening” featuring classical music from the region. With this in mind, the conference was impressively well-organized, but there were some substantial gaps (tightly packed tours before the Cultural Evening that didn’t leave time for dinner, and a talk by Morrison cut short by missing slides that offset the schedule of the whole last day).

On the well-organized side, Strings has a particular structure for its talks, with Review Talks and Plenary Talks. The Review Talks each summarize a subject: most covered the conference’s main focuses, but a few (Ashoke Sen on String Field Theory, David Simmons-Duffin on the Conformal Bootstrap) covered topics represented by only a handful of talks.

I’m not going to make another pie chart this year; if you want that kind of breakdown, Daniel Harlow gave one during the “Golden Jubilee” at the end. If I did something like that this time, I’d divide it up not by sub-fields, but by goals. Talks here focused on a few big questions: “Can we classify all quantum field theories?” “What are the general principles behind quantum gravity?” “Can we make some of the murky aspects of string theory clearer?” “How can string theory give rise to sensible physics in four dimensions?”

Of those questions, classifying quantum field theories made up the bulk of the conference. I’ve heard people dismiss this work on the grounds that much of it only works in supersymmetric theories. With that in mind, it was remarkable just how much of the conference was non-supersymmetric. Supersymmetry still played a role, but the assumption seemed to be that it was more of a sub-topic than something universal (to the extent that one of the Review Talks, Clay Cordova’s “What’s new with Q?”, was “the supersymmetry review talk”). Both supersymmetric and non-supersymmetric theories are increasingly understood as being part of a “landscape”, linked by duality and thinking at different scales. These links are sometimes understood in terms of string theory, but often not. So far it’s not clear if there is a real organizing principle here, especially for the non-supersymmetric cases, and people seem to be kept busy enough just proving the links they observe.

Finding general principles behind quantum gravity motivated a decent range of the talks, from Andrew Strominger to Jorge Santos. The topics that got the most focus, and two of the Review Talks, were by what I’ve referred to as “entanglers”, people investigating the structure of space and time via quantum entanglement and entropy. My main takeaway from these talks was perhaps a bit frivolous: between Maldacena’s talk (about an extremely small wormhole made from Standard Model-compatible building blocks) and Hartman’s discussion of the Average Null Energy Condition, it looks like a “useful sci-fi wormhole” (specifically, one that gets you there faster than going the normal way) has been conclusively ruled out in quantum field theory.

Only a minority of talks discussed using string theory to describe the real world, though I get the impression this was still more focus than in past years. In particular, there were several talks trying to discover properties of Calabi-Yaus, the geometries used to curl up string theory’s extra dimensions. Watching these talks I had a similar worry to Strominger’s question after Irene Valenzuela’s talk: it’s not clear that these investigations aren’t just examining a small range of possibilities, one that might become irrelevant if new dualities or types of compactification are found. Ironically, this objection seems to apply least to Valenzuela’s talk itself: characterizing the “swampland” of theories that don’t make sense as part of a theory of quantum gravity may start with examples from string compactifications, but its practitioners are looking for more general principles about quantum gravity and seem to manage at least reasonable arguments that don’t depend on string theory being true.

There wasn’t much from the amplitudes field at this conference, with just Yu-tin Huang’s talk carrying that particular flag. Despite that, amplitudes methods came up in several talks, with Silviu Pufu praising an amplitudes textbook and David Simmons-Duffin bringing up amplitudes several times (more than he did in his talk last week at Amplitudes).

The end of the conference featured a panel discussion in honor of String Theory’s 50th Anniversary, its “Golden Jubilee”. The panel was evenly split between founders of string theory, heroes of the string duality revolution, and the current crop of young theorists. The panelists started by each giving a short presentation. Michael Green joked that it felt like a “geriatric gong show”, and indeed a few of the presentations were gong show-esque. Still, some of the speeches were inspiring. I was particularly impressed by Juan Maldacena, Eva Silverstein, and Daniel Harlow, who each laid out a compelling direction for string theory’s future. The questions afterwards were collated by David Gross from audience submissions, and were largely what you would expect, with quite a lot of questions about whether string theory can ever connect with experiment. I was more than a little disappointed by the discussion of whether string theory can give rise to de Sitter space, which was rather botched: Maldacena was appointed as the defender of de Sitter, but (contra Gross’s summary) the quantum complexity-based derivation he proposed didn’t sound much like the flux compactifications that have inspired so much controversy, so everyone involved ended up talking past each other.

Edit: See Shamit’s comment below; I apparently misunderstood what Maldacena was referring to.

Calabi-Yaus for Higgs Phenomenology

less joking title:

You Didn’t Think We’d Stop at Elliptics, Did You?

When calculating scattering amplitudes, I like to work with polylogarithms. They’re a very well-understood type of mathematical function, and thus pretty easy to work with.

Even for our favorite theory of N=4 super Yang-Mills, though, they’re not the whole story. You need other types of functions to represent amplitudes, elliptic polylogarithms that are only just beginning to be properly understood. We had our own modest contribution to that topic last year.

You can think of the difference between these functions in terms of more and more complicated curves. Polylogarithms just need circles or spheres; elliptic polylogarithms can be described with a torus.
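(If you’d like something concrete, the simplest polylogarithms are the classical ones, defined by the series

Li_n(z) = z + z^2/2^n + z^3/3^n + …

so that Li_1(z) = −log(1 − z). The functions that show up in amplitudes are multi-variable generalizations of these, and the elliptic versions replace the underlying sphere with a torus.)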

A torus is far from the most complicated curve you can think of, though.

[Image: a Calabi-Yau manifold]

String theorists have done a lot of research into complicated curves, in particular ones with a property called Calabi-Yau. They were looking for ways to curl up six or seven extra dimensions, to get down to the four we experience. They wanted to find ways of curling that preserved some supersymmetry, in the hope that they could use it to predict new particles, and it turned out that Calabi-Yau was the condition they needed.

That hope, for the most part, didn’t pan out. There were too many Calabi-Yaus to check, and the LHC hasn’t seen any supersymmetric particles. Today, “string phenomenologists”, who try to use string theory to predict new particles, are a relatively small branch of the field.

This research did, however, have lasting impact: due to string theorists’ interest, there are huge databases of Calabi-Yau curves, and fruitful dialogues with mathematicians about classifying them.

This has proven quite convenient for us, as we happen to have some Calabi-Yaus to classify.

[Image: a traintrack Feynman diagram]

Our midnight train going anywhere…in the space of Calabi-Yaus

We call Feynman diagrams like the one above “traintrack integrals”. With two loops, it’s the elliptic integral we calculated last year. With three, though, you need a type of Calabi-Yau curve called a K3. With four loops, it looks like you start needing Calabi-Yau three-folds, the type of space used to compactify string theory to four dimensions.
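Laid out as a pattern (with the general-loop statement our expectation, not something we’ve proven):

2 loops → elliptic curve (a Calabi-Yau one-fold)
3 loops → K3 (a Calabi-Yau two-fold)
4 loops → Calabi-Yau three-fold
…
L loops → a Calabi-Yau (L − 1)-fold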

“We” in this case is myself, Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, and Yang-Hui He, a Calabi-Yau expert we brought on to help us classify these things. Our new paper investigates these integrals, and the more and more complicated curves needed to compute them.

Calabi-Yaus had been seen in amplitudes before, in diagrams called “sunrise” or “banana” integrals. Our example shows that they should occur much more broadly. “Traintrack” integrals appear in our favorite N=4 super Yang-Mills theory, but they also appear in theories involving just scalar fields, like the Higgs boson. For enough loops and particles, we’re going to need more and more complicated functions, not just the polylogarithms and elliptic polylogarithms that people understand.

(And to be clear, no, nobody needs to do this calculation for Higgs bosons in practice. This diagram would calculate the result of two Higgs bosons colliding and producing ten or more Higgs bosons, all at energies so high you can ignore their mass, which is…not exactly relevant for current collider phenomenology. Still, the title proved too tempting to resist.)

Is there a way to understand traintrack integrals like we understand polylogarithms? What kinds of Calabi-Yaus do they pick out, in the vast space of these curves? We’d love to find out. For the moment, we just wanted to remind all the people excited about elliptic polylogarithms that there’s quite a bit more strangeness to find, even if we don’t leave the tracks.