
By A. Physicist

…because it disagrees with precision electroweak measurements

…………………………………..with bounds from ATLAS and CMS

…………………………………..with the power spectrum of the CMB

…………………………………..with Eötvös experiments

…because it isn’t gauge invariant

………………………….Lorentz invariant

………………………….diffeomorphism invariant

………………………….background-independent, whatever that means

…because it violates unitarity

…………………………………locality

…………………………………causality

…………………………………observer-independence

…………………………………technical naturalness

…………………………………international treaties

…………………………………cosmic censorship

…because you screwed up the calculation

…because you didn’t actually do the calculation

…because I don’t understand the calculation

…because you predict too many magnetic monopoles

……………………………………too many proton decays

……………………………………too many primordial black holes

…………………………………..remnants, at all

…because it’s fine-tuned

…because it’s suspiciously finely-tuned

…because it’s finely tuned to be always outside of experimental bounds

…because you’re misunderstanding quantum mechanics

…………………………………………………………..black holes

………………………………………………………….effective field theory

…………………………………………………………..thermodynamics

…………………………………………………………..the scientific method

…because Condensed Matter would contribute more to Chinese GDP

…because the approximation you’re making is unjustified

…………………………………………………………………………is not valid

…………………………………………………………………………is wildly overoptimistic

………………………………………………………………………….is just kind of lazy

…because there isn’t a plausible UV completion

…because you care too much about the UV

…because it only works in polynomial time

…………………………………………..exponential time

…………………………………………..factorial time

…because even if it’s fast it requires more memory than any computer on Earth

…because it requires more bits of memory than atoms in the visible universe

…because it has no meaningful advantages over current methods

…because it has meaningful advantages over my own methods

…because it can’t just be that easy

…because it’s not the kind of idea that usually works

…because it’s not the kind of idea that usually works in my field

…because it isn’t canonical

…because it’s ugly

…because it’s baroque

…because it ain’t baroque, and thus shouldn’t be fixed

…because only a few people work on it

…because far too many people work on it

…because clearly it will only work for the first case

……………………………………………………………….the first two cases

……………………………………………………………….the first seven cases

……………………………………………………………….the cases you’ve published and no more

…because I know you’re wrong

…because I strongly suspect you’re wrong

…because I strongly suspect you’re wrong, but saying I know you’re wrong looks better on a grant application

…….in a blog post

…because I’m just really pessimistic about something like that ever actually working

…because I’d rather work on my own thing, that I’m much more optimistic about

…because if I’m clear about my reasons

……and what I know

…….and what I don’t

……….then I’ll convince you you’re wrong.

……….or maybe you’ll convince me?

# Unreasonably Big Physics

The Large Hadron Collider is big: its ring is about eight and a half kilometers across, twenty-seven kilometers around. It’s expensive, with a cost to construct and operate in the billions. And with an energy of 6.5 TeV per proton, it’s the most powerful collider in the world, accelerating protons to 0.99999999 of the speed of light.
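That last number is a one-line consequence of the beam energy. Here’s a minimal back-of-the-envelope sketch in Python, assuming only the 6.5 TeV per proton quoted above and the standard proton rest mass:

```python
# How close to light speed is a 6.5 TeV proton?
E_beam = 6.5e3       # beam energy per proton, in GeV (quoted above)
m_proton = 0.938272  # proton rest mass, in GeV

gamma = E_beam / m_proton             # Lorentz factor, from E = gamma m c^2
beta = (1.0 - 1.0 / gamma**2) ** 0.5  # v/c = sqrt(1 - 1/gamma^2)

print(f"gamma = {gamma:.0f}")  # roughly 6900
print(f"v/c = {beta:.8f}")     # 0.99999999
```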

The LHC is reasonable. After all, it was funded and built. What does an unreasonable physics proposal look like?

It’s probably unfair to call the Superconducting Super Collider unreasonable; after all, it did almost get built. It would have been a 28-kilometer-wide circle in the Texas desert, accelerating protons to an energy of 20 TeV, three times the energy of the LHC. When it was cancelled in 1993, it was projected to cost twelve billion dollars, and two billion had already been spent digging the tunnel. The US hasn’t invested in a similarly sized project since.

A better example of an unreasonable proposal might be the Collider-in-the-Sea. (If that link is paywalled, this paper covers most of the same information.)

If you run out of room on land, why not build your collider underwater?

Ok, there are pretty obvious reasons why not. Surprisingly, the people proposing the Collider-in-the-Sea do a decent job of answering them. They plan to put it far enough out that it won’t disrupt shipping, and deep enough down that it won’t interfere with fish. Apparently at those depths even a hurricane barely ripples the water, and they argue that the technology exists to keep a floating ring stable under those conditions. All in all, they’re imagining a collider 600 kilometers in diameter, accelerating protons to 250 TeV, all for a cost they claim would be roughly comparable to the (substantially smaller) new colliders that China and Europe are considering.

I’m sure that there are reasons I’ve overlooked why this sort of project is impossible. (I mean, just look at the map!) Still, it’s impressive that they can marshal this much of an argument.

Besides, there are even more impossible projects, like this one, by Sugawara, Hagura, and Sanami. Their proposal for a 1000 TeV neutrino beam isn’t intended for research: rather, the idea is a beam powerful enough to send neutrinos through the Earth to destroy nuclear bombs. Such a beam could cause the bombs to detonate prematurely, “fizzling” with about 3% of the yield they would otherwise have.

In this case, Sugawara and co. admit that their proposal is pure fantasy. With current technology they would need a ring larger than the Collider-in-the-Sea, and the project would cost hundreds of billions of dollars. It’s not even clear who would want to build such a machine, or who could get away with building it: the authors imagine a science fiction-esque world government to foot the bill.

There’s a spectrum of papers that scientists write, from whimsical speculation to serious work. The press doesn’t always make the distinction clear, so it’s a useful skill to spot the clues in the writing that show where a given proposal lands. In the case of the Sugawara and co. proposal, the paper is littered with caveats, explicitly making it clear that it’s just a rough estimate. Even the first line, dedicating the paper to another professor, should get you to look twice: while this sometimes happens on serious papers, often it means the paper was written as a fun gift for the professor in question. The Collider-in-the-Sea doesn’t have these kinds of warning signs, and it’s clear its authors take it a bit more seriously. Nonetheless, comparing the level of detail to other accelerator proposals, even those from the same people, should suggest that the Collider-in-the-Sea isn’t entirely on the same level. As wacky as it is to imagine, we probably won’t get a collider that takes up most of the Gulf of Mexico, or a massive neutrino beam capable of blowing up nukes around the world.

# Tutoring at GGI

I’m still at the Galileo Galilei Institute this week, tutoring at the winter school.

At GGI’s winter school, each week features a pair of lecturers. This week, the lectures alternate between Lance Dixon, covering the basics of amplitudeology, and Csaba Csaki, discussing ways in which the Higgs could be a composite made up of new fundamental particles.

Most of the students at this school are phenomenologists, physicists who make predictions for particle physics. I’m an amplitudeologist: I study the calculation tools behind those predictions. You’d think these would be very close areas, but it’s been interesting seeing how different our approaches really are.

Some of the difference is apparent just from watching the board. In Csaki’s lectures, the equations that show up are short, a few terms long at most. When amplitudes show up, it’s for their general properties: how many factors of the coupling constant, or the multipliers that show up with loops. There aren’t any long technical calculations, and in general they aren’t needed: he’s arguing about the kinds of physics that can show up, not the specifics of how they give rise to precise numbers.

In contrast, Lance’s board filled up with longer calculations, each with many moving parts. Even things that seem simple from our perspective take a decent amount of board space to derive, and involve no small amount of technical symbol-shuffling. For most of the students, working out an amplitude this complicated was an unfamiliar experience. There are a few applications for which you need the kind of power that amplitudeology provides, and a few students were working on them. For the rest, it was a bit like learning about a foreign culture, an exercise in understanding what other people are doing rather than picking up a new skill themselves. Still, they made a strong go at it, and it was enlightening to see the pieces that ended up mattering to them, and to hear the kinds of questions they asked.

# Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics at all!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t achieve this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.

*The Hunchback of Notre Science*

It wasn’t always this way. Up until the nineteenth century, being a scientist was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a $1/r^2$ force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As curator of experiments for the Royal Society, it was his responsibility to set up demonstrations for each Royal Society meeting. Later, he also handled correspondence for the Royal Society Journal. These responsibilities took up much of his time, and as a result, even if he was capable of following up on the consequences of $1/r^2$ he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke expectations have changed, and real original research is no longer something we have to fit in our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until they’ve either learned something productive or their old topic becomes useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just number of papers and citations, if we manage all of that then yes we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.

*A physicist lazing about unproductively under an apple tree*

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object, isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time, there have been plenty of cases in which people have done that, where a number of fiddly hand-made models have been summarized in one parameter space. Once that happens, the rule of originality kicks in: now, no-one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that, they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.
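To make “writing the recipe down” concrete, here’s a deliberately cartoonish sketch in Python. Every number and name in it is invented for illustration; the point is just that if model-tweaking really were this mechanical, releasing the code would end the game:

```python
# A cartoon "recipe" for staying just beyond experimental bounds.
# All numbers and names here are invented, purely for illustration.

def tweak_model(current_bound_tev, margin=0.5):
    """Propose a new-particle mass just past the latest exclusion limit."""
    return current_bound_tev + margin

bound = 1.0  # pretend exclusion limit, in TeV
for search in ["Run 1", "Run 2", "Run 3", "Upgrade"]:
    mass = tweak_model(bound)
    print(f"After {search}: bound {bound:.1f} TeV, so propose {mass:.1f} TeV")
    bound = mass  # the next search catches up, and the loop repeats
```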

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend and focusing not on covering the “next big thing”, but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms-race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.

# One, Two, Infinity

Physicists and mathematicians count one, two, infinity.

We start with the simplest case, as a proof of principle. We take a stripped down toy model or simple calculation and show that our idea works. We count “one”, and we publish.

Next, we let things get a bit more complicated. In the next toy model, or the next calculation, new interactions can arise. We figure out how to deal with those new interactions, our count goes from “one” to “two”, and once again we publish.

By this point, hopefully, we understand the pattern. We know what happens in the simplest case, and we know what happens when the different pieces start to interact. If all goes well, that’s enough: we can extrapolate our knowledge to understand not just case “three”, but any case: any model, any calculation. We publish the general case, the general method. We’ve counted one, two, infinity.

Once we’ve counted “infinity”, we don’t have to do any more cases. And so “infinity” becomes the new “zero”, and the next type of calculation you don’t know how to do becomes “one”. It’s like going from addition to multiplication, from multiplication to exponentiation, from exponentials up into the wilds of up-arrow notation. Each time, once you understand the general rules you can jump ahead to an entirely new world with new capabilities…and repeat the same process again, on a new scale. You don’t need to count one, two, three, four, on and on and on.
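That ladder of operations can itself be written down compactly. Here’s a minimal sketch in Python of the hyperoperations, the sequence running from addition through multiplication and exponentiation into up-arrow territory (my own illustration, not anything from the research described here):

```python
def hyper(n, a, b):
    """Hyperoperations: n=1 is addition, n=2 multiplication,
    n=3 exponentiation, n=4 tetration (a step into up-arrow notation)."""
    if n == 1:
        return a + b
    if b == 1:
        return a  # each level's base case: a single copy of a
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(hyper(1, 2, 3))  # 5:  2 + 3
print(hyper(2, 2, 3))  # 6:  2 * 3
print(hyper(3, 2, 3))  # 8:  2 ** 3
print(hyper(4, 2, 3))  # 16: 2 ** (2 ** 2)
```

Each level is defined by repeating the one below it, which is exactly the “new world with new capabilities” structure: once you understand one level’s rules, the next level opens up.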

Of course, research doesn’t always work out this way. My last few papers counted three, four, five, with six on the way. (One and two were already known.) Unlike the ideal cases that go one, two, infinity, here “two” doesn’t give all the pieces you need to keep going. You need to go a few numbers more to get novel insights. That said, we are thinking about “infinity” now, so look forward to a future post that says something about that.

A lot of frustration in physics comes from situations when “infinity” remains stubbornly out of reach. When people complain about all the models for supersymmetry, or inflation, in some sense they’re complaining about fields that haven’t taken that “infinity” step. One or two models of inflation are nice, but by the time the count reaches ten you start hoping that someone will describe all possible models of inflation in one paper, and see if they can make any predictions from that.

(In particle physics, there’s an extent to which people can actually do this. There are methods, based on effective field theory, to describe all possible modifications of the Standard Model in terms of what sort of effects they can have on observations of known particles. There’s a group at NBI who work on this sort of thing.)
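Schematically, that approach is an effective-field-theory expansion. In its standard textbook form (not anything specific to the NBI group’s own papers):

$$\mathcal{L}_{\text{eff}} = \mathcal{L}_{\text{SM}} + \sum_i \frac{c_i}{\Lambda^2}\,\mathcal{O}_i + \cdots$$

where the $\mathcal{O}_i$ run over all the higher-dimensional interactions allowed by the Standard Model’s symmetries, $\Lambda$ is the scale of the new physics, and measurements of known particles constrain the coefficients $c_i$.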

The gold standard, though, is one, two, infinity. Our ability to step back, stop working case-by-case, and move on to the next level is not just a cute trick: it’s a foundation for exponential progress. If we can count one, two, infinity, then there’s nowhere we can’t reach.

# What Space Can Tell Us about Fundamental Physics

Back when LIGO announced its detection of gravitational waves, there was one question people kept asking me: “what does this say about quantum gravity?”

The answer, each time, was “nothing”. LIGO’s success told us nothing about quantum gravity, and very likely LIGO will never tell us anything about quantum gravity.

The sheer volume of questions made me think, though. Astronomy, astrophysics, and cosmology fascinate people. They capture the public’s imagination in a way that makes them expect breakthroughs about fundamental questions. Especially now, with the LHC so far seeing nothing new since the Higgs, people are turning to space for answers.

Is that a fair expectation? Well, yes and no.

Most astrophysicists aren’t concerned with finding new fundamental laws of nature. They’re interested in big systems like stars and galaxies, where we know most of the basic rules but can’t possibly calculate all their consequences. Like most physicists, they’re doing the vital work of “physics of decimals”.

At the same time, there’s a decent chunk of astrophysics and cosmology that does matter for fundamental physics. Just not all of it. Here are some of the key areas where space has something important to say about the fundamental rules that govern our world:

1. Dark Matter:

Galaxies rotate faster than their visible stars and gas alone would allow. Clusters of galaxies bend light that passes by, and do so more than their visible mass would suggest. And when scientists try to model the evolution of the universe, from early images to its current form, the models require an additional piece: extra matter that cannot interact with light. All of this suggests that there is some extra “dark” matter in the universe, not described by our standard model of particle physics.
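To see the first piece of evidence concretely, here’s a toy comparison in Python. The numbers are illustrative, not a fit to any real galaxy; the point is that speeds predicted from visible mass alone should fall off with radius, while measured rotation curves stay roughly flat:

```python
# Toy rotation curve: prediction from visible mass vs. flat observation.
# All numbers are illustrative, not a fit to any real galaxy.
import math

G = 4.30e-6       # Newton's constant, in kpc (km/s)^2 per solar mass
M_visible = 1e11  # visible mass, in solar masses

for r in [10, 20, 40, 80]:  # radii in kiloparsecs, outside most of the mass
    v_predicted = math.sqrt(G * M_visible / r)  # Kepler: falls as 1/sqrt(r)
    print(f"r = {r:2d} kpc: visible mass predicts ~{v_predicted:3.0f} km/s; "
          "observed curves stay near ~200 km/s")
```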

If we want to understand this dark matter, we need to know more about its properties, and much of that can be learned from astronomy. If it turns out dark matter isn’t really matter after all, if it can be explained by a modification of gravity or better calculations of gravity’s effects, then it still will have important implications for fundamental physics, and astronomical evidence will still be key to finding those implications.

2. Dark Energy (/Cosmological Constant/Inflation/…):

The universe is expanding, and its expansion appears to be accelerating. It also seems more smooth and uniform than expected, suggesting that it had a period of much greater acceleration early on. Both of these suggest some extra quantity: a changing acceleration, a “dark energy”, the sort of thing that can often be explained by a new scalar field like the Higgs.
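In equations, this is textbook cosmology rather than anything model-specific: the expansion accelerates or decelerates depending on the pressure of what fills the universe,

$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p), \qquad p = w\,\rho,$$

so accelerated expansion requires something with $w < -1/3$. A cosmological constant sits at $w = -1$, and inflation needs a similar condition in the early universe.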

Again, the specifics matter: how (and perhaps whether) the universe is expanding now, and what kind of early expansion (if any) the shape of the universe suggests, will almost certainly have implications for fundamental physics.

3. Limits on stable stuff:

Let’s say you have a new proposal for particle physics. You’ve predicted a new particle, but it can’t interact with anything else, or interacts so weakly we’d never detect it. If your new particle is stable, then you can still say something about it, because its mass would have an effect on the early universe. Too many such particles and they would throw off cosmologists’ models, ruling them out.
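The rough rule behind this kind of bound, for a stable particle that was once in thermal equilibrium, is the standard freeze-out estimate (a textbook approximation, not a statement about any particular model):

$$\Omega_X h^2 \sim \frac{3 \times 10^{-27}\,\mathrm{cm}^3\,\mathrm{s}^{-1}}{\langle \sigma v \rangle},$$

so the more weakly the particle annihilates (the smaller $\langle \sigma v \rangle$), the more of it survives today. Demanding that it not exceed the measured dark matter density, $\Omega h^2 \approx 0.12$, rules out models that overproduce it.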

Alternatively, you might predict something that could be detected, but hasn’t, like a magnetic monopole. Then cosmologists can tell you how many such particles would have been produced in the early universe, and thus how likely we would be to detect them today. If you predict too many particles and we don’t see them, then that becomes evidence against your proposal.

4. “Cosmological Collider Physics”:

A few years back, Nima Arkani-Hamed and Juan Maldacena suggested that the early universe could be viewed as an extremely high energy particle collider. While this collider performed only one experiment, the results from that experiment are spread across the sky, and observed patterns in the early universe should tell us something about the particles produced by the cosmic collider.

People are still teasing out the implications of this idea, but it looks promising, and could mean we have a lot more to learn from examining the structure of the universe.

5. Big Weird Space Stuff:

If you suspect we live in a multiverse, you might want to look for signs of other universes brushing up against our own. If your model of the early universe predicts vast cosmic strings, maybe a gravitational wave detector like LIGO will be able to see them.

6. Unexpected weirdness:

In all likelihood, nothing visibly “quantum” happens at the event horizons of astrophysical black holes. If you think there’s something to see though, the Event Horizon Telescope might be able to see it. There’s a grab bag of other predictions like this: situations where we probably won’t see anything, but where at least one person thinks there’s a question worth asking.

I’ve probably left something out here, but this should give you a general idea. There is a lot that fundamental physics can learn from astronomy, from the overall structure and origins of the universe to unexplained phenomena like dark matter. But not everything in astronomy has these sorts of implications: for the most part, astronomy is interesting not because it tells us something about the fundamental laws of nature, but because it tells us how the vast space above us actually happens to work.

# What If the Field Is Doomed?

Around Halloween, I have a tradition of exploring the spooky and/or scary side of physics (sometimes rather tenuously). This time, I want to talk about something particle physicists find scary: the future of the field.

For a long time now, our field has centered on particle colliders. Early colliders confirmed the existence of quarks and gluons, and populated the Standard Model with a wealth of particles, some expected and some not. Now, an enormous amount of effort has poured into the Large Hadron Collider, which found the Higgs…and so far, nothing else.

Plans are being discussed for an even larger collider, in Europe or China, but it’s not clear that either will be funded. Even if the case for discovering new particles at such a collider isn’t as strong as it was for the LHC, there are properties of the Higgs that the LHC won’t be able to measure, things that are important to check with a more powerful machine.

That’s the case we’ll have to make to the public, if we want such a collider to be built. But in addition to the scientific reasons, there are selfish reasons to hope for a new collider. Without one, it’s not clear the field can survive in its current form.

By “the field”, here, I don’t just mean those focused on making predictions for collider physics. My work isn’t plugged particularly tightly into the real world, and the same is true of most string theorists. Naively, you’d think it wouldn’t matter to us if a new collider gets built.

The trouble is, physics is interconnected. We may not all make predictions about the world, but the purpose of the tools we build and concepts we explore is to eventually make contact. On grant applications, we talk about that future, one that leads not just to understanding the mathematics and models we use but to understanding reality. And for a long while, a major theme in those grant applications has been collider physics.

Different sub-fields are vulnerable to this in different ways. Surprisingly, the people who directly make predictions for the LHC might have it easiest. Many of them can pivot, and make predictions for cosmological observations and cheaper dark matter detection experiments. Quite a few are already doing so.

It’s harder for my field, for amplitudeology. We try to push the calculation techniques of theoretical physics to greater and greater precision…but without colliders, there are fewer experiments that can match that precision. Cosmological observations and dark matter detection won’t need four-loop calculations.

If there isn’t a next big collider, our field won’t dry up overnight. Our work is disconnected enough, at a far enough remove from reality, that it takes time for that sort of change to be reflected in our funding. Optimistically, this gives people enough time to change gears and shift their focus to the less collider-dependent parts of the field. Pessimistically, it means people would be working in a zombie field, shambling around in a subject that is already dead but can’t admit it.

*Well, I had to use some Halloween imagery*

My hope is that this won’t happen. Even if the new colliders don’t get approved and collider physics goes dormant, I’d like to think my colleagues are adaptable enough to stay useful as the world’s demands change. But I’m young in this field, I haven’t seen it face these kinds of challenges before. And so, I worry.