# Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics at all!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t achieve this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.

*The Hunchback of Notre Science*

It wasn’t always this way. Up until the nineteenth century, science was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a $1/r^2$ force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As curator of experiments for the Royal Society, it was his responsibility to set up demonstrations for each Royal Society meeting. Later, he also handled correspondence for the Royal Society Journal. These responsibilities took up much of his time, and as a result, even if he was capable of following up on the consequences of $1/r^2$, he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke, expectations have changed, and real original research is no longer something we have to fit into our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until they’ve either learned something productive or their old topic becomes useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just the number of papers and citations, if we manage all of that, then yes, we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.

*A physicist lazing about unproductively under an apple tree*

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.
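To see concretely why that attitude matters, here is a minimal simulation of the “tweak until just significant” recipe (my own illustration, not taken from any particular study): flip a fair coin, test for bias after every ten flips, and stop the moment the test comes out “significant”. Even with no real effect at all, the nominal 5% false-positive rate gets badly inflated.

```python
import random

random.seed(0)

def peeking_trial(max_n=200, check_every=10, z_crit=1.96):
    """One simulated study of a fair coin (no real effect):
    test for bias after every ten flips and stop at the first
    'significant' result -- the tweak-until-significant recipe."""
    heads = 0
    for n in range(1, max_n + 1):
        heads += random.random() < 0.5
        if n % check_every == 0:
            # z-score for the observed head count under the fair-coin null
            z = (heads - n / 2) / (0.5 * n ** 0.5)
            if abs(z) > z_crit:
                return True  # declared "significant", though nothing is there
    return False

false_positives = sum(peeking_trial() for _ in range(2000)) / 2000
print(false_positives)  # well above the nominal 0.05
```

The repeated peeking is what does the damage: each extra look is another chance for noise to cross the threshold, which is why “just within statistical significance” deserves so little trust.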

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object, isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time, there have been plenty of cases in which people have done that, where a number of fiddly hand-made models have been summarized in one parameter space. Once that happens, the rule of originality kicks in: now, no-one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that, they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend and focusing not on covering the “next big thing”, but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms-race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.

# One, Two, Infinity

Physicists and mathematicians count one, two, infinity.

We start with the simplest case, as a proof of principle. We take a stripped down toy model or simple calculation and show that our idea works. We count “one”, and we publish.

Next, we let things get a bit more complicated. In the next toy model, or the next calculation, new interactions can arise. We figure out how to deal with those new interactions, our count goes from “one” to “two”, and once again we publish.

By this point, hopefully, we understand the pattern. We know what happens in the simplest case, and we know what happens when the different pieces start to interact. If all goes well, that’s enough: we can extrapolate our knowledge to understand not just case “three”, but any case: any model, any calculation. We publish the general case, the general method. We’ve counted one, two, infinity.

Once we’ve counted “infinity”, we don’t have to do any more cases. And so “infinity” becomes the new “zero”, and the next type of calculation you don’t know how to do becomes “one”. It’s like going from addition to multiplication, from multiplication to exponentiation, from exponentials up into the wilds of up-arrow notation. Each time, once you understand the general rules you can jump ahead to an entirely new world with new capabilities…and repeat the same process again, on a new scale. You don’t need to count one, two, three, four, on and on and on.
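For the curious, that ladder of operations can be written down directly. Here’s a small sketch of Knuth-style hyperoperations (the levels and naming follow the standard convention, not anything specific to this post):

```python
def hyper(level, a, b):
    """Knuth-style hyperoperations: level 1 is addition,
    level 2 is multiplication, level 3 is exponentiation,
    level 4 is tetration (a tower of b copies of a), and so on."""
    if level == 1:
        return a + b
    if b == 1:
        return a  # a*1 = a, a**1 = a, a^^1 = a, ...
    # Each level is repeated application of the level below it.
    return hyper(level - 1, a, hyper(level, a, b - 1))

print(hyper(2, 3, 4))  # 3 * 4 = 12
print(hyper(3, 2, 3))  # 2 ** 3 = 8
print(hyper(4, 2, 3))  # 2 ** (2 ** 2) = 16
```

The recursion is the whole point: once you understand one level’s general rule, the next level is just that rule applied repeatedly, and the numbers explode far faster than counting ever could.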

Of course, research doesn’t always work out this way. My last few papers counted three, four, five, with six on the way. (One and two were already known.) Unlike the ideal cases that go one, two, infinity, here “two” doesn’t give all the pieces you need to keep going. You need to go a few numbers more to get novel insights. That said, we are thinking about “infinity” now, so look forward to a future post that says something about that.

A lot of frustration in physics comes from situations when “infinity” remains stubbornly out of reach. When people complain about all the models for supersymmetry, or inflation, in some sense they’re complaining about fields that haven’t taken that “infinity” step. One or two models of inflation are nice, but by the time the count reaches ten you start hoping that someone will describe all possible models of inflation in one paper, and see if they can make any predictions from that.

(In particle physics, there’s an extent to which people can actually do this. There are methods to describe all possible modifications of the Standard Model in terms of what sort of effects they can have on observations of known particles. There’s a group at NBI who work on this sort of thing.)
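The approach in that parenthetical is, as far as I can tell, effective field theory: write the most general set of corrections to the Standard Model allowed by its fields and symmetries, organized by how much each can matter at low energies. Schematically (the symbols below are the standard EFT convention, not notation from this post):

$$\mathcal{L}_{\text{eff}} = \mathcal{L}_{\text{SM}} + \sum_i \frac{c_i}{\Lambda^2}\,\mathcal{O}_i^{(6)} + \cdots$$

where $\Lambda$ is the scale of whatever new physics there is, the $\mathcal{O}_i^{(6)}$ are the possible dimension-six operators built from Standard Model fields, and constraining the coefficients $c_i$ constrains every model at once: exactly the “infinity” step.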

The gold standard, though, is one, two, infinity. Our ability to step back, stop working case-by-case, and move on to the next level is not just a cute trick: it’s a foundation for exponential progress. If we can count one, two, infinity, then there’s nowhere we can’t reach.

# What Space Can Tell Us about Fundamental Physics

Back when LIGO announced its detection of gravitational waves, there was one question people kept asking me: “what does this say about quantum gravity?”

The answer, each time, was “nothing”. LIGO’s success told us nothing about quantum gravity, and very likely LIGO will never tell us anything about quantum gravity.

The sheer volume of questions made me think, though. Astronomy, astrophysics, and cosmology fascinate people. They capture the public’s imagination in a way that makes them expect breakthroughs about fundamental questions. Especially now, with the LHC so far seeing nothing new since the Higgs, people are turning to space for answers.

Is that a fair expectation? Well, yes and no.

Most astrophysicists aren’t concerned with finding new fundamental laws of nature. They’re interested in big systems like stars and galaxies, where we know most of the basic rules but can’t possibly calculate all their consequences. Like most physicists, they’re doing the vital work of “physics of decimals”.

At the same time, there’s a decent chunk of astrophysics and cosmology that does matter for fundamental physics. Just not all of it. Here are some of the key areas where space has something important to say about the fundamental rules that govern our world:

1. Dark Matter:

Galaxies rotate at different speeds than their stars would alone. Clusters of galaxies bend light that passes by, and do so more than their visible mass would suggest. And when scientists try to model the evolution of the universe, from early images to its current form, the models require an additional piece: extra matter that cannot interact with light. All of this suggests that there is some extra “dark” matter in the universe, not described by our standard model of particle physics.

If we want to understand this dark matter, we need to know more about its properties, and much of that can be learned from astronomy. If it turns out dark matter isn’t really matter after all, if it can be explained by a modification of gravity or better calculations of gravity’s effects, then it still will have important implications for fundamental physics, and astronomical evidence will still be key to finding those implications.

2. Dark Energy (/Cosmological Constant/Inflation/…):

The universe is expanding, and its expansion appears to be accelerating. It also seems more smooth and uniform than expected, suggesting that it had a period of much greater acceleration early on. Both of these suggest some extra quantity: a changing acceleration, a “dark energy”, the sort of thing that can often be explained by a new scalar field like the Higgs.

Again, the specifics matter: how (and perhaps whether) the universe is expanding now, and what kinds of early expansion (if any) the shape of the universe suggests, will almost certainly have implications for fundamental physics.

3. Limits on stable stuff:

Let’s say you have a new proposal for particle physics. You’ve predicted a new particle, but it can’t interact with anything else, or interacts so weakly we’d never detect it. If your new particle is stable, then you can still say something about it, because its mass would have an effect on the early universe. If your proposal predicts too many such particles, they would throw off cosmologists’ models, ruling it out.

Alternatively, you might predict something that could be detected, but hasn’t, like a magnetic monopole. Then cosmologists can tell you how many such particles would have been produced in the early universe, and thus how likely we would be to detect them today. If you predict too many particles and we don’t see them, then that becomes evidence against your proposal.

4. “Cosmological Collider Physics”:

A few years back, Nima Arkani-Hamed and Juan Maldacena suggested that the early universe could be viewed as an extremely high energy particle collider. While this collider performed only one experiment, the results from that experiment are spread across the sky, and observed patterns in the early universe should tell us something about the particles produced by the cosmic collider.

People are still teasing out the implications of this idea, but it looks promising, and could mean we have a lot more to learn from examining the structure of the universe.

5. Big Weird Space Stuff:

If you suspect we live in a multiverse, you might want to look for signs of other universes brushing up against our own. If your model of the early universe predicts vast cosmic strings, maybe a gravitational wave detector like LIGO will be able to see them.

6. Unexpected weirdness:

In all likelihood, nothing visibly “quantum” happens at the event horizons of astrophysical black holes. If you think there’s something to see though, the Event Horizon Telescope might be able to see it. There’s a grab bag of other predictions like this: situations where we probably won’t see anything, but where at least one person thinks there’s a question worth asking.

I’ve probably left something out here, but this should give you a general idea. There is a lot that fundamental physics can learn from astronomy, from the overall structure and origins of the universe to unexplained phenomena like dark matter. But not everything in astronomy has these sorts of implications: for the most part, astronomy is interesting not because it tells us something about the fundamental laws of nature, but because it tells us how the vast space above us actually happens to work.

# What If the Field Is Doomed?

Around Halloween, I have a tradition of exploring the spooky and/or scary side of physics (sometimes rather tenuously). This time, I want to talk about something particle physicists find scary: the future of the field.

For a long time now, our field has centered around particle colliders. Early colliders confirmed the existence of quarks and gluons, and populated the Standard Model with a wealth of particles, some expected and some not. Now, an enormous amount of effort has been poured into the Large Hadron Collider, which found the Higgs…and so far, nothing else.

Plans are being discussed for an even larger collider, in Europe or China, but it’s not clear that either will be funded. Even if the case for discovering new physics at such a collider isn’t as strong as it was for the LHC, there are properties of the Higgs that the LHC won’t be able to measure, things that are important to check with a more powerful machine.

That’s the case we’ll have to make to the public, if we want such a collider to be built. But in addition to the scientific reasons, there are selfish reasons to hope for a new collider. Without one, it’s not clear the field can survive in its current form.

By “the field”, here, I don’t just mean those focused on making predictions for collider physics. My work isn’t plugged particularly tightly into the real world, and the same is true of most string theorists. Naively, you’d think it wouldn’t matter to us if a new collider gets built.

The trouble is, physics is interconnected. We may not all make predictions about the world, but the purpose of the tools we build and concepts we explore is to eventually make contact. On grant applications, we talk about that future, one that leads not just to understanding the mathematics and models we use but to understanding reality. And for a long while, a major theme in those grant applications has been collider physics.

Different sub-fields are vulnerable to this in different ways. Surprisingly, the people who directly make predictions for the LHC might have it easiest. Many of them can pivot, and make predictions for cosmological observations and cheaper dark matter detection experiments. Quite a few are already doing so.

It’s harder for my field, for amplitudeology. We try to push the calculation techniques of theoretical physics to greater and greater precision…but without colliders, there are fewer experiments that can match that precision. Cosmological observations and dark matter detection won’t need four-loop calculations.

If there isn’t a next big collider, our field won’t dry up overnight. Our work is disconnected enough, at a far enough remove from reality, that it takes time for that sort of change to be reflected in our funding. Optimistically, this gives people enough time to change gears and shift their focus to the less collider-dependent parts of the field. Pessimistically, it means people would be working in a zombie field, one that is already dead but can’t admit it.

*Well I had to use some Halloween imagery*

My hope is that this won’t happen. Even if the new colliders don’t get approved and collider physics goes dormant, I’d like to think my colleagues are adaptable enough to stay useful as the world’s demands change. But I’m young in this field; I haven’t seen it face these kinds of challenges before. And so, I worry.

# Most of String Theory Is Not String Pheno

Last week, Sabine Hossenfelder wrote a post entitled “Why not string theory?” In it, she argued that string theory has a much more dominant position in physics than it ought to: that it’s crowding out alternative theories like Loop Quantum Gravity and hogging much more funding than it actually merits.

If you follow the string wars at all, you’ve heard these sorts of arguments before. There’s not really anything new here.

That said, there were a few sentences in Hossenfelder’s post that got my attention, and inspired me to write this post.

> So far, string theory has scored in two areas. First, it has proved interesting for mathematicians. But I’m not one to easily get floored by pretty theorems – I care about math only to the extent that it’s useful to explain the world. Second, string theory has shown to be useful to push ahead with the **lesser understood aspects of quantum field theories**. This seems a fruitful avenue and is certainly something to continue. However, this has nothing to do with string theory as a theory of quantum gravity and a unification of the fundamental interactions.

(Bolding mine)

Here, Hossenfelder explicitly leaves out string theorists who work on “lesser understood aspects of quantum field theories” from her critique. They’re not the big, dominant program she’s worried about.

What Hossenfelder doesn’t seem to realize is that right now, it is precisely the “aspects of quantum field theories” crowd that is big and dominant. The communities of string theorists working on something else, and especially those making bold pronouncements about the nature of the real world, are much, much smaller.

Let’s define some terms:

Phenomenology (or pheno for short) is the part of theoretical physics that attempts to make predictions that can be tested in experiments. String pheno, then, covers attempts to use string theory to make predictions. In practice, though, it’s broader than that: while some people do attempt to predict the results of experiments, more work on figuring out how models constructed by other phenomenologists can make sense in string theory. This still attempts to test string theory in some sense: if a phenomenologist’s model turns out to be true but it can’t be replicated in string theory then string theory would be falsified. That said, it’s more indirect. In parallel to string phenomenology, there is also the related field of string cosmology, which has a similar relationship with cosmology.

If other string theorists aren’t trying to make predictions, what exactly are they doing? Well, a large number of them are studying quantum field theories. Quantum field theories are currently our most powerful theories of nature, but there are many aspects of them that we don’t yet understand. For a large proportion of string theorists, string theory is useful because it provides a new way to understand these theories in terms of different configurations of string theory, which often uncovers novel and unexpected properties. This is still physics, not mathematics: the goal, in the end, is to understand theories that govern the real world. But it doesn’t involve the same sort of direct statements about the world as string phenomenology or string cosmology: crucially, it doesn’t depend on whether string theory is true.

Last week, I said that before replying to Hossenfelder’s post I’d have to gather some numbers. I was hoping to find some statistics on how many people work on each of these fields, or on their funding. Unfortunately, nobody seems to collect statistics broken down by sub-field like this.

As a proxy, though, we can look at conferences. Strings is the premier conference in string theory. If something has high status in the string community, it will probably get a talk at Strings. So to investigate, I took a look at the talks given last year, at Strings 2015, and broke them down by sub-field.

Here I’ve left out the historical overview talks, since they don’t say much about current research.

“QFT” is for talks about lesser understood aspects of quantum field theories. Amplitudes, my own sub-field, should be part of this: I’ve separated it out to show what a typical sub-field of the QFT block might look like.

“Formal Strings” refers to research into the fundamentals of how to do calculations in string theory: in principle, both the QFT folks and the string pheno folks find it useful.

“Holography” is a sub-topic of string theory in which string theory in some space is equivalent to a quantum field theory on the boundary of that space. Some people study this because they want to learn about quantum field theory from string theory, others because they want to learn about quantum gravity from quantum field theory. Since the field can’t be cleanly divided into quantum gravity and quantum field theory research, I’ve given it its own category.

While all string theory research is in principle about quantum gravity, the “Quantum Gravity” section refers to people focused on the sorts of topics that interest non-string quantum gravity theorists, like black hole entropy.

Finally, we have String Cosmology and String Phenomenology, which I’ve already defined.

Don’t take the exact numbers here too seriously: not every talk fit cleanly into a category, so there were some judgement calls on my part. Nonetheless, this should give you a decent idea of the makeup of the string theory community.

The biggest wedge in the diagram by far, taking up a majority of the talks, is QFT. Throw in Amplitudes (part of QFT) and Formal Strings (useful to both), and you’ve got two thirds of the conference. Even if you believe Hossenfelder’s tale of the failures of string theory, then, that only matters to a third of this diagram. And once you take into account that many of the Holography and Quantum Gravity people are interested in aspects of QFT as well, you’re looking at an even smaller group. Really, Hossenfelder’s criticism is aimed at two small slices of the chart: String Pheno and String Cosmo.

Of course, string phenomenologists also have their own conference. It’s called String Pheno, and last year it had 130 participants. In contrast, Loops 2015, the conference for string theory’s most famous “rival”, had…190 participants. The fields are really pretty comparable.

Now, I have a lot more sympathy for the string phenomenologists and string cosmologists than I do for loop quantum gravity. If other string theorists felt the same way, then maybe that would cause the sort of sociological effect that Hossenfelder is worried about.

But in practice, I don’t think this happens. I’ve met string theorists who didn’t even know that people still did string phenomenology. The two communities are almost entirely disjoint: string phenomenologists and string cosmologists interact much more with other phenomenologists and cosmologists than they do with other string theorists.

You want to talk about sociology? Sociologically, people choose careers and fund research because they expect something to happen soon. People don’t want to be left high and dry by a dearth of experiments, don’t feel comfortable working on something that may only be vindicated long after they’re dead. Most people choose the safe option, the one that, even if it’s still aimed at a distant goal, is also producing interesting results now (aspects of quantum field theories, for example).

The people that don’t? Tend to form small, tight-knit, passionate communities. They carve out a few havens of like-minded people, and they think big thoughts while the world around them seems to only care about their careers.

If you’re a loop quantum gravity theorist, or a quantum gravity phenomenologist like Hossenfelder, and you see some of your struggles in that paragraph, please realize that string phenomenology is like that too.

I feel like Hossenfelder imagines a world in which string theory is struck from its high place, and alternative theories of quantum gravity are of comparable size and power. But from where I’m sitting, it doesn’t look like it would work out that way. Instead, you’d have alternatives grow to the same size as similarly risky parts of string theory, like string phenomenology. And surprise, surprise: they’re already that size.

In certain corners of the internet, people like to argue about “punching up” and “punching down”. Hossenfelder seems to think she’s “punching up”, giving the big dominant group a taste of its own medicine. But by leaving out string theorists who study QFTs, she’s really “punching down”, or at least sideways, and calling out a sub-group that doesn’t have much more power than her own.

# Mass Is Just Energy You Haven’t Met Yet

There is one central misunderstanding that makes each of these topics confusing. It’s something I’ve brought up before, but it really deserves its own post. It’s people not realizing that mass is just energy you haven’t met yet.

It’s quite intuitive to think of mass as some sort of “stuff” that things can be made out of. In our everyday experience, that’s how it works: combine this mass of flour and this mass of sugar, and get this mass of cake. Historically, it was the dominant view in physics for quite some time. However, once you get to particle physics it starts to break down.

It’s probably most obvious for protons. A proton has a mass of 938 MeV/c², or 1.6×10⁻²⁷ kg in less physicist-specific units. Protons are each made of three quarks, two up quarks and a down quark. Naively, you’d think that the quarks would have to be around 300 MeV/c². They’re not, though: up and down quarks both have masses less than 10 MeV/c². Those three quarks account for less than a fiftieth of a proton’s mass.
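As a back-of-the-envelope check, here is that arithmetic spelled out. The quark masses used are approximate published values and are assumptions for illustration, not precise figures:

```python
# Rough comparison (assumed approximate values, in MeV/c^2):
# a proton's mass vs. the masses of the quarks inside it.
m_proton = 938.3          # proton mass
m_up, m_down = 2.2, 4.7   # approximate up and down quark masses

# A proton is two up quarks and one down quark.
quark_total = 2 * m_up + m_down
print(quark_total)              # about 9 MeV/c^2
print(quark_total / m_proton)   # about 0.01 -- roughly 1% of the proton's mass
```

With these values the three quarks come to roughly 9 MeV/c², around a hundredth of the proton’s 938 MeV/c², comfortably under the “less than a fiftieth” quoted above.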

The “extra” mass is because a proton is not just three quarks. It’s three quarks interacting. The force between those quarks, the strong nuclear force that binds them together, involves a heck of a lot of energy. And from a distance, that energy ends up looking like mass.

This isn’t unique to protons. In some sense, it’s just what mass is.

The quarks themselves get their mass from the Higgs field. Far enough away, this looks like the quarks having a mass. However, zoom in and it’s energy again, the energy of interaction between quarks and the Higgs. In string theory, mass comes from the energy of vibrating strings. And so on. Every time we run into something that looks like a fundamental mass, it ends up being just another energy of interaction.

If mass is just energy, what about gravity?

When you’re taught about gravity, the story is all about mass. Mass attracts mass. Mass bends space-time. What gets left out, until you actually learn the details of General Relativity, is that energy gravitates too.

Normally you don’t notice this, because mass contributes so much more to energy than anything else. That’s really what E=mc² is about: it’s a unit conversion formula. It tells you that if you want to know how much energy a given mass “really is”, you multiply it by the speed of light squared. And that’s a large enough number that most of the time, when you notice energy gravitating, it’s because that energy looks like a big chunk of mass. (It’s also why physicists like silly units like MeV/c² for mass: we can just multiply by c² and get an energy!)
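You can see the unit conversion at work by running E=mc² on the proton. The mass and the MeV-to-joule factor below are standard constants, quoted to a few digits:

```python
# E = m * c^2 as a unit conversion: from kilograms to MeV.
c = 299_792_458.0         # speed of light, m/s (exact by definition)
m_proton_kg = 1.6726e-27  # proton mass in kg, approximate

E_joules = m_proton_kg * c**2       # energy equivalent in joules
E_MeV = E_joules / 1.602176634e-13  # 1 MeV = 1.602176634e-13 J

print(round(E_MeV))  # ~938 -- recovering the proton mass in MeV/c^2
```

Multiplying a tiny mass by c² gives a respectable energy, which is exactly why the MeV/c² convention is so convenient.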

It’s really tempting to think of mass as a substance, mass as always conserved, mass as fundamental. But in physics we often have to toss aside our everyday intuitions, and this is no exception. Mass really is just energy. It’s just energy that we’ve “zoomed out” enough not to notice.

# A Collider’s Eye View

When it detected the Higgs, what did the LHC see, exactly?

What do you see with your detector-eyes, CMS?

The first problem is that the Higgs, like most particles produced in particle colliders, is unstable. In a very short amount of time the Higgs transforms into two or more lighter particles. Often, these particles will decay in turn, possibly many more times. So when the LHC sees a Higgs boson, it doesn’t really “see the Higgs”.

The second problem is that you can’t “see” the lighter particles either. They’re much too small for that. Instead, the LHC has to measure their properties.

Does the particle have a charge? Then its path will curve in a magnetic field, and it will send electrical signals in silicon. So the LHC can “see” charge.

Can the particle be stopped, absorbed by some material? Getting absorbed releases energy, lighting up a detector. So the LHC can “see” energy, and what it takes for a particle to be absorbed.

Diagram of a collider’s “eye”

And that’s…pretty much it. When the LHC “sees” the Higgs, what it sees is a set of tracks in a magnetic field, indicating charge, and energy in its detectors, caused by absorption at different points. Everything else has to be inferred: what exactly the particles were, where they decayed, and from what. Some of it can be figured out in real-time, some is only understood later once we can add up everything and do statistics.

On the face of it, this sounds about as impossible as astrophysics. Like astrophysics, it works in part because what the colliders see is not the whole story. The strong force has to be consistent both with our observations of hadrons and with nuclear physics. Neutrinos aren’t just mysterious missing energy that we can’t track, they’re an important part of cosmology. And so on.

So in the sense of that massive, interconnected web of ideas, the LHC sees the Higgs. It sees patterns of charges and energies, binned into histograms and analyzed with statistics and cross-checked, implicitly or explicitly, against all of the rest of physics at every scale we know. All of that, together, is the collider’s eye view of the universe.