
Changing the Question

I’ve recently been reading Why Does the World Exist?, a book by the journalist Jim Holt. In it he interviews a range of physicists and philosophers, asking each the question in the title. As the book goes on, he concludes that physicists can’t possibly give him the answer he’s looking for: even if physicists explain the entire universe from simple physical laws, they still would need to explain why those laws exist. A bit disappointed, he turns back to the philosophers.

Something about Holt’s account rubs me the wrong way. Yes, it’s true that physics can’t answer this kind of philosophical problem, at least not in a logically rigorous way. But I think we do have a chance of answering the question nonetheless…by eclipsing it with a better question.

How would that work? Let’s consider a historical example.

Does the Earth go around the Sun, or does the Sun go around the Earth? We learn in school that this is a solved question: Copernicus was right, the Earth goes around the Sun.

The details are a bit more subtle, though. The Sun and the Earth attract each other: while it is a good approximation to treat the Sun as fixed, in reality the Sun and the Earth both move in elliptical orbits around their common center of mass (which is close to, but not exactly, the center of the Sun). Furthermore, this is all dependent on your choice of reference frame: if you wish, you can choose coordinates in which the Earth stays still while the Sun moves.
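
To put a rough number on “close to, but not exactly”, here’s a quick back-of-the-envelope estimate of where that shared focus, the Sun-Earth center of mass, actually sits. This is just a sketch in Python with rounded textbook values:

# Where does the Sun-Earth barycenter sit? Rough, rounded values.
m_sun = 1.989e30     # mass of the Sun, in kilograms
m_earth = 5.972e24   # mass of the Earth, in kilograms
a = 1.496e11         # average Earth-Sun distance, in meters
r_sun = 6.96e8       # radius of the Sun, in meters

# Distance from the Sun's center to the center of mass of the pair
d = a * m_earth / (m_sun + m_earth)

print(f"Barycenter sits about {d / 1000:.0f} km from the Sun's center")
print(f"That's about {d / r_sun:.2%} of the Sun's radius: deep inside the Sun")

For the Sun and the Earth alone, then, the shared focus is buried deep inside the Sun, which is part of why “the Earth goes around the Sun” works so well as shorthand.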

So what stops a modern-day Tycho Brahe from arguing that the Sun and the stars and everything else orbit around the Earth?

The reason we aren’t still debating the Copernican versus the Tychonic system isn’t that we proved Copernicus right. Instead, we replaced the old question with a better one. We don’t actually care which object is the center of the universe. What we care about is whether we can make predictions, and what mathematical laws we need to do so. Newton’s law of universal gravitation lets us calculate the motion of the solar system. It’s easier to teach it by talking about the Earth going around the Sun, so we talk about it that way. The “philosophical” question, about the “center of the universe”, has been explained away by the more interesting practical question.

My suspicion is that other philosophical questions will be dealt with in this way. Maybe physicists can’t solve the ultimate philosophical question, of why the laws of physics are one way and not another. But if we can predict unexpected laws and match observations of the early universe, then we’re most of the way to making the question irrelevant. Similarly, perhaps neuroscientists will never truly solve the mystery of consciousness, at least the way philosophers frame it today. Nevertheless, if they can describe brains well enough to understand why we act like we’re conscious, if they have something in their explanation that looks sufficiently “consciousness-like”, then it won’t matter whether they meet the philosophical requirements: people simply won’t care. The question will have been eaten by a more interesting question.

This can happen in physics by itself, without reference to philosophy. Indeed, it may happen again soon. In the New Yorker this week, Natalie Wolchover has an article in which she talks to Nima Arkani-Hamed about the search for better principles to describe the universe. In it, Nima talks about looking for a deep mathematical question that the laws of physics answer. Peter Woit has expressed confusion that Nima can both believe this and pursue various complicated, far-fetched, and at times frankly ugly ideas for new physics.

I think the way to reconcile these two perspectives is to recognize that Nima takes naturalness seriously. The naturalness argument in physics states that physics as we currently see it is “unnatural”: in particular, that we can’t get it cleanly from the kinds of physical theories we understand. If you accept the argument as stated, then you get driven down a rabbit hole of increasingly strange solutions: versions of supersymmetry that cleverly hide from all experiments, hundreds of copies of the Standard Model, or even a multiverse.

Taking naturalness seriously doesn’t just mean accepting the argument as stated though. It can also mean believing the argument is wrong, but wrong in an interesting way.

One interesting way naturalness could be wrong would be if our reductionist picture of the world, where the ultimate laws live on the smallest scales, breaks down. I’ve heard vague hints from physicists over the years that this might be the case, usually based on the way that gravity seems to mix small and large scales. (Wolchover’s article also hints at this.) In that case, you’d want to find not just a new physical theory, but a new question to ask, something that could eclipse the old question with something more interesting and powerful.

Nima’s search for better questions seems to drive most of his research now. But I don’t think he’s 100% certain that the old questions are wrong, so you can still occasionally see him talking about multiverses and the like.

Ultimately, we can’t predict when a new question will take over. It’s a mix of the social and the empirical, of new predictions and observations but also of which ideas are compelling and beautiful enough to get people to dismiss the old question as irrelevant. It feels like we’re due for another change…but we might not be, and even if we are it might be a long time coming.


The Particle Physics Curse of Knowledge

There’s a debate raging right now in particle physics, about whether and how to build the next big collider. CERN’s Future Circular Collider group has been studying different options, some more expensive and some less (Peter Woit has a nice summary of these here). This year, the European particle physics community will debate these proposals, deciding whether to include them in an updated European Strategy for Particle Physics. After that, it will be up to the various countries that are members of CERN to decide whether to fund the proposal. With the costs of the more expensive options hovering around $20 billion, this has led to substantial controversy.

I’m not going to offer an opinion here one way or another. Weighing this kind of thing requires knowing the alternatives: what else the European particle physics community might lobby for in the next few years, and once they decide, what other budget priorities each individual country has. I know almost nothing about either.

Instead of an opinion, I have an observation:

Imagine that primatologists had proposed a $20 billion primate center, able to observe gorillas in greater detail than ever before. The proposal might be criticized in any number of ways: there could be much cheaper ways to accomplish the same thing, the project might fail, it might be that we simply don’t care enough about primate behavior to spend $20 billion on it.

What you wouldn’t expect is the claim that a $20 billion primate center would teach us nothing new.

It probably wouldn’t teach us “$20 billion worth of science”, whatever that means. But a center like that would be guaranteed to discover something. That’s because we don’t expect primatologists’ theories to be exact. Even if gorillas behaved roughly as primatologists expected, the center would still see new behaviors, just as a consequence of looking at a new level of detail.

To pick a physics example, consider the gravitational wave telescope LIGO. Before its first observation of two black holes merging, announced in 2016, LIGO faced substantial criticism. After its initial runs didn’t detect anything, many physicists thought that the project was doomed to fail: that it would never be sensitive enough to detect the faint signals of gravitational waves past the messy vibrations of everyday life on Earth.

When it finally worked, though, LIGO did teach us something new. Not the existence of gravitational waves; we already knew about those. Rather, LIGO taught us new things about the kinds of black holes that exist. LIGO observed much bigger black holes than astronomers expected, a surprise big enough that it left some people skeptical. Even if it hadn’t, though, we would almost certainly have observed something new: there’s no reason to expect astronomers to perfectly predict the sizes of the universe’s black holes.

Particle physics is different.

I don’t want to dismiss the work that goes into collider physics (far too many people have dismissed it recently). Much, perhaps most, of the work on the LHC is dedicated not to detecting new particles, but to confirming and measuring the Standard Model. A new collider would involve heroic scientific effort. We’d learn revolutionary new things about how to build colliders, how to analyze data from colliders, and how to use the Standard Model to make predictions for colliders.

In the end, though, we expect those predictions to work. And not just to work reasonably well, but to work perfectly. While we might see something beyond the Standard Model, the default expectation is that we won’t, that after doing the experiments and analyzing the data and comparing to predictions we’ll get results that are statistically indistinguishable from an equation we can fit on a T-shirt. We’ll fix the constants on that T-shirt to an unprecedented level of precision, yes, but the form of the equation may well stay completely the same.

I don’t think there’s another field where that’s even an option. Nowhere else in all of science could we observe the world in unprecedented detail, capturing phenomena that had never been seen before…and end up perfectly matching our existing theory. There’s no other science where anyone would even expect that to happen.

That makes the argument here different from any argument we’ve faced before. It forces people to consider their deep priorities, to think not just about the best way to carry out this test or that but about what science is supposed to be for. I don’t think there are any easy answers. We’re in what may well be a genuinely new situation, and we have to figure out how to navigate it together.

Postscript: I still don’t want to give an opinion, but given that I didn’t have room for this above let me give a fragment of an opinion: Higgs triple couplings!!!

Made of Quarks Versus Made of Strings

When you learn physics in school, you learn it in terms of building blocks.

First, you learn about atoms. Indivisible elements, as the Greeks foretold…until you learn that they aren’t indivisible. You learn that atoms are made of electrons, protons, and neutrons. Then you learn that protons and neutrons aren’t indivisible either: they’re made of quarks. They’re what physicists call composite particles, particles made of other particles stuck together.

Hearing this story, you notice a pattern. Each time physicists find a more fundamental theory, they find that what they thought were indivisible particles are actually composite. So when you hear physicists talking about the next, more fundamental theory, you might guess it has to work the same way. If quarks are made of, for example, strings, then each quark is made of many strings, right?

Nope! As it turns out, there are two different things physicists can mean when they say a particle is “made of” a more fundamental particle. Sometimes they mean the particle is composite, like the proton is made of quarks. But sometimes, like when they say particles are “made of strings”, they mean something different.

To understand what this “something different” is, let’s go back to quarks for a moment. You might have heard there are six types, or flavors, of quarks: up and down, strange and charm, top and bottom. The different types have different masses and electric charges. You might have also heard that quarks come in different colors: red, green, and blue. You might wonder, then: aren’t there really eighteen types of quark? Red up quarks, green top quarks, and so forth?

Physicists don’t think about it that way. Unlike the different flavors, the different colors of quark have a more unified mathematical description. Changing the color of a quark doesn’t change its mass or electric charge. All it changes is how the quark interacts with other particles via the strong nuclear force. Know how one color works, and you know how the other colors work. Different colors can also “mix” together, similarly to how different situations can mix together in quantum mechanics: just as Schrodinger’s cat can be both alive and dead, a quark can be both red and green.
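
To sketch what that “mixing” looks like in the usual notation (purely as an illustration, not tied to any particular quark), a quark’s color state can be a quantum superposition, for example

$$|\text{quark}\rangle = \tfrac{1}{\sqrt{2}}\big(|\text{red}\rangle + |\text{green}\rangle\big),$$

and the strong force treats a rotated combination like this on the same footing as the “pure” colors. Mathematically, the three colors are related by SU(3) transformations, which is what the unified description amounts to.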

This same kind of thing is involved in another example, electroweak unification. You might have heard that electromagnetism and the weak nuclear force are secretly the same thing. Each force has corresponding particles: the familiar photon for electromagnetism, and W and Z bosons for the weak nuclear force. Unlike the different colors of quarks, photons and W and Z bosons have different masses from each other. It turns out, though, that they still come from a unified mathematical description: they’re “mixtures” (in the same Schrodinger’s cat-esque sense) of the particles from two more fundamental forces (sometimes called “weak isospin” and “weak hypercharge”). The reason they have different masses isn’t their own fault, but the fault of the Higgs: the Higgs field we have in our universe interacts with different parts of this unified force differently, so the corresponding particles end up with different masses.
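
Written out schematically (sign conventions vary between textbooks), the photon and the Z are mixtures of a weak-isospin field $W^3$ and the weak-hypercharge field $B$, rotated by the weak mixing angle $\theta_W$:

$$\gamma = B\cos\theta_W + W^3\sin\theta_W, \qquad Z = -B\sin\theta_W + W^3\cos\theta_W.$$

The $W^+$ and $W^-$ come from the other two weak-isospin fields, and measurements put $\sin^2\theta_W$ at roughly 0.23.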

A physicist might say that electromagnetism and the weak force are “made of” weak isospin and weak hypercharge. And it’s that kind of thing that physicists mean when they say that quarks might be made of strings, or the like: not that quarks are composite, but that quarks and other particles might have a unified mathematical description, and look different only because they’re interacting differently with something else.

This isn’t to say that quarks and electrons can’t be composite as well. They might be; we don’t know for sure. If they are, the forces binding them together must be very strong, strong enough that our most powerful colliders can’t make them wiggle even a little out of shape. The tricky part is that composite particles get mass from the energy holding them together. A particle held together by very powerful forces would normally be very massive; if you want it to end up lighter, you have to construct your theory carefully to do that. So while occasionally people will suggest theories where quarks or electrons are composite, these theories aren’t common. Most of the time, if a physicist says that quarks or electrons are “made of” something else, they mean something more like “particles are made of strings” than like “protons are made of quarks”.

Assumptions for Naturalness

Why did physicists expect to see something new at the LHC, more than just the Higgs boson? Mostly, because of something called naturalness.

Naturalness, broadly speaking, is the idea that there shouldn’t be coincidences in physics. If two numbers that appear in your theory cancel out almost perfectly, there should be a reason that they cancel. Put another way, if your theory has a dimensionless constant in it, that constant should be close to one.

(To see why these two concepts are the same, think about a theory where two large numbers miraculously almost cancel, leaving just a small difference. Take the ratio of one of those large numbers to the difference, and you get a very large dimensionless number.)
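
As a toy example: if $a = 1.000001$ and $b = 1.000000$, the difference $a - b = 10^{-6}$ looks like a miraculous cancellation, while the ratio $a/(a-b) \approx 10^{6}$ is a dimensionless number very far from one. The fine-tuned cancellation and the large dimensionless number are two descriptions of the same coincidence.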

You might have heard it said that the mass of the Higgs boson is “unnatural”. There are many different physical processes that affect what we measure as the mass of the Higgs. We don’t know exactly how big these effects are, but we do know that they grow with the scale of “new physics” (aka the mass of any new particles we might have discovered), and that they have to cancel to give the Higgs mass we observe. If we don’t see any new particles, the Higgs mass starts looking more and more unnatural, driving some physicists to the idea of a “multiverse”.
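
Very schematically (glossing over the actual loop calculations), the story looks like

$$m_H^2 \approx m_{\text{bare}}^2 + c\,\Lambda^2,$$

where $\Lambda$ is the scale of new physics, $c$ is some order-one number, and $m_H \approx 125\ \text{GeV}$ is what we measure. Push $\Lambda$ up to, say, $100\ \text{TeV}$, and the two terms on the right have to cancel to roughly one part in a million; the higher the scale of new physics, the more delicate that cancellation becomes.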

If you find parts of this argument hokey, you’re not alone. Critics of naturalness point out that we don’t really have a good reason to favor “numbers close to one”, nor do we have any way to quantify how “bad” a number far from one is (we don’t know the probability distribution, in other words). They critique theories that do preserve naturalness, like supersymmetry, for being increasingly complicated and unwieldy, violating Occam’s razor. And in some cases they act baffled by the assumption that there should be any “new physics” at all.

Some of these criticisms are reasonable, but some are distracting and off the mark. The problem is that the popular argument for naturalness leaves out some important assumptions. These assumptions are usually kept in mind by the people arguing for naturalness (at least the more careful people), but aren’t often made explicit. I’d like to state some of these assumptions. I’ll be framing the naturalness argument in a bit of an unusual (if not unprecedented) way. My goal is to show that some criticisms of naturalness don’t really work, while others still make sense.

I’d like to state the naturalness argument as follows:

  1. The universe should be ultimately described by a theory with no free dimensionless parameters at all. (For the experts: the theory should also be UV-finite.)
  2. We are reasonably familiar with theories of the sort described in 1., we know roughly what they can look like.
  3. If we look at such a theory at low energies, it will appear to have dimensionless parameters again, based on the energy where we “cut off” our description. We understand this process well enough to know what kinds of values these parameters can take, starting from 2.
  4. Point 3. can only be consistent with the observed mass of the Higgs if there is some “new physics” at around the scales the LHC can measure. That is, there is no known way to start with a theory like those of 2. and get the observed Higgs mass without new particles.

Point 1. is often not explicitly stated. It’s an assumption, one that sits in the back of a lot of physicists’ minds and guides their reasoning. I’m really not sure if I can fully justify it; it seems like it should be a consequence of what a final theory is.

(For the experts: you’re probably wondering why I’m insisting on a theory with no free parameters, when usually this argument just demands UV-finiteness. I demand this here because I think this is the core reason why we worry about coincidences: free parameters of any intermediate theory must eventually be explained in a theory where those parameters are fixed, and “unnatural” coincidences are those we don’t expect to be able to fix in this way.)

Point 2. may sound like a stretch, but it’s less of one than you might think. We do know of a number of theories that have few or no dimensionless parameters (and that are UV-finite); they just don’t describe the real world. Treating these theories as toy models, we can hopefully get some idea of how theories like this should look. We also have a candidate theory of this kind that could potentially describe the real world, M theory, but it’s not fleshed out enough to answer these kinds of questions definitively at this point. At best it’s another source of toy models.

Point 3. is where most of the technical arguments show up. If someone talking about naturalness starts talking about effective field theory and the renormalization group, they’re probably hashing out the details of point 3. Parts of this point are quite solid, but once again there are some assumptions that go into it, and I don’t think we can say that this point is entirely certain.

Once you’ve accepted the arguments behind points 1.-3., point 4. follows. The Higgs is unnatural, and you end up expecting new physics.

Framed in this way, arguments about the probability distribution of parameters are missing the point, as are arguments from Occam’s razor.

The point is not that the Standard Model has unlikely parameters, or that some in-between theory has unlikely parameters. The point is that there is no known way to start with the kind of theory that could be an ultimate description of the universe and end up with something like the observed Higgs and no detectable new physics. Such a theory isn’t merely unlikely; if you take this argument seriously, it’s impossible. If your theory gets around this argument, it can be as cumbersome and Occam’s razor-violating as it wants; it’s still a better shot than no possible theory at all.

In general, the smarter critics of naturalness are aware of this kind of argument, and don’t just talk probabilities. Instead, they reject some combination of point 2. and point 3.

This is more reasonable, because point 2. and point 3. are, on some level, arguments from ignorance. We don’t know of a theory with no dimensionless parameters that can give something like the Higgs with no detectable new physics, but maybe we’re just not trying hard enough. Given how murky our understanding of M theory is, maybe we just don’t know enough to make this kind of argument yet, and the whole thing is premature. This is where probability can sneak back in, not as some sort of probability distribution on the parameters of physics but just as an estimate of our own ability to come up with new theories. We have to guess what kinds of theories can make sense, and we may well just not know enough to make that guess.

One thing I’d like to know is how many critics of naturalness reject point 1. Because point 1. isn’t usually stated explicitly, it isn’t often responded to explicitly either. The way some critics of naturalness talk makes me suspect that they reject point 1., that they honestly believe that the final theory might simply have some unexplained dimensionless numbers in it that we can only fix through measurement. I’m curious whether they actually think this, or whether I’m misreading them.

There’s a general point to be made here about framing. Suppose that tomorrow someone figures out a way to start with a theory with no dimensionless parameters and plausibly end up with a theory that describes our world, matching all existing experiments. (People have certainly been trying.) Does this mean naturalness was never a problem after all? Or does that mean that this person solved the naturalness problem?

Those sound like very different statements, but it should be clear at this point that they’re not. In principle, nothing distinguishes them. In practice, people will probably frame the result one way or another based on how interesting the solution is.

If it turns out we were missing something obvious, or if we were extremely premature in our argument, then in some sense naturalness was never a real problem. But if we were missing something subtle, something deep that teaches us something important about the world, then it should be fair to describe it as a real solution to a real problem, to cite “solving naturalness” as one of the advantages of the new theory.

If you ask for my opinion? You probably shouldn’t, I’m quite far from an expert in this corner of physics, not being a phenomenologist. But if you insist on asking anyway, I suspect there probably is something wrong with the naturalness argument. That said, I expect that whatever we’re missing, it will be something subtle and interesting, that naturalness is a real problem that needs to really be solved.

Book Review: We Have No Idea

I have no idea how I’m going to review this book.

Ok fine, I have some idea.

Jorge Cham writes Piled Higher and Deeper, a webcomic with possibly the most accurate depiction of grad school available. Daniel Whiteson is a professor at the University of California, Irvine, and a member of the ATLAS collaboration (one of the two big groups that make measurements at the Large Hadron Collider). Together, they’ve written a popular science book covering everything we don’t know about fundamental physics.

Writing a book about what we don’t know is an unusual choice, and there was a real risk it would end up as just a superficial gimmick. The pie chart on the cover presents the most famous “things physicists don’t know”, dark matter and dark energy. If they had just stuck to those this would have been a pretty ordinary popular physics book.

Refreshingly, they don’t do that. After blazing through dark matter and dark energy in the first three chapters, the rest of the book focuses on a variety of other scientific mysteries.

The book contains a mix of problems that get serious research attention (matter-antimatter asymmetry, high-energy cosmic rays) and more blue-sky “what if” questions (does matter have to be made out of particles?). As a theorist, I’m not sure that all of these questions are actually mysterious (we do have some explanation of the weird “1/3” charges of quarks, and I’d like to think we understand why mass includes binding energy), but even in these cases what we really know is that they follow from “sensible assumptions”, and one could just as easily ask “what if” about those assumptions instead. Overall, these “what if” questions make the book unique, and it would be a much weaker book without them.

“We Have No Idea” is strongest when the authors actually have some idea, i.e. when Whiteson is discussing experimental particle physics. It gets weaker on other topics, where the authors seem to rely more on others’ popular treatments (their discussion of “pixels of space-time” motivated me to write this post). Still, they at least seem to have asked the right people, and their accounts are on the more accurate end of typical pop science. (Closer to Quanta than IFLScience.)

The book’s humor really ties it together, often in surprisingly subtle ways. Each chapter has its own running joke, initially a throwaway line that grows into metaphors for everything the chapter discusses. It’s a great way to help the audience visualize without introducing too many new concepts at once. If there’s one thing cartoonists can teach science communicators, it’s the value of repetition.

I liked “We Have No Idea”. It could have been more daring, or more thorough, but it was still charming and honest and fun. If you’re looking for a Christmas present to explain physics to your relatives, you won’t go wrong with this book.

Why the Coupling Constants Aren’t Constant: Epistemology and Pragmatism

If you’ve heard a bit about physics, you might have heard that each of the fundamental forces (electromagnetism, the weak nuclear force, the strong nuclear force, and gravity) has a coupling constant, a number, handed down from nature itself, that determines how strong of a force it is. Maybe you’ve seen them in a table, like this:

[Table of the coupling constants of the four fundamental forces, from HyperPhysics]

If you’ve heard a bit more about physics, though, you’ll have heard that those coupling constants aren’t actually constant! Instead, they vary with energy. Maybe you’ve seen them plotted like this:

[Plot of the coupling constants varying with energy]

The usual way physicists explain this is in terms of quantum effects. We talk about “virtual particles”, and explain that any time particles and forces interact, these virtual particles can pop up, adding corrections that change with the energy of the interacting particles. The coupling constant includes all of these corrections, so it can’t be constant; it has to vary with energy.

[Diagram of an interaction vertex with its virtual-particle corrections (the renormalized vertex)]
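
To make “varies with energy” a bit more concrete, here is the standard one-loop, leading-logarithm formula for the electromagnetic coupling with a single charged particle species (a textbook simplification, not the full Standard Model result):

$$\alpha(Q^2) \approx \frac{\alpha(\mu^2)}{1 - \frac{\alpha(\mu^2)}{3\pi}\ln(Q^2/\mu^2)},$$

so measuring the coupling at one energy $\mu$ fixes, approximately, its value at any other energy $Q$. The strong force runs the other way: its coupling gets weaker at higher energies.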

Maybe you’re happy with this explanation. But maybe you object:

“Isn’t there still a constant, though? If you ignore all the virtual particles, and drop all the corrections, isn’t there some constant number you’re correcting? Some sort of ‘bare coupling constant’ you could put into a nice table for me?”

There are two reasons I can’t do that. One is an epistemological reason, that comes from what we can and cannot know. The other is practical: even if I knew the bare coupling, most of the time I wouldn’t want to use it.

Let’s start with the epistemology:

The first thing to understand is that we can’t measure the bare coupling directly. When we measure the strength of forces, we’re always measuring the result of quantum corrections. We can’t “turn off” the virtual particles.

You could imagine measuring it indirectly, though. You’d measure the end result of all the corrections, then go back and calculate. That calculation would tell you how big the corrections were supposed to be, and you could subtract them off, solve the equation, and find the bare coupling.

And this would be a totally reasonable thing to do, except that when you go and try to calculate the quantum corrections, instead of something sensible, you get infinity.

We think that “infinity” is due to our ignorance: we know some of the quantum corrections, but not all of them, because we don’t have a final theory of nature. In order to calculate anything, we need to hedge around that ignorance, with a trick called renormalization. I talk about that more in an older post. The key message to take away there is that we have to give up the hope of measuring certain bare constants, even “indirectly”. Once we fix a few constants that way, the rest of the theory gives reliable predictions.

So we can’t measure bare constants, and we can’t reason our way to them. We have to find the full coupling, with all the quantum corrections, and use that as our coupling constant.

Still, you might wonder, why does the coupling constant have to vary? Can’t I just pick one measurement, at one energy, and call that the constant?

This is where pragmatism comes in. You could fix your constant at some arbitrary energy, sure. But you’ll regret it.

In particle physics, we usually calculate in something called perturbation theory. Instead of calculating something exactly, we have to use approximations. We add up the approximations, order by order, expecting that each time the corrections will get smaller and smaller, so we get closer and closer to the truth.

And this works reasonably well if your coupling constant is small enough, provided it’s at the right energy.

If your coupling constant is at the wrong energy, then your quantum corrections will notice the difference. They won’t just be small numbers anymore. Instead, they end up containing logarithms of the ratio of energies. The more difference between your arbitrary energy scale and the correct one, the bigger these logarithms get.
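
Concretely, an n-th order correction computed with the coupling fixed at a scale $\mu$ tends to come with factors like $\alpha^n \ln^n(E/\mu)$. Perturbation theory is trusting each extra power of $\alpha$ to make the next term smaller, but once $\alpha\ln(E/\mu)$ gets close to one the logarithms undo that suppression. Evaluating the coupling at (or near) the energy $E$ of your process absorbs those logarithms, which is exactly the job the energy-dependent coupling is doing.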

This doesn’t make your calculation wrong, exactly. It makes your error estimate wrong. It means that your assumption that the next order is “small enough” isn’t actually true. You’d need to go to higher and higher orders to get a “good enough” answer, if you can get there at all.

Because of that, you don’t want to think about the coupling constants as actually constant. If we knew the final theory then maybe we’d know the true numbers, the ultimate bare coupling constants. But we still would want to use coupling constants that vary with energy for practical calculations. We’d still prefer the plot, and not just the table.

The Physics Isn’t New, We Are

Last week, I mentioned the announcement from the IceCube, Fermi-LAT, and MAGIC collaborations of high-energy neutrinos and gamma rays detected from the same source, the blazar TXS 0506+056. Blazars are sources of gamma rays, thought to be enormous spinning black holes that act like particle colliders vastly more powerful than the LHC. This one, just off the shoulder of Orion, is “aimed” roughly at Earth, allowing us to detect the light and particles it emits. On September 22, 2017, a neutrino with energy around 300 TeV was detected by IceCube (a kilometer-wide block of Antarctic ice stuffed with detectors), coming from the direction of TXS 0506+056. Soon after, the satellite Fermi-LAT and the ground-based telescope MAGIC were able to confirm that the blazar was flaring at the time. The IceCube team then looked back, and found more neutrinos coming from the same source in earlier years. There are still lingering questions (why didn’t they see this kind of behavior from other, closer blazars?) but it’s still a nice development in the emerging field of “multi-messenger” astronomy.

It also got me thinking about a conversation I had a while back, before one of Perimeter’s Public Lectures. An elderly fellow was worried about the LHC. He wondered if putting all of that energy in the same place, again and again, might do something unprecedented: weaken the fabric of space and time, perhaps, until it breaks? He acknowledged this didn’t make physical sense, but what if we’re wrong about the physics? Do we really want to take that risk?

At the time, I made the same point that gets made to counter fears of the LHC creating a black hole: that the energy of the LHC is less than the energy of cosmic rays, particles from space that collide with our atmosphere on a regular basis. If there was any danger, it would have already happened. Now, knowing about blazars, I can make a similar point: there are “galactic colliders” with energies so much higher than any machine we can build that there’s no chance we could screw things up on that kind of scale: if we could, they already would have.
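
If you want a feel for the numbers, here’s a rough comparison between the LHC and one of the highest-energy cosmic rays we’ve observed hitting a proton in our atmosphere. This is just a sketch in Python, using the standard fixed-target formula and round numbers:

import math

m_p = 0.938        # proton rest energy, in GeV
E_cosmic = 1e11    # a ~10^20 eV cosmic ray, expressed in GeV

# Center-of-mass energy when a cosmic ray hits a proton at rest:
# sqrt(s) = sqrt(2 * E * m_p + 2 * m_p**2), roughly sqrt(2 * E * m_p) here
sqrt_s_cosmic = math.sqrt(2 * E_cosmic * m_p + 2 * m_p**2)

sqrt_s_lhc = 13000.0  # LHC proton-proton collisions: 13 TeV, in GeV

print(f"Cosmic ray collision: about {sqrt_s_cosmic / 1000:.0f} TeV center-of-mass")
print(f"LHC collision: {sqrt_s_lhc / 1000:.0f} TeV center-of-mass")

Even before you get to blazars, nature’s own collisions beat our best machine by more than an order of magnitude in collision energy, and they have been doing so for billions of years.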

This connects to a broader point, about how to frame particle physics. Each time we build an experiment, we’re replicating something that’s happened before. Our technology simply isn’t powerful enough to do something truly unprecedented in the universe: we’re not even close! Instead, the point of an experiment is to reproduce something where we can see it. It’s not the physics itself, but our involvement in it, our understanding of it, that’s genuinely new.

The IceCube experiment itself is a great example of this: throughout Antarctica, neutrinos collide with ice. The only difference is that in IceCube’s ice, we can see them do it. More broadly, I have to wonder how much this is behind the “unreasonable effectiveness of mathematics”: if mathematics is just the most precise way humans have to communicate with each other, then of course it will be effective in physics, since the goal of physics is to communicate the nature of the world to humans!

There may well come a day when we’re really able to do something truly unprecedented, that has never been done before in the history of the universe. Until then, we’re playing catch-up, taking laws the universe has tested extensively and making them legible, getting humanity that much closer to understanding physics that, somewhere out there, already exists.