# Changing the Question

I’ve recently been reading Why Does the World Exist?, a book by the journalist Jim Holt. In it he interviews a range of physicists and philosophers, asking each the question in the title. As the book goes on, he concludes that physicists can’t possibly give him the answer he’s looking for: even if physicists explain the entire universe from simple physical laws, they still would need to explain why those laws exist. A bit disappointed, he turns back to the philosophers.

Something about Holt’s account rubs me the wrong way. Yes, it’s true that physics can’t answer this kind of philosophical problem, at least not in a logically rigorous way. But I think we do have a chance of answering the question nonetheless…by eclipsing it with a better question.

How would that work? Let’s consider a historical example.

Does the Earth go around the Sun, or does the Sun go around the Earth? We learn in school that this is a solved question: Copernicus was right, the Earth goes around the Sun.

The details are a bit more subtle, though. The Sun and the Earth both attract each other: while it is a good approximation to treat the Sun as fixed, in reality both move in elliptical orbits with a focus at their common center of mass, the barycenter (which is close to, but not exactly, the center of the Sun). Furthermore, this is all dependent on your choice of reference frame: if you wish you can choose coordinates in which the Earth stays still while the Sun moves.
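To put a number on “close to, but not exactly”: a rough two-body estimate (using standard textbook values and ignoring the other planets) puts the shared focus only a few hundred kilometers from the Sun’s center, deep inside the Sun itself:

```python
# Back-of-the-envelope: how far is the Sun-Earth barycenter
# from the Sun's center? Two-body approximation, rough values.
M_SUN = 1.989e30      # kg
M_EARTH = 5.972e24    # kg
A = 1.496e11          # m, Earth-Sun distance (semi-major axis)
R_SUN = 6.957e8       # m, solar radius

# The barycenter sits a distance a * m_earth / (m_sun + m_earth)
# from the Sun's center, along the line toward the Earth.
offset = A * M_EARTH / (M_SUN + M_EARTH)

print(f"Barycenter offset: {offset / 1e3:.0f} km")        # ~450 km
print(f"Fraction of solar radius: {offset / R_SUN:.1e}")  # ~6e-4
```

About 450 km, versus a solar radius of nearly 700,000 km: which is why treating the Sun as fixed works so well in practice.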

So what stops a modern-day Tycho Brahe from arguing that the Sun and the stars and everything else orbit around the Earth?

The reason we aren’t still debating the Copernican versus the Tychonic system isn’t that we proved Copernicus right. Instead, we replaced the old question with a better one. We don’t actually care which object is the center of the universe. What we care about is whether we can make predictions, and what mathematical laws we need to do so. Newton’s law of universal gravitation lets us calculate the motion of the solar system. It’s easier to teach it by talking about the Earth going around the Sun, so we talk about it that way. The “philosophical” question, about the “center of the universe”, has been explained away by the more interesting practical question.

My suspicion is that other philosophical questions will be solved in this way. Maybe physicists can’t solve the ultimate philosophical question, of why the laws of physics are one way and not another. But if we can predict unexpected laws and match observations of the early universe, then we’re most of the way to making the question irrelevant. Similarly, perhaps neuroscientists will never truly solve the mystery of consciousness, at least the way philosophers frame it today. Nevertheless, if they can describe brains well enough to understand why we act like we’re conscious, if they have something in their explanation that looks sufficiently “consciousness-like”, then it won’t matter whether they meet the philosophical requirements; people simply won’t care. The question will have been eaten by a more interesting question.

This can happen in physics by itself, without reference to philosophy. Indeed, it may happen again soon. In the New Yorker this week, Natalie Wolchover has an article in which she talks to Nima Arkani-Hamed about the search for better principles to describe the universe. In it, Nima talks about looking for a deep mathematical question that the laws of physics answer. Peter Woit has expressed confusion that Nima can both believe this and pursue various complicated, far-fetched, and at times frankly ugly ideas for new physics.

I think the way to reconcile these two perspectives is to understand that Nima takes naturalness seriously. The naturalness argument in physics states that physics as we currently see it is “unnatural”: in particular, that we can’t get it cleanly from the kinds of physical theories we understand. If you accept the argument as stated, then you get driven down a rabbit hole of increasingly strange solutions: versions of supersymmetry that cleverly hide from all experiments, hundreds of copies of the Standard Model, or even a multiverse.

Taking naturalness seriously doesn’t just mean accepting the argument as stated though. It can also mean believing the argument is wrong, but wrong in an interesting way.

One interesting way naturalness could be wrong would be if our reductionist picture of the world, where the ultimate laws live on the smallest scales, breaks down. I’ve heard vague hints from physicists over the years that this might be the case, usually based on the way that gravity seems to mix small and large scales. (Wolchover’s article also hints at this.) In that case, you’d want to find not just a new physical theory, but a new question to ask, something that could eclipse the old question with something more interesting and powerful.

Nima’s search for better questions seems to drive most of his research now. But I don’t think he’s 100% certain that the old questions are wrong, so you can still occasionally see him talking about multiverses and the like.

Ultimately, we can’t predict when a new question will take over. It’s a mix of the social and the empirical, of new predictions and observations but also of which ideas are compelling and beautiful enough to get people to dismiss the old question as irrelevant. It feels like we’re due for another change…but we might not be, and even if we are it might be a long time coming.

# Valentine’s Day Physics Poem 2019

It’s that time of year again! Time for me to dig into my files and bring you yet another of my old physics poems.

Plagued with Divergences

“The whole scheme of local field theory is plagued with divergences”

Is divergence ever really unexpected?

If you asked a computer, what would it tell you?

You’d hear a whirring first, lungs and heart of the machine beating faster and faster.

And you’d dismiss it.
You knew this wasn’t going to be an easy interaction.
It doesn’t mean you’re going to diverge.

And perhaps it would try to warn you, write it there on the page.
It might even notice, its built-in instincts telling you, by the book,
“This will diverge.”

But instincts lie, and builders cheat.
And it doesn’t mean you’re going to diverge.

Now, you do everything the slow way,
Numerically.
Dismiss your instincts and force yourself through
Piece by piece.

And now, you can’t stop hearing the whir
The machine’s beating heart
Even when it should be at rest

And step by step, it tries to minimize its errors
And step by step, the errors grow

And exhausted, in the end, you see splashed across the screen
Something bigger than it should ever have been.

But sometimes things feel big and strange.
That’s just the way of the big wide world.
And it doesn’t mean you’re going to diverge.

You could have seen the signs,
Power-counted, seen what could overwhelm.
And you could have regulated, with an epsilon of flexibility.

But this one, this time, was supposed to be
Needed to be
Physical Truth
And truth doesn’t diverge

So you keep going,
Wheezing breath and painstaking calculation,
And every little thing blowing up

It’s not like there’s a better way to live.

# The Particle Physics Curse of Knowledge

There’s a debate raging right now in particle physics, about whether and how to build the next big collider. CERN’s Future Circular Collider group has been studying different options, some more expensive and some less (Peter Woit has a nice summary of these here). This year, the European particle physics community will debate these proposals, deciding whether to include them in an updated European Strategy for Particle Physics. After that, it will be up to the various countries that are members of CERN to decide whether to fund the proposal. With the costs of the more expensive options hovering around \$20 billion, this has led to substantial controversy.

I’m not going to offer an opinion here one way or another. Weighing this kind of thing requires knowing the alternatives: what else the European particle physics community might lobby for in the next few years, and once they decide, what other budget priorities each individual country has. I know almost nothing about either.

Instead of an opinion, I have an observation:

Imagine that primatologists had proposed a \$20 billion primate center, able to observe gorillas in greater detail than ever before. The proposal might be criticized in any number of ways: there could be much cheaper ways to accomplish the same thing, the project might fail, it might be that we simply don’t care enough about primate behavior to spend \$20 billion on it.

What you wouldn’t expect is the claim that a \$20 billion primate center would teach us nothing new.

It probably wouldn’t teach us “\$20 billion worth of science”, whatever that means. But a center like that would be guaranteed to discover something. That’s because we don’t expect primatologists’ theories to be exact. Even if gorillas behaved roughly as primatologists expected, the center would still see new behaviors, just as a consequence of looking at a new level of detail.

To pick a physics example, consider the gravitational wave observatory LIGO. Before its first observation of two black holes merging (made in 2015 and announced in 2016), LIGO faced substantial criticism. After its initial experiments didn’t detect anything, many physicists thought that the project was doomed to fail: that it would never be sensitive enough to detect the faint signals of gravitational waves past the messy vibrations of everyday life on Earth.

When it finally worked, though, LIGO did teach us something new. Not the existence of gravitational waves; we already knew about them. Rather, LIGO taught us new things about the kinds of black holes that exist. LIGO observed much bigger black holes than astronomers expected, a surprise big enough that it left some people skeptical. Even if it hadn’t, though, we would still almost certainly have observed something new: there’s no reason to expect astronomers to perfectly predict the size of the universe’s black holes.

Particle physics is different.

I don’t want to dismiss the work that goes into collider physics (far too many people have dismissed it recently). Much, perhaps most, of the work on the LHC is dedicated not to detecting new particles, but to confirming and measuring the Standard Model. A new collider would bring heroic scientific effort. We’d learn revolutionary new things about how to build colliders, how to analyze data from colliders, and how to use the Standard Model to make predictions for colliders.

In the end, though, we expect those predictions to work. And not just to work reasonably well, but to work perfectly. While we might see something beyond the Standard Model, the default expectation is that we won’t, that after doing the experiments and analyzing the data and comparing to predictions we’ll get results that are statistically indistinguishable from an equation we can fit on a T-shirt. We’ll fix the constants on that T-shirt to an unprecedented level of precision, yes, but the form of the equation may well stay completely the same.

I don’t think there’s another field where that’s even an option. Nowhere else in all of science could we observe the world in unprecedented detail, capturing phenomena that had never been seen before…and end up perfectly matching our existing theory. There’s no other science where anyone would even expect that to happen.

That makes the argument here different from any argument we’ve faced before. It forces people to consider their deep priorities, to think not just about the best way to carry out this test or that but about what science is supposed to be for. I don’t think there are any easy answers. We’re in what may well be a genuinely new situation, and we have to figure out how to navigate it together.

Postscript: I still don’t want to give an opinion, but given that I didn’t have room for this above let me give a fragment of an opinion: Higgs triple couplings!!!

# Grant Roulette

Sometimes, it feels like applying for funding in science is a form of high-stakes gambling. You put in weeks of work assembling a grant application, making sure that it’s exciting and relevant and contains all the obnoxious buzzwords you’re supposed to use…and in the end, it gets approved or rejected for reasons that seem entirely out of your control.

What if, instead, you were actually gambling?

That’s the philosophy behind a 2016 proposal by Ferric Fang and Arturo Casadevall, recently summarized in an article on Vox by Kelsey Piper. The goal is to cut down on the time scientists waste applying for money from various government organizations (for them, the US’s National Institute of Health) by making part of the process random. Applications would be reviewed to make sure they met a minimum standard, but past that point every grant would have an equal chance of getting funded. That way scientists wouldn’t spend so much time perfecting grant applications, and could focus on the actual science.

It’s an idea that seems, on its face, a bit too cute. Yes, grant applications are exhausting, but surely you still want some way to prioritize better ideas over worse ones? For all its flaws, one would hope the grant review process at least does that.

Well, maybe not. The Vox piece argues that, at least in medicine, grants are almost random already. Each grant is usually reviewed by multiple experts. Several studies cited in the piece looked at the variability between these experts: do they usually agree, or disagree? Measuring this in a variety of ways, they came to the same conclusion: there is almost no consistency among ratings by different experts. In effect, the NIH appears to already be using a lottery, one in which grants are randomly accepted or rejected depending on who reviews them.

What encourages me about these studies is that there really is a concrete question to ask. You could argue that physics shouldn’t suffer from the same problems as medicine, that grant review is really doing good work in our field. If you want to argue that, you can test it! Look at old reviews by different people, or get researchers to do “mock reviews”, and test statistical measures like inter-rater reliability. If there really is no consistency between reviews then we have a real problem in need of fixing.
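For concreteness, here is a sketch of one common statistical measure such a study might use: Cohen’s kappa for two raters, which compares observed agreement to the agreement expected by chance. This is an illustration, not the specific metric of the studies Vox cites (those vary), and the grant scores below are invented:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical scores.
    1.0 = perfect agreement, 0.0 = no better than chance,
    negative = worse than chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal score frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical referees scoring the same ten grants:
scores = ["fund", "reject"] * 5
print(cohens_kappa(scores, scores))        # 1.0: perfect agreement
print(cohens_kappa(scores, scores[::-1]))  # -1.0: perfect disagreement
```

If real grant reviews score near zero on measures like this, “the NIH is running a lottery” stops being a metaphor and becomes a description.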

I genuinely don’t know what to expect from that kind of study in my field. But the way people talk about grants makes me suspicious. Everyone seems to feel like grant agencies are biased against their sub-field. Grant-writing advice is full of weird circumstantial tips. (“I heard so-and-so is reviewing this year, so don’t mention QCD!”) It could all be true…but it’s also the kind of superstition people come up with when they look for patterns in a random process. If all the grant-writing advice in the world boils down to “bet on red”, we might as well admit which game we’re playing.

# Book Review: Thirty Years That Shook Physics and Mr Tompkins in Paperback

George Gamow was one of the “quantum kids” who got their start at the Niels Bohr Institute in the 30’s. He’s probably best known for the Alpher, Bethe, Gamow paper, which managed to combine one of the best sources of evidence we have for the Big Bang with a gratuitous Greek alphabet pun. He was the group jester in a lot of ways: the historians here have archives full of his cartoons and in-jokes.

Naturally, he also did science popularization.

I recently read two of Gamow’s science popularization books, “Mr Tompkins” and “Thirty Years That Shook Physics”. Reading them was a trip back in time, to when people thought about physics in surprisingly different ways.

“Mr Tompkins” started as a series of articles in Discovery, a popular science magazine. They were published as a book in 1940, with a sequel in 1945 and an update in 1965. Apparently they were quite popular among a certain generation: the edition I’m reading has a foreword by Roger Penrose.

(As an aside: Gamow mentions that the editor of Discovery was C. P. Snow…that C. P. Snow?)

Mr Tompkins himself is a bank clerk who decides on a whim to go to a lecture on relativity. Unable to keep up, he falls asleep, and dreams of a world in which the speed of light is much slower than it is in our world. Bicyclists visibly redshift, and travelers lead much longer lives than those who stay at home. As the book goes on he meets the same professor again and again (eventually marrying his daughter) and sits through frequent lectures on physics, inevitably falling asleep and experiencing it first-hand: jungles where Planck’s constant is so large that tigers appear as probability clouds, micro-universes that expand and collapse in minutes, and electron societies kept strictly monogamous by “Father Paulini”.

The structure definitely feels dated, and not just because these days people don’t often go to physics lectures for fun. Gamow actually includes the full text of the lectures that send Mr Tompkins to sleep, and while they’re not quite boring enough to send the reader to sleep they are written on a higher level than the rest of the text, with more technical terms assumed. In the later additions to the book the “lecture” aspect grows: the last two chapters involve a dream of Dirac explaining antiparticles to a dolphin in basically the same way he would explain them to a human, and a discussion of mesons in a Japanese restaurant where the only fantastical element is a trio of geishas acting out pion exchange.

Some aspects of the physics will also feel strange to a modern audience. Gamow presents quantum mechanics in a way that I don’t think I’ve seen in a modern text: while modern treatments start with uncertainty and think of quantization as a consequence, Gamow starts with the idea that there is a minimum unit of action, and derives uncertainty from that. Some of the rest is simply limited by timing: quarks weren’t fully understood even by the 1965 printing; in 1945 they weren’t even a gleam in a theorist’s eye. Thus Tompkins’ professor says that protons and neutrons are really two states of the same particle and goes on to claim that “in my opinion, it is quite safe to bet your last dollar that the elementary particles of modern physics [electrons, protons/neutrons, and neutrinos] will live up to their name.” Neutrinos also have an amusing status: they hadn’t been detected when the earlier chapters were written, and they come across rather like some people write about dark matter today, as a silly theorist hypothesis that is all-too-conveniently impossible to observe.

“Thirty Years That Shook Physics”, published in 1966, is a more usual sort of popular science book, describing the history of the quantum revolution. While mostly focused on the scientific concepts, Gamow does spend some time on anecdotes about the people involved. If you’ve read much about the time period, you’ll probably recognize many of the anecdotes (for example, the “Pauli effect”, by which a theorist can break experimental equipment just by walking into the room, or Dirac’s “discovery” of purling); even the ones specific to Gamow have by now been spread far and wide.

As in Mr Tompkins, the level of this book is not particularly uniform. Gamow will spend a paragraph carefully defining an average, and then drop the word “electroscope” as if everyone should know what it is. The historical perspective taught me a few things I perhaps should have already known, but found surprising anyway. (The plum-pudding model was an actual mathematical model, and people calculated its consequences! Muons were originally thought to be mesons!)

Both books are filled with Gamow’s whimsical illustrations, something he was very much known for. Apparently he liked to imitate other art styles as well, which is visible in the portraits of physicists at the front of each chapter.

1966 was late enough that this book doesn’t have the complacency of the earlier chapters in Mr Tompkins: Gamow knew that there were more particles than just electrons, nucleons, and neutrinos. It was still early enough, though, that the new particles were not fully understood. It’s interesting seeing how Gamow reacts to this: his expectation was that physics was on the cusp of another massive change, a new theory built on new fundamental principles. He speculates that there might be a minimum length scale (although oddly enough he didn’t expect it to be related to gravity).

It’s only natural that someone who lived through the dawn of quantum mechanics should expect a similar revolution to follow. Instead, the revolution of the late 60’s and early 70’s was in our understanding: not new laws of nature so much as new comprehension of just how much quantum field theory can actually do. I wonder if the generation who lived through that later revolution left it with the reverse expectation: that the next crisis should be solved in a similar way, that the world is quantum field theory (or close cousins, like string theory) all the way down and our goal should be to understand the capabilities of these theories as well as possible.

The final section of the book is well worth waiting for. In 1932, Gamow directed Bohr’s students in staging a play, the “Blegdamsvej Faust”. A parody of Faust, it features Bohr as god, Pauli as Mephistopheles, and Ehrenfest as the “erring Faust” (Gamow’s pun, not mine) that he tempts to sin with the promise of the neutrino, Gretchen. The piece, translated to English by Gamow’s wife Barbara, is filled with in-jokes on topics as obscure as Bohr’s habitual mistakes when speaking German. It’s gloriously weird and well worth a read. If you’ve ever seen someone do a revival performance, let me know!

# Made of Quarks Versus Made of Strings

When you learn physics in school, you learn it in terms of building blocks.

First, you learn about atoms. Indivisible elements, as the Greeks foretold…until you learn that they aren’t indivisible. You learn that atoms are made of electrons, protons, and neutrons. Then you learn that protons and neutrons aren’t indivisible either, they’re made of quarks. They’re what physicists call composite particles, particles made of other particles stuck together.

Hearing this story, you notice a pattern. Each time physicists find a more fundamental theory, they find that what they thought were indivisible particles are actually composite. So when you hear physicists talking about the next, more fundamental theory, you might guess it has to work the same way. If quarks are made of, for example, strings, then each quark is made of many strings, right?

Nope! As it turns out, there are two different things physicists can mean when they say a particle is “made of” a more fundamental particle. Sometimes they mean the particle is composite, like the proton is made of quarks. But sometimes, like when they say particles are “made of strings”, they mean something different.

To understand what this “something different” is, let’s go back to quarks for a moment. You might have heard there are six types, or flavors, of quarks: up and down, strange and charm, top and bottom. The different types have different mass and electric charge. You might have also heard that quarks come in different colors: red, green, and blue. You might wonder, then: aren’t there really eighteen types of quark? Red up quarks, green top quarks, and so forth?

Physicists don’t think about it that way. Unlike the different flavors, the different colors of quark have a more unified mathematical description. Changing the color of a quark doesn’t change its mass or electric charge. All it changes is how the quark interacts with other particles via the strong nuclear force. Know how one color works, and you know how the other colors work. Different colors can also “mix” together, similarly to how different situations can mix together in quantum mechanics: just as Schrödinger’s cat can be both alive and dead, a quark can be both red and green.

This same kind of thing is involved in another example, electroweak unification. You might have heard that electromagnetism and the weak nuclear force are secretly the same thing. Each force has corresponding particles: the familiar photon for electromagnetism, and W and Z bosons for the weak nuclear force. Unlike the different colors of quarks, photons and W and Z bosons have different masses from each other. It turns out, though, that they still come from a unified mathematical description: they’re “mixtures” (in the same Schrödinger’s-cat-esque sense) of the particles from two more fundamental forces (sometimes called “weak isospin” and “weak hypercharge”). The reason they have different masses isn’t their own fault, but the fault of the Higgs: the Higgs field we have in our universe interacts with different parts of this unified force differently, so the corresponding particles end up with different masses.
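To make the “mixture” concrete: at tree level, the photon and Z are rotations of the weak-isospin and weak-hypercharge bosons by a single “weak mixing angle”, and that same angle ties the W and Z masses together. A quick sketch with rough measured numbers (tree-level only; real precision fits include loop corrections and depend on conventions):

```python
import math

# Tree-level electroweak mixing: the photon and Z boson are
# "mixtures" (rotations) of the hypercharge boson B and the neutral
# weak-isospin boson W3, by the weak mixing angle theta_W.
sin2_theta_w = 0.231   # rough measured value of sin^2(theta_W)
m_w = 80.4             # GeV, roughly the measured W mass

cos_theta_w = math.sqrt(1 - sin2_theta_w)

# The Higgs interacts differently with different parts of the unified
# force; the resulting tree-level mass relation is m_Z = m_W / cos(theta_W).
m_z_predicted = m_w / cos_theta_w
print(f"predicted m_Z = {m_z_predicted:.1f} GeV")  # ~91.7, vs. measured ~91.2
```

That a single mixing angle, measured in scattering experiments, lands within a percent of the Z mass is part of why physicists take this “made of” seriously.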

A physicist might say that electromagnetism and the weak force are “made of” weak isospin and weak hypercharge. And it’s that kind of thing that physicists mean when they say that quarks might be made of strings, or the like: not that quarks are composite, but that quarks and other particles might have a unified mathematical description, and look different only because they’re interacting differently with something else.

This isn’t to say that quarks and electrons can’t be composite as well. They might be, we don’t know for sure. If they are, the forces binding them together must be very strong, strong enough that our most powerful colliders can’t make them wiggle even a little out of shape. The tricky part is that composite particles get mass from the energy holding them together. A particle held together by very powerful forces would normally be very massive; if you want it to end up light, you have to construct your theory carefully. So while occasionally people will suggest theories where quarks or electrons are composite, these theories aren’t common. Most of the time, if a physicist says that quarks or electrons are “made of” something else, they mean something more like “particles are made of strings” than like “protons are made of quarks”.

# What Science Would You Do If You Had the Time?

I know a lot of people who worry about the state of academia. They worry that the competition for grants and jobs has twisted scientists’ priorities, that the sort of dedicated research of the past, sitting down and thinking about a topic until you really understand it, just isn’t possible anymore. The timeline varies: there are people who think the last really important development was the Standard Model, or the top quark, or AdS/CFT. Even more optimistic people, who think physics is still just as great as it ever was, often complain that they don’t have enough time.

Sometimes I wonder what physics would be like if we did have the time. If we didn’t have to worry about careers and funding, what would we do? I can speculate, comparing to different communities, but here I’m interested in something more concrete: what, specifically, could we accomplish? I often hear people complain that the incentives of academia discourage deep work, but I don’t often hear examples of the kind of deep work that’s being discouraged.

So I’m going to try an experiment here. I know I have a decent number of readers who are scientists of one field or another. Imagine you didn’t have to worry about funding any more. You’ve got a permanent position, and what’s more, your favorite collaborators do too. You don’t have to care about whether your work is popular, whether it appeals to the university or the funding agencies or any of that. What would you work on? What projects would you personally do, that you don’t have the time for in the current system? What worthwhile ideas has modern academia left out?