# Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics altogether!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t achieve this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.

*The Hunchback of Notre Science*

It wasn’t always this way. Up until the nineteenth century, science was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a $1/r^2$ force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As Curator of Experiments for the Royal Society, he was responsible for setting up demonstrations at each Royal Society meeting. Later, he also handled correspondence for the Royal Society Journal. These responsibilities took up much of his time, and as a result, even if he was capable of following up on the consequences of $1/r^2$, he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.
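Working out those consequences meant showing that a $1/r^2$ force reproduces Kepler’s laws, something Newton did analytically and we can now check numerically in minutes. As a toy sketch (invented units, a simple leapfrog integrator, nothing resembling what Newton or Hooke actually did): integrate an orbit under an inverse-square force and verify Kepler’s third law, $T^2 \propto a^3$.

```python
import math

def orbital_period(r0, v0, gm=1.0, dt=1e-4, max_steps=10_000_000):
    """Integrate a planar orbit under an attractive 1/r^2 force
    (leapfrog scheme) and return the time for one full revolution.

    Starts at (r0, 0) with velocity (0, v0); toy units with GM = gm."""
    x, y = r0, 0.0
    vx, vy = 0.0, v0
    # initial half-step kick to stagger velocity (leapfrog)
    r = math.hypot(x, y)
    vx += 0.5 * dt * (-gm * x / r**3)
    vy += 0.5 * dt * (-gm * y / r**3)
    t, prev_y = 0.0, y
    for _ in range(max_steps):
        x += dt * vx                 # drift
        y += dt * vy
        r = math.hypot(x, y)
        vx += dt * (-gm * x / r**3)  # kick
        vy += dt * (-gm * y / r**3)
        t += dt
        if prev_y < 0.0 <= y:        # crossed back through the start angle
            return t
        prev_y = y
    raise RuntimeError("orbit did not close within max_steps")

if __name__ == "__main__":
    t1 = orbital_period(1.0, 1.0)             # circular orbit, a = 1
    t2 = orbital_period(2.0, math.sqrt(0.5))  # circular orbit, a = 2
    print(f"(T2/T1)^2 = {(t2 / t1) ** 2:.3f}  (Kepler's third law predicts 8)")
```

Doubling the orbital radius should multiply $T^2$ by $2^3 = 8$, which the integration confirms to numerical accuracy.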

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke, expectations have changed, and real original research is no longer something we have to fit into our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until they’ve either learned something productive or their old topic becomes useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just the number of papers and citations, if we manage all of that, then yes, we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.

*A physicist lazing about unproductively under an apple tree*

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than those of many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object, isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time, there have been plenty of cases in which people have done that, where a number of fiddly hand-made models have been summarized in one parameter space. Once that happens, the rule of originality kicks in: now, no-one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that, they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.
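To make “writing the recipe down” concrete, here’s a cartoon in code. Everything in it is invented for illustration: a two-parameter family of toy models, a mock exclusion bound standing in for “the current experiments”, and a scan that covers the whole parameter space at once rather than proposing one hand-tuned point per paper.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Model:
    """A toy 'new physics' model: one mass (GeV) and one coupling."""
    mass: float
    coupling: float

def excluded(m: Model, bound: float = 1e-4) -> bool:
    """Mock exclusion: light particles with strong couplings are ruled
    out; heavy or weakly-coupled ones escape. The bound is pure invention,
    not any real experiment."""
    return m.coupling**2 / m.mass > bound

# Cover the whole two-parameter space in one pass, instead of
# tweaking a single model just past the latest bound.
masses = [10.0 * 2**k for k in range(8)]       # 10 GeV .. 1280 GeV
couplings = [10.0**(-k) for k in range(1, 5)]  # 0.1 .. 0.0001

surviving = [Model(m, g) for m, g in product(masses, couplings)
             if not excluded(Model(m, g))]
print(f"{len(surviving)} of {len(masses) * len(couplings)} models not yet falsified")
```

Once a family of models can be enumerated and filtered like this, any single point in the space is “already covered”, and the rule of originality does the rest.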

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend and focusing not on covering the “next big thing”, but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms-race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.

# Thoughts on Polchinski’s Memoir

I didn’t get a chance to meet Joseph Polchinski when I was visiting Santa Barbara last spring. At the time, I heard his health was a bit better, but he still wasn’t feeling well enough to come in to campus. Now that I’ve read his memoir, I almost feel like I have met him. There’s a sense of humor, a diffidence, and a passion for physics that shines through the pages.

The following are some scattered thoughts inspired by the memoir:

A friend of mine once complained to me that in her field grad students all brag about the colleges they went to. I mentioned that in my field your undergrad never comes up…unless it was Caltech. For some reason, everyone I’ve met who went to Caltech is full of stories about the place, and Polchinski is no exception. Speaking as someone who didn’t go there, it seems like Caltech has a profound effect on its students that other places don’t.

Polchinski mentions hearing stories about geniuses of the past, and how those stories helped temper some of his youthful arrogance. There’s an opposite effect that’s also valuable: hearing stories like Polchinski’s, his descriptions of struggling with anxiety and barely publishing and “not really accomplishing anything” till age 40, can be a major comfort to those of us who worry we’ve fallen behind in the academic race. That said, it’s important not to take these things too far: times have changed, you’re not Polchinski, and much like his door-stealing trick at Caltech, getting a postdoc without any publications is something you shouldn’t try at home. Even Witten’s students need at least one.

Last week I was a bit puzzled by nueww’s comment, a quote from Polchinski’s memoir which distinguishes “math of the equations” from “math of the solutions”, attributing the former to physicists and the latter to mathematicians. Reading the context in the memoir and the phrase’s origin in a remark by Susskind cleared things up a bit, but still left me uneasy. I only figured out why after Lubos Motl posted about it: it doesn’t match my experience of mathematicians at all!

If anything, I think physicists usually care more about the “solutions” than mathematicians do. In my field, often a mathematician will construct some handy basis of functions and then frustrate everyone by providing no examples of how to use them. In the wider math community I’ve met graph theorists who are happy to prove something is true for all graphs of size $10^{10^{10}}$ and larger, not worrying about the vast number of graphs where it fails because it’s just a finite number of special cases. And I don’t think this is just my experience: a common genre of jokes revolves around mathematicians proving a solution exists and then not bothering to do anything with it (for example, see the joke with the hotel fire here).

I do think there’s a meaningful sense in which mathematicians care about details that we’re happy to ignore, but “solutions” versus “equations” isn’t really the right axis. It’s something more like “rigor” versus “principles”. Mathematicians will often begin a talk by defining a series of maps between different spaces, carefully describing where they are and aren’t valid. A physicist might just write down a function. That sort of thing is dangerous in mathematics: there are always special, pathological cases that make careful definitions necessary. In physics, those cases rarely come up, and when they do there’s often a clear physical problem that brings them to the forefront. We have a pretty good sense of when we need rigor, and when we don’t we’re happy to lay things out without filling in the details, putting a higher priority on moving forward and figuring out the basic principles underlying reality.

Polchinski talks a fair bit about his role in the idea of the multiverse, from hearing about Weinberg’s anthropic argument to coming to terms with the string landscape. One thing his account makes clear is how horrifying the concept seemed at first: how the idea that the parameters of our universe might just be random could kill science and discourage experimentalists. This touches on something that I think gets lost in arguments about the multiverse: even the people most involved in promoting the multiverse in public aren’t happy about it.

It also sharpened my thinking about the multiverse a bit. I’ve talked before about how I don’t think the popularity of the multiverse is actually going to hurt theoretical physics as a field. Polchinski’s worries made me think about the experimental side of the equation: why do experiments if the world might just be random? I think I have a clearer answer to this now, but it’s a bit long, so I’ll save it for a future post.

One nice thing about these long-term accounts is you get to see how much people shift between fields over time. Polchinski didn’t start out working in string theory, and most of the big names in my field, like Lance Dixon and David Kosower, didn’t start out in scattering amplitudes. Academic careers are long, and however specialized we feel at any one time we can still get swept off in a new direction.

I’m grateful for this opportunity to “meet” Polchinski, if only through his writing. His is a window on the world of theoretical physics that is all too rare, and valuable as a result.

# Topic Conferences, Place Conferences

I spent this week at Current Themes in High Energy Physics and Cosmology, a conference at the Niels Bohr Institute.

Most conferences focus on a particular topic. Usually the broader the topic, the bigger the conference. A workshop on flux tubes is smaller than Amplitudes, which is smaller than Strings, which is smaller than the March Meeting of the American Physical Society.

“Current Themes in High Energy Physics and Cosmology” sounds like a very broad topic, but it was a small conference. The reason why is that it wasn’t a “topic conference”, it was a “place conference”.

Most conferences focus on a topic, but some are built around a place. These conferences are hosted by a particular institute year after year. Sometimes each year has a loose theme (for example, the Simons Summer Workshop this year focused on theories without supersymmetry) but sometimes no attempt is made to tie the talks together (“current themes”).

Instead of a theme, the people who go to these conferences are united by their connections to the institute. Some of them have collaborators there, or worked there in the past. Others have been coming for many years. Some just happened to be in the area.

While they may seem eclectic, “place” conferences have a valuable role: they help to keep our interests broad. In physics, there’s a natural tendency to specialize. Left alone, we end up reading papers and going to talks only when they’re directly relevant for what we’re working on. By doing this we lose track of the wider field, losing access to the insights that come from different perspectives and methods.

“Place” conferences, like seminars, help pull things in the other direction. When you’re hearing talks from “everyone connected to the Simons Center” or “everyone connected to the Niels Bohr Institute”, you’re exposed to a much broader range of topics than a conference for just your sub-field. You get a broad overview of what’s going on in the field, but unlike a big conference like Strings there are few enough people that you can actually talk to everyone.

Physicists’ attachment to places is counter-intuitive. We’re studying mathematical truths and laws of nature; surely it shouldn’t matter where we work. In practice, though, we’re still human. Out of the vast span of physics we still pick our interests based on the people around us. That’s why places, why institutes with a wide range of excellent people, are so important: they put our social instincts to work studying the universe.

# Copenhagen!

After a week of packing, shipping, selling or donating my worldly possessions, I have now arrived in Denmark! I’m too exhausted for much of a post this week, so enjoy this picture of the wilderness of the frozen north.

Ok fine it’s a park.

# We’re Weird

Preparing to move to Denmark, it strikes me just how strange what I’m doing would seem to most people. I’m moving across the ocean to a place where I don’t know the language. (Or at least, don’t know more than half a Duolingo lesson.) I’m doing this just three years after another international move. And while I’m definitely nervous, this isn’t the big life-changing shift it would be for many people. It’s just how academic careers are expected to work.

At borders, I’m often asked why I am where I am. Why be an American working in Canada? Why move to Denmark? And in general, the answer is just that it’s where I need to be to do what I want to do, because it’s where the other people who do what I want to do are. A few people seed this process by managing to find faculty jobs in their home countries, and others sort themselves out by their interests. In the end, we end up with places like Perimeter, an institute in the middle of Canada with barely any Canadians.

This is more pronounced for smaller fields than for larger ones. A chemist or biologist might just manage to have their whole career in the same US state, or the same European country. For a theoretical physicist, this is much less likely. I also suspect it’s more pronounced for more “universal” fields: I’d guess that most professors of Portuguese literature, by contrast, are in Portugal or Brazil.

For theoretical physics, the result is an essentially random mix of people around the world. This works, in part, because essentially everyone does science in English. Occasionally, a group of collaborators happens to speak the same non-English language, so you sometimes hear people talking science in Russian or Spanish or French. But even then there are times people will default to English anyway, because they’re used to it. We publish in English, we chat in English. And as a result, wherever we end up we can at least talk to our colleagues, even if the surrounding world is trickier.

Communities this international, with four different accents in every conversation, are rare, and I occasionally forget that. Before grad school, the closest I came to this was on the internet. On Dungeons and Dragons forums, much like in academia, everyone was drawn together by shared interests and expertise. We had Australians logging on in the middle of everyone else’s night to argue with the Germans, and Brazilians pointing out how the game’s errata was implemented differently in Portuguese.

It’s fun to be in that sort of community in the real world. There’s always something to learn from each other, even on completely mundane topics. Lunch often turns into a discussion of different countries’ cuisines. As someone who became an academic because I enjoy learning, it’s great to have the wheels constantly spinning like that. I should remember, though, that most of the world doesn’t live like this: we’re currently a pretty weird bunch.

# Join the Dark Side: Become a Seminar Organizer

Attending talks is the bane of many a physicist’s existence. Taking an hour out of your busy schedule to listen to someone you know you’ll only understand for fifteen minutes, hoping that they’ll at least give you a vague idea of why you should care but expecting that they won’t…who would willingly subject people to that?

Well, I would.

I’ve signed up to be the High Energy Theory Seminar organizer for the Niels Bohr Institute this year. Most physics institutes hold regular seminars, usually once or twice a week, where they invite speakers from the surrounding region and all over the world. Organizing these seminars is a job often handed to one of the local postdocs: in this case, me.

In the past I’ve put some thought into the purpose of seminars, but mostly from the perspective of someone attending and occasionally giving them. Now that I’m involved in organizing them, entirely new questions present themselves.

Are seminars for work, or for fun? On the one hand, seminars can be a way to keep up with your own field and pick up useful techniques from others. Looked at in that way, I should invite speakers whose interests line up with the researchers at NBI. On the other hand, seminars can be a good way to find out what’s going on outside of your own field, to satisfy your curiosity about the “next big thing”. Sometimes you see a paper and wish you could ask the author what they were thinking; seminars let you ask face to face.

Is it better to invite big names, or grad students? The big-name people might give better talks on more interesting topics, and they enhance the prestige of the seminar series. They also tend to be busy, and don’t need the talks as much as the grad students do.

People from nearby, or far away? It’s cheaper to invite people from nearby, but you want at least a few big names from farther away.

For most of these, the right approach is a balanced one. You want to invite people whose interests line up with your colleagues’, but also a few more distant people for breadth. You want a mix of established big-name people and younger researchers, of nearby people and far-away ones.

The Niels Bohr Institute does a lot of seminars, typically two per week. Even with a co-organizer filling half of them, that’s a lot of ground to cover, a lot of room to balance all of these goals.

Seminar organizers get exposed to a wide range of researchers working on a wide range of topics. It’s supposed to be good for the career, the ultimate networking experience. For myself, I’m still quite specialized, so I’m hoping this will be a good opportunity to broaden my interests and learn about what others are doing. Along the way, perhaps I’ll get a better idea of what seminars are really for.

# Where Grants Go on the Ground

I’ve seen several recent debates about grant funding, arguments about whether this or that scientist’s work is “useless” and shouldn’t get funded. Wading into the specifics is a bit more political than I want to get on this blog right now, and if you’re looking for a general defense of basic science there are plenty to choose from. I’d like to focus on a different part, one where I think the sort of people who want to de-fund “useless” research are wildly overoptimistic.

People who call out “useless” research act as if government science funding works in a simple, straightforward way: scientists say what they want to work on, the government chooses which projects it thinks are worth funding, and the scientists the government chooses get paid.

This may be a (rough) picture of how grants are assigned. For big experiments and grants with very specific purposes, it’s reasonably accurate. But for the bulk of grants distributed among individual scientists, it ignores what happens to the money on the ground, after the scientists get it.

The simple fact of the matter is that what a grant is “for” doesn’t have all that much influence on what it gets spent on. In most cases, scientists work on what they want to, and find ways to pay for it.

Sometimes, this means getting grants for applied work, doing some of that, but also fitting in more abstract theoretical projects during downtime. Sometimes this means sharing grant money, if someone has a promising grad student they can’t fund at the moment and needs the extra help. (When I first got research funding as a grad student, I had to talk to the particle physics group’s secretary, and I’m still not 100% sure why.) Sometimes this means being funded to look into something specific and finding a promising spinoff that takes you in an entirely different direction. Sometimes you can get quite far by telling a good story, like a mathematician I know who gets defense funding to study big abstract mathematical systems because some related systems happen to have practical uses.

Is this unethical? Some of it, maybe. But from what I’ve seen of grant applications, it’s understandable.

The problem is the flip side: if scientists are too loose about what they spend grant money on, grant agencies tend to be far too specific about what they ask for. I’ve heard of grants that ask you to give a timeline, over the next five years, of each discovery you’re planning to make. That sort of thing just isn’t possible in science: we can lay out a rough direction to go, but we don’t know what we’ll find.

The end result is a bit like complaints about job interviews, where everyone is expected to say they love the company even though no-one actually does. It creates an environment where everyone has to twist the truth just to keep up with everyone else.

The other thing to keep in mind is that there really isn’t any practical way to enforce any of this. Sure, you can require receipts for equipment and the like, but once you’re paying for scientists’ time you don’t have a good way to monitor how they spend it. The best you can do is have experts around to evaluate the scientists’ output…but if those experts understand enough to do that, they’re going to be part of the scientific community, like grant committees usually already are. They’ll have the same expectations as the scientists, and give similar leeway.

So if you want to kill off some “useless” area of research, you can’t do it by picking and choosing who gets grants for what. There are advocates of more drastic actions of course, trying to kill whole agencies or fields, and that’s beyond the scope of this post. But if you want science funding to keep working the way it does, and just have strong opinions about what scientists should do with it, then calling out “useless” research doesn’t do very much: if the scientists in question think it’s useful, they’ll find a way to keep working on it. You’ve slowed them down, but you’ll still end up paying for research you don’t like.

Final note: The rule against political discussion in the comments is still in effect. For this post, that means no specific accusations of one field or another as being useless, or one politician/political party/ideology or another of being the problem here. Abstract discussions and discussions of how the grant system works should be fine.