Tag Archives: philosophy of science

The Multiverse Can Only Kill Physics by Becoming Physics

I’m not a fan of the multiverse. I think it’s over-hyped, way beyond its current scientific support.

But I don’t think it’s going to kill physics.

By “the multiverse” I’m referring to a group of related ideas. There’s the idea that we live in a vast, varied universe, with different physical laws in different regions. Relatedly, there’s the idea that the properties of our region aren’t typical of the universe as a whole, just typical of places where life can exist. It may be that in most of the universe the cosmological constant is enormous, but if life can only exist in places where it is tiny then a tiny cosmological constant is what we’ll see. That sort of logic is called anthropic reasoning. If it seems strange, think about a smaller scale: there are many planets in the universe, but only a small number of them can support life. Still, we shouldn’t be surprised that we live on a planet that can support life: if it couldn’t, we wouldn’t live here!

If we really do live in a multiverse, though, some of what we think of as laws of physics are just due to random chance. Maybe the quarks have the masses they do not for some important reason, but just because they happened to end up that way in our patch of the universe.

This seems to have depressing implications. If the laws of physics are random, or just consequences of where life can exist, then what’s left to discover? Why do experiments at all?

Well, why not ask the geoscientists?


These guys

We might live in one universe among many, but we definitely live on one planet among many. And somehow, this realization hasn’t killed geoscience.

That’s because knowing we live on a random planet doesn’t actually tell us very much.

Now, I’m not saying you can’t do anthropic reasoning about the Earth. For example, it looks like an active system of plate tectonics is a necessary ingredient for life. Even if plate tectonics is rare, we shouldn’t be surprised to live on a planet that has it.

Ok, so imagine it’s 1900, before Wegener proposed continental drift. Scientists believe there are many planets in the universe, that we live in a “multiplanet”. Could you predict plate tectonics?

Even knowing that we live on one of the few planets that can support life, you don’t know how it supports life. Even living in a “multiplanet”, geoscience isn’t dead. The specifics of our Earth are still going to teach you something important about how planets work.

Physical laws work the same way. I’ve said that the masses of the quarks could be random, but it’s not quite that simple. The underlying reasons why the masses of the quarks are what they are could be random: the specifics of how six extra dimensions happened to curl up in our region of the universe, for example. But there’s important physics in between: the physics of how those random curlings of space give rise to quark masses. There’s a mechanism there, and we can’t just pick one out of a hat or work backwards to it anthropically. We have to actually go out and discover the answer.

Similarly, we don’t know automatically which phenomena are “random”, which are “anthropic”, and which are required by some deep physical principle. Even in a multiverse, we can’t assume that everything comes down to chance, we only know that some things will, much as the geoscientists don’t know what’s unique to Earth and what’s true of every planet without actually going out and checking.

You can even find a notion of “naturalness” here, if you squint. In physics, we find phenomena like the mass of the Higgs “unnatural”: they’re “fine-tuned” in a way that cries out for an explanation. Normally, we think of this in terms of a hypothetical “theory of everything”: the more “fine-tuned” something appears, the harder it would be to explain in a final theory. In a multiverse, it looks like we’d have to give up on this, because even the most unlikely-looking circumstance would happen somewhere, especially if it’s needed for life.

Once again, though, imagine you’re a geoscientist. Someone suggests a ridiculously fine-tuned explanation for something: perhaps volcanoes only work if they have exactly the right amount of moisture. Even though we live on one planet in a vast universe, you’re still going to look for simpler explanations before you move on to more complicated ones. It’s human nature, and by and large it’s the only way we’re capable of doing science. As physicists, we’ve papered this over with technical definitions of naturalness, but at the end of the day even in a multiverse we’ll still start with less fine-tuned-looking explanations and only accept the fine-tuned ones when the evidence forces us to. It’s just what people do.

The only way for anthropic reasoning to get around this, to really make physics pointless once and for all, is if it actually starts making predictions. If anthropic reasoning in physics can be made much stronger than anthropic reasoning in geoscience (which, as mentioned, didn’t predict tectonic plates until a century after their discovery) then maybe we can imagine getting to a point where it tells us what particles we should expect to discover, and what masses they should have.

At that point, though, anthropic reasoning won’t have made physics pointless: it will have become physics.

If anthropic reasoning is really good enough to make reliable, falsifiable predictions, then we should be ecstatic! I don’t think we’re anywhere near that point, though some people are earnestly trying to get there. But if it really works out, then we’d have a powerful new method to make predictions about the universe.


Ok, so with all of this said, there is one other worry.

Karl Popper criticized Marxism and Freudianism for being unfalsifiable. In both disciplines, there was a tendency to tell what were essentially “just-so stories”. They could “explain” any phenomenon by setting it in their framework and explaining how it came to be “just so”. These explanations didn’t make new predictions, and different people often ended up coming up with different explanations with no way to distinguish between them. They were stories, not scientific hypotheses. In more recent times, the same criticism has been made of evolutionary psychology. In each case the field is accused of being able to justify anything and everything in terms of its overly ambiguous principles, whether dialectical materialism, the unconscious mind, or the ancestral environment.


Or an elephant’s ‘satiable curtiosity

You’re probably worried that this could happen to physics. With anthropic reasoning and the multiverse, what’s to stop physicists from just proposing some “anthropic” just-so-story for any evidence we happen to find, no matter what it is? Surely anything could be “required for life” given a vague enough argument.

You’re also probably a bit annoyed that I saved this objection for last. I know that for many people, this is precisely what you mean when you say the multiverse will “kill physics”.

I’ve saved this for last for a reason though. It’s because I want to point out something important: this outcome, that our field degenerates into just-so-stories, isn’t required by the physics of the multiverse. Rather, it’s a matter of sociology.

If we hold anthropic reasoning to the same standards as the rest of physics, then there’s no problem: if an anthropic explanation doesn’t make falsifiable predictions then we ignore it. The problem comes if we start loosening our criteria, start letting people publish just-so-stories instead of real science.

This is a real risk! I don’t want to diminish that. It’s harder than it looks for a productive academic field to fall into bullshit, but just-so-stories are a proven way to get there.

What I want to emphasize is that we’re all together in this. We all want to make sure that physics remains scientific. We all need to be vigilant, to prevent a culture of just-so-stories from growing. Regardless of whether the multiverse is the right picture, and regardless of how many annoying TV specials they make about it in the meantime, that’s the key: keeping physics itself honest. If we can manage that, nothing we discover can kill our field.


On the Care and Feeding of Ideas

I read Zen and the Art of Motorcycle Maintenance in high school. It’s got a reputation for being obnoxiously mystical, but one of its points seemed pretty reasonable: the claim that the hard part of science, and the part we understand the least, is coming up with hypotheses.

In some sense, theoretical physics is all about hypotheses. By this I don’t mean that we just say “what if?” all the time. I mean that in theoretical physics most of the work is figuring out the right way to ask a question. Phrase your question in the right way and the answer becomes obvious (or at least, obvious after a straightforward calculation). Because our questions are mathematical, the right question can logically imply its own solution.

From the point of view of “Zen and the Art”, as well as most non-scientists I’ve met, this part is utterly mysterious. The ideas you need here seem like they can’t come from hard work or careful observation. In order to ask the right questions, you just need to be “smart”.

In practice, I’ve noticed there’s more to it than that. We can’t just sit around and wait for an idea to show up. Instead, as physicists we develop a library of tricks, often unstated, that let us work towards the ideas we need.

Sometimes, this involves finding simpler cases, working with them until we understand the right questions to ask. Sometimes it involves doing numerics, or using crude guesses, not because either method will give the final answer but because it will show what the answer should look like. Sometimes we need to rephrase the problem many times, in many different contexts, before we happen on one that works. Most of this doesn’t end up in published papers, so in the end we usually have to pick it up from experience.

Along the way, we often find tricks to help us think better. Mostly this is straightforward stuff: reminders to keep us on-task, keeping our notes organized and our code commented so we have a good idea of what we were doing when we need to go back to it. Everyone has their own personal combination of these things in the background, and they’re rarely discussed.

The upshot is that coming up with ideas is hard work. We need to be smart, sure, but that’s not enough by itself: there are a lot of smart people who aren’t physicists after all.

With all that said, some geniuses really do seem to come up with ideas out of thin air. It’s not the majority of the field: we’re not the idiosyncratic Sheldon Coopers everyone seems to imagine. But for a few people, it really does feel like there’s something magical about where they get their ideas. I’ve had the privilege of working with a couple people like this, and the way they think sometimes seems qualitatively different from our usual way of building ideas. I can’t see any of the standard trappings, the legacy of partial results and tricks of thought, that would lead to where they end up. That doesn’t mean they don’t use tricks just like the rest of us, in the end. But I think genius, if it means anything at all, is thinking in a novel enough way that from the outside it looks like magic.

Most of the time, though, we just need to hone our craft. We build our methods and shape our minds as best we can, and we get better and better at the central mystery of science: asking the right questions.

Textbook Review: Exploring Black Holes

I’m bringing a box of textbooks with me to Denmark. Most of them are for work: a few Quantum Field Theory texts I might use, a Complex Analysis book for when I inevitably forget how to do contour integration.

One of the books, though, is just for fun.


Exploring Black Holes is an introduction to general relativity for undergraduates. The book came out of a collaboration between Edwin F. Taylor, known for his contributions to physics teaching, and John Archibald Wheeler, who among a long list of achievements was responsible for popularizing the term “black hole”. The result is something quite unique: a general relativity course that requires no math more advanced than calculus, and no physics more advanced than special relativity.

It does this by starting, not with the full tensor-riddled glory of Einstein’s equations, but with specialized solutions to those equations, mostly the Schwarzschild solution that describes space around spherical objects (including planets, stars, and black holes). From there, it manages to introduce curved space in a way that is both intuitive and naturally grows out of what students learn about special relativity. It really is the kind of course a student can take right after their first physics course, and indeed as an undergrad that’s exactly what I did.
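For concreteness, the Schwarzschild solution at the heart of the book can be summarized in a single line element (written here in Schwarzschild coordinates; the book itself builds up to this much more gently):

```latex
ds^2 = -\left(1 - \frac{2GM}{rc^2}\right) c^2\, dt^2
       + \frac{dr^2}{1 - \dfrac{2GM}{rc^2}}
       + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)
```

The factor \(1 - 2GM/rc^2\) vanishes at \(r = 2GM/c^2\), which is what marks the event horizon of a black hole.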

With just the Schwarzschild solution and its close relatives, you can already answer most of the questions young students have about general relativity. In a series of “projects”, the book explores the corrections GR demands of GPS satellites, the process of falling into a black hole, the famous measurement of the advance of the perihelion of Mercury, the behavior of light in a strong gravitational field, and even a bit of cosmology. In the end the students won’t know the full power of the theory, but they’ll get a taste while building valuable physical intuition.
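To give a flavor of the GPS project, here’s a back-of-the-envelope sketch in Python. It is my own rough illustration, not the book’s worked project: the constants are standard published values, and it simply compares the gravitational blueshift of a satellite clock against its special-relativistic time dilation.

```python
# Back-of-the-envelope GR and SR clock corrections for a GPS satellite.
# Constants are standard values; this is a sketch, not the book's project.
import math

GM = 3.986004e14      # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
R_earth = 6.371e6     # mean Earth radius, m
r_orbit = 2.6571e7    # GPS orbital radius (~20,200 km altitude), m

# Gravitational blueshift: the satellite clock runs fast relative to ground.
grav = (GM / c**2) * (1.0 / R_earth - 1.0 / r_orbit)

# Special-relativistic time dilation: orbital speed makes the clock run slow.
v = math.sqrt(GM / r_orbit)           # circular-orbit speed
kinematic = -v**2 / (2.0 * c**2)

seconds_per_day = 86400.0
net_us_per_day = (grav + kinematic) * seconds_per_day * 1e6
print(f"gravitational: {grav * seconds_per_day * 1e6:+.1f} us/day")
print(f"kinematic:     {kinematic * seconds_per_day * 1e6:+.1f} us/day")
print(f"net:           {net_us_per_day:+.1f} us/day")
```

The two effects pull in opposite directions, with gravity winning: the satellite clocks run fast by roughly 38 microseconds per day, a drift GPS has to correct for to stay accurate.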

Still, I wouldn’t bring this book with me if it were just an excellent undergraduate textbook. Exploring Black Holes is a great introduction to general relativity, but it also has a hilarious not-so-hidden agenda: inspiring future astronauts to jump into black holes.

“Nowhere could life be simpler or more relaxed than in a free-float frame, such as an unpowered spaceship falling toward a black hole.” – pg. 2-31

The book is full of quotes like this. One of the book’s “projects” involves computing what happens to an astronaut who falls into a black hole. The book takes special care to have students calculate that “spaghettification”, the process by which the tidal forces of a black hole stretch infalling observers into spaghetti, is, surprisingly, completely painless: the amount of time you experience it is always less than the amount of time it takes light (and thus also pain) to go from your feet to your head, for any (sufficiently calm) black hole.
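To see why large black holes are gentle at the horizon, here’s a rough tidal estimate in Python. This is my own sketch, not the book’s calculation, and using the Newtonian tidal formula at the horizon is only an order-of-magnitude guide:

```python
# Rough estimate of the tidal acceleration across a ~2 m body at the horizon
# of a Schwarzschild black hole, using the Newtonian tidal formula as an
# order-of-magnitude guide. Constants are standard values.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8       # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
g = 9.81               # Earth surface gravity, m/s^2
height = 2.0           # head-to-foot distance, m

def tidal_at_horizon(mass):
    """Newtonian tidal stretch 2*G*M*h/r^3, evaluated at r = 2GM/c^2."""
    r_s = 2.0 * G * mass / c**2
    return 2.0 * G * mass * height / r_s**3

# Stellar-mass black hole: tidal forces are deadly well outside the horizon.
print(tidal_at_horizon(M_sun) / g)
# Supermassive black hole (~Sgr A* scale): gentler than Earth's gravity.
print(tidal_at_horizon(4e6 * M_sun) / g)
```

Since the tidal stretch at the horizon scales like 1/M², a big enough black hole lets you cross the horizon without feeling a thing; the stretching only becomes violent much closer to the singularity.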

Why might Taylor and Wheeler want people of the future to jump into black holes? As the discussion on page B-3 of the book describes, the reason is on one level an epistemic one. As theorists, we’d like to reason about what lies inside the event horizon of black holes, but we face a problem: any direct test would be trapped inside, and we would never know the result, which some would argue makes such speculation unscientific. What Taylor and Wheeler point out is that it’s not quite true that no-one would know the results of such a test: if someone jumped into a black hole, they would be able to test our reasoning. If a whole scientific community jumped in, then the question of what is inside a black hole is from their perspective completely scientific.

Of course, I don’t think Taylor and Wheeler seriously thought their book would convince its readers to jump into black holes. For one, it’s unlikely anyone reading the book will get a chance. Still, I suspect that the idea that future generations might explore black holes gave Taylor and Wheeler some satisfaction, and a nice clean refutation of those who think physics inside the horizon is unscientific. Seeing as the result was an excellent textbook full of hilarious prose, I can’t complain.

Boltzmann Brains, Evil Demons, and Why It’s Occasionally a Good Idea to Listen to Philosophers

There’s been a bit of a buzz recently about a paper Sean Carroll posted to the arXiv, “Why Boltzmann Brains Are Bad”. The argument in the paper isn’t new, it’s something Carroll has been arguing for a long time, and the arXiv post was just because he had been invited to contribute a piece to a book on Current Controversies in Philosophy of Science.

(By the way: in our field, invited papers and conference proceedings are almost always reviews of old work, not new results. If you see something on arXiv and want to know whether it’s actually new work, the “Comments:” section will almost always mention this.)

While the argument isn’t new, it is getting new attention. And since I don’t think I’ve said much about my objections to it, now seems like a good time to do so.

Carroll’s argument is based on theoretical beings called Boltzmann brains. The idea is that if you wait a very very long time in a sufficiently random (“high-entropy”) universe, the matter in that universe will arrange itself in pretty much every imaginable way, if only for a moment. In particular, it will eventually form a brain, or enough of a brain to have a conscious experience. Wait long enough, and you can find a momentary brain having any experience you want, with any (fake) memories you want. Long enough, and you can find a brain having the same experience you are having right now.

So, Carroll asks, how do you know you aren’t a Boltzmann brain? If the universe exists for long enough, most of the beings having your current experiences would be Boltzmann brains, not real humans. But if you really are a Boltzmann brain, then you can’t know anything about the universe at all: everything you think of as a memory is just a random fluctuation, with no connection to the real world.

Carroll calls this sort of situation “cognitively unstable”. If you reason scientifically that the universe must be full of Boltzmann brains, then you can’t rule out that you could be a Boltzmann brain, and thus you shouldn’t accept your original reasoning.

The only way out, according to Carroll, is if we live in a universe that will never contain Boltzmann brains, for example one that won’t exist in its current form long enough to create them. So from a general concern about cognitive instability, Carroll argues for specific physics. And if that seems odd…well, it is.

For the purpose of this post, I’m going to take for granted the physics case: that a sufficiently old and random universe would indeed produce Boltzmann brains. That’s far from uncontroversial, and if you’re interested in that side of the argument (and have plenty of patience for tangents and Czech poop jokes) Lubos Motl posted about it recently.

Instead, I’d like to focus on the philosophical side of the argument.

Let’s start with intro philosophy, and talk about Descartes.

Descartes wanted to start philosophy from scratch by questioning everything he thought he knew. In one of his arguments, he asks the reader to imagine an evil demon.


Probably Graz’zt. It’s usually Graz’zt.

Descartes imagines this evil demon exercising all its power to deceive. Perhaps it could confound your senses with illusions, or modify your memories. If such a demon existed, there would be no way to know if anything you believed or reasoned about the world was correct. So, Descartes asked, how do you know you’re not being deceived by an evil demon right now?

Amusingly, like Carroll, Descartes went on to use this uncertainty to argue for specific proposals in physics: in Descartes’ case, everything from the existence of a benevolent god to the idea that gravity was caused by a vortex of fluid around the sun.

Descartes wasn’t the last to propose this kind of uncertainty, and philosophers have asked more sophisticated questions over the years challenging the idea that it makes sense to reason from the past about the future at all.

Carroll is certainly aware of all of this. But I suspect he doesn’t quite appreciate the current opinion philosophers have on these sorts of puzzles.

The impression I’ve gotten from philosophers is that they don’t take this kind of “cognitive instability” very seriously anymore. There are specialists who still work on it, and it’s still of historical interest. But the majority of philosophers have moved on.

How did they move on? How have they dismissed these kinds of arguments?

That varies. Philosophers don’t tend to have the kind of consensus that physicists usually do.

Some reject them on pragmatic grounds: science works, even if we can’t “justify” it. Some use an argument similar to Carroll’s, but take it one step further back, arguing that we shouldn’t worry that we could be deceived by an evil demon or be a Boltzmann brain because those worries are themselves cognitively unstable. Some bite the bullet, concede that reasoning can’t be justified, and then simply ignore the problem and go on with their lives.

The common trait of all of these rejections, though? They don’t rely on physics.

Philosophers don’t argue “evil demons are impossible, therefore we can be sure we’re not deceived by evil demons”. They don’t argue “dreams are never completely realistic, so we can’t just be dreaming right now”.

And they certainly don’t try to argue the reverse: that consistency means there can never be evil demons, or never be realistic dreams.

I was on the debate team in high school. One popular tactic was called the “non-unique”. If your opponent argued that your plan had some negative consequences, you could argue that those consequences would happen regardless of whether you got to enact your plan or not: that the consequences were non-unique.

At this point, philosophers understand that cognitive instability and doubt are “non-unique”. No matter the physics, no matter how the world looks, it’s still possible to argue that reasoning isn’t justified, that even the logic we used to doubt the world in the first place could be flawed.

Carroll’s claim to me seems non-unique. Yes, in a universe that exists for a long time you could be a Boltzmann brain. But even if you don’t live in such a universe, you could still be a brain in a jar or a simulation. You could still be deceived by an “evil demon”.

And so regardless, you need the philosophers. Regardless, you need some argument that reasoning works, that you can ignore doubt. And once you’re happy with that argument, you don’t have to worry about Boltzmann brains.

Have You Given Your Kids “The Talk”?

If you haven’t seen it yet, I recommend reading this delightful collaboration between Scott Aaronson (of Shtetl-Optimized) and Zach Weinersmith (of Saturday Morning Breakfast Cereal). As explanations of a concept beyond the standard popular accounts go, this one is pretty high quality, correcting some common misconceptions about quantum computing.

I especially liked the following exchange:


I’ve complained before about people trying to apply ontology to physics, and I think this gets at the root of one of my objections.

People tend to think that the world should be describable with words. From that perspective, mathematics is just a particular tool, a system we’ve created. If you look at the world in that way, mathematics looks unreasonably effective: its ability to describe the real world seems like a miraculous coincidence.

Mathematics isn’t just one tool though, or just one system. It’s all of them: not just numbers and equations, but knots and logic and everything else. Deep down, mathematics is just a collection of all the ways we’ve found to state things precisely.

Because of that, it shouldn’t surprise you that we “put complex numbers in our ontologies”. Complex numbers are just one way we’ve found to make precise statements about the world, one that comes in handy when talking about quantum mechanics. There doesn’t need to be a “correct” description in words: the math is already stating things as precisely as we know how.
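As a tiny illustration of why complex numbers come in handy, here’s a toy Python sketch of two-path interference. The amplitudes are made-up values, chosen to give maximal destructive interference; this isn’t tied to any particular experiment.

```python
# Toy two-path interference: with complex amplitudes, probabilities don't
# simply add. The amplitudes below are hypothetical, chosen so the two
# paths cancel completely.
import cmath

# One amplitude per path, each with probability |amp|^2 = 1/2.
a = cmath.exp(1j * 0.0) / cmath.sqrt(2)       # phase 0
b = cmath.exp(1j * cmath.pi) / cmath.sqrt(2)  # phase pi (opposite sign)

prob_sum = abs(a)**2 + abs(b)**2   # naive "add the probabilities": 1.0
prob_interf = abs(a + b)**2        # quantum rule: amplitudes cancel, ~0.0

print(prob_sum, prob_interf)
```

The |amplitude|² rule, applied after adding complex amplitudes, captures interference in a way that no assignment of ordinary probabilities to each path can.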

That doesn’t mean that ontology is a useless project. It’s worthwhile to develop new ways of talking about things. I can understand the goal of building up a philosophical language powerful enough to describe the world in terms of words, and if such a language was successful it might well inspire us to ask new scientific questions.

But it’s crucial to remember that there’s real work to be done there. There’s no guarantee that the project will work, that words will end up sufficient. When you put aside our best tools to make precise statements, you’re handicapping yourself, making the problem harder than it needed to be. It’s your responsibility to make sure you’re getting something worthwhile out of it.

So You Want to Prove String Theory (Or: Nima Did Something Cool Again)

Nima Arkani-Hamed, of Amplituhedron fame, has been making noises recently about proving string theory.

Now, I can already hear the smartarses in the comments correcting me here. You can’t prove a scientific theory, you can only provide evidence for it.

Well, in this case I don’t mean “provide evidence”. (Direct evidence for string theory is quite unlikely at the moment given the high energies at which it becomes relevant and large number of consistent solutions, but an indirect approach might yet work.) I actually mean “prove”.

See, there are two ways to think about the problem of quantum gravity. One is as an experimental problem: at high enough energies for quantum gravity to be relevant, what actually happens? Since it’s going to be a very long time before we can probe those energies, though, in practice we instead have a technical problem: can we write down a theory that looks like gravity in familiar situations, while avoiding the pesky infinities that come with naive attempts at quantum gravity?

If you can prove that string theory is the only theory that does that, then you’ve proven string theory. If you can prove that string theory is the only theory that does that [with certain conditions] then you’ve proven string theory [with certain conditions].

That, in broad terms, is what Nima has been edging towards. At this year’s Strings conference, he unveiled some progress towards that goal. And since I just recently got around to watching his talk, you get to hear my take on it.

Nima has been working with Yu-tin Huang, an amplitudeologist who tends to show up everywhere, and one of his students. Working in parallel, an all-star cast has been doing a similar calculation for Yang-Mills theory. The Yang-Mills story is cool, and probably worth a post in its own right, but I think you guys are more interested in the quantum gravity one.

What is Nima doing here?

Nima is looking at scattering amplitudes, probabilities for particles to scatter off of each other. In this case, the particles are gravitons, the particle form of gravitational waves.

Normally, the problems with quantum gravity show up when your scattering amplitudes have loops. Here, Nima is looking at amplitudes without loops, the most important contributions when the force in question is weak (the “weakly coupled” in Nima’s title).

Even for these amplitudes you can gain insight into quantum gravity by seeing what happens at high energies (the “UV” in the title). String amplitudes have nice behavior at high energies, naive gravity amplitudes do not. The question then becomes, are there other amplitudes that preserve this nice behavior, while still obeying the rules of physics? Or is string theory truly unique, the only theory that can do this?

The team that asked a similar question about Yang-Mills theory found that string theory was unique, that every theory that obeyed their conditions was in some sense “stringy”. That makes it even more surprising that, for quantum gravity, the answer was no: the string theory amplitude is not unique. In fact, Nima and his collaborators found an infinite set of amplitudes that met their conditions, related by a parameter they could vary freely.

What are these other amplitudes, then?

Nima thinks they can’t be part of a consistent theory, and he’s probably right. They have a number of tests they haven’t done: in particular, they’ve only been looking at amplitudes involving two gravitons scattering off each other, but a real theory should have consistent answers for any number of gravitons interacting, and it doesn’t look like these “alternate” amplitudes can be generalized to work for that.

That said, at this point it’s still possible that these other amplitudes are part of some sort of sensible theory. And that would be incredibly interesting, because we’ve never seen anything like that before.

There are approaches to quantum gravity besides string theory, sure. But common to all of them is an inability to actually calculate scattering amplitudes. If there really were a theory that generated these “alternate” amplitudes, it wouldn’t correspond to any existing quantum gravity proposal.

(Incidentally, this is also why this sort of “proof” of string theory might not convince everyone. Non-string quantum gravity approaches tend to talk about things fairly far removed from scattering amplitudes, so some would see this kind of thing as apples and oranges.)

I’d be fascinated to see where this goes. Either we have a new set of gravity scattering amplitudes to work with, or string theory turns out to be unique in a more rigorous and specific way than we’ve previously known. No matter what, something interesting is going to happen.

After the talk, David Gross drew on his experience of the origin of string theory to question whether this work is just retreading the path to an old dead end. String theory arose from an attempt to find a scattering amplitude with nice properties, but it was only by understanding this amplitude physically, in terms of vibrating strings, that the field was able to make real progress.

I generally agree with Nima’s answer, but to re-frame it in my own words: in the amplitudes sub-field, there’s something of a cycle. We try to impose general rules, until by using those rules we have a new calculation technique. We then do a bunch of calculations with the new technique. Finally, we look at the results of those calculations, try to find new general rules, and start the cycle again.

String theory is the result of people applying general rules to scattering amplitudes and learning enough to discover not just a new calculation technique, but a new physical theory. Now, we’ve done quite a lot of string theory calculations, and quite a lot more quantum field theory calculations as well. We have a lot of “data”.

And when you have a lot of data, it becomes much more productive to look for patterns. Now, if we start trying to apply general rules, we have a much better idea of what we’re looking for. This lets us get a lot further than people did the first time through the cycle. It’s what let Nima find the Amplituhedron, and it’s something Yu-tin has a pretty good track record of as well.

So in general, I’m optimistic. As a community, we’re poised to find out some very interesting things about what gravity scattering amplitudes can look like. Maybe, we’ll even prove string theory. [With certain conditions, of course. 😉 ]

Science Is a Collection of Projects, Not a Collection of Beliefs

Read a textbook, and you’ll be confronted by a set of beliefs about the world.

(If it’s a half-decent textbook, it will give justifications for those beliefs, and they will be true, putting you well on the way to knowledge.)

The same is true of most science popularization. In either case, you’ll be instructed that a certain set of statements about the world (or about math, or anything else) are true.

If most of your experience with science comes from popularizations and textbooks, you might think that all of science is like this. In particular, you might think of scientific controversies as matters of contrasting beliefs. Some scientists “believe in” supersymmetry, some don’t. Some “believe in” string theory, some don’t. Some “believe in” a multiverse, some don’t.

In practice, though, only settled science takes the form of beliefs. The rest, science as it is actually practiced, is better understood as a collection of projects.

Scientists spend most of their time working on projects. (Well, or procrastinating in my case.) Those projects, not our beliefs about the world, are how we influence other scientists, because projects build off each other. Any time we successfully do a calculation or make a measurement, we’re opening up new calculations and measurements for others to do. We all need to keep working and publishing, so anything that gives people something concrete to do is going to be influential.

The beliefs that matter come later. They come once projects have been so successful, and so widespread, that their success itself is evidence for beliefs. They’re the beliefs that serve as foundational assumptions for future projects. If you’re going to worry that some scientists are behaving unscientifically, these are the sorts of beliefs you want to worry about. Even then, things are often constrained by viable projects: in many fields, you can’t have a textbook without problem sets.

Far too many people seem to miss this distinction. I’ve seen philosophers focus on scientists’ public statements instead of their projects when trying to understand the implications of their science. I’ve seen bloggers and journalists who mostly describe conflicts of beliefs, what scientists expect and hope to be true rather than what they actually work on.

Do scientists have beliefs about controversial topics? Absolutely. Do those beliefs influence what they work on? Sure. But only so far as there’s actually something there to work on.

That’s why you see quite a few high-profile physicists endorsing some form of multiverse, but barely any actual journal articles about it. The belief in a multiverse may or may not be true, but regardless, there just isn’t much that one can do with the idea right now, and it’s what scientists are doing, not what they believe, that constitutes the health of science.

Different fields seem to understand this to different extents. I’m reminded of a story I heard in grad school, of two dueling psychologists. One of them believed that conversation was inherently cooperative, and showed that, unless unusually stressed or busy, people would put in the effort to understand the other person’s perspective. The other believed that conversation was inherently egocentric, and showed that, the more stressed or busy people are, the more they assume that everyone else has the same perspective they do.

Strip off the “beliefs”, and these two worked on the exact same thing, with the same results. With their beliefs included, though, they were bitter rivals who bristled if their grad students so much as mentioned the other scientist.

We need to avoid this kind of mistake. The skills we have, the kind of work we do, these are important, these are part of science. The way we talk about it to reporters, the ideas we champion when we debate, those are sidelines. They have some influence, dragging people one way or another. But they’re not what science is, because on the front lines, science is about projects, not beliefs.