Tag Archives: philosophy of science

Boltzmann Brains, Evil Demons, and Why It’s Occasionally a Good Idea to Listen to Philosophers

There’s been a bit of a buzz recently about a paper Sean Carroll posted to the arXiv, “Why Boltzmann Brains Are Bad”. The argument in the paper isn’t new; it’s something Carroll has been arguing for a long time, and the arXiv post was just because he had been invited to contribute a piece to a book on Current Controversies in Philosophy of Science.

(By the way: in our field, invited papers and conference proceedings are almost always reviews of old work, not new results. If you see something on arXiv and want to know whether it’s actually new work, the “Comments:” section will almost always mention this.)

While the argument isn’t new, it is getting new attention. And since I don’t think I’ve said much about my objections to it, now seems like a good time to do so.

Carroll’s argument is based on theoretical beings called Boltzmann brains. The idea is that if you wait a very very long time in a sufficiently random (“high-entropy”) universe, the matter in that universe will arrange itself in pretty much every imaginable way, if only for a moment. In particular, it will eventually form a brain, or enough of a brain to have a conscious experience. Wait long enough, and you can find a momentary brain having any experience you want, with any (fake) memories you want. Long enough, and you can find a brain having the same experience you are having right now.
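The “wait long enough and any pattern appears” logic can be illustrated with a toy example of my own (random bits standing in for random arrangements of matter; nothing here models real thermodynamics): the chance that a fixed pattern fluctuates into existence somewhere in a random sequence climbs toward certainty as the sequence grows.

```python
import random

random.seed(42)  # reproducible randomness for the demo

def pattern_appears(n_bits: int, pattern: str) -> bool:
    """Generate n_bits of random 'universe' and check whether the pattern shows up."""
    bits = "".join(random.choice("01") for _ in range(n_bits))
    return pattern in bits

TARGET = "10110001"  # a fixed 8-bit "brain"; any pattern works, only its length matters
TRIALS = 200

# Fraction of random universes containing the pattern, for growing "lifetimes".
for lifetime in (50, 500, 50_000):
    hits = sum(pattern_appears(lifetime, TARGET) for _ in range(TRIALS))
    print(f"{lifetime:>6} bits: pattern found in {hits / TRIALS:.0%} of trials")
```

The pattern is unlikely at any given moment but certain in the long run. A “brain” is just a vastly longer pattern, so the real timescales are absurd, but the logic is the same.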

So, Carroll asks, how do you know you aren’t a Boltzmann brain? If the universe exists for long enough, most of the beings having your current experiences would be Boltzmann brains, not real humans. But if you really are a Boltzmann brain, then you can’t know anything about the universe at all: everything you think of as a memory is just a random fluctuation with no connection to the real world.

Carroll calls this sort of situation “cognitively unstable”. If you reason scientifically that the universe must be full of Boltzmann brains, then you can’t rule out that you could be a Boltzmann brain, and thus you shouldn’t accept your original reasoning.

The only way out, according to Carroll, is if we live in a universe that will never contain Boltzmann brains, for example one that won’t exist in its current form long enough to create them. So from a general concern about cognitive instability, Carroll argues for specific physics. And if that seems odd…well, it is.

For the purpose of this post, I’m going to take for granted the physics case: that a sufficiently old and random universe would indeed produce Boltzmann brains. That’s far from uncontroversial, and if you’re interested in that side of the argument (and have plenty of patience for tangents and Czech poop jokes) Lubos Motl posted about it recently.

Instead, I’d like to focus on the philosophical side of the argument.

Let’s start with intro philosophy, and talk about Descartes.

Descartes wanted to start philosophy from scratch by questioning everything he thought he knew. In one of his arguments, he asks the reader to imagine an evil demon.


Probably Graz’zt. It’s usually Graz’zt.

Descartes imagines this evil demon exercising all its power to deceive. Perhaps it could confound your senses with illusions, or modify your memories. If such a demon existed, there would be no way to know if anything you believed or reasoned about the world was correct. So, Descartes asked, how do you know you’re not being deceived by an evil demon right now?

Amusingly, like Carroll, Descartes went on to use this uncertainty to argue for specific proposals in physics: in Descartes’ case, everything from the existence of a benevolent god to the idea that gravity was caused by a vortex of fluid around the sun.

Descartes wasn’t the last to propose this kind of uncertainty, and philosophers have asked more sophisticated questions over the years challenging the idea that it makes sense to reason from the past about the future at all.

Carroll is certainly aware of all of this. But I suspect he doesn’t quite appreciate the current opinion philosophers have on these sorts of puzzles.

The impression I’ve gotten from philosophers is that they don’t take this kind of “cognitive instability” very seriously anymore. There are specialists who still work on it, and it’s still of historical interest. But the majority of philosophers have moved on.

How did they move on? How have they dismissed these kinds of arguments?

That varies. Philosophers don’t tend to have the kind of consensus that physicists usually do.

Some reject them on pragmatic grounds: science works, even if we can’t “justify” it. Some use a similar argument to Carroll’s, but take it one step back, arguing that we shouldn’t worry that we could be deceived by an evil demon or be a Boltzmann brain because those worries are themselves cognitively unstable. Some bite the bullet, accepting that reasoning can’t be justified, then ignore the problem and go on with their lives.

The common trait of all of these rejections, though? They don’t rely on physics.

Philosophers don’t argue “evil demons are impossible, therefore we can be sure we’re not deceived by evil demons”. They don’t argue “dreams are never completely realistic, so we can’t just be dreaming right now”.

And they certainly don’t try to argue the reverse: that consistency means there can never be evil demons, or never be realistic dreams.

I was on the debate team in high school. One popular tactic was called the “non-unique”. If your opponent argued that your plan had some negative consequences, you could argue that those consequences would happen regardless of whether you got to enact your plan or not: that the consequences were non-unique.

At this point, philosophers understand that cognitive instability and doubt are “non-unique”. No matter the physics, no matter how the world looks, it’s still possible to argue that reasoning isn’t justified, that even the logic we used to doubt the world in the first place could be flawed.

Carroll’s claim seems non-unique to me. Yes, in a universe that exists for a long time you could be a Boltzmann brain. But even if you don’t live in such a universe, you could still be a brain in a jar or a simulation. You could still be deceived by an “evil demon”.

And so regardless, you need the philosophers. Regardless, you need some argument that reasoning works, that you can ignore doubt. And once you’re happy with that argument, you don’t have to worry about Boltzmann brains.

Have You Given Your Kids “The Talk”?

If you haven’t seen it yet, I recommend reading this delightful collaboration between Scott Aaronson (of Shtetl-Optimized) and Zach Weinersmith (of Saturday Morning Breakfast Cereal). As explanations of a concept beyond the standard popular accounts go, this one is pretty high quality, correcting some common misconceptions about quantum computing.

I especially liked the following exchange:


I’ve complained before about people trying to apply ontology to physics, and I think this gets at the root of one of my objections.

People tend to think that the world should be describable with words. From that perspective, mathematics is just a particular tool, a system we’ve created. If you look at the world in that way, mathematics looks unreasonably effective: its ability to describe the real world seems like a miraculous coincidence.

Mathematics isn’t just one tool though, or just one system. It’s all of them: not just numbers and equations, but knots and logic and everything else. Deep down, mathematics is just a collection of all the ways we’ve found to state things precisely.

Because of that, it shouldn’t surprise you that we “put complex numbers in our ontologies”. Complex numbers are just one way we’ve found to make precise statements about the world, one that comes in handy when talking about quantum mechanics. There doesn’t need to be a “correct” description in words: the math is already stating things as precisely as we know how.

That doesn’t mean that ontology is a useless project. It’s worthwhile to develop new ways of talking about things. I can understand the goal of building up a philosophical language powerful enough to describe the world in terms of words, and if such a language was successful it might well inspire us to ask new scientific questions.

But it’s crucial to remember that there’s real work to be done there. There’s no guarantee that the project will work, that words will end up sufficient. When you put aside our best tools for making precise statements, you’re handicapping yourself, making the problem harder than it needs to be. It’s your responsibility to make sure you’re getting something worthwhile out of it.

So You Want to Prove String Theory (Or: Nima Did Something Cool Again)

Nima Arkani-Hamed, of Amplituhedron fame, has been making noises recently about proving string theory.

Now, I can already hear the smartarses in the comments correcting me here. You can’t prove a scientific theory, you can only provide evidence for it.

Well, in this case I don’t mean “provide evidence”. (Direct evidence for string theory is quite unlikely at the moment, given the high energies at which it becomes relevant and the large number of consistent solutions, but an indirect approach might yet work.) I actually mean “prove”.

See, there are two ways to think about the problem of quantum gravity. One is as an experimental problem: at high enough energies for quantum gravity to be relevant, what actually happens? Since it’s going to be a very long time before we can probe those energies, though, in practice we instead have a technical problem: can we write down a theory that looks like gravity in familiar situations, while avoiding the pesky infinities that come with naive attempts at quantum gravity?

If you can prove that string theory is the only theory that does that, then you’ve proven string theory. If you can prove that string theory is the only theory that does that [with certain conditions] then you’ve proven string theory [with certain conditions].

That, in broad terms, is what Nima has been edging towards. At this year’s Strings conference, he unveiled some progress towards that goal. And since I just recently got around to watching his talk, you get to hear my take on it.

Nima has been working with Yu-tin Huang, an amplitudeologist who tends to show up everywhere, and one of his students. Working in parallel, an all-star cast has been doing a similar calculation for Yang-Mills theory. The Yang-Mills story is cool, and probably worth a post in its own right, but I think you guys are more interested in the quantum gravity one.

What is Nima doing here?

Nima is looking at scattering amplitudes, probabilities for particles to scatter off of each other. In this case, the particles are gravitons, the particle form of gravitational waves.

Normally, the problems with quantum gravity show up when your scattering amplitudes have loops. Here, Nima is looking at amplitudes without loops, the most important contributions when the force in question is weak (the “weakly coupled” in Nima’s title).

Even for these amplitudes you can gain insight into quantum gravity by seeing what happens at high energies (the “UV” in the title). String amplitudes have nice behavior at high energies, naive gravity amplitudes do not. The question then becomes, are there other amplitudes that preserve this nice behavior, while still obeying the rules of physics? Or is string theory truly unique, the only theory that can do this?

The team that asked a similar question about Yang-Mills theory found that string theory was unique, that every theory that obeyed their conditions was in some sense “stringy”. That makes it even more surprising that, for quantum gravity, the answer was no: the string theory amplitude is not unique. In fact, Nima and his collaborators found an infinite set of amplitudes that met their conditions, related by a parameter they could vary freely.

What are these other amplitudes, then?

Nima thinks they can’t be part of a consistent theory, and he’s probably right. They have a number of tests they haven’t done: in particular, they’ve only been looking at amplitudes involving two gravitons scattering off each other, but a real theory should have consistent answers for any number of gravitons interacting, and it doesn’t look like these “alternate” amplitudes can be generalized to work for that.

That said, at this point it’s still possible that these other amplitudes are part of some sort of sensible theory. And that would be incredibly interesting, because we’ve never seen anything like that before.

There are approaches to quantum gravity besides string theory, sure. But common to all of them is an inability to actually calculate scattering amplitudes. If there really were a theory that generated these “alternate” amplitudes, it wouldn’t correspond to any existing quantum gravity proposal.

(Incidentally, this is also why this sort of “proof” of string theory might not convince everyone. Non-string quantum gravity approaches tend to talk about things fairly far removed from scattering amplitudes, so some would see this kind of thing as apples and oranges.)

I’d be fascinated to see where this goes. Either we have a new set of gravity scattering amplitudes to work with, or string theory turns out to be unique in a more rigorous and specific way than we’ve previously known. No matter what, something interesting is going to happen.

After the talk David Gross drew on his experience of the origin of string theory to question whether this work is just retreading the path to an old dead end. String theory arose from an attempt to find a scattering amplitude with nice properties, but it was only by understanding this amplitude physically, in terms of vibrating strings, that the field was able to make real progress.

I generally agree with Nima’s answer, but to re-frame it in my own words: in the amplitudes sub-field, there’s something of a cycle. We try to impose general rules, until by using those rules we have a new calculation technique. We then do a bunch of calculations with the new technique. Finally, we look at the results of those calculations, try to find new general rules, and start the cycle again.

String theory is the result of people applying general rules to scattering amplitudes and learning enough to discover not just a new calculation technique, but a new physical theory. Now, we’ve done quite a lot of string theory calculations, and quite a lot more quantum field theory calculations as well. We have a lot of “data”.

And when you have a lot of data, it becomes much more productive to look for patterns. Now, if we start trying to apply general rules, we have a much better idea of what we’re looking for. This lets us get a lot further than people did the first time through the cycle. It’s what let Nima find the Amplituhedron, and it’s something Yu-tin has a pretty good track record of as well.

So in general, I’m optimistic. As a community, we’re poised to find out some very interesting things about what gravity scattering amplitudes can look like. Maybe, we’ll even prove string theory. [With certain conditions, of course. 😉 ]

Science Is a Collection of Projects, Not a Collection of Beliefs

Read a textbook, and you’ll be confronted by a set of beliefs about the world.

(If it’s a half-decent textbook, it will give justifications for those beliefs, and they will be true, putting you well on the way to knowledge.)

The same is true of most science popularization. In either case, you’ll be instructed that a certain set of statements about the world (or about math, or anything else) are true.

If most of your experience with science comes from popularizations and textbooks, you might think that all of science is like this. In particular, you might think of scientific controversies as matters of contrasting beliefs. Some scientists “believe in” supersymmetry, some don’t. Some “believe in” string theory, some don’t. Some “believe in” a multiverse, some don’t.

In practice, though, only settled science takes the form of beliefs. The rest, science as it is actually practiced, is better understood as a collection of projects.

Scientists spend most of their time working on projects. (Well, or procrastinating in my case.) Those projects, not our beliefs about the world, are how we influence other scientists, because projects build off each other. Any time we successfully do a calculation or make a measurement, we’re opening up new calculations and measurements for others to do. We all need to keep working and publishing, so anything that gives people something concrete to do is going to be influential.

The beliefs that matter come later. They come once projects have been so successful, and so widespread, that their success itself is evidence for beliefs. They’re the beliefs that serve as foundational assumptions for future projects. If you’re going to worry that some scientists are behaving unscientifically, these are the sorts of beliefs you want to worry about. Even then, things are often constrained by viable projects: in many fields, you can’t have a textbook without problem sets.

Far too many people seem to miss this distinction. I’ve seen philosophers focus on scientists’ public statements instead of their projects when trying to understand the implications of their science. I’ve seen bloggers and journalists who mostly describe conflicts of beliefs, what scientists expect and hope to be true rather than what they actually work on.

Do scientists have beliefs about controversial topics? Absolutely. Do those beliefs influence what they work on? Sure. But only so far as there’s actually something there to work on.

That’s why you see quite a few high-profile physicists endorsing some form of multiverse, but barely any actual journal articles about it. The belief in a multiverse may or may not be true, but regardless, there just isn’t much that one can do with the idea right now, and it’s what scientists are doing, not what they believe, that constitutes the health of science.

Different fields seem to understand this to different extents. I’m reminded of a story I heard in grad school, of two dueling psychologists. One of them believed that conversation was inherently cooperative, and showed that, unless unusually stressed or busy, people would put in the effort to understand the other person’s perspective. The other believed that conversation was inherently egocentric, and showed that the more stressed or busy people are, the more they assume that everyone else has the same perspective they do.

Strip off the “beliefs”, and these two worked on the exact same thing, with the same results. With their beliefs included, though, they were bitter rivals who bristled if their grad students so much as mentioned the other scientist.

We need to avoid this kind of mistake. The skills we have, the kind of work we do, these are important, these are part of science. The way we talk about it to reporters, the ideas we champion when we debate, those are sidelines. They have some influence, dragging people one way or another. But they’re not what science is, because on the front lines, science is about projects, not beliefs.

The Metaphysics of Card Games

I tend to be skeptical of attempts to apply metaphysics to physics. In particular, I get leery when someone tries to describe physics in terms of which fundamental things exist, and which things are made up of other things.

Now, I’m not the sort of physicist who thinks metaphysics is useless in general. I’ve seen some impressive uses of supervenience, for example.

But I think that, in physics, talk of “things” is almost always premature. As physicists, we describe the world mathematically: it’s the most precise way of describing the universe that we have access to. The trouble is, slightly different mathematics can imply the existence of vastly different “things”.

To give a slightly unusual example, let’s talk about card games.


To defeat metaphysics, we must best it at a children’s card game!

Magic: The Gathering is a collectible card game in which players play powerful spellcasters who fight by casting spells and summoning creatures. Those spells and creatures are represented by cards.

If you wanted to find which “things” exist in Magic: The Gathering, you’d probably start with the cards. And indeed, cards are pretty good candidates for fundamental “things”. As a player, you have a hand of cards, a discard pile (“graveyard”) and a deck (“library”), and all of these are indeed filled with cards.

However, not every “thing” in the game is a card. That’s because the game is in some sense limited: it needs to represent a broad set of concepts while still using physical, purchasable cards.

Suppose you have a card that represents a general. Every turn, the general recruits a soldier. You could represent the soldiers with actual cards, but they’d have to come from somewhere, and over many turns you might quickly run out.

Instead, Magic represents these soldiers with “tokens”. A token is not a card: you can’t shuffle a token into your deck or return it to your hand, and if you try to it just ceases to exist. But otherwise, the tokens behave just like other creatures: they’re both the same type of “thing”, something Magic calls a “permanent”. Permanents live in an area between players called the “battlefield”.

And it gets even more complicated! Some creatures have special abilities. When those abilities are activated, they’re treated like spells in many ways: you can cast spells in response, and even counter them with the right cards. However, they’re not spells, because they’re not cards: like tokens, you can’t shuffle them into your deck. Instead, both they and spells that have just been cast live in another area, the “stack”.

So while Magic might look like it just has one type of “thing”, cards, in fact it has three: cards, permanents, and objects on the stack.

We can contrast this with another card game, Hearthstone.


Hearthstone is much like Magic. You are a spellcaster, you cast spells, you summon creatures, and those spells and creatures are represented by cards.

The difference is, Hearthstone is purely electronic. You can’t go out and buy the cards in a store, they’re simulated in the online game. And this means that Hearthstone’s metaphysics can be a whole lot simpler.

In Hearthstone, if you have a general who recruits a soldier every turn, the soldiers can be cards just like the general. You can return them to your hand, or shuffle them into your deck, just like a normal card. Your computer can keep track of them, and make sure they go away properly at the end of the game.

This means that Hearthstone doesn’t need a concept of “permanents”: everything on its “battlefield” is just a card, which can have some strange consequences. If you return a creature to your hand, and you have room, it will just go there. But if your hand is full, and the creature has nowhere to go, it will “die”, in exactly the same way it would have died in the game if another creature killed it. From the game’s perspective, the creature was always a card, and the card “died”, so the creature died.

These small differences in implementation, in the “mathematics” of the game, change the metaphysics completely. Magic has three types of “things”, Hearthstone has only one.
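The contrast above can be sketched as toy data structures (my own sketch, not the actual rules engines of either game):

```python
from dataclasses import dataclass

# Magic-style model: three distinct kinds of "thing".

@dataclass
class Card:
    """Lives in a hand, library, or graveyard."""
    name: str

@dataclass
class Permanent:
    """Lives on the battlefield; tokens are permanents with no card behind them."""
    name: str
    from_card: bool

@dataclass
class StackObject:
    """A spell being cast, or an activated ability; abilities are never cards."""
    name: str
    is_spell: bool

# Hearthstone-style model: one kind of "thing" suffices, because the
# computer can mint and destroy cards at will.

@dataclass
class HSCard:
    """The only kind of object; its zone is just a label."""
    name: str
    zone: str  # "hand", "deck", or "battlefield"

# A Magic token soldier is a Permanent with no Card standing behind it...
token = Permanent(name="Soldier", from_card=False)
# ...while a Hearthstone soldier is just another card, so it can be
# returned to a hand or shuffled into a deck like any other.
hs_token = HSCard(name="Soldier", zone="battlefield")
print(token, hs_token)
```

The point of the sketch is that the type system, the “mathematics” of the implementation, is what determines how many kinds of “thing” exist.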

And card games are a special case, because in some sense they’re built to make metaphysics easy. Cards are intuitive, everyday objects, and both Magic and Hearthstone are built off of our intuitions about them, which is why I can talk about “things” in either game.

Physics doesn’t have to be built that way. Physics is meant to capture our observations, and help us make predictions. It doesn’t have to sort itself neatly into “things”. Even if it does, I hope I’ve convinced you that small changes in physics could lead to large changes in which “things” exist. Unless you’re convinced that you understand the physics of something completely, you might want to skip the metaphysics. A minor mathematical detail could sweep it all away.

Thought Experiments, Minus the Thought

My second-favorite Newton fact is that, despite inventing calculus, he refused to use it for his most famous work of physics, the Principia. Instead, he used geometrical proofs, tweaked to smuggle in calculus without admitting it.

Essentially, these proofs were thought experiments. Newton would start with a standard geometry argument, one that would have been acceptable to mathematicians centuries earlier. Then, he’d imagine taking it further, pushing a line or angle to some infinite point. He’d argue that, if the proof worked for every finite choice, then it should work in the infinite limit as well.
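In modern notation, the move Newton was smuggling in is essentially the limit definition of the derivative: every finite choice gives an ordinary geometric ratio, the slope of a secant line, and the “infinite” step is the limit.

```latex
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
```

For each finite $h$ the right-hand side is geometry any earlier mathematician would have accepted; only the limit itself is calculus.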

These thought experiments let Newton argue on the basis of something that looked more rigorous than calculus. However, they also held science back. At the time, only a few people in the world could understand what Newton was doing. It was only later, when Newton’s laws were reformulated in calculus terms, that a wider group of researchers could start doing serious physics.

What changed? If Newton could describe his physics with geometrical thought experiments, why couldn’t everyone else?

The trouble with thought experiments is that they require careful setup, setup that has to be thought through for each new thought experiment. Calculus took Newton’s geometrical thought experiments, and took out the need for thought: the setup was automatically a part of calculus, and each new researcher could build on their predecessors without having to set everything up again.

This sort of thing happens a lot in science. An example from my field is the scattering matrix, or S-matrix.

The S-matrix, deep down, is a thought experiment. Take some particles, and put them infinitely far away from each other, off in the infinite past. Then, let them approach, close enough to collide. If they do, new particles can form, and these new particles will travel out again, infinitely far away in the infinite future. The S-matrix, then, is a metaphorical matrix that tells you, for each possible set of incoming particles, the probability of getting each possible set of outgoing particles.
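In standard notation (general quantum field theory convention, not anything specific to this talk), the entries of this metaphorical matrix are amplitudes between an incoming state $|i\rangle$ in the far past and an outgoing state $|f\rangle$ in the far future, and squaring them gives the probabilities:

```latex
S_{fi} \;=\; \langle f \,|\, \hat{S} \,|\, i \rangle,
\qquad
P(i \to f) \;=\; \left| S_{fi} \right|^2
```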

In a real collider, the particles don’t come from infinitely far away, and they don’t travel infinitely far before they’re stopped. But the distances are long enough, compared to the sizes relevant for particle physics, that the S-matrix is the right idea for the job.

Like calculus, the S-matrix is a thought experiment minus the thought. When we want to calculate the probability of particles scattering, we don’t need to set up the whole thought experiment all over again. Instead, we can start by calculating, and over time we’ve gotten very good at it.

In general, sub-fields in physics can be divided into those that have found their S-matrices, their thought experiments minus thought, and those that have not. When a topic has to rely on thought experiments, progress is much slower: people argue over the details of each setup, and it’s difficult to build something that can last. It’s only when a field turns the corner, removing the thought from its thought experiments, that people can start making real collaborative progress.

Book Review: The Invention of Science

I don’t get a lot of time to read for pleasure these days. When I do, it’s usually fiction. But I’ve always had a weakness for stories from the dawn of science, and David Wootton’s The Invention of Science: A New History of the Scientific Revolution certainly fit the bill.


Wootton’s book is a rambling tour of the early history of science, from Brahe’s nova in 1572 to Newton’s Optics in 1704. Tying everything together is one clear, central argument: that the scientific revolution involved, not just a new understanding of the world, but the creation of new conceptual tools. In other words, the invention of science itself.

Wootton argues this, for the most part, by tracing changes in language. Several chapters have a common structure: Wootton identifies a word, like evidence or hypothesis, that has an important role in how we talk about science. He then tracks that word back to its antecedents, showing how early scientists borrowed and coined the words they needed to describe the new type of reasoning they had pioneered.

Some of the most compelling examples come early on. Wootton points out that the word “discover” only became common in European languages after Columbus’s discovery of the new world: first in Portuguese, then later in the rest of Europe. Before then, the closest term meant something more like “find out”, and was ambiguous: it could refer to finding something that was already known to others. Thus, early writers had to use wordy circumlocutions like “found out that which was not known before” to refer to genuine discovery.

The book covers the emergence of new social conventions in a similar way. For example, I was surprised to learn that the first recorded priority disputes were in the sixteenth century. Before then, discoveries weren’t even typically named for their discoverers: “the Pythagorean theorem”, oddly enough, is a name that wasn’t used until after the scientific revolution was underway. Beginning with explorers arguing over the discovery of the new world and anatomists negotiating priority for identifying the bones of the ear or the “discovery” of the clitoris, the competitive element of science began to come into its own.

Along the way, Wootton highlights episodes both familiar and obscure. You’ll find Bruno and Torricelli, yes, but also disputes over whether the seas are higher than the land or whether a weapon could cure wounds it caused via the power of magnetism. For anyone as fascinated by the emergence of science as I am, it’s a joyous wealth of detail.

If I had one complaint, it would be that for a lay reader far too much of Wootton’s book is taken up by disputes with other historians. His particular foes are relativists, though he spares some paragraphs to attack realists too. Overall, his dismissals of his opponents are so pat, and his descriptions of their views so self-evidently silly, that I can’t help but suspect that he’s not presenting them fairly. Even if he is, the discussion is rather inside baseball for a non-historian like me.

I read part of Newton’s Principia in college, and I was hoping for a more thorough discussion of Newton’s role. While he does show up, Wootton seems to view Newton as a bit of an enigma: someone who insisted on using the old language of geometric proofs while clearly mastering the new science of evidence and experiment. In this book, Newton is very much a capstone, not a focus.

Overall, The Invention of Science is a great way to learn about the twists and turns of the scientific revolution. If you set aside the inter-historian squabbling (or if you like that sort of thing) you’ll find a book brim full of anecdotes from the dawn of modern thought, and a compelling argument that what we do as scientists is neither an accident of culture nor obvious common-sense, but a hard-won invention whose rewards we are still reaping today.