Tag Archives: philosophy of science

Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics at all!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t achieve this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.

[Image: portrait of Robert Hooke]

The Hunchback of Notre Science

It wasn’t always this way. Up until the nineteenth century, being a “scientist” was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a 1/r^2 force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As curator of experiments for the Royal Society, he was responsible for setting up demonstrations for each Royal Society meeting. Later, he also handled correspondence for the Royal Society’s journal. These responsibilities took up much of his time, and as a result, even if he had been capable of following up on the consequences of 1/r^2, he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke expectations have changed, and real original research is no longer something we have to fit in our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until either they’ve learned something productive or their old topic has become useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just the number of papers and citations, if we manage all of that, then yes, we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.

[Image: a physicist lazing about unproductively under an apple tree]

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than those of many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object, isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time there have been plenty of cases where people have done exactly that, summarizing a pile of fiddly hand-made models in a single parameter space. Once that happens, the rule of originality kicks in: no-one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that: they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

 

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend, focusing not on covering the “next big thing” but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not as the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down, the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

 

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.


The Opposite of Witches

On Halloween I have a tradition of posts about spooky topics, whether traditional Halloween fare or things that spook physicists. This year it’s a little of both.

Mage: The Ascension is a role-playing game set in a world in which belief shapes reality. Players take the role of witches and warlocks, casting spells powered by their personal paradigms of belief. The game allows for pretty much any modern-day magic-user you could imagine, from Wiccans to martial artists.

[Image: a wicked witch]

Even stereotypical green witches, probably

Despite all the options, I was always more interested in the game’s villains, the witches’ opposites, the Technocracy.

The Technocracy answers an inevitable problem with any setting involving modern-day magic: why don’t people notice? If reality is powered by belief, why does no-one believe in magic?

In the Technocracy’s case, the answer is a vast conspiracy of mages with a scientific bent, manipulating public belief. Much like the witches and warlocks of Mage are a grab-bag of every occult belief system, the Technocracy combines every oppressive government conspiracy story you can imagine, all with the express purpose of suppressing the supernatural and maintaining scientific consensus.

This quote is from another game by the same publisher, but it captures the attitude of the Technocracy, and the magnitude of what is being claimed here:

Do not believe what the scientists tell you. The natural history we know is a lie, a falsehood sold to us by wicked old men who would make the world a dull gray prison and protect us from the dangers inherent to freedom. They would have you believe our planet to be a lonely starship, hurtling through the void of space, barren of magic and in need of a stern hand upon the rudder.

Close your mind to their deception. The time before our time was not a time of senseless natural struggle and reptilian rage, but a time of myth and sorcery. It was a time of legend, when heroes walked Creation and wielded the very power of the gods. It was a time before the world was bent, a time before the magic of Creation lessened, a time before the souls of men became the stunted, withered things they are today.

It can be a fun exercise to see how far doubt can take you, how much of the scientific consensus you can really be confident of and how much could be due to a conspiracy. Believing in the Technocracy would be the most extreme version of this, but Flat-Earthers come pretty close. Once you’re doubting whether the Earth is round, you have to imagine a truly absurd conspiracy to back it up.

On the other extreme, there are the kinds of conspiracies that barely take a conspiracy at all. Big experimental collaborations, like ATLAS and CMS at the LHC, keep a tight handle on what their members publish. (If you’re curious just how tight, here’s a talk by a law professor about, among other things, the Constitution of CMS. Yes, it has one!) An actual conspiracy would still be outed in about five minutes, but you could imagine something subtler: the experiments sticking to “safe” explanations and refusing to publish results that look too unusual, on the basis that they’re “probably” wrong. Worries about that sort of thing can leave actual physicists spooked.

There’s an important dividing line with doubt: too much and you risk invoking a conspiracy more fantastical than the science you’re doubting in the first place. The Technocracy doesn’t just straddle that line, it hops past it off into the distance. Science is too vast, and too unpredictable, to be controlled by some shadowy conspiracy.


Or maybe that’s just what we want you to think!

One, Two, Infinity

Physicists and mathematicians count one, two, infinity.

We start with the simplest case, as a proof of principle. We take a stripped down toy model or simple calculation and show that our idea works. We count “one”, and we publish.

Next, we let things get a bit more complicated. In the next toy model, or the next calculation, new interactions can arise. We figure out how to deal with those new interactions, our count goes from “one” to “two”, and once again we publish.

By this point, hopefully, we understand the pattern. We know what happens in the simplest case, and we know what happens when the different pieces start to interact. If all goes well, that’s enough: we can extrapolate our knowledge to understand not just case “three”, but any case: any model, any calculation. We publish the general case, the general method. We’ve counted one, two, infinity.

[Image: infinity symbol]

Once we’ve counted “infinity”, we don’t have to do any more cases. And so “infinity” becomes the new “zero”, and the next type of calculation you don’t know how to do becomes “one”. It’s like going from addition to multiplication, from multiplication to exponentiation, from exponentials up into the wilds of up-arrow notation. Each time, once you understand the general rules you can jump ahead to an entirely new world with new capabilities…and repeat the same process again, on a new scale. You don’t need to count one, two, three, four, on and on and on.

Of course, research doesn’t always work out this way. My last few papers counted three, four, five, with six on the way. (One and two were already known.) Unlike the ideal cases that go one, two, infinity, here “two” doesn’t give all the pieces you need to keep going. You need to go a few numbers more to get novel insights. That said, we are thinking about “infinity” now, so look forward to a future post that says something about that.

A lot of frustration in physics comes from situations when “infinity” remains stubbornly out of reach. When people complain about all the models for supersymmetry, or inflation, in some sense they’re complaining about fields that haven’t taken that “infinity” step. One or two models of inflation are nice, but by the time the count reaches ten you start hoping that someone will describe all possible models of inflation in one paper, and see if they can make any predictions from that.

(In particle physics, there’s an extent to which people can actually do this. There are methods to describe all possible modifications of the Standard Model in terms of what sort of effects they can have on observations of known particles. There’s a group at NBI that works on this sort of thing.)
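
Schematically, this kind of catalogue takes the form of an effective field theory. The expansion below is the standard textbook form (generic notation on my part, not tied to any particular group’s conventions):

```latex
% All possible heavy new physics, organized by operator dimension:
\mathcal{L}_{\text{eff}}
  \;=\; \mathcal{L}_{\text{SM}}
  \;+\; \sum_{d \,>\, 4}\;\sum_{i}
        \frac{c_i^{(d)}}{\Lambda^{\,d-4}}\,\mathcal{O}_i^{(d)}
% The O_i^(d) run over every operator of mass dimension d that can be
% built from Standard Model fields; \Lambda is the scale of new physics.
% Constraining the coefficients c_i tests whole families of models at once.
```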

The gold standard, though, is one, two, infinity. Our ability to step back, stop working case-by-case, and move on to the next level is not just a cute trick: it’s a foundation for exponential progress. If we can count one, two, infinity, then there’s nowhere we can’t reach.

The Multiverse Can Only Kill Physics by Becoming Physics

I’m not a fan of the multiverse. I think it’s over-hyped, way beyond its current scientific support.

But I don’t think it’s going to kill physics.

By “the multiverse” I’m referring to a group of related ideas. There’s the idea that we live in a vast, varied universe, with different physical laws in different regions. Relatedly, there’s the idea that the properties of our region aren’t typical of the universe as a whole, just typical of places where life can exist. It may be that in most of the universe the cosmological constant is enormous, but if life can only exist in places where it is tiny then a tiny cosmological constant is what we’ll see. That sort of logic is called anthropic reasoning. If it seems strange, think about a smaller scale: there are many planets in the universe, but only a small number of them can support life. Still, we shouldn’t be surprised that we live on a planet that can support life: if it couldn’t, we wouldn’t live here!
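
If the anthropic logic feels slippery, a toy simulation can make it concrete. Everything below is invented for illustration, from the uniform distribution to the life-permitting threshold; it just shows how conditioning on the existence of observers skews what gets observed:

```python
import random

# Toy anthropic selection: each "universe" gets a random cosmological
# constant Lambda; observers only arise where |Lambda| is tiny.
LIFE_THRESHOLD = 1e-3  # hypothetical cutoff, not a real physical number

universes = [random.uniform(-1.0, 1.0) for _ in range(1_000_000)]
observed = [lam for lam in universes if abs(lam) < LIFE_THRESHOLD]

# Over all universes Lambda is spread uniformly, but every observer
# measures a tiny value, with no further explanation required.
print(f"universes with observers: {len(observed) / len(universes):.4%}")
print(f"largest |Lambda| any observer measures: {max(map(abs, observed)):.2e}")
```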

If we really do live in a multiverse, though, some of what we think of as laws of physics are just due to random chance. Maybe the quarks have the masses they do not for some important reason, but just because they happened to end up that way in our patch of the universe.

This seems to have depressing implications. If the laws of physics are random, or just consequences of where life can exist, then what’s left to discover? Why do experiments at all?

Well, why not ask the geoscientists?

[Image: map of tectonic plate boundaries]

These guys

We might live in one universe among many, but we definitely live on one planet among many. And somehow, this realization hasn’t killed geoscience.

That’s because knowing we live on a random planet doesn’t actually tell us very much.

Now, I’m not saying you can’t do anthropic reasoning about the Earth. For example, it looks like an active system of plate tectonics is a necessary ingredient for life. Even if plate tectonics is rare, we shouldn’t be surprised to live on a planet that has it.

Ok, so imagine it’s 1900, before Wegener proposed continental drift. Scientists believe there are many planets in the universe, that we live in a “multiplanet”. Could you predict plate tectonics?

Even knowing that we live on one of the few planets that can support life, you don’t know how it supports life. Even living in a “multiplanet”, geoscience isn’t dead. The specifics of our Earth are still going to teach you something important about how planets work.

Physical laws work the same way. I’ve said that the masses of the quarks could be random, but it’s not quite that simple. The underlying reasons why the masses of the quarks are what they are could be random: the specifics of how six extra dimensions happened to curl up in our region of the universe, for example. But there’s important physics in between: the physics of how those random curlings of space give rise to quark masses. There’s a mechanism there, and we can’t just pick one out of a hat or work backwards to it anthropically. We have to actually go out and discover the answer.

Similarly, we don’t know automatically which phenomena are “random”, which are “anthropic”, and which are required by some deep physical principle. Even in a multiverse, we can’t assume that everything comes down to chance; we only know that some things will, much as the geoscientists don’t know what’s unique to Earth and what’s true of every planet without actually going out and checking.

You can even find a notion of “naturalness” here, if you squint. In physics, we find phenomena like the mass of the Higgs “unnatural”: they’re “fine-tuned” in a way that cries out for an explanation. Normally, we think of this in terms of a hypothetical “theory of everything”: the more “fine-tuned” something appears, the harder it would be to explain it in a final theory. In a multiverse, it looks like we’d have to give up on this, because even the most unlikely-looking circumstance would happen somewhere, especially if it’s needed for life.

Once again, though, imagine you’re a geoscientist. Someone suggests a ridiculously fine-tuned explanation for something: perhaps volcanoes only work if they have exactly the right amount of moisture. Even though we live on one planet in a vast universe, you’re still going to look for simpler explanations before you move on to more complicated ones. It’s human nature, and by and large it’s the only way we’re capable of doing science. As physicists, we’ve papered this over with technical definitions of naturalness, but at the end of the day even in a multiverse we’ll still start with less fine-tuned-looking explanations and only accept the fine-tuned ones when the evidence forces us to. It’s just what people do.

The only way for anthropic reasoning to get around this, to really make physics pointless once and for all, is if it actually starts making predictions. If anthropic reasoning in physics can be made much stronger than anthropic reasoning in geoscience (which, as mentioned, didn’t predict tectonic plates until a century after their discovery) then maybe we can imagine getting to a point where it tells us what particles we should expect to discover, and what masses they should have.

At that point, though, anthropic reasoning won’t have made physics pointless: it will have become physics.

If anthropic reasoning is really good enough to make reliable, falsifiable predictions, then we should be ecstatic! I don’t think we’re anywhere near that point, though some people are earnestly trying to get there. But if it really works out, then we’d have a powerful new method to make predictions about the universe.

 

Ok, so with all of this said, there is one other worry.

Karl Popper criticized Marxism and Freudianism for being unfalsifiable. In both disciplines, there was a tendency to tell what were essentially “just-so stories”. They could “explain” any phenomenon by setting it in their framework and explaining how it came to be “just so”. These explanations didn’t make new predictions, and different people often ended up coming up with different explanations with no way to distinguish between them. They were stories, not scientific hypotheses. In more recent times, the same criticism has been made of evolutionary psychology. In each case the field is accused of being able to justify anything and everything in terms of its overly ambiguous principles, whether dialectical materialism, the unconscious mind, or the ancestral environment.

[Image: cover of Kipling’s Just So Stories, 1902]

Or an elephant’s ‘satiable curtiosity

You’re probably worried that this could happen to physics. With anthropic reasoning and the multiverse, what’s to stop physicists from just proposing some “anthropic” just-so-story for any evidence we happen to find, no matter what it is? Surely anything could be “required for life” given a vague enough argument.

You’re also probably a bit annoyed that I saved this objection for last. I know that for many people, this is precisely what you mean when you say the multiverse will “kill physics”.

I’ve saved this for last for a reason though. It’s because I want to point out something important: this outcome, that our field degenerates into just-so-stories, isn’t required by the physics of the multiverse. Rather, it’s a matter of sociology.

If we hold anthropic reasoning to the same standards as the rest of physics, then there’s no problem: if an anthropic explanation doesn’t make falsifiable predictions then we ignore it. The problem comes if we start loosening our criteria, start letting people publish just-so-stories instead of real science.

This is a real risk! I don’t want to diminish that. It’s harder than it looks for a productive academic field to fall into bullshit, but just-so-stories are a proven way to get there.

What I want to emphasize is that we’re all together in this. We all want to make sure that physics remains scientific. We all need to be vigilant, to prevent a culture of just-so-stories from growing. Regardless of whether the multiverse is the right picture, and regardless of how many annoying TV specials they make about it in the meantime, that’s the key: keeping physics itself honest. If we can manage that, nothing we discover can kill our field.

On the Care and Feeding of Ideas

I read Zen and the Art of Motorcycle Maintenance in high school. It’s got a reputation for being obnoxiously mystical, but one of its points seemed pretty reasonable: the claim that the hard part of science, and the part we understand the least, is coming up with hypotheses.

In some sense, theoretical physics is all about hypotheses. By this I don’t mean that we just say “what if?” all the time. I mean that in theoretical physics most of the work is figuring out the right way to ask a question. Phrase your question in the right way and the answer becomes obvious (or at least, obvious after a straightforward calculation). Because our questions are mathematical, the right question can logically imply its own solution.

From the point of view of “Zen and the Art”, as well as most non-scientists I’ve met, this part is utterly mysterious. The ideas you need here seem like they can’t come from hard work or careful observation. In order to ask the right questions, you just need to be “smart”.

In practice, I’ve noticed there’s more to it than that. We can’t just sit around and wait for an idea to show up. Instead, as physicists we develop a library of tricks, often unstated, that let us work towards the ideas we need.

Sometimes, this involves finding simpler cases, working with them until we understand the right questions to ask. Sometimes it involves doing numerics, or using crude guesses, not because either method will give the final answer but because it will show what the answer should look like. Sometimes we need to rephrase the problem many times, in many different contexts, before we happen on one that works. Most of this doesn’t end up in published papers, so in the end we usually have to pick it up from experience.

Along the way, we often find tricks to help us think better. Mostly this is straightforward stuff: reminders to keep us on-task, keeping our notes organized and our code commented so we have a good idea of what we were doing when we need to go back to it. Everyone has their own personal combination of these things in the background, and they’re rarely discussed.

The upshot is that coming up with ideas is hard work. We need to be smart, sure, but that’s not enough by itself: there are a lot of smart people who aren’t physicists after all.

With all that said, some geniuses really do seem to come up with ideas out of thin air. It’s not the majority of the field: we’re not the idiosyncratic Sheldon Coopers everyone seems to imagine. But for a few people, it really does feel like there’s something magical about where they get their ideas. I’ve had the privilege of working with a couple people like this, and the way they think sometimes seems qualitatively different from our usual way of building ideas. I can’t see any of the standard trappings, the legacy of partial results and tricks of thought, that would lead to where they end up. That doesn’t mean they don’t use tricks just like the rest of us, in the end. But I think genius, if it means anything at all, is thinking in a novel enough way that from the outside it looks like magic.

Most of the time, though, we just need to hone our craft. We build our methods and shape our minds as best we can, and we get better and better at the central mystery of science: asking the right questions.

Textbook Review: Exploring Black Holes

I’m bringing a box of textbooks with me to Denmark. Most of them are for work: a few Quantum Field Theory texts I might use, a Complex Analysis book for when I inevitably forget how to do contour integration.

One of the books, though, is just for fun.

[Image: cover of Exploring Black Holes]

Exploring Black Holes is an introduction to general relativity for undergraduates. The book came out of a collaboration between Edwin F. Taylor, known for his contributions to physics teaching, and John Archibald Wheeler, who among a long list of achievements was responsible for popularizing the term “black hole”. The result is something quite unique: a general relativity course that requires no math more advanced than calculus, and no physics more advanced than special relativity.

It does this by starting, not with the full tensor-riddled glory of Einstein’s equations, but with specialized solutions to those equations, mostly the Schwarzschild solution that describes space around spherical objects (including planets, stars, and black holes). From there, it manages to introduce curved space in a way that is both intuitive and naturally grows out of what students learn about special relativity. It really is the kind of course a student can take right after their first physics course, and indeed as an undergrad that’s exactly what I did.

With just the Schwarzschild solution and its close relatives, you can already answer most of the questions young students have about general relativity. In a series of “projects”, the book explores the corrections GR demands of GPS satellites, the process of falling into a black hole, the famous measurement of the advance of the perihelion of Mercury, the behavior of light in a strong gravitational field, and even a bit of cosmology. In the end the students won’t know the full power of the theory, but they’ll get a taste while building valuable physical intuition.

Still, I wouldn’t bring this book with me if it was just an excellent undergraduate textbook. Exploring Black Holes is a great introduction to general relativity, but it also has a hilarious not-so-hidden agenda: inspiring future astronauts to jump into black holes.

“Nowhere could life be simpler or more relaxed than in a free-float frame, such as an unpowered spaceship falling toward a black hole.” – pg. 2-31

The book is full of quotes like this. One of the book’s “projects” involves computing what happens to an astronaut who falls into a black hole. The book takes special care to have students calculate that “spaghettification”, the process by which the tidal forces of a black hole stretch infalling observers into spaghetti, is, surprisingly, completely painless: the amount of time you experience it is always less than the amount of time it takes light (and thus also pain) to go from your feet to your head, for any (sufficiently calm) black hole.
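
For a rough sense of the scales involved, here are the standard Schwarzschild facts behind that exercise (my sketch, not the book’s own derivation):

```latex
% Tidal ("spaghettification") acceleration across a body of height
% \Delta r, in the freely falling frame at Schwarzschild radius r:
\Delta a \;\approx\; \frac{2\,G M}{r^{3}}\,\Delta r
% The longest proper time any infalling observer can experience between
% crossing the horizon and reaching the singularity:
\tau_{\text{max}} \;=\; \pi\,\frac{G M}{c^{3}}
  \;\approx\; 15\,\mu\text{s} \times \frac{M}{M_{\odot}}
% For a large enough black hole, the stretching only becomes severe in
% the final instants, less than the light-crossing time of your own body.
```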

Why might Taylor and Wheeler want people of the future to jump into black holes? As the discussion on page B-3 of the book describes, the reason is on one level an epistemic one. As theorists, we’d like to reason about what lies inside the event horizon of black holes, but we face a problem: any direct test would be trapped inside, and we would never know the result, which some would argue makes such speculation unscientific. What Taylor and Wheeler point out is that it’s not quite true that no-one would know the results of such a test: if someone jumped into a black hole, they themselves would be able to test our reasoning. If a whole scientific community jumped in, then the question of what is inside a black hole would be, from their perspective, completely scientific.

Of course, I don’t think Taylor and Wheeler seriously thought their book would convince its readers to jump into black holes. For one, it’s unlikely anyone reading the book will get a chance. Still, I suspect that the idea that future generations might explore black holes gave Taylor and Wheeler some satisfaction, and a nice clean refutation of those who think physics inside the horizon is unscientific. Seeing as the result was an excellent textbook full of hilarious prose, I can’t complain.

Boltzmann Brains, Evil Demons, and Why It’s Occasionally a Good Idea to Listen to Philosophers

There’s been a bit of a buzz recently about a paper Sean Carroll posted to the arXiv, “Why Boltzmann Brains Are Bad”. The argument in the paper isn’t new, it’s something Carroll has been arguing for a long time, and the arXiv post was just because he had been invited to contribute a piece to a book on Current Controversies in Philosophy of Science.

(By the way: in our field, invited papers and conference proceedings are almost always reviews of old work, not new results. If you see something on arXiv and want to know whether it’s actually new work, the “Comments:” section will almost always mention this.)

While the argument isn’t new, it is getting new attention. And since I don’t think I’ve said much about my objections to it, now seems like a good time to do so.

Carroll’s argument is based on hypothetical beings called Boltzmann brains. The idea is that if you wait a very, very long time in a sufficiently random (“high-entropy”) universe, the matter in that universe will arrange itself in pretty much every imaginable way, if only for a moment. In particular, it will eventually form a brain, or enough of a brain to have a conscious experience. Wait long enough, and you can find a momentary brain having any experience you want, with any (fake) memories you want. Long enough, and you can find a brain having the same experience you are having right now.
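
The “wait long enough” step is standard statistical mechanics (textbook material, nothing specific to Carroll’s paper): the probability of a fluctuation is exponentially small in the entropy it has to borrow, but never zero.

```latex
% Probability of a thermal fluctuation that assembles a configuration
% whose entropy sits \Delta S below equilibrium:
P \;\sim\; e^{-\Delta S / k_{B}}
% A lone brain costs far less entropy than a whole fresh universe full
% of evolved observers, so given unbounded time, momentary brains vastly
% outnumber ordinary ones.
```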

So, Carroll asks, how do you know you aren’t a Boltzmann brain? If the universe exists for long enough, most of the beings having your current experiences would be Boltzmann brains, not real humans. But if you really are a Boltzmann brain, then you can’t know anything about the universe at all: everything you think are your memories are just random fluctuations with no connection to the real world.

Carroll calls this sort of situation “cognitively unstable”. If you reason scientifically that the universe must be full of Boltzmann brains, then you can’t rule out that you could be a Boltzmann brain, and thus you shouldn’t accept your original reasoning.

The only way out, according to Carroll, is if we live in a universe that will never contain Boltzmann brains, for example one that won’t exist in its current form long enough to create them. So from a general concern about cognitive instability, Carroll argues for specific physics. And if that seems odd…well, it is.

For the purpose of this post, I’m going to take for granted the physics case: that a sufficiently old and random universe would indeed produce Boltzmann brains. That’s far from uncontroversial, and if you’re interested in that side of the argument (and have plenty of patience for tangents and Czech poop jokes) Lubos Motl posted about it recently.

Instead, I’d like to focus on the philosophical side of the argument.

Let’s start with intro philosophy, and talk about Descartes.

Descartes wanted to start philosophy from scratch by questioning everything he thought he knew. In one of his arguments, he asks the reader to imagine an evil demon.

[Image: the demon lord Graz’zt on his throne]

Probably Graz’zt. It’s usually Graz’zt.

Descartes imagines this evil demon exercising all its power to deceive. Perhaps it could confound your senses with illusions, or modify your memories. If such a demon existed, there would be no way to know if anything you believed or reasoned about the world was correct. So, Descartes asked, how do you know you’re not being deceived by an evil demon right now?

Amusingly, like Carroll, Descartes went on to use this uncertainty to argue for specific proposals in physics: in Descartes’ case, everything from the existence of a benevolent god to the idea that gravity was caused by a vortex of fluid around the sun.

Descartes wasn’t the last to propose this kind of uncertainty, and philosophers have asked more sophisticated questions over the years challenging the idea that it makes sense to reason from the past about the future at all.

Carroll is certainly aware of all of this. But I suspect he doesn’t quite appreciate the current opinion philosophers have on these sorts of puzzles.

The impression I’ve gotten from philosophers is that they don’t take this kind of “cognitive instability” very seriously anymore. There are specialists who still work on it, and it’s still of historical interest. But the majority of philosophers have moved on.

How did they move on? How have they dismissed these kinds of arguments?

That varies. Philosophers don’t tend to have the kind of consensus that physicists usually do.

Some reject them on pragmatic grounds: science works, even if we can’t “justify” it. Some use a similar argument to Carroll’s, but take it one step back, arguing that we shouldn’t worry that we could be deceived by an evil demon or be a Boltzmann brain because those worries are themselves cognitively unstable. Some bite the bullet, accepting that reasoning can’t be justified, then just ignore the problem and go on with their lives.

The common trait of all of these rejections, though? They don’t rely on physics.

Philosophers don’t argue “evil demons are impossible, therefore we can be sure we’re not deceived by evil demons”. They don’t argue “dreams are never completely realistic, so we can’t just be dreaming right now”.

And they certainly don’t try to argue the reverse: that consistency means there can never be evil demons, or never be realistic dreams.

I was on the debate team in high school. One popular tactic was called the “non-unique”. If your opponent argued that your plan had some negative consequences, you could argue that those consequences would happen regardless of whether you got to enact your plan or not: that the consequences were non-unique.

At this point, philosophers understand that cognitive instability and doubt are “non-unique”. No matter the physics, no matter how the world looks, it’s still possible to argue that reasoning isn’t justified, that even the logic we used to doubt the world in the first place could be flawed.

Carroll’s claim seems non-unique to me. Yes, in a universe that exists for a long time you could be a Boltzmann brain. But even if you don’t live in such a universe, you could still be a brain in a jar or a simulation. You could still be deceived by an “evil demon”.

And so regardless, you need the philosophers. Regardless, you need some argument that reasoning works, that you can ignore doubt. And once you’re happy with that argument, you don’t have to worry about Boltzmann brains.