
Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics at all!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t achieve this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.

[Image: portrait of Robert Hooke]

The Hunchback of Notre Science

It wasn’t always this way. Up until the nineteenth century, being a scientist was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a 1/r^2 force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As curator of experiments for the Royal Society, it was his responsibility to set up demonstrations for each Royal Society meeting. Later, he also handled correspondence for the Royal Society Journal. These responsibilities took up much of his time, and as a result, even if he had been capable of following up on the consequences of 1/r^2, he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke, expectations have changed, and real original research is no longer something we have to fit into our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until they’ve either learned something productive or their old topic becomes useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just number of papers and citations, if we manage all of that, then yes, we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.


A physicist lazing about unproductively under an apple tree

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.
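As a toy illustration of why “just within statistical significance” is weak evidence (my example, not the post’s): if many studies of a genuinely null effect are run, a predictable fraction will clear the p &lt; 0.05 bar by chance alone.

```python
# Toy illustration (my example, not the post's): run enough studies of a
# null effect and a predictable fraction will land "just within"
# statistical significance purely by chance.
import math
import random

random.seed(0)

def p_value_of_null_study(n=50):
    """Two-sample comparison where the true effect is exactly zero."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2.0 / n)  # standard error of the difference in means
    z = abs(diff) / se
    # two-sided p-value from the normal approximation
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

pvals = [p_value_of_null_study() for _ in range(1000)]
false_positives = sum(p < 0.05 for p in pvals)
print(f"{false_positives} of 1000 null studies reached p < 0.05")
```

Roughly five percent of these null studies come out “significant” by construction. A field that treats each such result as real accumulates false findings; physics’s habit of claiming only “not yet falsified” is at least more honest about that.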

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object, isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time, there have been plenty of cases in which people have done that, where a number of fiddly hand-made models have been summarized in one parameter space. Once that happens, the rule of originality kicks in: now, no one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that, they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

 

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend and focusing not on covering the “next big thing”, but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms-race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

 

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.


The Multiverse Can Only Kill Physics by Becoming Physics

I’m not a fan of the multiverse. I think it’s over-hyped, way beyond its current scientific support.

But I don’t think it’s going to kill physics.

By “the multiverse” I’m referring to a group of related ideas. There’s the idea that we live in a vast, varied universe, with different physical laws in different regions. Relatedly, there’s the idea that the properties of our region aren’t typical of the universe as a whole, just typical of places where life can exist. It may be that in most of the universe the cosmological constant is enormous, but if life can only exist in places where it is tiny then a tiny cosmological constant is what we’ll see. That sort of logic is called anthropic reasoning. If it seems strange, think about a smaller scale: there are many planets in the universe, but only a small number of them can support life. Still, we shouldn’t be surprised that we live on a planet that can support life: if it couldn’t, we wouldn’t live here!

If we really do live in a multiverse, though, some of what we think of as laws of physics are just due to random chance. Maybe the quarks have the masses they do not for some important reason, but just because they happened to end up that way in our patch of the universe.

This seems to have depressing implications. If the laws of physics are random, or just consequences of where life can exist, then what’s left to discover? Why do experiments at all?

Well, why not ask the geoscientists?

[Image: map of tectonic plate boundaries]

These guys

We might live in one universe among many, but we definitely live on one planet among many. And somehow, this realization hasn’t killed geoscience.

That’s because knowing we live on a random planet doesn’t actually tell us very much.

Now, I’m not saying you can’t do anthropic reasoning about the Earth. For example, it looks like an active system of plate tectonics is a necessary ingredient for life. Even if plate tectonics is rare, we shouldn’t be surprised to live on a planet that has it.

Ok, so imagine it’s 1900, before Wegener proposed continental drift. Scientists believe there are many planets in the universe, that we live in a “multiplanet”. Could you predict plate tectonics?

Even knowing that we live on one of the few planets that can support life, you don’t know how it supports life. Even living in a “multiplanet”, geoscience isn’t dead. The specifics of our Earth are still going to teach you something important about how planets work.

Physical laws work the same way. I’ve said that the masses of the quarks could be random, but it’s not quite that simple. The underlying reasons why the masses of the quarks are what they are could be random: the specifics of how six extra dimensions happened to curl up in our region of the universe, for example. But there’s important physics in between: the physics of how those random curlings of space give rise to quark masses. There’s a mechanism there, and we can’t just pick one out of a hat or work backwards to it anthropically. We have to actually go out and discover the answer.

Similarly, we don’t know automatically which phenomena are “random”, which are “anthropic”, and which are required by some deep physical principle. Even in a multiverse, we can’t assume that everything comes down to chance, we only know that some things will, much as the geoscientists don’t know what’s unique to Earth and what’s true of every planet without actually going out and checking.

You can even find a notion of “naturalness” here, if you squint. In physics, we find phenomena like the mass of the Higgs “unnatural”: they’re “fine-tuned” in a way that cries out for an explanation. Normally, we think of this in terms of a hypothetical “theory of everything”: the more “fine-tuned” something appears, the harder it would be to explain it in a final theory. In a multiverse, it looks like we’d have to give up on this, because even the most unlikely-looking circumstance would happen somewhere, especially if it’s needed for life.

Once again, though, imagine you’re a geoscientist. Someone suggests a ridiculously fine-tuned explanation for something: perhaps volcanoes only work if they have exactly the right amount of moisture. Even though we live on one planet in a vast universe, you’re still going to look for simpler explanations before you move on to more complicated ones. It’s human nature, and by and large it’s the only way we’re capable of doing science. As physicists, we’ve papered this over with technical definitions of naturalness, but at the end of the day even in a multiverse we’ll still start with less fine-tuned-looking explanations and only accept the fine-tuned ones when the evidence forces us to. It’s just what people do.

The only way for anthropic reasoning to get around this, to really make physics pointless once and for all, is if it actually starts making predictions. If anthropic reasoning in physics can be made much stronger than anthropic reasoning in geoscience (which, as mentioned, didn’t predict tectonic plates until a century after their discovery) then maybe we can imagine getting to a point where it tells us what particles we should expect to discover, and what masses they should have.

At that point, though, anthropic reasoning won’t have made physics pointless: it will have become physics.

If anthropic reasoning is really good enough to make reliable, falsifiable predictions, then we should be ecstatic! I don’t think we’re anywhere near that point, though some people are earnestly trying to get there. But if it really works out, then we’d have a powerful new method to make predictions about the universe.

 

Ok, so with all of this said, there is one other worry.

Karl Popper criticized Marxism and Freudianism for being unfalsifiable. In both disciplines, there was a tendency to tell what were essentially “just-so stories”. They could “explain” any phenomenon by setting it in their framework and explaining how it came to be “just so”. These explanations didn’t make new predictions, and different people often ended up coming up with different explanations with no way to distinguish between them. They were stories, not scientific hypotheses. In more recent times, the same criticism has been made of evolutionary psychology. In each case the field is accused of being able to justify anything and everything in terms of its overly ambiguous principles, whether dialectical materialism, the unconscious mind, or the ancestral environment.

[Image: cover of Kipling’s Just So Stories, 1902]

Or an elephant’s ‘satiable curtiosity

You’re probably worried that this could happen to physics. With anthropic reasoning and the multiverse, what’s to stop physicists from just proposing some “anthropic” just-so-story for any evidence we happen to find, no matter what it is? Surely anything could be “required for life” given a vague enough argument.

You’re also probably a bit annoyed that I saved this objection for last. I know that for many people, this is precisely what you mean when you say the multiverse will “kill physics”.

I’ve saved this for last for a reason though. It’s because I want to point out something important: this outcome, that our field degenerates into just-so-stories, isn’t required by the physics of the multiverse. Rather, it’s a matter of sociology.

If we hold anthropic reasoning to the same standards as the rest of physics, then there’s no problem: if an anthropic explanation doesn’t make falsifiable predictions then we ignore it. The problem comes if we start loosening our criteria, start letting people publish just-so-stories instead of real science.

This is a real risk! I don’t want to diminish that. It’s harder than it looks for a productive academic field to fall into bullshit, but just-so-stories are a proven way to get there.

What I want to emphasize is that we’re all together in this. We all want to make sure that physics remains scientific. We all need to be vigilant, to prevent a culture of just-so-stories from growing. Regardless of whether the multiverse is the right picture, and regardless of how many annoying TV specials they make about it in the meantime, that’s the key: keeping physics itself honest. If we can manage that, nothing we discover can kill our field.

You Can’t Smooth the Big Bang

As a kid, I was fascinated by cosmology. I wanted to know how the universe began, possibly disproving gods along the way, and I gobbled up anything that hinted at the answer.

At the time, I had to be content with vague slogans. As I learned more, I could match the slogans to the physics, to see what phrases like “the Big Bang” actually meant. A large part of why I went into string theory was to figure out what all those documentaries are actually about.

In the end, I didn’t end up working on cosmology, due to my ignorance of a few key facts while in college (mostly, who Vilenkin was). Thus, while I could match some of the old popularization stories to the science, there were a few I never really understood. In particular, there were two claims I never quite saw fleshed out: “The universe emerged from nothing via quantum tunneling” and “According to Hawking, the big bang was not a singularity, but a smooth change with no true beginning.”

As a result, I’m delighted that I’ve recently learned the physics behind these claims, in the context of a spirited take-down of both by Perimeter’s Director Neil Turok.

[Image: Neil Turok. Photo credit: Jens Langen]

My boss

Neil held a surprise string group meeting this week to discuss the paper I linked above, “No smooth beginning for spacetime” with Job Feldbrugge and Jean-Luc Lehners, as well as earlier work with Steffen Gielen. In it, he talked about problems in the two proposals I mentioned: Hawking’s suggestion that the big bang was smooth with no true beginning (really, the Hartle-Hawking no boundary proposal) and the idea that the universe emerged from nothing via quantum tunneling (really, Vilenkin’s tunneling from nothing proposal).

In popularization-speak, these two proposals sound completely different. In reality, though, they’re quite similar (and as Neil argues, they end up amounting to the same thing). I’ll steal a picture from his paper to illustrate:

[Figure from the paper]

The picture on the left depicts the universe under the Hartle-Hawking proposal, with time increasing upwards on the page. As the universe gets older, it looks like the expanding (de Sitter) universe we live in. At the beginning, though, there’s a cap, one on which time ends up being treated not in the usual way (Lorentzian space) but on the same footing as the other dimensions (Euclidean space). This lets space be smooth, rather than bunching up in a big bang singularity. After treating time in this way the result is reinterpreted (via a quantum field theory trick called Wick rotation) as part of normal space-time.
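For readers who haven’t seen it, the Wick rotation trick mentioned here is, in its simplest quantum-field-theory form (my sketch, not the paper’s notation), the substitution of imaginary time:

```latex
% Wick rotation: substituting t = -i\tau turns the oscillatory Lorentzian
% path-integral weight into a damped Euclidean one, which is far easier
% to define and estimate:
\int \mathcal{D}\phi \; e^{i S_L[\phi]}
\;\xrightarrow{\;\; t \,=\, -i\tau \;\;}\;
\int \mathcal{D}\phi \; e^{-S_E[\phi]}
```

The question at stake is whether this continuation, routine in ordinary field theory, remains valid when gravity itself is part of the path integral.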

What’s the connection to Vilenkin’s tunneling picture? Well, when we talk about quantum tunneling, we also end up describing it with Euclidean space. Saying that the universe tunneled from nothing and saying it has a Euclidean “cap” then end up being closely related claims.

Before Neil’s work these two proposals weren’t thought of as the same because they were thought to give different results. What Neil is arguing is that this is due to a fundamental mistake on Hartle and Hawking’s part. Specifically, Neil is arguing that the Wick rotation trick that Hartle and Hawking used doesn’t work in this context, when you’re trying to calculate small quantum corrections for gravity. In normal quantum field theory, it’s often easier to go to Euclidean space and use Wick rotation, but for quantum gravity Neil is arguing that this technique stops being rigorous. Instead, you should stay in Lorentzian space, and use a more powerful mathematical technique called Picard-Lefschetz theory.

Using this technique, Neil found that Hartle and Hawking’s nicely behaved result was mistaken, and the real result of what Hartle and Hawking were proposing looks more like Vilenkin’s tunneling proposal.

Neil then tried to see what happens when there’s some small perturbation from a perfect de Sitter universe. In general in physics, if you want to trust a result it ought to be stable: small changes should stay small. Otherwise, you’re not really starting from the right point, and you should instead be looking at wherever the changes end up taking you. What Neil found was that the Hartle-Hawking and Vilenkin proposals weren’t stable. If you start with a small wiggle in your no-boundary universe, you get not the purple middle drawing with small wiggles, but the red one with wiggles that rapidly grow unstable. The implication is that the Hartle-Hawking and Vilenkin proposals aren’t just secretly the same: neither of them can be the stable state of the universe.
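The stability criterion can be illustrated with a deliberately simple toy model (mine, far cruder than the actual calculation): a linearized perturbation obeying x′ = λx either stays small or grows exponentially, depending on the sign of λ.

```python
# Toy illustration of (in)stability, not the paper's calculation: a
# linearized perturbation obeying x'(t) = rate * x(t).
import math

def perturbation_amplitude(x0, rate, t):
    """|x(t)| for the linear equation x' = rate * x."""
    return abs(x0) * math.exp(rate * t)

tiny = 1e-6  # a small initial wiggle

stable = perturbation_amplitude(tiny, -1.0, 20.0)    # decays away
unstable = perturbation_amplitude(tiny, +1.0, 20.0)  # grows by a factor e^20

print(f"stable mode:   {stable:.2e}")
print(f"unstable mode: {unstable:.2e}")
```

In the unstable case, a wiggle that starts negligible ends up swamping the background solution entirely, which is the analogue of small early-universe fluctuations overwhelming the smooth no-boundary geometry.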

Neil argues that this problem is quite general, and happens under the following conditions:

  1. A universe that begins smoothly and semi-classically (where quantum corrections are small) with no sharp boundary,
  2. with a positive cosmological constant (the de Sitter universe mentioned earlier),
  3. under which the universe expands many times, allowing the small fluctuations to grow large.

If the universe avoids one of those conditions (maybe the cosmological constant changes in the future and the universe stops expanding, for example) then you might be able to avoid Neil’s argument. But if not, you can’t have a smooth semi-classical beginning and still have a stable universe.

Now, no debate in physics ends just like that. Hartle (and collaborators) don’t disagree with Neil’s insistence on Picard-Lefschetz theory, but they argue there’s still a way to make their proposal work. Neil mentioned at the group meeting that he thinks even the new version of Hartle’s proposal doesn’t solve the problem, and he’s been working out the calculation with his collaborators to make sure.

Often, one hears about an idea from science popularization and then it never gets mentioned again. The public hears about a zoo of proposals without ever knowing which ones worked out. I think child-me would appreciate hearing what happened to Hawking’s proposal for a universe with no boundary, and to Vilenkin’s proposal for a universe emerging from nothing. Adult-me certainly does. I hope you do too.

Pop Goes the Universe and Other Cosmic Microwave Background Games

(With apologies to whoever came up with this “book”.)

Back in February, Ijjas, Steinhardt, and Loeb wrote an article for Scientific American titled “Pop Goes the Universe” criticizing cosmic inflation, the proposal that the universe underwent a period of rapid expansion early in its life, smoothing it out to achieve the (mostly) uniform universe we see today. Recently, Scientific American published a response by Guth, Kaiser, Linde, Nomura, and 29 co-signers. This was followed by a counter-response, which is the usual number of steps for this sort of thing before it dissipates harmlessly into the blogosphere.

In general, string theory, supersymmetry, and inflation tend to be criticized in very similar ways. Each gets accused of being unverifiable, able to be tuned to match any possible experimental result. Each has been claimed to be unfairly dominant, its position as “default answer” more due to the bandwagon effect than the idea’s merits. All three tend to get discussed in association with the multiverse, and blamed for dooming physics as a result. And all are frequently defended with one refrain: “If you have a better idea, what is it?”

It’s probably tempting (on both sides) to view this as just another example of that argument. In reality, though, string theory, supersymmetry, and inflation are all in very different situations. The details matter. And I worry that in this case both sides are too ready to assume the other is just making the “standard argument”, and ended up talking past each other.

When people say that string theory makes no predictions, they’re correct in a sense, but off topic: the majority of string theorists aren’t making the sort of claims that require successful predictions. When people say that inflation makes no predictions, if you assume they mean the same thing that people mean when they accuse string theory of making no predictions, then they’re flat-out wrong. Unlike string theorists, most people who work on inflation care a lot about experiment. They write papers filled with predictions, consequences for this or that model if this or that telescope sees something in the near future.

I don’t think Ijjas, Steinhardt, and Loeb were making that kind of argument.

When people say that supersymmetry makes no predictions, there’s some confusion of scope. (Low-energy) supersymmetry isn’t one specific proposal that needs defending on its own. It’s a class of different models, each with its own predictions. Given a specific proposal, one can see if it’s been ruled out by experiment, and predict what future experiments might say about it. Ruling out one model doesn’t rule out supersymmetry as a whole, but it doesn’t need to, because any given researcher isn’t arguing for supersymmetry as a whole: they’re arguing for their particular setup. The right “scope” is between specific supersymmetric models and specific non-supersymmetric models, not both as general principles.

Guth, Kaiser, Linde, and Nomura’s response follows similar lines in defending inflation. They point out that the wide variety of models are subject to being ruled out in the face of observation, and compare to the construction of the Standard Model in particle physics, with many possible parameters under the overall framework of Quantum Field Theory.

Ijjas, Steinhardt, and Loeb’s article certainly looked like it was making this sort of mistake. But as they clarify in the FAQ of their counter-response, they’ve got a more serious objection. They’re arguing that, unlike in the case of supersymmetry or the Standard Model, specific inflation models do not lead to specific predictions. They’re arguing that, because inflation typically leads to a multiverse, any specific model will in fact lead to a wide variety of possible observations. In effect, they’re arguing that the multitude of people busily making predictions based on inflationary models are missing a step in their calculations, underestimating their errors by a huge margin.

This is where I really regret that these arguments usually end after three steps (article, response, counter-response). Here Ijjas, Steinhardt, and Loeb are making what is essentially a technical claim, one that Guth, Kaiser, Linde, and Nomura could presumably respond to with a technical response, after which the rest of us would actually learn something. As-is, I certainly don’t have the background in inflation to know whether or not this point makes sense, and I’d love to hear from someone who does.

One aspect of this exchange that baffled me was the “accusation” that Ijjas, Steinhardt, and Loeb were just promoting their own work on bouncing cosmologies. (I put “accusation” in quotes because while Ijjas, Steinhardt, and Loeb seem to treat it as if it were an accusation, Guth, Kaiser, Linde, and Nomura don’t obviously mean it as one.)

“Bouncing cosmology” is Ijjas, Steinhardt, and Loeb’s answer to the standard “If you have a better idea, what is it?” response. It wasn’t the focus of their article, but while they seem to think this speaks well of them (hence their treatment of “promoting their own work” as if it were an accusation), I don’t. I read a lot of Scientific American growing up, and the best articles focused on explaining a positive vision: some cool new idea, mainstream or not, that could capture the public’s interest. That kind of article could still have included criticism of inflation; you’d want it in there to justify the use of a bouncing cosmology. But by going beyond that, it would have avoided the standard back-and-forth these arguments tend to fall into, and maybe we would have actually learned from the exchange.

What Space Can Tell Us about Fundamental Physics

Back when LIGO announced its detection of gravitational waves, there was one question people kept asking me: “what does this say about quantum gravity?”

The answer, each time, was “nothing”. LIGO’s success told us nothing about quantum gravity, and very likely LIGO will never tell us anything about quantum gravity.

The sheer volume of questions made me think, though. Astronomy, astrophysics, and cosmology fascinate people. They capture the public’s imagination in a way that makes them expect breakthroughs about fundamental questions. Especially now, with the LHC so far seeing nothing new since the Higgs, people are turning to space for answers.

Is that a fair expectation? Well, yes and no.

Most astrophysicists aren’t concerned with finding new fundamental laws of nature. They’re interested in big systems like stars and galaxies, where we know most of the basic rules but can’t possibly calculate all their consequences. Like most physicists, they’re doing the vital work of “physics of decimals”.

At the same time, there’s a decent chunk of astrophysics and cosmology that does matter for fundamental physics. Just not all of it. Here are some of the key areas where space has something important to say about the fundamental rules that govern our world:

 

1. Dark Matter:

Galaxies rotate at different speeds than their visible stars alone would predict. Clusters of galaxies bend light that passes by, and do so more than their visible mass would suggest. And when scientists try to model the evolution of the universe, from early images to its current form, the models require an additional piece: extra matter that cannot interact with light. All of this suggests that there is some extra “dark” matter in the universe, not described by our standard model of particle physics.

If we want to understand this dark matter, we need to know more about its properties, and much of that can be learned from astronomy. If it turns out dark matter isn’t really matter after all, if it can be explained by a modification of gravity or better calculations of gravity’s effects, then it still will have important implications for fundamental physics, and astronomical evidence will still be key to finding those implications.
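To see why rotation curves are such strong evidence, a quick Newtonian estimate helps. This toy Python sketch (the galaxy mass and radii are made-up round numbers, purely for illustration, not from any real survey) shows how orbital speeds should fall off with distance if visible matter were all there is:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # one kiloparsec, in meters

def circular_velocity(enclosed_mass_kg, radius_m):
    """Newtonian circular-orbit speed: v = sqrt(G * M(<r) / r)."""
    return math.sqrt(G * enclosed_mass_kg / radius_m)

# Toy galaxy: pretend all the visible mass (~1e11 solar masses)
# sits inside the inner few kpc. Beyond the visible disk, the
# enclosed mass stops growing, so v should fall like 1/sqrt(r).
visible_mass = 1e11 * M_SUN
for r_kpc in (5, 10, 20, 40):
    v = circular_velocity(visible_mass, r_kpc * KPC) / 1000  # km/s
    print(f"r = {r_kpc:2d} kpc: v = {v:.0f} km/s")
```

Measured rotation curves for galaxies of roughly this size instead stay close to a constant speed well beyond the visible disk, rather than falling like 1/√r, and that mismatch is exactly what dark matter is invoked to explain.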

2. Dark Energy (/Cosmological Constant/Inflation/…):

The universe is expanding, and its expansion appears to be accelerating. It also seems more smooth and uniform than expected, suggesting that it had a period of much greater acceleration early on. Both of these suggest some extra quantity: a changing acceleration, a “dark energy”, the sort of thing that can often be explained by a new scalar field like the Higgs.

Again, the specifics matter: how (and perhaps whether) the universe is expanding now, and what kinds of early expansion (if any) its shape suggests, will almost certainly have implications for fundamental physics.
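As a small illustration of how such measurements feed back into theory, here is the standard textbook deceleration parameter for a universe containing matter plus a cosmological constant (the density fractions below are just the roughly measured present-day values, not anything specific to this post):

```python
def deceleration_parameter(omega_matter, omega_lambda):
    """Standard result for matter plus a cosmological constant:
    q0 = Omega_m / 2 - Omega_Lambda.
    q0 < 0 means the expansion is accelerating."""
    return omega_matter / 2 - omega_lambda

# Roughly the measured values: ~30% matter, ~70% dark energy.
q0 = deceleration_parameter(0.3, 0.7)
print(q0)  # -0.55: negative, so the expansion accelerates

# A matter-only universe would instead decelerate:
print(deceleration_parameter(1.0, 0.0))  # 0.5
```

The sign flip between the two cases is the whole story of dark energy in miniature: pinning down the actual numbers is what tells us whether a cosmological constant, a new scalar field, or something stranger is at work.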

3. Limits on stable stuff:

Let’s say you have a new proposal for particle physics. You’ve predicted a new particle, but it can’t interact with anything else, or interacts so weakly we’d never detect it. If your new particle is stable, then you can still say something about it, because its mass would have an effect on the early universe. Too many such particles would throw off cosmologists’ models, ruling your proposal out.

Alternatively, you might predict something that could be detected, but hasn’t, like a magnetic monopole. Then cosmologists can tell you how many such particles would have been produced in the early universe, and thus how likely we would be to detect them today. If you predict too many particles and we don’t see them, then that becomes evidence against your proposal.

4. “Cosmological Collider Physics”:

A few years back, Nima Arkani-Hamed and Juan Maldacena suggested that the early universe could be viewed as an extremely high energy particle collider. While this collider performed only one experiment, the results from that experiment are spread across the sky, and observed patterns in the early universe should tell us something about the particles produced by the cosmic collider.

People are still teasing out the implications of this idea, but it looks promising, and could mean we have a lot more to learn from examining the structure of the universe.

5. Big Weird Space Stuff:

If you suspect we live in a multiverse, you might want to look for signs of other universes brushing up against our own. If your model of the early universe predicts vast cosmic strings, maybe a gravitational wave detector like LIGO will be able to see them.

6. Unexpected weirdness:

In all likelihood, nothing visibly “quantum” happens at the event horizons of astrophysical black holes. If you think there’s something to see though, the Event Horizon Telescope might be able to see it. There’s a grab bag of other predictions like this: situations where we probably won’t see anything, but where at least one person thinks there’s a question worth asking.

 

I’ve probably left something out here, but this should give you a general idea. There is a lot that fundamental physics can learn from astronomy, from the overall structure and origins of the universe to unexplained phenomena like dark matter. But not everything in astronomy has these sorts of implications: for the most part, astronomy is interesting not because it tells us something about the fundamental laws of nature, but because it tells us how the vast space above us actually happens to work.

Who Needs Non-Empirical Confirmation?

I’ve figured out what was bugging me about Dawid’s workshop on non-empirical theory confirmation.

It’s not the concept itself that bothers me. While you might think of science as entirely based on observations of the real world, in practice we can’t test everything. Inevitably, we have to add in other sorts of evidence: judgments based on precedent, philosophical considerations, or sociological factors.

It’s Dawid’s examples that annoy me: string theory, inflation, and the multiverse. Misleading popularizations aside, none of these ideas involve non-empirical confirmation. In particular, string theory doesn’t need non-empirical confirmation, inflation doesn’t want it, and the multiverse, as of yet, doesn’t merit it.

In order for non-empirical confirmation to matter, it needs to affect how people do science. Public statements aren’t very relevant from a philosophy of science perspective; they ebb and flow based on how people promote themselves. Rather, we should care about what scientists assume in the course of their work. If people are basing new work on assumptions that haven’t been established experimentally, then we need to make sure their confidence isn’t misplaced.

String theory hasn’t been established experimentally…but it fails the other side of this test: almost no-one is assuming string theory is true.

I’ve talked before about theorists who study theories that aren’t true. String theory isn’t quite in that category: it’s still quite possible that it describes the real world. Nonetheless, for most string theorists the distinction is irrelevant: string theory is a way to relate different quantum field theories to one another, and to formulate novel ones with interesting properties. That sort of research doesn’t rely on string theory being true; often it doesn’t directly involve strings at all. Rather, it relies on string theory’s mathematical abundance, its versatility and power as a lens through which to look at the world.

There are string theorists who are more directly interested in describing the world with string theory, though they’re a minority. They’re called String Phenomenologists. By itself, “phenomenologist” refers to particle physicists who try to propose theories that can be tested in the real world. “String phenomenology” is actually a bit misleading, since most string phenomenologists aren’t actually in the business of creating new testable theories. Rather, they try to reproduce some of the more common proposals of phenomenologists, like the MSSM, from within the framework of string theory. While string theory can reproduce many possible descriptions of the world (10^500 by some estimates), that doesn’t mean it covers every possible theory; making sure it can cover realistic options is an important, ongoing technical challenge. Beyond that, a minority within a minority of string phenomenologists actually try to make testable predictions, though often these are controversial.

None of these people need non-empirical confirmation. For the majority of string theorists, string theory doesn’t need to be “confirmed” at all. And for the minority who work on string phenomenology, empirical confirmation is still the order of the day, either directly from experiment or indirectly from the particle phenomenologists struggling to describe it.

What about inflation?

Cosmic inflation was proposed to solve an empirical problem, the surprising uniformity of the observed universe. Look through a few papers in the field, and you’ll notice that most are dedicated to finding empirical confirmation: they’re proposing observable effects on the cosmic microwave background, or on the distribution of large-scale structures in the universe. Cosmologists who study inflation aren’t claiming to be certain, and they aren’t rejecting experiment: overall, they don’t actually want non-empirical confirmation.

To be honest, though, I’m being a little unfair to Dawid here. The reason that string theory and inflation are in the name of his workshop isn’t that he thinks they independently use non-empirical confirmation. Rather, it’s that, if you view both as confirmed (and make a few other assumptions), then you’ve got a multiverse.

In this case, it’s again important to compare what people are doing in their actual work to what they’re saying in public. While a lot of people have made public claims about the existence of a multiverse, very few of them actually work on it. In fact, the two sets of people seem to be almost entirely disjoint.

People who make public statements about the multiverse tend to be older prominent physicists, often ones who’ve worked on supersymmetry as a solution to the naturalness problem. For them, the multiverse is essentially an excuse: naturalness predicted new particles, we didn’t find new particles, so we need an excuse to have an “unnatural” universe, and for many people the multiverse is that excuse. As I’ve argued before, though, this excuse doesn’t have much of an impact on research. These people aren’t discouraged from coming up with new ideas because they believe in the multiverse; rather, they’re talking about the multiverse because they’re currently out of new ideas. Nima Arkani-Hamed is a pretty clear case of someone who has supported the multiverse in pieces like Particle Fever, but who also gets thoroughly excited about new ideas to rescue naturalness.

By contrast, there are many fewer people who actually work on the multiverse itself, and they’re usually less prominent. For the most part, they actually seem concerned with empirical confirmation, trying to hone tricks like anthropic reasoning to the point where they can actually make predictions about future experiments. It’s unclear whether this tiny group of people are on the right track…but what they’re doing definitely doesn’t seem like something that merits non-empirical confirmation, at least at this point.

It’s a shame that Dawid chose the focus he did for his workshop. Non-empirical theory confirmation is an interesting idea (albeit one almost certainly known to philosophy long before Dawid), and there are plenty of places in physics where it could use some examination. We seem to have come to our current interpretation of renormalization non-empirically, and while string theory itself doesn’t rely on non-empirical confirmation, many of its arguments with loop quantum gravity seem to rest on non-empirical considerations, in particular arguments about what is actually required for a proper theory of quantum gravity. But string theory, inflation, and the multiverse aren’t the examples he’s looking for.

A Tale of Two CMB Measurements

While trying to decide what to blog about this week, I happened to run across this article by Matthew Francis on Ars Technica.

Apparently, researchers have managed to use Planck‘s measurement of the Cosmic Microwave Background to indirectly measure a more obscure phenomenon, the Cosmic Neutrino Background.

The Cosmic Microwave Background, or CMB, is often described as the light of the Big Bang, dimmed and spread to the present day. More precisely, it’s the light released the first time the universe became transparent. When electrons and protons joined to form the first atoms, light no longer spent all its time being absorbed and released by electrical charges, and was free to travel through a mostly-neutral universe.

This means that the CMB is less like a view of the Big Bang, and more like a screen separating us from it. Light and charged particles from before the CMB was formed will never be observable to us, because they would have been absorbed by the early universe. If we want to see beyond this screen, we need something with no electric charge.

That’s where the Cosmic Neutrino Background comes in. Much as the CMB consists of light from the first time the universe became transparent, the CNB consists of neutrinos from the first time the universe was cool enough for them to travel freely. Since this happened a bit before the universe was transparent to light, the CNB gives information about an earlier stage in the universe’s history.
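The CNB even has a well-known predicted temperature today. Because the neutrinos decouple before electron-positron annihilation dumps extra heat into the photons, the neutrino background should now be colder than the CMB by a factor of (4/11)^(1/3), a standard cosmology-textbook result:

```python
# Standard textbook relation between the photon and neutrino
# background temperatures: T_nu = (4/11)**(1/3) * T_photon.
T_CMB = 2.725  # K, the measured CMB temperature today
T_CNB = (4 / 11) ** (1 / 3) * T_CMB
print(f"T_CNB = {T_CNB:.3f} K")  # about 1.95 K
```

Neutrinos that cold carry very little energy, which is a big part of why detecting the CNB directly is so hopeless, and why indirect imprints on the CMB are the way in.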

Unfortunately, neutrinos are very difficult to detect, the low-energy ones left over from the CNB even more so. Rather than detecting the CNB directly, it has to be observed through its indirect effects on the CMB, and that’s exactly what these researchers did.

Now does all of this sound just a little bit familiar?

Gravitational waves are also hard to detect, hard enough that we haven’t directly detected any yet. They’re also electrically neutral, so they can also give us information from behind the screen of the CMB, letting us learn about the very early universe. And when the team at BICEP2 purported to measure these primordial gravitational waves indirectly, by measuring the CMB, the press went crazy about it.

This time, though? That Ars Technica article is the most prominent I could find. There’s nothing in major news outlets at all.

I don’t think that this is just a case of people learning from past mistakes. I also don’t think that BICEP2’s results were just that much more interesting: they were making a claim about cosmic inflation rather than just buttressing the standard Big Bang model, but (outside of certain contrarians here at Perimeter) inflation is not actually all that controversial. It really looks like hype is the main difference here, and that’s kind of sad. The difference between a big (premature) announcement that got me to write four distinct posts and an article I almost didn’t notice is just one of how the authors chose to make their work known.