What Space Can Tell Us about Fundamental Physics

Back when LIGO announced its detection of gravitational waves, there was one question people kept asking me: “what does this say about quantum gravity?”

The answer, each time, was “nothing”. LIGO’s success told us nothing about quantum gravity, and very likely LIGO will never tell us anything about quantum gravity.

The sheer volume of questions made me think, though. Astronomy, astrophysics, and cosmology fascinate people. They capture the public’s imagination in a way that leads people to expect breakthroughs on fundamental questions. Especially now, with the LHC so far seeing nothing new since the Higgs, people are turning to space for answers.

Is that a fair expectation? Well, yes and no.

Most astrophysicists aren’t concerned with finding new fundamental laws of nature. They’re interested in big systems like stars and galaxies, where we know most of the basic rules but can’t possibly calculate all their consequences. Like most physicists, they’re doing the vital work of “physics of decimals”.

At the same time, there’s a decent chunk of astrophysics and cosmology that does matter for fundamental physics. Just not all of it. Here are some of the key areas where space has something important to say about the fundamental rules that govern our world:

 

1. Dark Matter:

Galaxies rotate at speeds that the gravity of their visible stars alone can’t account for. Clusters of galaxies bend light that passes by, and do so more than their visible mass would suggest. And when scientists try to model the evolution of the universe, from the earliest images we have of it to its current form, the models require an additional ingredient: extra matter that cannot interact with light. All of this suggests that there is some extra “dark” matter in the universe, not described by our standard model of particle physics.
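To give a rough sense of the rotation-curve argument (a back-of-envelope sketch in my own notation, not a detailed model): a star orbiting at radius r only feels the mass M(r) enclosed within its orbit, so its orbital speed satisfies

\[ v(r)^2 \approx \frac{G\,M(r)}{r}. \]

If the visible stars and gas were all there was, v(r) should fall off at large radii. Instead, measured rotation curves stay roughly flat, which means M(r) has to keep growing roughly linearly with r, well past the edge of the visible disk.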

If we want to understand this dark matter, we need to know more about its properties, and much of that can be learned from astronomy. If it turns out dark matter isn’t really matter after all, whether because it can be explained by a modification of gravity or by better calculations of gravity’s effects, it will still have important implications for fundamental physics, and astronomical evidence will still be key to finding those implications.

2. Dark Energy (/Cosmological Constant/Inflation/…):

The universe is expanding, and its expansion appears to be accelerating. It also seems more smooth and uniform than expected, suggesting that it had a period of much greater acceleration early on. Both of these suggest some extra quantity: a changing acceleration, a “dark energy”, the sort of thing that can often be explained by a new scalar field like the Higgs.
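As a rough sketch of why acceleration calls for something new (this is the standard textbook argument, not anything tied to a particular dark energy model): in general relativity, the expansion of a universe with energy density ρ and pressure p obeys

\[ \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p), \]

so ordinary matter and radiation, with non-negative pressure, can only slow the expansion down. Accelerated expansion requires something with strongly negative pressure, p < -ρ/3. A cosmological constant, with p = -ρ, is the simplest candidate, and a slowly rolling scalar field behaves in much the same way.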

Again, the specifics matter: how (and perhaps whether) the universe is expanding now, and what kinds of early expansion (if any) the shape of the universe suggests, will almost certainly have implications for fundamental physics.

3. Limits on stable stuff:

Let’s say you have a new proposal for particle physics. You’ve predicted a new particle, but it can’t interact with anything else, or interacts so weakly we’d never detect it. If your new particle is stable, then you can still say something about it, because its mass would have an effect on the early universe. Too many such particles, and their mass would throw off cosmologists’ models, ruling your proposal out.
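As one crude example of the kind of limit this gives (a back-of-envelope estimate with round numbers I’ve filled in, not a bound from any particular model): suppose a stable particle of mass m were about as abundant today as the photons of the cosmic microwave background, roughly 400 per cubic centimeter. Requiring that it not exceed the critical density of the universe, around 5 × 10⁻⁶ GeV per cubic centimeter, gives

\[ m \;\lesssim\; \frac{\rho_{\rm crit}}{n_\gamma} \;\sim\; \frac{5\times 10^{-6}\ \mathrm{GeV/cm^3}}{400\ \mathrm{cm^{-3}}} \;\sim\; 10\ \mathrm{eV}. \]

Real bounds depend on how (or whether) the particle was ever in thermal equilibrium, but this is the flavor of the argument: abundance times mass can’t exceed what cosmology allows.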

Alternatively, you might predict something that could be detected, but hasn’t, like a magnetic monopole. Then cosmologists can tell you how many such particles would have been produced in the early universe, and thus how likely we would be to detect them today. If you predict too many particles and we don’t see them, then that becomes evidence against your proposal.

4. “Cosmological Collider Physics”:

A few years back, Nima Arkani-Hamed and Juan Maldacena suggested that the early universe could be viewed as an extremely high energy particle collider. While this collider performed only one experiment, the results from that experiment are spread across the sky, and observed patterns in the early universe should tell us something about the particles produced by the cosmic collider.
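Roughly, the signature Arkani-Hamed and Maldacena point to (see their paper, “Cosmological Collider Physics”, for the precise statement) is this: a particle of mass m present during inflation leaves an imprint on correlations of the primordial fluctuations. In the “squeezed” limit, where one wavelength is much longer than the others, the three-point correlation picks up a piece that oscillates in the ratio of scales,

\[ \left(\frac{k_{\rm long}}{k_{\rm short}}\right)^{3/2} \cos\!\left(\mu \ln\frac{k_{\rm long}}{k_{\rm short}} + \phi\right), \qquad \mu = \sqrt{\frac{m^2}{H^2} - \frac{9}{4}}, \]

where H is the Hubble rate during inflation. In principle, then, the masses (and spins) of very heavy particles are written into the pattern of structure on the sky.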

People are still teasing out the implications of this idea, but it looks promising, and could mean we have a lot more to learn from examining the structure of the universe.

5. Big Weird Space Stuff:

If you suspect we live in a multiverse, you might want to look for signs of other universes brushing up against our own. If your model of the early universe predicts vast cosmic strings, maybe a gravitational wave detector like LIGO will be able to see them.

6. Unexpected weirdness:

In all likelihood, nothing visibly “quantum” happens at the event horizons of astrophysical black holes. If you think there’s something to see though, the Event Horizon Telescope might be able to see it. There’s a grab bag of other predictions like this: situations where we probably won’t see anything, but where at least one person thinks there’s a question worth asking.

 

I’ve probably left something out here, but this should give you a general idea. There is a lot that fundamental physics can learn from astronomy, from the overall structure and origins of the universe to unexplained phenomena like dark matter. But not everything in astronomy has these sorts of implications: for the most part, astronomy is interesting not because it tells us something about the fundamental laws of nature, but because it tells us how the vast space above us actually happens to work.

Visiting LBNL

I’ve been traveling this week, giving a talk at Lawrence Berkeley National Laboratory, so this will be a short post.

In my experience, most non-scientists don’t know about the national labs. In the US, the majority of scientists work for universities, but a substantial number work at one of the seventeen national labs overseen by the Department of Energy. It’s a good gig, if you can get it: no teaching duties, and a fair amount of freedom in what you research.

Each lab has its own focus, and its own culture. In the past I’ve spent a lot of time at SLAC, which runs a particle accelerator near Stanford (among other things). Visiting LBNL, I was amused by some of the differences. At SLAC, the guest rooms have ads for Stanford-branded bed covers. LBNL, meanwhile, brags about its beeswax-based toiletries in recyclable cardboard bottles. SLAC is flat, spread out, and fairly easy to navigate. LBNL is a maze of buildings arranged in tight terraces on a steep hill.

[Image: an illustration of Rivendell]

I forgot to take a picture, but someone appears to have drawn one.

While the differences were amusing, physicists are physicists everywhere. It was nice to share my work with people who mostly hadn’t heard about it before, and to get an impression of what they were working on.

Valentine’s Day Physics Poem 2017

It’s that time of year again! Valentine’s Day was this week, so to continue this blog’s tradition it’s time for me to post one of my physics poems. I wrote this back before I fully understood quantum field theory, so you’ll have to excuse any inaccuracies in the metaphor (at least on the physics side 😉 ).

 

Perturbation Theory II – Going in Loops

 

In order to interact, two particles must collide.

But a particle is a small thing, moving in its own circles, covering little space in its lonely life.

So we will never interact.

 

But particles emit bosons,

Tiny messengers of force,

Tendrils of interaction.

When these find us,

As they sometimes do,

We can interact.

 

But a boson is a small thing, moving in its own circles, covering little space in its lonely life.

So we will never interact.

 

But each boson has its own retinue,

Particles and their bosons in turn,

Spawned from its self-energy, uncertainty in its own nature,

Each, unobserved, with infinite possibilities.

 

And to compensate for these infinities

The charged nature of our naked selves

Must in turn be infinitely repressed.

 

So perhaps interaction would still be understandable

For those with simple repressions,

Matching constraints.

 

But we are not such people.

Complicated beings, we spin and twirl.

We hide our charge behind an infinity of possible terms,

So we can never know

If we will interact.

 

But perhaps we are not simply isolated points.

Perhaps we have extension,

Dimension,

Reach, beyond the confines of zero-dimensional selves.

And with that reach

Perhaps we can understand.

Perhaps

We can interact.

Boltzmann Brains, Evil Demons, and Why It’s Occasionally a Good Idea to Listen to Philosophers

There’s been a bit of a buzz recently about a paper Sean Carroll posted to the arXiv, “Why Boltzmann Brains Are Bad”. The argument in the paper isn’t new: it’s something Carroll has been arguing for a long time, and the arXiv post came about just because he had been invited to contribute a piece to a book on Current Controversies in Philosophy of Science.

(By the way: in our field, invited papers and conference proceedings are almost always reviews of old work, not new results. If you see something on arXiv and want to know whether it’s actually new work, the “Comments:” section will almost always mention this.)

While the argument isn’t new, it is getting new attention. And since I don’t think I’ve said much about my objections to it, now seems like a good time to do so.

Carroll’s argument is based on theoretical beings called Boltzmann brains. The idea is that if you wait a very very long time in a sufficiently random (“high-entropy”) universe, the matter in that universe will arrange itself in pretty much every imaginable way, if only for a moment. In particular, it will eventually form a brain, or enough of a brain to have a conscious experience. Wait long enough, and you can find a momentary brain having any experience you want, with any (fake) memories you want. Long enough, and you can find a brain having the same experience you are having right now.
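The statistical mechanics behind this, roughly (a textbook estimate, not anything from Carroll’s paper): the probability of a thermal fluctuation that lowers the entropy of a system by ΔS is exponentially suppressed,

\[ P \;\sim\; e^{-\Delta S / k_B}, \]

so a fluctuation that assembles a brain is absurdly unlikely at any given moment. But “absurdly unlikely” multiplied by an unlimited amount of time still gives infinitely many occurrences, which is why the argument only has teeth in a universe that lasts, in the relevant sense, forever.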

So, Carroll asks, how do you know you aren’t a Boltzmann brain? If the universe exists for long enough, most of the beings having your current experiences would be Boltzmann brains, not real humans. But if you really are a Boltzmann brain, then you can’t know anything about the universe at all: everything you think of as a memory is just a random fluctuation with no connection to the real world.

Carroll calls this sort of situation “cognitively unstable”. If you reason scientifically that the universe must be full of Boltzmann brains, then you can’t rule out that you could be a Boltzmann brain, and thus you shouldn’t accept your original reasoning.

The only way out, according to Carroll, is if we live in a universe that will never contain Boltzmann brains, for example one that won’t exist in its current form long enough to create them. So from a general concern about cognitive instability, Carroll argues for specific physics. And if that seems odd…well, it is.

For the purpose of this post, I’m going to take for granted the physics case: that a sufficiently old and random universe would indeed produce Boltzmann brains. That’s far from uncontroversial, and if you’re interested in that side of the argument (and have plenty of patience for tangents and Czech poop jokes), Lubos Motl posted about it recently.

Instead, I’d like to focus on the philosophical side of the argument.

Let’s start with intro philosophy, and talk about Descartes.

Descartes wanted to start philosophy from scratch by questioning everything he thought he knew. In one of his arguments, he asks the reader to imagine an evil demon.

[Image: the demon Graz’zt on his throne]

Probably Graz’zt. It’s usually Graz’zt.

Descartes imagines this evil demon exercising all its power to deceive. Perhaps it could confound your senses with illusions, or modify your memories. If such a demon existed, there would be no way to know if anything you believed or reasoned about the world was correct. So, Descartes asked, how do you know you’re not being deceived by an evil demon right now?

Amusingly, like Carroll, Descartes went on to use this uncertainty to argue for specific proposals in physics: in Descartes’ case, everything from the existence of a benevolent god to the idea that gravity was caused by a vortex of fluid around the sun.

Descartes wasn’t the last to propose this kind of uncertainty, and philosophers have asked more sophisticated questions over the years challenging the idea that it makes sense to reason from the past about the future at all.

Carroll is certainly aware of all of this. But I suspect he doesn’t quite appreciate the current opinion philosophers have on these sorts of puzzles.

The impression I’ve gotten from philosophers is that they don’t take this kind of “cognitive instability” very seriously anymore. There are specialists who still work on it, and it’s still of historical interest. But the majority of philosophers have moved on.

How did they move on? How have they dismissed these kinds of arguments?

That varies. Philosophers don’t tend to have the kind of consensus that physicists usually do.

Some reject them on pragmatic grounds: science works, even if we can’t “justify” it. Some use an argument similar to Carroll’s, but take it one step back, arguing that we shouldn’t worry about being deceived by an evil demon or being a Boltzmann brain because those worries are themselves cognitively unstable. Some bite the bullet, concede that reasoning can’t be justified, and then just ignore the problem and go on with their lives.

The common trait of all of these rejections, though? They don’t rely on physics.

Philosophers don’t argue “evil demons are impossible, therefore we can be sure we’re not deceived by evil demons”. They don’t argue “dreams are never completely realistic, so we can’t just be dreaming right now”.

And they certainly don’t try to argue the reverse: that consistency means there can never be evil demons, or never be realistic dreams.

I was on the debate team in high school. One popular tactic was called the “non-unique”. If your opponent argued that your plan had some negative consequences, you could argue that those consequences would happen regardless of whether you got to enact your plan or not: that the consequences were non-unique.

At this point, philosophers understand that cognitive instability and doubt are “non-unique”. No matter the physics, no matter how the world looks, it’s still possible to argue that reasoning isn’t justified, that even the logic we used to doubt the world in the first place could be flawed.

Carroll’s claim to me seems non-unique. Yes, in a universe that exists for a long time you could be a Boltzmann brain. But even if you don’t live in such a universe, you could still be a brain in a jar or a simulation. You could still be deceived by an “evil demon”.

And so regardless, you need the philosophers. Regardless, you need some argument that reasoning works, that you can ignore doubt. And once you’re happy with that argument, you don’t have to worry about Boltzmann brains.

Thoughts from the Winter School

There are two things I’d like to talk about this week.

First, as promised, I’ll talk about what I worked on at the PSI Winter School.

Freddy Cachazo and I study what are called scattering amplitudes. At first glance, these encode the probabilities that two subatomic particles scatter off each other, relevant for experiments like the Large Hadron Collider. In practice, though, they can be used to calculate much more.

For example, let’s say you have two black holes circling each other, like the ones LIGO detected. Zoom out far enough, and you can think of each one as a particle. The two particle-black holes exchange gravitons, and those exchanges give rise to the force of gravity between them.

[Image: LIGO black hole merger]

In the end, it’s all just particle physics.

 

Based on that, we can use our favorite scattering amplitudes to make predictions for gravitational wave telescopes like LIGO.

There’s a bit of weirdness to this story, though, because these amplitudes don’t line up with predictions in quite the way we’re used to. The way we calculate amplitudes involves drawing diagrams, and those diagrams have loops. Normally, each “loop” makes the amplitude more quantum-mechanical. Only the diagrams with no loops (“tree diagrams”) come from classical physics alone.

(Here “classical physics” just means “not quantum”: I’m calling general relativity “classical”.)

For this problem, we only care about classical physics: LIGO isn’t sensitive enough to see quantum effects. The weird thing is, despite that, we still need loops.

(Why? This is a story I haven’t figured out how to tell in a non-technical way. The technical explanation has to do with the fact that we’re calculating a potential, not an amplitude, so there’s a Fourier transformation, and keeping track of the dimensions entails tossing around some factors of Planck’s constant. But I feel like this still isn’t quite the full story.)
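For readers who want a little more of the technical flavor, here’s a sketch of the sort of relation involved (the standard textbook form, in my own notation, not our specific calculation): the potential between two heavy masses comes from a Fourier transform of the scattering amplitude with respect to the momentum transferred between them,

\[ V(r) \;\sim\; \frac{1}{4 m_1 m_2} \int \frac{d^3 q}{(2\pi)^3}\; e^{i\,\mathbf{q}\cdot\mathbf{r}/\hbar}\; \mathcal{M}(\mathbf{q}). \]

At tree level this reproduces the Newtonian potential. The loop diagrams then supply corrections suppressed by further powers of Gm/(rc²), which are classical corrections despite coming from “loops”: the factors of ħ from the loop integrals end up cancelling against factors of ħ hidden in the couplings and in the Fourier transform.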

So if we want to make predictions for LIGO, we want to compute amplitudes with loops. And as amplitudeologists, we should be pretty good at that.

As it turns out, plenty of other people have already had that idea, but there’s still room for improvement.

Our time with the students at the Winter School was limited, so our goal was fairly modest. We wanted to understand those other people’s calculations, and perhaps to think about them in a slightly cleaner way. In particular, we wanted to understand why “loops” are really necessary, and whether there was some way of understanding what the “loops” were doing in a more purely classical picture.

At this point, we feel like we’ve got the beginning of an idea of what’s going on. Time will tell whether it works out, and I’ll update you guys when we have a more presentable picture.


 

Unfortunately, physics wasn’t the only thing I was thinking about last week, which brings me to my other topic.

This blog has a fairly strong policy against talking politics. This is for several reasons. Partly, it’s because politics simply isn’t my area of expertise. Partly, it’s because talking politics tends to lead to long arguments in which nobody manages to learn anything. Despite this, I’m about to talk politics.

Last week, citizens of Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen were barred from entering the US. This included not only new visa applicants, but also those who already had visas or green cards. The latter group includes long-term residents of the US, many of whom were detained in airports and threatened with deportation when their flights arrived shortly after the ban was announced. Among them was the president of the Graduate Student Organization at my former grad school.

A federal judge has blocked parts of the order, and the Department of Homeland Security has announced that there will be case-by-case exceptions. Still, plenty of people are stuck: either abroad if they didn’t get in in time, or in the US, afraid that if they leave they won’t be able to return.

Politics isn’t in my area of expertise. But…

I travel for work pretty often. I know how terrifying and arbitrary border enforcement can be. I know how it feels to risk thousands of dollars and months of planning because some consulate or border official is having a bad day.

I also know how essential travel is to doing science. When there’s only one expert in the world who does the sort of work you need, you can’t just find a local substitute.

And so for this, I don’t need to be an expert in politics. I don’t need a detailed case about the risks of terrorism. I already know what I need to, and I know that this is cruel.

And so I stand in solidarity with the people who were trapped in airports, and with those still trapped, whether abroad or inside the US. You have been treated cruelly, and you shouldn’t have been. Hopefully, that sort of message can transcend politics.

 

One final thing: I’m going to be a massive hypocrite and continue to ban political comments on this blog. If you want to talk to me about any of this (and you think one or both of us might actually learn something from the exchange) please contact me in private.

PSI Winter School 2017

It’s that time of year again! Perimeter Scholars International, Perimeter’s Master’s program in theoretical physics, is holding its Winter School up in Ontario’s copious backwoods.

[Photo from the Winter School]

Ominous antlered snowmen included

Like last year, the students are spending mornings and evenings doing research supervised by PI grad students, postdocs, and faculty, and the afternoons on a variety of winter activities, including skiing and snowshoeing.

Last year, my group worked on the “POPE”, a proposal by Basso, Sever, and Vieira, and we ended up getting a paper out of it. This year, I’ve teamed up with Freddy Cachazo on a gravity-related project. We’ve got a group of enthusiastic students and are making decent progress; I’ll have more to say about it next week.

Digging up Variations

The best parts of physics research are when I get a chance to push out into the unknown, doing calculations no-one has done before. Sometimes, though, research is more…archeological.

[Image: archaeological excavation at the Loropéni ruins, Burkina Faso]

Pictured: not what I signed up for

Recently, I’ve been digging through a tangle of papers, each of which calculates roughly the same thing in a slightly different way. Like any good archeologist, I need to figure out not just what the authors of these papers were doing, but also why.

(As a physicist, why do I care about “why”? In this case, it’s because I want to know which of the authors’ choices are worth building on. If I can figure out why they made the choices they did, I can decide whether I share their motivations, and thus which aspects of their calculations are useful for mine.)

My first guess at “why” was a deeply cynical one. Why would someone publish slight variations on an old calculation? To get more publications!

This is a real problem in science. In certain countries in particular, promotions and tenure are based not on honestly assessing someone’s work, but on quick and dirty metrics like how many papers they’ve published. This motivates scientists to do the smallest amount possible in order to get a paper out.

That wasn’t what was happening in these papers, though. None of the authors lived in those kinds of countries, and most were pretty well established people: not the sort who worry about keeping up with publications.

So I put aside my cynical first guess, and actually looked at the papers. Doing that, I found a more optimistic explanation.

These authors were in the process of building research programs. Each had their own long-term goal, a set of concepts and methods they were building towards. And each stopped along the way, to do another variation on this well-trod calculation. They weren’t doing this just because they needed a paper, or just because they could. They were trying to sift out insights, to debug their nascent research program in a well-understood case.

Thinking about it this way helped untwist the tangle of papers. The confusion of different choices suddenly made sense, as the result of different programs with different goals. And in turn, understanding which goals contributed to which papers helped me sort out which goals I shared, and which ideas would turn out to be helpful.

Would it have been less confusing if some of these people had sat on their calculations, and not published? Maybe at first. But in the end, the variations help, giving me a clearer understanding of the whole.