
Pop Goes the Universe and Other Cosmic Microwave Background Games

(With apologies to whoever came up with this “book”.)

Back in February, Ijjas, Steinhardt, and Loeb wrote an article for Scientific American titled “Pop Goes the Universe” criticizing cosmic inflation, the proposal that the universe underwent a period of rapid expansion early in its life, smoothing it out to achieve the (mostly) uniform universe we see today. Recently, Scientific American published a response by Guth, Kaiser, Linde, Nomura, and 29 co-signers. This was followed by a counter-response, which is the usual number of steps for this sort of thing before it dissipates harmlessly into the blogosphere.

In general, string theory, supersymmetry, and inflation tend to be criticized in very similar ways. Each gets accused of being unverifiable, able to be tuned to match any possible experimental result. Each has been claimed to be unfairly dominant, its position as “default answer” more due to the bandwagon effect than the idea’s merits. All three tend to get discussed in association with the multiverse, and blamed for dooming physics as a result. And all are frequently defended with one refrain: “If you have a better idea, what is it?”

It’s probably tempting (on both sides) to view this as just another example of that argument. In reality, though, string theory, supersymmetry, and inflation are all in very different situations. The details matter. And I worry that in this case both sides are too ready to assume the other is just making the “standard argument”, and end up talking past each other.

When people say that string theory makes no predictions, they’re correct in a sense, but off topic: the majority of string theorists aren’t making the sort of claims that require successful predictions. When people say that inflation makes no predictions, if you assume they mean the same thing that people mean when they accuse string theory of making no predictions, then they’re flat-out wrong. Unlike string theorists, most people who work on inflation care a lot about experiment. They write papers filled with predictions, consequences for this or that model if this or that telescope sees something in the near future.

I don’t think Ijjas, Steinhardt, and Loeb were making that kind of argument.

When people say that supersymmetry makes no predictions, there’s some confusion of scope. (Low-energy) supersymmetry isn’t one specific proposal that needs defending on its own. It’s a class of different models, each with its own predictions. Given a specific proposal, one can see if it’s been ruled out by experiment, and predict what future experiments might say about it. Ruling out one model doesn’t rule out supersymmetry as a whole, but it doesn’t need to, because any given researcher isn’t arguing for supersymmetry as a whole: they’re arguing for their particular setup. The right “scope” is between specific supersymmetric models and specific non-supersymmetric models, not both as general principles.

Guth, Kaiser, Linde, and Nomura’s response follows similar lines in defending inflation. They point out that the wide variety of inflation models is subject to being ruled out in the face of observation, and compare the situation to the construction of the Standard Model in particle physics, with many possible parameters under the overall framework of Quantum Field Theory.

Ijjas, Steinhardt, and Loeb’s article certainly looked like it was making this sort of mistake. But as they clarify in the FAQ of their counter-response, they’ve got a more serious objection. They’re arguing that, unlike in the case of supersymmetry or the Standard Model, specific inflation models do not lead to specific predictions. They’re arguing that, because inflation typically leads to a multiverse, any specific model will in fact lead to a wide variety of possible observations. In effect, they’re arguing that the multitude of people busily making predictions based on inflationary models are missing a step in their calculations, underestimating their errors by a huge margin.

This is where I really regret that these arguments usually end after three steps (article, response, counter-response). Here Ijjas, Steinhardt, and Loeb are making what is essentially a technical claim, one that Guth, Kaiser, Linde, and Nomura could presumably respond to with a technical response, after which the rest of us would actually learn something. As-is, I certainly don’t have the background in inflation to know whether or not this point makes sense, and I’d love to hear from someone who does.

One aspect of this exchange that baffled me was the “accusation” that Ijjas, Steinhardt, and Loeb were just promoting their own work on bouncing cosmologies. (I put “accusation” in quotes because while Ijjas, Steinhardt, and Loeb seem to treat it as if it were an accusation, Guth, Kaiser, Linde, and Nomura don’t obviously mean it as one.)

“Bouncing cosmology” is Ijjas, Steinhardt, and Loeb’s answer to the standard “If you have a better idea, what is it?” response. It wasn’t the focus of their article, but while they seem to think this speaks well of them (hence their treatment of “promoting their own work” as if it were an accusation), I don’t. I read a lot of Scientific American growing up, and the best articles focused on explaining a positive vision: some cool new idea, mainstream or not, that could capture the public’s interest. That kind of article could still have included criticism of inflation; indeed, you’d want it in there to justify the use of a bouncing cosmology. But by going beyond criticism, it would have avoided falling into the standard back-and-forth these arguments tend toward, and maybe we would actually have learned something from the exchange.

What Space Can Tell Us about Fundamental Physics

Back when LIGO announced its detection of gravitational waves, there was one question people kept asking me: “what does this say about quantum gravity?”

The answer, each time, was “nothing”. LIGO’s success told us nothing about quantum gravity, and very likely LIGO will never tell us anything about quantum gravity.

The sheer volume of questions made me think, though. Astronomy, astrophysics, and cosmology fascinate people. They capture the public’s imagination in a way that makes them expect breakthroughs about fundamental questions. Especially now, with the LHC so far seeing nothing new since the Higgs, people are turning to space for answers.

Is that a fair expectation? Well, yes and no.

Most astrophysicists aren’t concerned with finding new fundamental laws of nature. They’re interested in big systems like stars and galaxies, where we know most of the basic rules but can’t possibly calculate all their consequences. Like most physicists, they’re doing the vital work of “physics of decimals”.

At the same time, there’s a decent chunk of astrophysics and cosmology that does matter for fundamental physics. Just not all of it. Here are some of the key areas where space has something important to say about the fundamental rules that govern our world:


1. Dark Matter:

Galaxies rotate faster than the pull of their visible stars alone can explain. Clusters of galaxies bend light that passes by, and do so more than their visible mass would suggest. And when scientists try to model the evolution of the universe, from early images to its current form, the models require an additional piece: extra matter that cannot interact with light. All of this suggests that there is some extra “dark” matter in the universe, not described by our standard model of particle physics.

If we want to understand this dark matter, we need to know more about its properties, and much of that can be learned from astronomy. If it turns out dark matter isn’t really matter after all, if it can be explained by a modification of gravity or better calculations of gravity’s effects, then it still will have important implications for fundamental physics, and astronomical evidence will still be key to finding those implications.
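
To make the rotation-curve logic concrete, here’s a toy sketch (my own illustration, not from any of the work discussed here; the visible mass and radii are made-up, roughly Milky-Way-sized numbers). Newtonian gravity predicts a circular velocity v = sqrt(G M(<r)/r), which should fall off at large radii if only the visible mass is enclosed, while observed curves stay roughly flat:

```python
import numpy as np

# Toy comparison, with assumptions of my own for illustration:
# take a Milky-Way-ish visible mass and pretend it all sits well inside
# the sampled radii, so Newton predicts v = sqrt(G * M_visible / r).
G = 4.30e-6       # Newton's constant, in kpc * (km/s)^2 / M_sun
M_VISIBLE = 1e11  # assumed visible mass in solar masses (illustrative)

radii = np.array([5.0, 10.0, 20.0, 40.0])     # galactocentric radii, kpc
v_predicted = np.sqrt(G * M_VISIBLE / radii)  # km/s, visible mass only

for r, v in zip(radii, v_predicted):
    print(f"r = {r:4.0f} kpc: predicted v = {v:3.0f} km/s")

# The prediction falls like 1/sqrt(r), but measured rotation curves stay
# roughly flat out to large radii, so the enclosed mass M(<r) must keep
# growing with r: the inference behind "dark matter".
```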

2. Dark Energy (/Cosmological Constant/Inflation/…):

The universe is expanding, and its expansion appears to be accelerating. It also seems more smooth and uniform than expected, suggesting that it had a period of much greater acceleration early on. Both of these suggest some extra quantity: a changing acceleration, a “dark energy”, the sort of thing that can often be explained by a new scalar field like the Higgs.

Again, the specifics matter: how (and perhaps whether) the universe is expanding now, and what kinds of early expansion (if any) the shape of the universe suggests, will almost certainly have implications for fundamental physics.
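
For reference, the textbook statement of why acceleration calls for a new ingredient (standard cosmology, nothing specific to this post) is the second Friedmann equation:

```latex
% Second Friedmann equation for the scale factor a(t) (standard result):
\[
  \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,\bigl(\rho + 3p\bigr)
\]
% Ordinary and dark matter (pressure p close to 0) can only decelerate
% the expansion. Acceleration, \ddot a > 0, requires some component with
% p < -\rho/3, i.e. equation of state w = p/\rho < -1/3. A cosmological
% constant has w = -1, and a slowly rolling scalar field can mimic that,
% which is why a new Higgs-like field is a natural candidate.
```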

3. Limits on stable stuff:

Let’s say you have a new proposal for particle physics. You’ve predicted a new particle, but it can’t interact with anything else, or interacts so weakly we’d never detect it. If your new particle is stable, then you can still say something about it, because its mass would have an effect on the early universe. Too many such particles would throw off cosmologists’ models, ruling your proposal out.

Alternatively, you might predict something that could be detected, but hasn’t, like a magnetic monopole. Then cosmologists can tell you how many such particles would have been produced in the early universe, and thus how likely we would be to detect them today. If you predict too many particles and we don’t see them, then that becomes evidence against your proposal.
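
Here’s a back-of-envelope version of that kind of limit (a sketch with round numbers of my own, not anyone’s actual analysis): a stable, cold relic of mass m and number density n contributes energy density ρ = nm today, which can’t exceed the observed dark-matter budget.

```python
# Back-of-envelope bound, with assumed round numbers (mine): a stable,
# cold relic contributes energy density rho = n * m today, and that must
# not exceed the observed dark-matter density.
RHO_DM_EV_PER_M3 = 1.3e9  # approximate dark-matter density, eV / m^3

for mass_ev in (1e9, 1e12, 1e19):  # assumed masses: ~1 GeV, ~1 TeV, heavier
    n_max = RHO_DM_EV_PER_M3 / mass_ev  # largest allowed number density
    print(f"m = {mass_ev:.0e} eV: n must stay below {n_max:.1e} per m^3")

# The heavier the hypothetical particle, the fewer the early universe could
# have produced without spoiling cosmologists' models -- which is exactly
# the kind of limit described above.
```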

4. “Cosmological Collider Physics”:

A few years back, Nima Arkani-Hamed and Juan Maldacena suggested that the early universe could be viewed as an extremely high energy particle collider. While this collider performed only one experiment, the results from that experiment are spread across the sky, and observed patterns in the early universe should tell us something about the particles produced by the cosmic collider.

People are still teasing out the implications of this idea, but it looks promising, and could mean we have a lot more to learn from examining the structure of the universe.

5. Big Weird Space Stuff:

If you suspect we live in a multiverse, you might want to look for signs of other universes brushing up against our own. If your model of the early universe predicts vast cosmic strings, maybe a gravitational wave detector like LIGO will be able to see them.

6. Unexpected weirdness:

In all likelihood, nothing visibly “quantum” happens at the event horizons of astrophysical black holes. If you think there’s something to see though, the Event Horizon Telescope might be able to see it. There’s a grab bag of other predictions like this: situations where we probably won’t see anything, but where at least one person thinks there’s a question worth asking.


I’ve probably left something out here, but this should give you a general idea. There is a lot that fundamental physics can learn from astronomy, from the overall structure and origins of the universe to unexplained phenomena like dark matter. But not everything in astronomy has these sorts of implications: for the most part, astronomy is interesting not because it tells us something about the fundamental laws of nature, but because it tells us how the vast space above us actually happens to work.

Boltzmann Brains, Evil Demons, and Why It’s Occasionally a Good Idea to Listen to Philosophers

There’s been a bit of a buzz recently about a paper Sean Carroll posted to the arXiv, “Why Boltzmann Brains Are Bad”. The argument in the paper isn’t new, it’s something Carroll has been arguing for a long time, and the arXiv post was just because he had been invited to contribute a piece to a book on Current Controversies in Philosophy of Science.

(By the way: in our field, invited papers and conference proceedings are almost always reviews of old work, not new results. If you see something on arXiv and want to know whether it’s actually new work, the “Comments:” section will almost always mention this.)

While the argument isn’t new, it is getting new attention. And since I don’t think I’ve said much about my objections to it, now seems like a good time to do so.

Carroll’s argument is based on theoretical beings called Boltzmann brains. The idea is that if you wait a very very long time in a sufficiently random (“high-entropy”) universe, the matter in that universe will arrange itself in pretty much every imaginable way, if only for a moment. In particular, it will eventually form a brain, or enough of a brain to have a conscious experience. Wait long enough, and you can find a momentary brain having any experience you want, with any (fake) memories you want. Long enough, and you can find a brain having the same experience you are having right now.

So, Carroll asks, how do you know you aren’t a Boltzmann brain? If the universe exists for long enough, most of the beings having your current experiences would be Boltzmann brains, not real humans. But if you really are a Boltzmann brain, then you can’t know anything about the universe at all: everything you think you remember is just a random fluctuation, with no connection to the real world.
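
That “most” is a simple counting claim, and a schematic sketch may help (the numbers here are mine and purely illustrative; real estimates involve far more extreme rates and timescales). The expected number of fluctuated brains grows with the lifetime of the universe, so it eventually outnumbers any fixed population of ordinary observers:

```python
# Toy counting, with invented numbers (real estimates involve rates like
# exp(-10^50) and timescales far beyond these; this only shows the logic).
FLUCTUATION_RATE = 1e-60   # assumed Boltzmann brains per year, illustrative
ORDINARY_OBSERVERS = 1e11  # assumed total ordinary observers, ever

for lifetime_years in (1e14, 1e70, 1e80):
    expected_brains = FLUCTUATION_RATE * lifetime_years
    winner = ("Boltzmann brains" if expected_brains > ORDINARY_OBSERVERS
              else "ordinary observers")
    print(f"lifetime {lifetime_years:.0e} yr: "
          f"~{expected_brains:.0e} fluctuated brains -> {winner} dominate")
```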

Carroll calls this sort of situation “cognitively unstable”. If you reason scientifically that the universe must be full of Boltzmann brains, then you can’t rule out that you could be a Boltzmann brain, and thus you shouldn’t accept your original reasoning.

The only way out, according to Carroll, is if we live in a universe that will never contain Boltzmann brains, for example one that won’t exist in its current form long enough to create them. So from a general concern about cognitive instability, Carroll argues for specific physics. And if that seems odd…well, it is.

For the purpose of this post, I’m going to take for granted the physics case: that a sufficiently old and random universe would indeed produce Boltzmann brains. That’s far from uncontroversial, and if you’re interested in that side of the argument (and have plenty of patience for tangents and Czech poop jokes) Lubos Motl posted about it recently.

Instead, I’d like to focus on the philosophical side of the argument.

Let’s start with intro philosophy, and talk about Descartes.

Descartes wanted to start philosophy from scratch by questioning everything he thought he knew. In one of his arguments, he asks the reader to imagine an evil demon.


Probably Graz’zt. It’s usually Graz’zt.

Descartes imagines this evil demon exercising all its power to deceive. Perhaps it could confound your senses with illusions, or modify your memories. If such a demon existed, there would be no way to know if anything you believed or reasoned about the world was correct. So, Descartes asked, how do you know you’re not being deceived by an evil demon right now?

Amusingly, like Carroll, Descartes went on to use this uncertainty to argue for specific proposals in physics: in Descartes’ case, everything from the existence of a benevolent god to the idea that gravity was caused by a vortex of fluid around the sun.

Descartes wasn’t the last to propose this kind of uncertainty, and philosophers have asked more sophisticated questions over the years challenging the idea that it makes sense to reason from the past about the future at all.

Carroll is certainly aware of all of this. But I suspect he doesn’t quite appreciate the current opinion philosophers have on these sorts of puzzles.

The impression I’ve gotten from philosophers is that they don’t take this kind of “cognitive instability” very seriously anymore. There are specialists who still work on it, and it’s still of historical interest. But the majority of philosophers have moved on.

How did they move on? How have they dismissed these kinds of arguments?

That varies. Philosophers don’t tend to have the kind of consensus that physicists usually do.

Some reject them on pragmatic grounds: science works, even if we can’t “justify” it. Some use a similar argument to Carroll’s, but take it one step back, arguing that we shouldn’t worry that we could be deceived by an evil demon or be a Boltzmann brain because those worries are themselves cognitively unstable. Some bite the bullet, accepting that reasoning can’t be fully justified, then ignore the problem and go on with their lives.

The common trait of all of these rejections, though? They don’t rely on physics.

Philosophers don’t argue “evil demons are impossible, therefore we can be sure we’re not deceived by evil demons”. They don’t argue “dreams are never completely realistic, so we can’t just be dreaming right now”.

And they certainly don’t try to argue the reverse: that consistency means there can never be evil demons, or never be realistic dreams.

I was on the debate team in high school. One popular tactic was called the “non-unique”. If your opponent argued that your plan had some negative consequences, you could argue that those consequences would happen regardless of whether you got to enact your plan or not: that the consequences were non-unique.

At this point, philosophers understand that cognitive instability and doubt are “non-unique”. No matter the physics, no matter how the world looks, it’s still possible to argue that reasoning isn’t justified, that even the logic we used to doubt the world in the first place could be flawed.

Carroll’s claim to me seems non-unique. Yes, in a universe that exists for a long time you could be a Boltzmann brain. But even if you don’t live in such a universe, you could still be a brain in a jar or a simulation. You could still be deceived by an “evil demon”.

And so regardless, you need the philosophers. Regardless, you need some argument that reasoning works, that you can ignore doubt. And once you’re happy with that argument, you don’t have to worry about Boltzmann brains.

A Response from Nielsen, Guffanti and Sarkar

I have been corresponding with Subir Sarkar, one of the authors of the paper I mentioned a few weeks ago arguing that the evidence for cosmic acceleration was much weaker than previously thought. He believes that the criticisms of Rubin and Hayden (linked to in my post) are deeply flawed. Since he and his coauthors haven’t responded publicly to Rubin and Hayden yet, they graciously let me post a summary of their objections.

Dear Matt,

This concerns the discussion on your blog of our recent paper showing that the evidence for cosmic acceleration from supernovae is only 3 sigma. Your obviously annoyed response is in fact to inflated headlines in the media about our work – our paper does just what it says on the can: “Marginal evidence for cosmic acceleration from Type Ia supernovae”. Nevertheless you make a fair assessment of the actual result in our paper and we are grateful for that.

However we feel you are not justified in going on further to state: “In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence”. If you were as expert in cosmology as you evidently are concerning amplitudes you would know that much of the reasoning you allude to is circular. There are also other instances (which we are looking into) of using statistical methods that assume the answer to shore up the ‘standard model’ of cosmology. Does it not worry you that the evidence from supernovae – which is widely believed to be compelling – turns out to be less so when examined closely? There is a danger of confirmation bias in that cosmologists making poor measurements with large systematic uncertainties nevertheless keep finding the ‘right answer’. See e.g. Croft & Dailey (http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1112.3108) who noted “… of the 28 measurements of Omega_Lambda in our sample published since 2003, only 2 are more than 1 sigma from the WMAP results. Wider use of blind analyses in cosmology could help to avoid this”. Unfortunately the situation has not improved in subsequent years.

You are of course entitled to air your personal views on your blog. But please allow us to point out that you are being unfair to us by uncritically stating in the second part of your sentence: “EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions” in which you link to the arXiv eprint by Rubin & Hayden.

These authors make a claim similar to Riess & Scolnic (https://blogs.scientificamerican.com/guest-blog/no-astronomers-haven-t-decided-dark-energy-is-nonexistent/) that we “assume that the mean properties of supernovae from each of the samples used to measure the expansion history are the same, even though they have been shown to be different and past analyses have accounted for these differences”. In fact we are using exactly the same dataset (called JLA) as Adam Riess and co. have done in their own analysis (Betoule et al, http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1401.4064). They found stronger evidence for acceleration because of using a flawed statistical method (“constrained \chi^2”). The reason why we find weaker evidence is that we use the Maximum Likelihood Estimator – it is not because of making “dodgy assumptions”. We show our results in the same \Omega_\Lambda – \Omega_m plane simply for ease of comparison with the previous result – as seen in the attached plot, the contours move to the right … and now enclose the “no acceleration” line within 3 \sigma. Our analysis is not – as Brian Schmidt tweeted – “at best unorthodox” … even if this too has been uncritically propagated on social media.

In fact the result from our (frequentist) statistical procedure has been confirmed by an independent analysis using a ‘Bayesian Hierarchical Model’ (Shariff et al, http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1510.05954). This is a more sophisticated approach because it does not adopt a Gaussian approximation as we did for the distribution of the light curve parameters (x_1 and c), however their contours are more ragged because of numerical computation limitations.

[Figure: comparison of the confidence contours in the \Omega_\Lambda – \Omega_m plane.]

Rubin & Hayden do not mention this paper (although bizarrely they ascribe to us the ‘Bayesian Hierarchical Model’). Nevertheless they find more-or-less the same result as us, namely 3.1 sigma evidence for acceleration, using the same dataset as we did (left panel of their Fig.2). They argue however that there are selection effects in this dataset – which have not already been corrected for by the JLA collaboration (which incidentally included Adam Riess, Saul Perlmutter and most other supernova experts in the world). To address this Rubin & Hayden introduce a redshift-dependent prior on the x_1 and c distributions. This increases the significance to 4.2 sigma (right panel of their Fig.2). If such a procedure is indeed valid then it does mark progress in the field, but that does not mean that these authors have “demonstrated errors in (our) analysis” as they state in their Abstract. Their result also raises the question: why has the significance increased so little in going from the initial 50 supernovae, which yielded 3.9 sigma evidence for acceleration (Riess et al, http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:astro-ph/9805201), to the 740 supernovae in JLA? Maybe this is news … at least to anyone interested in cosmology and fundamental physics!

Rubin & Hayden also make the usual criticism that we have ignored evidence from other observations e.g. of baryon acoustic oscillations and the cosmic microwave background. We are of course very aware of these observations but as we say in the paper the interpretation of such data is very model-dependent. For example dark energy has no direct influence on the cosmic microwave background. What is deduced from the data is the spatial curvature (adopting the value of the locally measured Hubble expansion rate H_0) and the fractional matter content of the universe (assuming the primordial fluctuation spectrum to be a close-to-scale-invariant power law). Dark energy is then *assumed* to make up the rest (using the sum rule: 1 = \Omega_m + \Omega_\Lambda for a spatially flat universe as suggested by the data). This need not be correct however if there are in fact other terms that should be added to this sum rule (corresponding to corrections to the Friedman equation to account e.g. for averaging over inhomogeneities or for non-ideal gas behaviour of the matter content). It is important to emphasise that there is no convincing (i.e. >5 sigma) dynamical evidence for dark energy, e.g. from the late integrated Sachs-Wolfe effect, which induces subtle correlations between the CMB and large-scale structure. Rubin & Hayden even claim in their Abstract (v1) that “The combined analysis of modern cosmological experiments … indicate 75 sigma evidence for positive Omega_\Lambda” – which is surely a joke! Nevertheless this is being faithfully repeated on newsgroups, presumably by those somewhat challenged in their grasp of basic statistics.

Apologies for the long post but we would like to explain that the technical criticism of our work by Rubin & Hayden and by Riess & Scolnic is rather disingenuous and it is easy to be misled if you are not an expert. You are entitled to rail against the standards of science journalism but please do not taint us by association.

As a last comment, surely we all want to make progress in cosmology but this will be hard if cosmologists are so keen to cling on to their ‘standard model’ instead of subjecting it to critical tests (as particle physicists continually do to their Standard Model). Moreover the fundamental assumptions of the cosmological model (homogeneity, ideal fluids) have not been tested rigorously (unlike the Standard Model which has been tested at the level of quantum corrections). This is all the more important in cosmology because there is simply no physical explanation for why \Lambda should be of order H_0^2.

Best regards,


Jeppe Trøst Nielsen, Alberto Guffanti and Subir Sarkar


On an unrelated note, Perimeter’s PSI program is now accepting applications for 2017. It’s something I wish I had known about when I was an undergrad: for those interested in theoretical physics, it can be an enormous jump-start to your career. Here’s their blurb:

Perimeter Scholars International (PSI) is now accepting applications for Perimeter Institute for Theoretical Physics’ unique 10-month Master’s program.

Features of the program include:

  • All student costs (tuition and living) are covered, removing financial and/or geographical barriers to entry
  • Students learn from world-leading theoretical physicists – resident Perimeter researchers and visiting scientists – within the inspiring environment of Perimeter Institute
  • Collaboration is valued over competition; deep understanding and creativity are valued over rote learning and examination
  • PSI recruits worldwide: 85 percent of students come from outside of Canada
  • PSI takes calculated risks, seeking extraordinary talent who may have non-traditional academic backgrounds but have demonstrated exceptional scientific aptitude

Apply online at http://perimeterinstitute.ca/apply.

Applications are due by February 1, 2017.

“Maybe” Isn’t News

It’s been published in several places, but you’ve probably seen this headline:

[Image: news headline claiming the universe may not be expanding at an accelerated rate.]

If you’ve been following me for a while, you know where this is going:

No, these physicists haven’t actually shown that the Universe isn’t expanding at an accelerated rate.

What they did show is that the original type of data used to discover that the universe was accelerating back in the ’90s, measurements of supernovae, doesn’t live up to the rigorous standards that we physicists use to evaluate discoveries. We typically only call something a discovery if the evidence is good enough that, in a world where the discovery wasn’t actually true, we’d only have a one in 3.5 million chance of getting the same evidence (“five sigma” evidence). In their paper, Nielsen, Guffanti, and Sarkar argue that looking at a bigger collection of supernovae leads to a hazier picture: the chance that we could get the same evidence in a universe that isn’t accelerating is closer to one in a thousand, giving “three sigma” evidence.
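
If you’re curious where those odds come from, here’s a minimal sketch, assuming the one-sided Gaussian tail convention (conventions vary between analyses):

```python
# "n sigma" translated into the chance of a fluctuation at least that big,
# using a one-sided Gaussian tail (the convention I'm assuming here).
from scipy.stats import norm

for n_sigma in (5, 3):
    p = norm.sf(n_sigma)  # survival function: P(X > n_sigma)
    print(f"{n_sigma} sigma: p = {p:.1e}, about 1 in {1 / p:,.0f}")

# 5 sigma comes out to about 1 in 3.5 million; 3 sigma to about 1 in 700,
# the rough "one in a thousand" ballpark quoted above.
```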

This might sound like statistical quibbling: one in a thousand is still pretty unlikely, after all. But a one in a thousand chance still happens once in a thousand times, and there’s a long history of three sigma evidence turning out to just be random noise. If the discovery of the accelerating universe was new, this would be an important objection, a reason to hold back and wait for more data before announcing a discovery.

The trouble is, the discovery isn’t new. In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence.

So the objection, that one source of evidence isn’t as strong as people thought, doesn’t kill cosmic acceleration. What it is is a “maybe”, showing that there is at least room in some of the data for a non-accelerating universe.

People publish “maybes” all the time, nothing bad about that. There’s a real debate to be had about how strong the evidence is, and how much it really establishes. (And there are already voices on the other side of that debate.)

But a “maybe” isn’t news. It just isn’t.

Science journalists (and university press offices) have a habit of trying to turn “maybes” into stories. I’ve lost track of the times I’ve seen ideas that were proposed a long time ago (technicolor, MOND, SUSY) get new headlines not for new evidence or new ideas, but just because they haven’t been ruled out yet. “SUSY hasn’t been ruled out yet” is an opinion piece, perhaps a worthwhile one, but it’s no news article.

The thing is, I can understand why journalists do this. So much of science is building on these kinds of “maybes”, working towards the tipping point where a “maybe” becomes a “yes” (or a “no”). And journalists (and university press offices, and to some extent the scientists themselves) can’t just take time off and wait for something legitimately newsworthy. They’ve got pages to fill and careers to advance; they need to say something.

I post once a week. As a consequence, a meaningful fraction of my posts are garbage. I’m sure that if I posted every day, most of my posts would be garbage.

Many science news sites post multiple times a day. They’ve got multiple writers, sure, and wider coverage…but they still don’t have the luxury of skipping a “maybe” when someone hands it to them.

I don’t know if there’s a way out of this. Maybe we need a new model for science journalism, something that doesn’t try to ape the pace of the rest of the news cycle. For the moment, though, it’s publish or perish, and that means lots and lots of “maybes”.

EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions.

EDIT: The paper’s authors respond here.

Things You Don’t Know about the Power of the Dark Side

Last Wednesday, Katherine Freese gave a Public Lecture at Perimeter on the topic of Dark Matter and Dark Energy. The talk should be on Perimeter’s YouTube page by the time this post is up.

Answering twitter questions during the talk made me realize that there’s a lot the average person finds confusing about Dark Matter and Dark Energy. Freese addressed much of this pretty well in her talk, but I felt like there was room for improvement. Rather than try to tackle it myself, I decided to interview an expert on the Dark Side of the universe.


Twitter doesn’t know the power of the dark side!

Lord Vader, some people have a hard time distinguishing Dark Matter and Dark Energy. What do you have to say to them?

Fools! Light side astronomers call “dark” that which they cannot observe and cannot understand. “Fear” and “anger” are different heights of emotion, but to the Jedi they are only the path to the Dark Side. Dark Energy and Dark Matter are much the same: both distinct, both essential to the universe, and both “dark” to the telescopes of the light.

Let’s start with Dark Matter. Is it really matter?

You ask an empty question. “Matter” has been defined in many ways. When we on the Dark Side refer to Dark Matter, we merely mean to state that it behaves much like the matter you know: it is drawn to and fro by gravity, sloshing about.

It is distinct from your ordinary matter in that two of the forces of nature, the strong nuclear force and electromagnetism, do not concern it. Ordinary matter is bound together in the nuclei of atoms by the strong force, or woven into atoms and molecules by electromagnetism. This makes it subject to all manner of messy collisions.

Dark Matter, in contrast, is pure, partaking neither of nuclear nor chemical reactions. It passes through each of us with no notice. Only the weak nuclear force and gravity affect it. The latter has brought it slowly into clumps and threads through the universe, each one a vast nest for groupings of stars. Truly, Dark Matter surrounds us, penetrates us, and binds the galaxy together.

Could Dark Matter be something we’re more familiar with, like neutrinos or black holes? What about a modification of gravity?

Many wondered as much, when the study of the Dark Side was young. They were wrong.

The matter you are accustomed to composes merely a twentieth of the universe, while Dark Matter is more than a quarter. There is simply not enough of these minor contributions, neutrinos and black holes, to account for the vast darkness that surrounds the galaxy, and with each astronomer’s investigation we grow more assured.

As for modifying gravity, do you seek to modify a fundamental Force?

If so, you should be wary. Forces, by their nature, are accompanied by particles, and gravity is no exception. Take care that your tinkering does not result in a new sort of particle. If it does, you may unknowingly be walking the path of the Dark Side, for your modification may be just another form of Dark Matter.

What sort of things could Dark Matter be? Can Dark Matter decay into ordinary matter? Could there be anti-Dark Matter?

As of yet, your scientists are still baffled by the nature of Dark Matter. Still, there are limits. Since only rare events could produce it from ordinary matter, the universe’s supply of Dark Matter must be ancient, dating back to the dawn of the cosmos. In that case, it must decay only slowly, if at all. Similarly, if Dark Matter had antimatter forms then its interactions must be so weak that it has not simply annihilated with its antimatter half across the universe. So while either is possible, it may be simpler for your theorists if Dark Matter did not decay, and was its own antimatter counterpart. On the other hand, if Dark Matter did undergo such reactions, your kind may one day be able to detect it.

Of course, as a master of the Dark Side I know the true nature of Dark Matter. However, I could only impart it to a loyal apprentice…

Yeah, I think I’ll pass on that. They say you can only get a job in academia when someone dies, but unlike the Sith they don’t mean it literally.

Let’s move on to Dark Energy. What can you tell us about it?

Dark “Energy”, like Dark Matter, is named for what people on your Earth cannot comprehend. Nothing, not even Dark Energy, is “made of energy”. Dark Energy is “energy” merely because it behaves unlike matter.

Matter, even Dark Matter, is drawn together by the force of gravity. Under its yoke, the universe would slow down in its expansion and eventually collapse into a crunch, like the throat of an incompetent officer.

However, the universe is not collapsing, but accelerating, galaxies torn away from each other by a force that must compose more than two thirds of the universe. It is rather like the Yuuzhan Vong, a mysterious force from outside the galaxy that scouts persistently under- or over-estimate.

Umm, I’m pretty sure the Yuuzhan Vong don’t exist anymore, since Disney got rid of the Expanded Universe.

That perfidious Mouse!

Well folks, Vader is now on a rampage of revenge in the Disney offices, so I guess we’ll have to end the interview. Tune in next week, and until then, may the Force be with you!

Is Everything Really Astonishingly Simple?

Neil Turok gave a talk last week, entitled The Astonishing Simplicity of Everything. In it, he argued that our current understanding of physics is really quite astonishingly simple, and that recent discoveries seem to be confirming this simplicity.

For the right sort of person, this can be a very uplifting message. The audience was spellbound. But a few of my friends were pretty thoroughly annoyed, so I thought I’d dedicate a post to explaining why.

Neil’s talk built up to showing this graphic, one of the masterpieces of Perimeter’s publications department:

Looked at in this way, the laws of physics look astonishingly simple. One equation, a few terms, each handily labeled with a famous name of some (occasionally a little hazy) relevance to the symbol in question.

In a sense, the world really is that simple. There are only a few kinds of laws that govern the universe, and the concepts behind them are really, deep down, very simple concepts. Neil adroitly explained some of the concepts behind quantum mechanics in his talk (here represented by the Schrodinger, Feynman, and Planck parts of the equation), and I have a certain fondness for the Maxwell-Yang-Mills part. The other parts represent different kinds of particles, and different ways they can interact.

While there are only a few different kinds of laws, though, that doesn’t mean the existing laws are simple. That nice, elegant equation hides 25 arbitrary parameters, tucked into the Maxwell-Yang-Mills, Dirac, Kobayashi-Maskawa, and Higgs parts. It also omits the cosmological constant, which fuels the expansion of the universe. And there are problems if you try to claim that the gravity part, for example, is complete.

When Neil mentions recent discoveries, he’s referring to the LHC not seeing new supersymmetric particles, to telescopes not seeing any unusual features in the cosmic microwave background. The theories that were being tested, supersymmetry and inflation, are in many ways more complicated than the Standard Model, adding new parameters without getting rid of old ones. But I think it’s a mistake to say that if these theories are ruled out, the world is astonishingly simple. These theories are attempts to explain unlikely features of the old parameters, or unlikely features of the universe we observe. Without them, we’ve still got those unlikely, awkward, complicated bits.

Of course, Neil doesn’t think the Standard Model is all there is either, and while he’s not a fan of inflation, he does have proposals he’s worked on that explain the same observations, proposals that are also beyond the current picture. More broadly, he’s not suggesting here that the universe is just what we’ve figured out so far and no more. Rather, he’s suggesting that new proposals ought to build on the astonishing simplicity of the universe, instead of adding complexity, that we need to go back to the conceptual drawing board rather than correcting the universe with more gears and wheels.

On the one hand, that’s Perimeter’s mission statement in a nutshell. Perimeter’s independent nature means that folks here can focus on deeper conceptual modifications to the laws of physics, rather than playing with the sorts of gears and wheels that people already know how to work with.

On the other hand, a lack of new evidence doesn’t do anyone any favors. It doesn’t show the way for supersymmetry, but it doesn’t point to any of the “deep conceptual” approaches either. And so for some people, Neil’s glee at the lack of new evidence feels less like admiration for the simplicity of the cosmos and more like that one guy in a group project who sits back chuckling while everyone else fails. You can perhaps understand why some people felt resentful.