# Popularization as News, Popularization as Signpost

Lubos Motl has responded to my post from last week about the recent Caltech short, Quantum is Calling. His response is pretty much exactly what you’d expect, including the cameos by Salma Hayek and Kaley Cuoco.

The only surprise was his lack of concern for accuracy. Quantum is Calling got the conjecture it was trying to popularize almost precisely backwards. I was expecting that to bother him, at least a little.

Should it bother you?

That depends on what you think Quantum is Calling is trying to do.

Science popularization, even good science popularization, tends to get things wrong. Some of that is inevitable, a result of translating complex concepts to a wider audience.

Sometimes, though, you can’t really chalk it up to translation. Interstellar had some extremely accurate visualizations of black holes, but it also had an extremely silly love-powered tesseract. That wasn’t an attempt to convey some subtle scientific truth; it was just meant to sound cool.

And the thing is, that’s not a bad thing to do. For a certain kind of piece, sounding cool really is the point.

Imagine being an explorer. You travel out into the wilderness and find a beautiful waterfall.

How do you tell people about it?

One option is the press. The news can cover your travels, so people can stay up to date with the latest in waterfall discoveries. In general, you’d prefer this sort of thing to be fairly accurate: the goal here is to inform people, to give them a better idea of the world around them.

Alternatively, you can advertise. You put signposts up around town pointing toward the waterfall, complete with vivid pictures. Here, accuracy matters a lot less: you’re trying to get people excited, knowing that as they get closer they can get more detailed information.

In science popularization, the “news” here isn’t just news. It’s also blog posts, press releases, and public lectures. It’s the part of science popularization that’s supposed to keep people informed, and it’s one that we hope is mostly accurate, at least as far as possible.

The “signposts”, meanwhile, are things like Interstellar. Their audience is as wide as it can possibly be, and we don’t expect them to get things right. They’re meant to excite people, to get them interested in science. The expectation is that a few students will find the imagery interesting enough to go further, at which point they can learn the full story and clear up any remaining misconceptions.

Quantum is Calling is pretty clearly meant to be a signpost. The inaccuracy is one way to tell, but it should be clear just from the context. We’re talking about a piece with Hollywood stars here. The relative stardom of Zoe Saldana and Keanu Reeves doesn’t matter: the presence of any mainstream film stars whatsoever means they’re going for the broadest possible audience.

(Of course, the fact that it’s set up to look like an official tie-in to the Star Trek films doesn’t hurt matters either.)

They’re also quite explicit about their goals. The piece’s predecessor has Keanu Reeves send a message back in time, with the goal of inspiring a generation of young scientists to build a future paradise. They’re not subtle about this.

Ok, so what’s the problem? Signposts are allowed to be inaccurate, so the inaccuracy shouldn’t matter. Eventually people will climb up to the waterfall and see it for themselves, right?

What if the waterfall isn’t there?

The evidence for ER=EPR (the conjecture that Quantum is Calling is popularizing) isn’t like seeing a waterfall. It’s more like finding it via surveying. By looking at the slope of nearby terrain and following the rivers, you can get fairly confident that there should be a waterfall there, even if you can’t yet see it over the next ridge. You can then start sending scouts, laying in supplies, and getting ready for a push to the waterfall. You can alert the news, telling journalists of the magnificent waterfall you expect to find, so the public can appreciate the majesty of your achievement.

What you probably shouldn’t do is put up a sign for tourists.

As I hope I made clear in my last post, ER=EPR has some decent evidence. It hasn’t shown that it can handle “foot traffic”, though. The number of researchers working on it is still small. (For a fun but not especially rigorous exercise, try typing “ER=EPR” and “AdS/CFT” into the physics database INSPIRE.) Conjectures at this stage are frequently successful, but they often fail, and ER=EPR still has a decent chance of doing so. Tying your inspiring signpost to something that may well not be there risks sending tourists up to an empty waterfall. They won’t come down happy.

As such, I’m fine with “news-style” popularizations of ER=EPR. And I’m fine with “signposts” for conjectures that have shown they can handle some foot traffic. (A piece that sends Zoe Saldana to the holodeck to learn about holography could be fun, for example.) But making this sort of high-profile signpost for ER=EPR feels irresponsible and premature. There will be plenty of time for a Star Trek tie-in to ER=EPR once it’s clear the idea is here to stay.

# What’s in a Conjecture? An ER=EPR Example

A few weeks back, Caltech’s Institute of Quantum Information and Matter released a short film titled Quantum is Calling. It’s the second in what looks likely to become a series of pieces featuring Hollywood actors popularizing ideas in physics. The first used the game of Quantum Chess to talk about superposition and entanglement. This one, featuring Zoe Saldana, is about a conjecture by Juan Maldacena and Leonard Susskind called ER=EPR. The conjecture speculates that pairs of entangled particles (as investigated by Einstein, Podolsky, and Rosen) are in some sense secretly connected by wormholes (or Einstein-Rosen bridges).

The film is fun, but I’m not sure ER=EPR is established well enough to deserve this kind of treatment.

At this point, some of you are nodding your heads for the wrong reason. You’re thinking I’m saying this because ER=EPR is a conjecture.

I’m not saying that.

The fact of the matter is, conjectures play a very important role in theoretical physics, and “conjecture” covers a wide range. Some conjectures are supported by incredibly strong evidence, just short of mathematical proof. Others are wild speculations, “wouldn’t it be convenient if…” ER=EPR is, well…somewhere in the middle.

Most popularizers don’t spend much effort distinguishing things in this middle ground. I’d like to talk a bit about the different sorts of evidence conjectures can have, using ER=EPR as an example.

Our friendly neighborhood space octopus

The first level of evidence is motivation.

At its weakest, motivation is the “wouldn’t it be convenient if…” line of reasoning. Some conjectures never get past this point. Hawking’s chronology protection conjecture, for instance, points out that physics (and to some extent logic) has a hard time dealing with time travel, and wouldn’t it be convenient if time travel was impossible?

For ER=EPR, this kind of motivation comes from the black hole firewall paradox. Without going into it in detail, arguments suggested that the event horizons of older black holes would resemble walls of fire, incinerating anything that fell in, in contrast with Einstein’s picture in which passing the horizon has no obvious effect at the time. ER=EPR provides one way to avoid this argument, making event horizons subtle and smooth once more.

Motivation isn’t just “wouldn’t it be convenient if…” though. It can also include stronger arguments: suggestive comparisons that, while they could be coincidental, when put together draw a stronger picture.

In ER=EPR, this comes from certain similarities between the type of wormhole Maldacena and Susskind were considering, and pairs of entangled particles. Both connect two different places, but both do so in an unusually limited way. The wormholes of ER=EPR are non-traversable: you cannot travel through them. Entangled particles can’t be traveled through (as you would expect), but more generally can’t be communicated through: there are theorems to prove it. This is the kind of suggestive similarity that can begin to motivate a conjecture.

(Amusingly, the plot of the film breaks this in both directions. Keanu Reeves can neither steal your cat through a wormhole, nor send you coded messages with entangled particles.)

Nor live forever as the portrait in his attic withers away

Motivation is a good reason to investigate something, but a bad reason to believe it. Luckily, conjectures can have stronger forms of evidence. Many of the strongest conjectures are correspondences, supported by a wealth of non-trivial examples.

In science, the gold standard has always been experimental evidence. There’s a reason for that: when you do an experiment, you’re taking a risk. Doing an experiment gives reality a chance to prove you wrong. In a good experiment (a non-trivial one) the result isn’t obvious from the beginning, so that success or failure tells you something new about the universe.

In theoretical physics, there are things we can’t test with experiments, either because they’re far beyond our capabilities or because the claims are mathematical. Despite this, the overall philosophy of experiments is still relevant, especially when we’re studying a correspondence.

“Correspondence” is a word we use to refer to situations where two different theories are unexpectedly computing the same thing. Often, these are very different theories, living in different dimensions with different sorts of particles. With the right “dictionary”, though, you can translate between them, doing a calculation in one theory that matches a calculation in the other one.

Even when we can’t do non-trivial experiments, then, we can still have non-trivial examples. When the result of a calculation isn’t obvious from the beginning, showing that it matches on both sides of a correspondence takes the same sort of risk as doing an experiment, and gives the same sort of evidence.

Some of the best-supported conjectures in theoretical physics have this form. AdS/CFT is technically a conjecture: a correspondence between string theory in a hyperbola-shaped space and my favorite theory, N=4 super Yang-Mills. Despite being a conjecture, the wealth of nontrivial examples is so strong that it would be extremely surprising if it turned out to be false.

ER=EPR is also a correspondence, between entangled particles on the one hand and wormholes on the other. Does it have nontrivial examples?

Some, but not enough. Originally, it was based on one core example, an entangled state that could be cleanly matched to the simplest wormhole. Now, new examples have been added, covering wormholes with electric fields and higher spins. The full “dictionary” is still unclear, with some pairs of entangled particles being harder to describe in terms of wormholes. So while this kind of evidence is being built, it isn’t yet as solid as that for our best conjectures.

I’m fine with people popularizing this kind of conjecture. It deserves blog posts and press articles, and it’s a fine idea to have fun with. I wouldn’t be uncomfortable with the Bohemian Gravity guy doing a piece on it, for example. But for the second installment of a star-studded series like the one Caltech is doing…it’s not really there yet, and putting it there gives people the wrong idea.

I hope I’ve given you a better idea of the different types of conjectures, from the most fuzzy to those just shy of certain. I’d like to do this kind of piece more often, though in future I’ll probably stick with topics in my sub-field (where I actually know what I’m talking about 😉 ). If there’s a particular conjecture you’re curious about, ask in the comments!

# A Tale of Two Archives

When it comes to articles about theoretical physics, I have a pet peeve, one made all the more annoying by the fact that it appears even in pieces that are otherwise well written. It involves the following disclaimer:

Here’s the thing: if you’re dealing with experiments, peer review is very important. Plenty of experiments have subtle problems with their methods, enough that it’s important to have a group of experts who can check them. In experimental fields, you really shouldn’t trust things that haven’t been through a journal yet: there’s just a lot that can go wrong.

In theoretical physics, though, peer review is important for different reasons. Most papers are mathematically rigorous enough that they’re not going to be wrong per se, and most of the ways they could be wrong won’t be caught by peer review. While peer review sometimes does catch mistakes, much more often it’s about assessing the significance of a result. Peer review determines whether a result gets into a prestigious journal or a less prestigious one, which in turn matters for job and grant applications.

As such, it doesn’t really make sense for a journalist to point out that a theoretical physics paper hasn’t been peer reviewed yet. If you think it’s important enough to write an article about, then you’ve already decided it’s significant: peer review wasn’t going to tell you anything else.

We physicists post our papers to arXiv, a free-to-access paper repository, before submitting them to journals. While arXiv does have some moderation, it’s not much: pretty much anyone in the field can post whatever they want.

This leaves a lot of people confused. In that sort of system, how do we know which papers to trust?

Let’s compare to another archive: Archive of Our Own, or AO3 for short.

Unlike arXiv, AO3 hosts not physics, but fanfiction. However, like arXiv it’s quite lightly moderated and free to access. On arXiv you want papers you can trust, on AO3 you want stories you enjoy. In each case, if anyone can post, how do you find them?

The first step is filtering. AO3 and arXiv both have systems of tags and subject headings. The headings on arXiv are simpler and more heavily moderated than those on AO3, but they both serve the purpose of letting people filter out the subjects, whether scientific or fictional, that they find interesting. If you’re interested in astrophysics, try astro-ph on arXiv. If you want Harry Potter fanfiction, try the “Harry Potter – J.K. Rowling” tag on AO3.

Beyond that, it helps to pay attention to authors. When an author has written something you like, it’s worth it not only to keep up with other things they write, but to see which other authors they like and pay attention to them as well. That’s true whether the author is Juan Maldacena or your favorite source of Twilight fanfic.

Even if you follow all of this, you can’t trust every paper you find on arXiv. You also won’t enjoy everything you dig up on AO3. Either way, publication (in journals or books) won’t solve your problem: both are an additional filter, but not an infallible one. Judgement is still necessary.

This is all to say that “this article has not been peer-reviewed” can be a useful warning, but often isn’t. In theoretical physics, knowing who wrote an article and what it’s about will often tell you much more than whether or not it’s been peer-reviewed yet.

# A Response from Nielsen, Guffanti and Sarkar

I have been corresponding with Subir Sarkar, one of the authors of the paper I mentioned a few weeks ago arguing that the evidence for cosmic acceleration was much weaker than previously thought. He believes that the criticisms of Rubin and Hayden (linked to in my post) are deeply flawed. Since he and his coauthors haven’t responded publicly to Rubin and Hayden yet, they graciously let me post a summary of their objections.

Dear Matt,

This concerns the discussion on your blog of our recent paper showing that the evidence for cosmic acceleration from supernovae is only 3 sigma. Your obviously annoyed response is in fact to inflated headlines in the media about our work – our paper does just what it says on the can: “Marginal evidence for cosmic acceleration from Type Ia supernovae“. Nevertheless you make a fair assessment of the actual result in our paper and we are grateful for that.

However we feel you are not justified in going on further to state: “In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence”. If you were as expert in cosmology as you evidently are concerning amplitudes you would know that much of the reasoning you allude to is circular. There are also other instances (which we are looking into) of using statistical methods that assume the answer to shore up the ‘standard model’ of cosmology. Does it not worry you that the evidence from supernovae – which is widely believed to be compelling – turns out to be less so when examined closely? There is a danger of confirmation bias in that cosmologists making poor measurements with large systematic uncertainties nevertheless keep finding the ‘right answer’. See e.g. Croft & Dailey (http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1112.3108) who noted “… of the 28 measurements of Omega_Lambda in our sample published since 2003, only 2 are more than 1 sigma from the WMAP results. Wider use of blind analyses in cosmology could help to avoid this”. Unfortunately the situation has not improved in subsequent years.

You are of course entitled to air your personal views on your blog. But please allow us to point out that you are being unfair to us by uncritically stating in the second part of your sentence: “EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions” in which you link to the arXiv eprint by Rubin & Hayden.

These authors make a claim similar to Riess & Scolnic (https://blogs.scientificamerican.com/guest-blog/no-astronomers-haven-t-decided-dark-energy-is-nonexistent/) that we “assume that the mean properties of supernovae from each of the samples used to measure the expansion history are the same, even though they have been shown to be different and past analyses have accounted for these differences”. In fact we are using exactly the same dataset (called JLA) as Adam Riess and co. have done in their own analysis (Betoule et al, http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1401.4064). They found stronger evidence for acceleration because of using a flawed statistical method (“constrained \chi^2”). The reason why we find weaker evidence is that we use the Maximum Likelihood Estimator – it is not because of making “dodgy assumptions”. We show our results in the same \Omega_\Lambda – \Omega_m plane simply for ease of comparison with the previous result – as seen in the attached plot, the contours move to the right … and now enclose the “no acceleration” line within 3 \sigma. Our analysis is not – as Brian Schmidt tweeted – “at best unorthodox” … even if this too has been uncritically propagated on social media.

In fact the result from our (frequentist) statistical procedure has been confirmed by an independent analysis using a ‘Bayesian Hierarchical Model’ (Shariff et al, http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1510.05954). This is a more sophisticated approach because it does not adopt a Gaussian approximation as we did for the distribution of the light curve parameters (x_1 and c), however their contours are more ragged because of numerical computation limitations.

Rubin & Hayden do not mention this paper (although bizarrely they ascribe to us the ‘Bayesian Hierarchical Model’). Nevertheless they find more-or-less the same result as us, namely 3.1 sigma evidence for acceleration, using the same dataset as we did (left panel of their Fig.2). They argue however that there are selection effects in this dataset – which have not already been corrected for by the JLA collaboration (which incidentally included Adam Riess, Saul Perlmutter and most other supernova experts in the world). To address this Rubin & Hayden introduce a redshift-dependent prior on the x_1 and c distributions. This increases the significance to 4.2 sigma (right panel of their Fig.2). If such a procedure is indeed valid then it does mark progress in the field, but that does not mean that these authors have “demonstrated errors in (our) analysis” as they state in their Abstract. Their result also begs the question why has the significance increased so little in going from the initial 50 supernovae which yielded 3.9 sigma evidence for acceleration (Riess et al, http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:astro-ph/9805201) to 740 supernovae in JLA? Maybe this is news … at least to anyone interested in cosmology and fundamental physics!

Rubin & Hayden also make the usual criticism that we have ignored evidence from other observations e.g. of baryon acoustic oscillations and the cosmic microwave background. We are of course very aware of these observations but as we say in the paper the interpretation of such data is very model-dependent. For example dark energy has no direct influence on the cosmic microwave background. What is deduced from the data is the spatial curvature (adopting the value of the locally measured Hubble expansion rate H_0) and the fractional matter content of the universe (assuming the primordial fluctuation spectrum to be a close-to-scale-invariant power law). Dark energy is then *assumed* to make up the rest (using the sum rule: 1 = \Omega_m + \Omega_\Lambda for a spatially flat universe as suggested by the data). This need not be correct however if there are in fact other terms that should be added to this sum rule (corresponding to corrections to the Friedman equation to account e.g. for averaging over inhomogeneities or for non-ideal gas behaviour of the matter content). It is important to emphasise that there is no convincing (i.e. >5 sigma) dynamical evidence for dark energy, e.g. the late integrated Sachs-Wolfe effect which induces subtle correlations between the CMB and large-scale structure. Rubin & Hayden even claim in their Abstract (v1) that “The combined analysis of modern cosmological experiments … indicate 75 sigma evidence for positive Omega_\Lambda” – which is surely a joke! Nevertheless this is being faithfully repeated on newsgroups, presumably by those somewhat challenged in their grasp of basic statistics.

Apologies for the long post but we would like to explain that the technical criticism of our work by Rubin & Hayden and by Riess & Scolnic is rather disingenuous and it is easy to be misled if you are not an expert. You are entitled to rail against the standards of science journalism but please do not taint us by association.

As a last comment, surely we all want to make progress in cosmology but this will be hard if cosmologists are so keen to cling on to their ‘standard model’ instead of subjecting it to critical tests (as particle physicists continually do to their Standard Model). Moreover the fundamental assumptions of the cosmological model (homogeneity, ideal fluids) have not been tested rigorously (unlike the Standard Model which has been tested at the level of quantum corrections). This is all the more important in cosmology because there is simply no physical explanation for why \Lambda should be of order H_0^2.

Best regards,

Jeppe Trøst Nielsen, Alberto Guffanti and Subir Sarkar

On an unrelated note, Perimeter’s PSI program is now accepting applications for 2017. It’s something I wish I had known about as an undergrad; for those interested in theoretical physics, it can be an enormous jump-start to your career. Here’s their blurb:

Perimeter Scholars International (PSI) is now accepting applications for Perimeter Institute for Theoretical Physics’ unique 10-month Master’s program.

Features of the program include:

• All student costs (tuition and living) are covered, removing financial and/or geographical barriers to entry
• Students learn from world-leading theoretical physicists – resident Perimeter researchers and visiting scientists – within the inspiring environment of Perimeter Institute
• Collaboration is valued over competition; deep understanding and creativity are valued over rote learning and examination
• PSI recruits worldwide: 85 percent of students come from outside of Canada
• PSI takes calculated risks, seeking extraordinary talent who may have non-traditional academic backgrounds but have demonstrated exceptional scientific aptitude

Apply online at http://perimeterinstitute.ca/apply.

Applications are due by February 1, 2017.

# “Maybe” Isn’t News

It’s been published in several places, but you’ve probably seen this headline:

If you’ve been following me for a while, you know where this is going:

No, these physicists haven’t actually shown that the Universe isn’t expanding at an accelerated rate.

What they did show is that the original type of data used to discover that the universe was accelerating back in the ’90s, measurements of supernovae, doesn’t live up to the rigorous standards that we physicists use to evaluate discoveries. We typically only call something a discovery if the evidence is good enough that, in a world where the discovery wasn’t actually true, we’d only have a one in 3.5 million chance of getting the same evidence (“five sigma” evidence). In their paper, Nielsen, Guffanti, and Sarkar argue that looking at a bigger collection of supernovae leads to a hazier picture: the chance that we could get the same evidence in a universe that isn’t accelerating is closer to one in a thousand, giving “three sigma” evidence.

This might sound like statistical quibbling: one in a thousand is still pretty unlikely, after all. But a one in a thousand chance still happens once in a thousand times, and there’s a long history of three sigma evidence turning out to just be random noise. If the discovery of the accelerating universe was new, this would be an important objection, a reason to hold back and wait for more data before announcing a discovery.
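For the curious, the sigma-to-odds conversion above is just the tail probability of a Gaussian distribution. Here’s a quick sketch in Python, using the one-sided tail convention common in particle physics (the exact odds depend on which convention you pick, and the “one in a thousand” in the text is an order-of-magnitude figure):

```python
from math import erfc, sqrt

def one_sided_odds(n_sigma):
    """Odds against an n-sigma result arising from pure noise:
    the one-sided Gaussian tail probability, expressed as '1 in N'."""
    p = erfc(n_sigma / sqrt(2)) / 2  # one-sided tail probability
    return 1 / p

print(one_sided_odds(5))  # roughly 1 in 3.5 million
print(one_sided_odds(3))  # roughly 1 in 740
```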

The trouble is, the discovery isn’t new. In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence.

So the objection, that one source of evidence isn’t as strong as people thought, doesn’t kill cosmic acceleration. What it is is a “maybe”, showing that there is at least room in some of the data for a non-accelerating universe.

People publish “maybes” all the time, nothing bad about that. There’s a real debate to be had about how strong the evidence is, and how much it really establishes. (And there are already voices on the other side of that debate.)

But a “maybe” isn’t news. It just isn’t.

Science journalists (and university press offices) have a habit of trying to turn “maybes” into stories. I’ve lost track of the times I’ve seen ideas that were proposed a long time ago (technicolor, MOND, SUSY) get new headlines not for new evidence or new ideas, but just because they haven’t been ruled out yet. “SUSY hasn’t been ruled out yet” is an opinion piece, perhaps a worthwhile one, but it’s no news article.

The thing is, I can understand why journalists do this. So much of science is building on these kinds of “maybes”, working towards the tipping point where a “maybe” becomes a “yes” (or a “no”). And journalists (and university press offices, and to some extent the scientists themselves) can’t just take time off and wait for something legitimately newsworthy. They’ve got pages to fill and careers to advance, they need to say something.

I post once a week. As a consequence, a meaningful fraction of my posts are garbage. I’m sure that if I posted every day, most of my posts would be garbage.

Many science news sites post multiple times a day. They’ve got multiple writers, sure, and wider coverage…but they still don’t have the luxury of skipping a “maybe” when someone hands it to them.

I don’t know if there’s a way out of this. Maybe we need a new model for science journalism, something that doesn’t try to ape the pace of the rest of the news cycle. For the moment, though, it’s publish or perish, and that means lots and lots of “maybes”.

EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions.

EDIT: The paper’s authors respond here.

# Starshot: The Right Kind of Longshot

On Tuesday, Yuri Milner and Stephen Hawking announced Starshot, a $100 million research initiative. The goal is to lay the groundwork for a very ambitious, but surprisingly plausible project: sending probes to the nearest star, Alpha Centauri. Their idea is to have hundreds of ultra-light probes, each with a reflective sail a few meters in diameter. By aiming an extremely powerful laser at these sails, it should be possible to accelerate the probes up to around a fifth of the speed of light, enough to make the trip in twenty years. Here’s the most complete article I’ve found on the topic.

I can’t comment on the engineering side of the project. The impression I get is that nothing they’re proposing is known to be impossible, but there are a lot of “ifs” along the way that might scupper things.

What I can comment on is the story. Milner and Hawking have both put quite a bit of effort recently into what essentially amounts to telling stories. Milner’s Breakthrough Prizes involve giving awards of $3 million to prominent theoretical physicists (and, more recently, mathematicians). Quite a few of my fellow theorists have criticized these prizes, arguing that the money would be better spent in a grant program like that of the Simons Foundation. While that would likely be better for science, the Breakthrough Prize isn’t really about that. Instead, it’s about telling a story: a story in which progress in theoretical physics is exalted in a public, Nobel-sized way.
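The “twenty years” figure is easy to sanity-check, assuming the usual quoted distance to Alpha Centauri of about 4.37 light-years and ignoring the (comparatively brief) acceleration phase:

```python
# Back-of-the-envelope Starshot travel time (illustrative numbers only).
distance_ly = 4.37   # approximate distance to Alpha Centauri, in light-years
speed_c = 0.2        # cruise speed, as a fraction of the speed of light

travel_time_years = distance_ly / speed_c
print(f"about {travel_time_years:.0f} years")  # about 22 years
```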

Similarly, Hawking’s occasional pronouncements about aliens or AI aren’t science per se, and the media has a tendency to talk about his contributions to ongoing scientific debates out of proportion to their importance. Both of these things, though, contribute to the story of Hawking: a mascot for physics, someone to carry Einstein’s role of the most recognizable genius in the world. Hawking Inc. is about a role as much as it is about a man.

In calling Hawking and Milner’s activity “stories”, I’m not dismissing them. Stories can be important. And the story told by Starshot is a particularly important one.

Cosmology isn’t just a scientific subject, it contributes to how people see themselves. Here I don’t just mean cosmology the field, but cosmology in the broader sense of our understanding of the universe and our place in it.

A while back, I read a book called The View from the Center of the Universe. The book starts by describing the worldviews of the ancients, cosmologies in which they really did think of themselves as the center of the universe. It then suggests that this played an important role: that this kind of view of the world, in which humans have a place in the cosmos, is important to how we view ourselves. The rest of the book then attempts to construct this sort of mythological understanding out of the modern cosmological picture, with some success.

One thing the book doesn’t discuss very much, though, is the future. We care about our place in the universe not just because we want to know where we came from, but because we want to have some idea of where we’re going. We want to contribute to a greater goal, to see ourselves making progress towards something important and vast and different. That’s why so many religions have not just cosmologies, but eschatologies, why people envision armageddons and raptures.

Starshot places the future in our sights in a way that few other things do. Humanity’s spread among the stars seems like something so far distant that nothing we do now could matter to it. What Starshot does is give us something concrete, a conceptual stepping-stone that can link people into the broader narrative. Right now, people can work on advanced laser technology and optics, work on making smaller chips and lighter materials, work that would be useful and worth funding regardless of whether it was going to lead to Alpha Centauri. But because of Starshot, we can view that work as the near-term embodiment of humanity’s interstellar destiny.

That combination, bridging the gap between the distant future and our concrete present, is the kind of story people need right now. And so for once, I think Milner’s storytelling is doing exactly what it should.

# You Go, LIGO!

Well folks, they did it. LIGO has detected gravitational waves!

FAQ:

What’s a gravitational wave?

Gravitational waves are ripples in space and time. As Einstein figured out a century ago, masses bend space and time, and that bending is what we experience as gravity. Wiggle masses in the right way and you get a gravitational wave, like a ripple on a pond.

Ok, but what is actually rippling? It’s some stuff, right? Dust or something?

In a word, no. Not everything has to be “stuff”. Energy isn’t “stuff”, and space-time isn’t either, but space-time is really what vibrates when a gravitational wave passes by. Distances themselves are changing, in a way that is described by the same math and physics as a ripple in a pond.

What’s LIGO?

LIGO is the Laser Interferometer Gravitational-Wave Observatory. In simple terms, it’s an observatory (or rather, a pair of observatories in Washington and Louisiana) that can detect gravitational waves. Each detector sends laser beams down two perpendicular arms, each four kilometers long. A passing gravitational wave changes the relative lengths of these arms, causing small but measurable changes in the laser light observed.
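To get a feel for just how small “small but measurable” is, you can multiply a typical strain by the arm length. The strain of roughly 10⁻²¹ is my assumption here, an order-of-magnitude figure for the kind of signal LIGO detects, not a number from this post.

```python
# Rough scale of the length change LIGO must detect.
# Assumed: a passing wave produces a dimensionless strain of ~1e-21.
strain = 1e-21            # fractional change in length from the wave
arm_length_m = 4_000.0    # LIGO arm length: four kilometers

# The absolute length change is strain times arm length.
delta_length_m = strain * arm_length_m

# Compare to the size of a proton (~1.7e-15 m across).
proton_diameter_m = 1.7e-15
print(f"Arm length change: {delta_length_m:.1e} m")
print(f"As a fraction of a proton diameter: {delta_length_m / proton_diameter_m:.1e}")
```

The arms change length by about 4×10⁻¹⁸ meters, a few thousandths of a proton’s diameter, which is why the interferometry has to be so exquisitely sensitive.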

Are there other gravitational wave observatories?

Not currently in operation. LIGO originally ran from 2002 to 2010, and during that time there were other gravitational wave observatories also in operation (VIRGO in Italy and GEO600 in Germany). All of them (including LIGO) failed to detect anything, so LIGO and VIRGO were shut down to be upgraded into more sensitive, advanced versions. Advanced LIGO went into operation first, and made the detection. VIRGO is still under construction, as is KAGRA, a detector in Japan. There are also plans for a detector in India.

Other sorts of experiments can detect gravitational waves on different scales. eLISA is a planned space-based gravitational wave observatory, while Pulsar Timing Arrays could use distant neutron stars as an impromptu detector.

What did they detect? What could they detect?

The gravitational waves that LIGO detected came from a pair of black holes merging. In general, gravitational waves come from a pair of orbiting masses, or from one mass with an uneven and rapidly changing shape. As such, LIGO and future detectors might be able to observe binary stars, supernovas, weird-shaped neutron stars, colliding galaxies…pretty much any astrophysical event involving large things moving comparatively fast.

What does this say about string theory?

Basically nothing. There are gravitational waves in string theory, sure (and they play a fairly important role), but there are gravitational waves in plain old general relativity too. As far as I’m aware, no one at this point seriously thought that gravitational waves didn’t exist. Nothing that LIGO observed has any bearing on the quantum properties of gravity.

But what about cosmic strings? They mentioned those in the announcement!

Cosmic strings, despite the name, aren’t a unique prediction of string theory. They’re big, string-shaped wrinkles in space and time, possible results of the rapid expansion of space during cosmic inflation. You can think of them a bit like the cracks that form in an over-inflated balloon right before it bursts.

Cosmic strings, if they exist, should produce gravitational waves. This means that in the future we may have concrete evidence of whether or not they exist. This wouldn’t say all that much about string theory: while string theory does have its own explanations for cosmic strings, it’s unclear whether it actually has unique predictions about them. It would say a lot about cosmic inflation, though, and would presumably help distinguish it from proposed alternatives. So keep your eyes open: in the next few years, gravitational wave observatories may well have something important to say about the overall history of the universe.

Why is this discovery important, though? If we already knew that gravitational waves existed, why does discovering them matter?

LIGO didn’t discover that gravitational waves exist. LIGO discovered that we can detect them.

The existence of gravitational waves is no discovery. But the fact that we now have observatories sensitive enough to detect them is huge. It opens up a whole new type of astronomy: we can now observe the universe not just by the light it sheds (and neutrinos), but through a whole new lens. And every time we get another observational tool like this, we notice new things, things we couldn’t have seen without it. It’s the dawn of a new era in astronomy, and LIGO was right to announce it with all the pomp and circumstance they could muster.

My impressions from the announcement:

Speaking of pomp and circumstance, I was impressed by just how well put-together LIGO’s announcement was.

As the US presidential election heats up, I’ve seen a few articles about the various candidates’ (well, usually Trump’s) use of the language of political propaganda. The idea is that there are certain visual symbols at political events for which people have strong associations, whether with historical events or specific ideas or the like, and that using these symbols makes propaganda more powerful.

What I haven’t seen is much discussion of a language of scientific propaganda. Still, the overwhelming impression I got from LIGO’s announcement is that it was shaped by a master in the use of such a language. They tapped into a wide variety of powerful images, from the documentary-style interviews at the beginning, to Weiss’s tweed jacket and handmade demos, to the American flag in the background: images that tied LIGO’s result to the history of scientific accomplishment.

Perimeter’s presentations tend to have a slicker look, and my friends at Stony Brook are probably better at avoiding jargon. But neither is quite as good at propaganda, at saying “we are part of history” and doing so without a hitch, as the folks at LIGO have shown themselves to be with this announcement.

I was also fairly impressed that they kept this under wraps for so long. While there were leaks, I don’t think many people had a complete grasp of what was going to be announced until the week before. Somehow, LIGO made sure a collaboration of thousands was able to (mostly) keep their mouths shut!

Beyond the organizational and stylistic notes, my main thought was “What’s next?” They’ve announced the detection of one event. I’ve heard others rattle off estimates that they should be detecting anywhere from one black hole merger per year to a few hundred. Are we going to see more events soon, or should we settle in for a long wait? Could they already have detected more, with the evidence buried in their data, to be revealed by careful analysis? (The waves from this black hole merger were clear enough to detect in real time, but more subtle events might not make things so easy!) Should we be seeing more events already, and does not seeing them tell us something important about the universe?

Most of the reason I delayed my post till this week was to see if anyone had an answer to these questions. So far, I haven’t seen one, besides the “one to a few hundred” estimate mentioned. As more people weigh in and more of LIGO’s run is analyzed, it will be interesting to see where that side of the story goes.