Tag Archives: astronomy

A LIGO in the Darkness

For the few of you who haven’t yet heard: LIGO has detected gravitational waves from a pair of colliding neutron stars, and that detection has been confirmed by observations of the light from those stars.

[Image: GW170817 fact sheet]

They also provide a handy fact sheet.

This is a big deal! On a basic level, it means that we now have confirmation from other instruments and sources that LIGO is really detecting gravitational waves.

The implications go quite a bit further than that, though. You wouldn’t think that just one observation could tell you very much, but this is an observation of an entirely new type, the first time an event has been seen in both gravitational waves and light.

That, it turns out, means that this one observation clears up a whole pile of mysteries in one blow. It shows that at least some gamma ray bursts are caused by colliding neutron stars, that neutron star collisions can give rise to the high-power “kilonovas” capable of forming heavy elements like gold…well, I’m not going to be able to do justice to the full implications in this post. Matt Strassler has a pair of quite detailed posts on the subject, and Quanta magazine’s article has a really great account of the effort that went into the detection, including coordinating the network of telescopes that made it possible.

I’ll focus here on a few aspects that stood out to me.

One fun part of the story behind this detection was how helpful “failed” observations were. VIRGO (the European gravitational wave experiment) was running alongside LIGO at the time, but VIRGO didn’t see the event (or saw it so faintly it couldn’t be sure it saw it). This was actually useful, because VIRGO has a blind spot, and VIRGO’s non-observation told them the event had to have happened in that blind spot. That narrowed things down considerably, and allowed telescopes to close in on the actual merger. IceCube, the neutrino observatory that is literally a cubic kilometer chunk of Antarctica filled with sensors, also failed to detect the event, and this was also useful: along with evidence from other telescopes, it suggests that the “jet” of particles emitted by the merged neutron stars is tilted away from us.

One thing brought up at LIGO’s announcement was that seeing gravitational waves and electromagnetic light at roughly the same time puts limits on any difference between the speed of light and the speed of gravity. At the time I wondered if this was just a throwaway line, but it turns out a variety of proposed modifications of gravity predict that gravitational waves will travel slower than light. This event rules out many of those models, and tightly constrains others.
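For a sense of scale, here’s a rough back-of-the-envelope version of that constraint. The numbers are the commonly quoted GW170817 figures rather than anything from this post: a delay of about 1.7 seconds between the gravitational waves and the gamma rays, from a source roughly 40 Mpc away. Attributing the entire delay to a difference in propagation speed is a deliberate oversimplification, but it shows why the bound is so tight:

```python
# Order-of-magnitude bound on how much the speed of gravity could differ
# from the speed of light, assuming the whole ~1.7 s delay between the
# gravitational waves and the gamma rays came from a speed difference
# accumulated over ~40 Mpc of travel.

MPC_IN_M = 3.086e22   # meters per megaparsec
C = 2.998e8           # speed of light, m/s

distance_m = 40 * MPC_IN_M
travel_time_s = distance_m / C   # roughly 130 million years, in seconds
delay_s = 1.7

fractional_diff = delay_s / travel_time_s
print(f"|c_gw - c| / c is at most about {fractional_diff:.0e}")
```

A discrepancy of at most one part in 10^15 or so: that’s the kind of number that wipes out whole classes of modified-gravity models in one stroke.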

The announcement from LIGO was screened at NBI, but they didn’t show the full press release. Instead, they cut to a discussion for local news featuring NBI researchers from the various telescope collaborations that observed the event. Some of this discussion was in Danish, so it was only later that I heard about the possibility of using the simultaneous measurement of gravitational waves and light to measure the expansion of the universe. While this event by itself didn’t result in a very precise measurement, as more collisions are observed the statistics will get better, which will hopefully clear up a discrepancy between two previous measures of the expansion rate.
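The idea behind that expansion measurement is simple enough to sketch. The loudness of the gravitational-wave signal gives a luminosity distance directly, with no cosmic distance ladder, while the host galaxy’s light gives a recession velocity; at low redshift, Hubble’s law ties the two together. The numbers below are illustrative round figures of my own, not the collaborations’ published values:

```python
# Toy "standard siren" estimate of the Hubble constant. The waveform
# amplitude gives the distance; the host galaxy's redshift gives the
# recession velocity. At low redshift, Hubble's law says v = H0 * d.
# Round illustrative numbers: d ~ 40 Mpc, v ~ 3000 km/s.

def hubble_estimate(velocity_km_s: float, distance_mpc: float) -> float:
    """Linear Hubble-law estimate H0 = v / d, in km/s/Mpc."""
    return velocity_km_s / distance_mpc

h0 = hubble_estimate(3000.0, 40.0)
print(f"H0 ~ {h0:.0f} km/s/Mpc")
```

With a single event the uncertainty is large, but each new collision with an identified host galaxy is another independent data point, which is why the statistics should steadily improve.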

A few news sources made it sound like observing the light from the kilonova has let scientists see directly which heavy elements were produced by the event. That isn’t quite true, as stressed by some of the folks I talked to at NBI. What is true is that the light was consistent with patterns observed in past kilonovas, which are estimated to be powerful enough to produce these heavy elements. However, actually pointing out the lines corresponding to these elements in the spectrum of the event hasn’t been done yet, though it may be possible with further analysis.

A few posts back, I mentioned a group at NBI who had been critical of LIGO’s data analysis and raised doubts of whether they detected gravitational waves at all. There’s not much I can say about this until they’ve commented publicly, but do keep an eye on the arXiv in the next week or two. Despite the optimistic stance I take in the rest of this post, the impression I get from folks here is that things are far from fully resolved.

Visiting Uppsala

I’ve been in Uppsala this week, visiting Henrik Johansson‘s group.

[Photo: the Ångström Laboratory]

The Ångström Laboratory here is substantially larger than an ångström, a clear example of false advertising.

As such, I haven’t had time to write a long post about the recent announcement by the LIGO and VIRGO collaborations. Luckily, Matt Strassler has written one of his currently all-too-rare posts on the subject, so if you’re curious you should check out what he has to say.

Looking at the map of black hole collisions in that post, I’m struck by how quickly things have improved. The four old detections are broad slashes across the sky, the newest is a small patch. Now that there are enough detectors to triangulate, all detections will be located that precisely, or better. A future map might be dotted with precise locations of black hole collisions, but it would still be marred by those four slashes: relics of the brief time when only two machines in the world could detect gravitational waves.

Movie Review: The Truth is in the Stars

Recently, Perimeter aired a showing of The Truth is in the Stars, a documentary about the influence of Star Trek on science and culture, with a panel discussion afterwards. The documentary follows William Shatner as he wanders around the world interviewing scientists and film industry people about how Star Trek inspired them. Along the way he learns a bit about physics, and collects questions to ask Stephen Hawking at the end.

[Image: movie poster]

I’ll start with the good: the piece is cute. They managed to capture some fun interactions with the interviewees, there are good (if occasionally silly) visuals, and the whole thing seems fairly well edited. If you’re looking for an hour of Star Trek nostalgia and platitudes about physics, this is the documentary for you.

That said, it doesn’t go much beyond cute, and it dances between topics in a way that felt unsatisfying.

The piece has a heavy focus on Shatner, especially early on, beginning with a clumsily shoehorned-in visit to his ranch to hear his thoughts on horses. For a while, the interviews are all about him: his jokes, his awkward questions, his worries about getting old. He has a habit of asking the scientists he talks to whether “everything is connected”, which to the scientists’ credit is usually met by a deft change of subject. All of this fades somewhat as the movie progresses, though: whether by a trick of editing, or because after talking to so many scientists he begins to pick up some humility.

(Incidentally, I really ought to have a blog post debunking the whole “everything is connected” thing. The tricky part is that it involves so many different misunderstandings, from confusion around entanglement, to the role of strings, to “we are all star-stuff”, that it’s hard to be comprehensive.)

Most of the scientific discussions are quite superficial, to the point that they’re more likely to confuse inexperienced viewers than to tell them something new (especially the people who hinted at dark energy-based technology…no, just no). While I don’t expect a documentary like this to cover the science in-depth, trying to touch on so many topics in this short a time mostly just fuels the “everything is connected” misunderstanding. One surprising element of the science coverage was the choice to have both Michio Kaku giving a passionate description of string theory and Neil Turok bluntly calling string theory “a mess”. While giving the public “both sides” like that isn’t unusual in other contexts, for some reason most science documentaries I’ve seen take one side or the other.

Of course, the point of the documentary isn’t really to teach science, it’s to show how Star Trek influenced science. Here too, though, the piece was disappointing. Most of the scientists interviewed could tell their usual story about the power of science fiction in their childhood, but didn’t have much to say about Star Trek specifically. It was the actors and producers who had the most to say about Star Trek, from Ben Stiller showing off his Gorn mask to Seth MacFarlane admiring the design of the Enterprise. The best of these was probably Whoopi Goldberg’s story of being inspired by Uhura, which has been covered better elsewhere (and might have been better as Mae Jemison’s similar story, which would at least have involved an astronaut rather than another actor). I did enjoy Neil deGrasse Tyson’s explanation of how as a kid he thought everything on Star Trek was plausible…except for the automatic doors.

Shatner’s meeting with Hawking is the finale, and is the documentary’s strongest section. Shatner is humbled, even devout, in Hawking’s presence, while Hawking seems to show genuine joy swapping jokes with Captain Kirk.

Overall, the piece felt more than a little disjointed. It’s not really about the science, but it didn’t have enough content to be really about Star Trek either. If it was “about” anything, it was Shatner’s journey: an aging actor getting to hang out and chat with interesting people around the world. If that sounds fun, you should watch it: but don’t expect anything much deeper than that.

Simple Rules Don’t Mean a Simple Universe

It’s always fun when nature surprises you.

This week, the Perimeter Colloquium was given by Laura Nuttall, a member of the LIGO collaboration.

In a physics department, the colloquium is a regularly scheduled talk that’s supposed to be of interest to the entire department. Some are better at this than others, but this one was pretty fun. The talk explored the sorts of questions gravitational wave telescopes like LIGO can answer about the world.

At one point during the talk, Nuttall showed a plot of what happens when a star collapses into a supernova. For a range of masses, the supernova leaves behind a neutron star (shown on the plot in purple). For heavier stars, it instead results in a black hole, a big black region of the plot.

What surprised me was that inside the black region, there was an unexpected blob: a band of white in the middle of the black holes. Heavier than that band, the star forms a black hole. Lighter, it also forms a black hole. But inside?

Nothing. The star leaves nothing behind. It just explodes.

The physical laws that govern collapsing stars might not be simple, but at least they sound straightforward. Stars are constantly collapsing under their own weight, held up only by the raging heat of nuclear fire. If that heat isn’t strong enough, the star collapses, and other forces take over, so the star becomes a white dwarf, or a neutron star. And if none of those forces is strong enough, the star collapses completely, forming a black hole.

Too small, neutron star. Big enough, black hole. It seems obvious. But reality makes things more complicated.

It turns out, if a star is both massive and has comparatively little metal in it, the core of the star can get very very hot. That heat powers an explosion more powerful than a typical star, one that tears the star apart completely, leaving nothing behind that could form a black hole. Lighter stars don’t get as hot, so they can still form black holes, and heavier stars are so heavy they form black holes anyway. But for those specific stars, in the middle, nothing gets left behind.

This isn’t due to mysterious unknown physics. It’s actually been understood for quite some time. It’s a consequence of those seemingly straightforward laws, one that isn’t at all obvious until you do the work and run the simulations and observe the universe and figure it out.

Just because our world is governed by simple laws, doesn’t mean the universe itself is simple. Give it a little room (and several stars’ worth of hydrogen) and it can still surprise you.

What Space Can Tell Us about Fundamental Physics

Back when LIGO announced its detection of gravitational waves, there was one question people kept asking me: “what does this say about quantum gravity?”

The answer, each time, was “nothing”. LIGO’s success told us nothing about quantum gravity, and very likely LIGO will never tell us anything about quantum gravity.

The sheer volume of questions made me think, though. Astronomy, astrophysics, and cosmology fascinate people. They capture the public’s imagination in a way that makes them expect breakthroughs about fundamental questions. Especially now, with the LHC so far seeing nothing new since the Higgs, people are turning to space for answers.

Is that a fair expectation? Well, yes and no.

Most astrophysicists aren’t concerned with finding new fundamental laws of nature. They’re interested in big systems like stars and galaxies, where we know most of the basic rules but can’t possibly calculate all their consequences. Like most physicists, they’re doing the vital work of “physics of decimals”.

At the same time, there’s a decent chunk of astrophysics and cosmology that does matter for fundamental physics. Just not all of it. Here are some of the key areas where space has something important to say about the fundamental rules that govern our world:

1. Dark Matter:

Galaxies rotate at different speeds than their stars would alone. Clusters of galaxies bend light that passes by, and do so more than their visible mass would suggest. And when scientists try to model the evolution of the universe, from early images to its current form, the models require an additional piece: extra matter that cannot interact with light. All of this suggests that there is some extra “dark” matter in the universe, not described by our standard model of particle physics.

If we want to understand this dark matter, we need to know more about its properties, and much of that can be learned from astronomy. If it turns out dark matter isn’t really matter after all, if it can be explained by a modification of gravity or better calculations of gravity’s effects, then it still will have important implications for fundamental physics, and astronomical evidence will still be key to finding those implications.

2. Dark Energy (/Cosmological Constant/Inflation/…):

The universe is expanding, and its expansion appears to be accelerating. It also seems more smooth and uniform than expected, suggesting that it had a period of much greater acceleration early on. Both of these suggest some extra quantity: a changing acceleration, a “dark energy”, the sort of thing that can often be explained by a new scalar field like the Higgs.

Again, the specifics matter: how (and perhaps whether) the universe is expanding now, and what kinds of early expansion (if any) its shape suggests, will almost certainly have implications for fundamental physics.

3. Limits on stable stuff:

Let’s say you have a new proposal for particle physics. You’ve predicted a new particle, but it can’t interact with anything else, or interacts so weakly we’d never detect it. If your new particle is stable, then you can still say something about it, because its mass would have an effect on the early universe. Too many such particles would throw off cosmologists’ models, ruling your proposal out.

Alternatively, you might predict something that could be detected, but hasn’t, like a magnetic monopole. Then cosmologists can tell you how many such particles would have been produced in the early universe, and thus how likely we would be to detect them today. If you predict too many particles and we don’t see them, then that becomes evidence against your proposal.
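Both kinds of bound come down to the same bookkeeping: the universe can only contain so much stuff without spoiling cosmological fits. Here’s a minimal sketch of the mass-density version, using standard textbook values; the 0.27 matter fraction and the helper function are my own illustrative choices, not anyone’s actual analysis:

```python
import math

# Critical density of the universe, rho_c = 3 H0^2 / (8 pi G),
# using standard values: H0 ~ 70 km/s/Mpc, G in SI units.
G = 6.674e-11                 # m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22     # 70 km/s/Mpc converted to 1/s

rho_critical = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3, roughly 9e-27

def relic_allowed(mass_kg: float, number_density_m3: float,
                  matter_fraction: float = 0.27) -> bool:
    """Crude check: the relic's mass density today must not exceed
    the total matter budget (a fraction of the critical density)."""
    return mass_kg * number_density_m3 < matter_fraction * rho_critical

# One proton-mass particle per cubic meter squeaks by; ten do not.
print(relic_allowed(1.67e-27, 1.0), relic_allowed(1.67e-27, 10.0))
```

Real relic-abundance calculations track production and annihilation through the universe’s thermal history, but the punchline is the same: predict too much stable stuff, and cosmology rules you out.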

4. “Cosmological Collider Physics”:

A few years back, Nima Arkani-Hamed and Juan Maldacena suggested that the early universe could be viewed as an extremely high energy particle collider. While this collider performed only one experiment, the results from that experiment are spread across the sky, and observed patterns in the early universe should tell us something about the particles produced by the cosmic collider.

People are still teasing out the implications of this idea, but it looks promising, and could mean we have a lot more to learn from examining the structure of the universe.

5. Big Weird Space Stuff:

If you suspect we live in a multiverse, you might want to look for signs of other universes brushing up against our own. If your model of the early universe predicts vast cosmic strings, maybe a gravitational wave detector like LIGO will be able to see them.

6. Unexpected weirdness:

In all likelihood, nothing visibly “quantum” happens at the event horizons of astrophysical black holes. If you think there’s something to see though, the Event Horizon Telescope might be able to see it. There’s a grab bag of other predictions like this: situations where we probably won’t see anything, but where at least one person thinks there’s a question worth asking.

I’ve probably left something out here, but this should give you a general idea. There is a lot that fundamental physics can learn from astronomy, from the overall structure and origins of the universe to unexplained phenomena like dark matter. But not everything in astronomy has these sorts of implications: for the most part, astronomy is interesting not because it tells us something about the fundamental laws of nature, but because it tells us how the vast space above us actually happens to work.

A Response from Nielsen, Guffanti and Sarkar

I have been corresponding with Subir Sarkar, one of the authors of the paper I mentioned a few weeks ago arguing that the evidence for cosmic acceleration was much weaker than previously thought. He believes that the criticisms of Rubin and Hayden (linked to in my post) are deeply flawed. Since he and his coauthors haven’t responded publicly to Rubin and Hayden yet, they graciously let me post a summary of their objections.

Dear Matt,

This concerns the discussion on your blog of our recent paper showing that the evidence for cosmic acceleration from supernovae is only 3 sigma. Your obviously annoyed response is in fact to inflated headlines in the media about our work – our paper does just what it says on the can: “Marginal evidence for cosmic acceleration from Type Ia supernovae“. Nevertheless you make a fair assessment of the actual result in our paper and we are grateful for that.

However we feel you are not justified in going on further to state: “In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence”. If you were as expert in cosmology as you evidently are concerning amplitudes you would know that much of the reasoning you allude to is circular. There are also other instances (which we are looking into) of using statistical methods that assume the answer to shore up the ‘standard model’ of cosmology. Does it not worry you that the evidence from supernovae – which is widely believed to be compelling – turns out to be less so when examined closely? There is a danger of confirmation bias in that cosmologists making poor measurements with large systematic uncertainties nevertheless keep finding the ‘right answer’. See e.g. Croft & Dailey (http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1112.3108) who noted “… of the 28 measurements of Omega_Lambda in our sample published since 2003, only 2 are more than 1 sigma from the WMAP results. Wider use of blind analyses in cosmology could help to avoid this”. Unfortunately the situation has not improved in subsequent years.

You are of course entitled to air your personal views on your blog. But please allow us to point out that you are being unfair to us by uncritically stating in the second part of your sentence: “EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions” in which you link to the arXiv eprint by Rubin & Hayden.

These authors make a claim similar to Riess & Scolnic (https://blogs.scientificamerican.com/guest-blog/no-astronomers-haven-t-decided-dark-energy-is-nonexistent/) that we “assume that the mean properties of supernovae from each of the samples used to measure the expansion history are the same, even though they have been shown to be different and past analyses have accounted for these differences”. In fact we are using exactly the same dataset (called JLA) as Adam Riess and co. have done in their own analysis (Betoule et al., http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1401.4064). They found stronger evidence for acceleration because of using a flawed statistical method (“constrained \chi^2”). The reason why we find weaker evidence is that we use the Maximum Likelihood Estimator – it is not because of making “dodgy assumptions”. We show our results in the same \Omega_\Lambda – \Omega_m plane simply for ease of comparison with the previous result – as seen in the attached plot, the contours move to the right … and now enclose the “no acceleration” line within 3 \sigma. Our analysis is not – as Brian Schmidt tweeted – “at best unorthodox” … even if this too has been uncritically propagated on social media.

In fact the result from our (frequentist) statistical procedure has been confirmed by an independent analysis using a ‘Bayesian Hierarchical Model’ (Shariff et al., http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1510.05954). This is a more sophisticated approach because it does not adopt a Gaussian approximation as we did for the distribution of the light curve parameters (x_1 and c); however their contours are more ragged because of numerical computation limitations.

[Attached plot: confidence contours in the \Omega_\Lambda – \Omega_m plane]

Rubin & Hayden do not mention this paper (although bizarrely they ascribe to us the ‘Bayesian Hierarchical Model’). Nevertheless they find more-or-less the same result as us, namely 3.1 sigma evidence for acceleration, using the same dataset as we did (left panel of their Fig.2). They argue however that there are selection effects in this dataset – which have not already been corrected for by the JLA collaboration (which incidentally included Adam Riess, Saul Perlmutter and most other supernova experts in the world). To address this Rubin & Hayden introduce a redshift-dependent prior on the x_1 and c distributions. This increases the significance to 4.2 sigma (right panel of their Fig.2). If such a procedure is indeed valid then it does mark progress in the field, but that does not mean that these authors have “demonstrated errors in (our) analysis” as they state in their Abstract. Their result also begs the question: why has the significance increased so little in going from the initial 50 supernovae which yielded 3.9 sigma evidence for acceleration (Riess et al., http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:astro-ph/9805201) to 740 supernovae in JLA? Maybe this is news … at least to anyone interested in cosmology and fundamental physics!

Rubin & Hayden also make the usual criticism that we have ignored evidence from other observations e.g. of baryon acoustic oscillations and the cosmic microwave background. We are of course very aware of these observations but as we say in the paper the interpretation of such data is very model-dependent. For example dark energy has no direct influence on the cosmic microwave background. What is deduced from the data is the spatial curvature (adopting the value of the locally measured Hubble expansion rate H_0) and the fractional matter content of the universe (assuming the primordial fluctuation spectrum to be a close-to-scale-invariant power law). Dark energy is then *assumed* to make up the rest (using the sum rule: 1 = \Omega_m + \Omega_\Lambda for a spatially flat universe as suggested by the data). This need not be correct however if there are in fact other terms that should be added to this sum rule (corresponding to corrections to the Friedman equation to account e.g. for averaging over inhomogeneities or for non-ideal gas behaviour of the matter content). It is important to emphasise that there is no convincing (i.e. >5 sigma) dynamical evidence for dark energy, e.g. the late integrated Sachs-Wolfe effect which induces subtle correlations between the CMB and large-scale structure. Rubin & Hayden even claim in their Abstract (v1) that “The combined analysis of modern cosmological experiments … indicate 75 sigma evidence for positive Omega_\Lambda” – which is surely a joke! Nevertheless this is being faithfully repeated on newsgroups, presumably by those somewhat challenged in their grasp of basic statistics.

Apologies for the long post but we would like to explain that the technical criticism of our work by Rubin & Hayden and by Riess & Scolnic is rather disingenuous and it is easy to be misled if you are not an expert. You are entitled to rail against the standards of science journalism but please do not taint us by association.

As a last comment, surely we all want to make progress in cosmology but this will be hard if cosmologists are so keen to cling on to their ‘standard model’ instead of subjecting it to critical tests (as particle physicists continually do to their Standard Model). Moreover the fundamental assumptions of the cosmological model (homogeneity, ideal fluids) have not been tested rigorously (unlike the Standard Model which has been tested at the level of quantum corrections). This is all the more important in cosmology because there is simply no physical explanation for why \Lambda should be of order H_0^2.

Best regards,

 

Jeppe Trøst Nielsen, Alberto Guffanti and Subir Sarkar

On an unrelated note, Perimeter’s PSI program is now accepting applications for 2017. It’s something I wish I had known about as an undergrad: for those interested in theoretical physics, it can be an enormous jump-start to a career. Here’s their blurb:

Perimeter Scholars International (PSI) is now accepting applications for Perimeter Institute for Theoretical Physics’ unique 10-month Master’s program.

Features of the program include:

  • All student costs (tuition and living) are covered, removing financial and/or geographical barriers to entry
  • Students learn from world-leading theoretical physicists – resident Perimeter researchers and visiting scientists – within the inspiring environment of Perimeter Institute
  • Collaboration is valued over competition; deep understanding and creativity are valued over rote learning and examination
  • PSI recruits worldwide: 85 percent of students come from outside of Canada
  • PSI takes calculated risks, seeking extraordinary talent who may have non-traditional academic backgrounds but have demonstrated exceptional scientific aptitude

Apply online at http://perimeterinstitute.ca/apply.

Applications are due by February 1, 2017.

“Maybe” Isn’t News

It’s been published several places, but you’ve probably seen this headline:

[Image: news headline]

If you’ve been following me for a while, you know where this is going:

No, these physicists haven’t actually shown that the Universe isn’t expanding at an accelerated rate.

What they did show is that the original type of data used to discover the universe’s acceleration back in the ’90s, measurements of supernovae, doesn’t live up to the rigorous standards that we physicists use to evaluate discoveries. We typically only call something a discovery if the evidence is good enough that, in a world where the discovery wasn’t actually true, we’d only have a one in 3.5 million chance of getting the same evidence (“five sigma” evidence). In their paper, Nielsen, Guffanti, and Sarkar argue that looking at a bigger collection of supernovae leads to a hazier picture: the chance that we could get the same evidence in a universe that isn’t accelerating is closer to one in a thousand, giving “three sigma” evidence.

This might sound like statistical quibbling: one in a thousand is still pretty unlikely, after all. But a one in a thousand chance still happens once in a thousand times, and there’s a long history of three sigma evidence turning out to just be random noise. If the discovery of the accelerating universe was new, this would be an important objection, a reason to hold back and wait for more data before announcing a discovery.

The trouble is, the discovery isn’t new. In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence.

So the objection, that one source of evidence isn’t as strong as people thought, doesn’t kill cosmic acceleration. What it is is a “maybe”, showing that there is at least room in some of the data for a non-accelerating universe.

People publish “maybes” all the time, nothing bad about that. There’s a real debate to be had about how strong the evidence is, and how much it really establishes. (And there are already voices on the other side of that debate.)

But a “maybe” isn’t news. It just isn’t.

Science journalists (and university press offices) have a habit of trying to turn “maybes” into stories. I’ve lost track of the times I’ve seen ideas that were proposed a long time ago (technicolor, MOND, SUSY) get new headlines not for new evidence or new ideas, but just because they haven’t been ruled out yet. “SUSY hasn’t been ruled out yet” is an opinion piece, perhaps a worthwhile one, but it’s no news article.

The thing is, I can understand why journalists do this. So much of science is building on these kinds of “maybes”, working towards the tipping point where a “maybe” becomes a “yes” (or a “no”). And journalists (and university press offices, and to some extent the scientists themselves) can’t just take time off and wait for something legitimately newsworthy. They’ve got pages to fill and careers to advance, they need to say something.

I post once a week. As a consequence, a meaningful fraction of my posts are garbage. I’m sure that if I posted every day, most of my posts would be garbage.

Many science news sites post multiple times a day. They’ve got multiple writers, sure, and wider coverage…but they still don’t have the luxury of skipping a “maybe” when someone hands it to them.

I don’t know if there’s a way out of this. Maybe we need a new model for science journalism, something that doesn’t try to ape the pace of the rest of the news cycle. For the moment, though, it’s publish or perish, and that means lots and lots of “maybes”.

EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions.

EDIT: The paper’s authors respond here.