arXiv vs. snarXiv: Can You Tell the Difference?

Have you ever played arXiv vs snarXiv?

arXiv is a preprint repository: it’s where we physicists put our papers before they’re published in journals.

snarXiv is… well, sound it out.

A creation of David Simmons-Duffin, snarXiv randomly generates titles and abstracts out of trendy arXiv buzzwords. It’s designed so that the papers on it look almost plausible…until you take a closer look, anyway.

Hence the game, arXiv vs snarXiv. Given just the titles of two papers, can you figure out which one is real, and which is fake?

I played arXiv vs snarXiv for a bit today, waiting for some code to run. Out of twenty questions, I only got two wrong.

Sometimes, it was fairly clear which paper was fake because snarXiv overreached. By trying to pile on too many buzzwords, it ended up with a title that repeated itself, or didn’t quite work grammatically.

Other times, I had to use some actual physics knowledge. Usually, this meant noticing when a title tied together unrelated areas in an implausible way. When a title claims to tie obscure mathematical concepts from string theory to a concrete problem in astronomy, it’s pretty clearly snarXiv talking.

The toughest questions, including the ones I got wrong, were when snarXiv went for something subtle. For short enough titles, the telltale signs of snarXiv were suppressed. There just weren’t enough buzzwords for a mistake to show up. I’m not sure there’s a way to distinguish titles like that, even for people in the relevant sub-field.

How well do you do at arXiv vs snarXiv? Any tips?

Hexagon Functions Meet the Amplituhedron: Thinking Positive

I finished a new paper recently; it’s up on arXiv now.

This time, we’re collaborating with Jaroslav Trnka, of Amplituhedron fame, to investigate connections between the Amplituhedron and our hexagon function approach.

The Amplituhedron is a way to think about scattering amplitudes in our favorite toy model theory, N=4 super Yang-Mills. Specifically, it describes amplitudes as the “volume” of some geometric space.

Here’s something you might expect: if something is a volume, it should be positive, right? You can’t have a negative amount of space. So you’d naturally guess that these scattering amplitudes, if they’re really the “volume” of something, should be positive.

“Volume” is in quotation marks there for a reason, though, because the real story is a bit more complicated. The Amplituhedron isn’t literally the volume of some space: there are a bunch of other mathematical steps between the geometric story of the Amplituhedron on one end and the final amplitude on the other. If it were literally a volume, calculating it would be quite a bit easier: mathematicians have gotten very good at calculating volumes. But if it were literally a volume, it would also have to be positive.

What our paper demonstrates is that, in the right regions (selected by the structure of the Amplituhedron), the amplitudes we’ve calculated so far are in fact positive. That first, basic requirement for the amplitude to actually literally be a volume is satisfied.

Of course, this doesn’t prove anything. There’s still a lot of work to do to actually find the thing the amplitude is the volume of, and this isn’t even proof that such a thing exists. It’s another, small piece of evidence. But it’s a reassuring one, and it’s nice to begin to link our approach with the Amplituhedron folks.

This week was the 75th birthday of John Schwarz, one of the founders of string theory and a discoverer of N=4 super Yang-Mills. We’ve dedicated the paper to him. His influence on the field, like the amplitudes of N=4 themselves, has been consistently positive.

Wait, How Do Academics Make Money?

I’ve been working on submitting one of my papers to a journal, which reminded me of the existence of publication fees. That in turn reminded me of a conversation I saw on tumblr a while back:


“beatonna” here is Kate Beaton, of the history-themed webcomic Hark! a Vagrant. She’s about as academia-adjacent as a non-academic gets, but even she thought that the academic database JSTOR paid academics for their contributions, presumably on some kind of royalty system.

In fact, academics don’t get paid by databases, journals, or anyone else that publishes or hosts our work. In the case of journals, we’re often the ones who pay publication fees. Those who write textbooks get royalties, but that’s about it on that front.

Kate Beaton’s confusion here is part of a more general confusion: in my experience, most people don’t know how academics are paid.

The first assumption is usually that we’re paid to teach. I can’t count the number of times I’ve heard someone respond to someone studying physics or math with the question “Oh, so you’re going to teach?”

This one is at least sort of true. Most academics work at universities, and usually have teaching duties. Often, part of an academic’s salary is explicitly related to teaching.

Still, it’s a bit misleading to think of academics as paid to teach: at a big research university, teaching often doesn’t get much emphasis. The extent to which the quality of teaching determines a professor’s funding or career prospects is often quite minimal. Academics teach, but their job isn’t “teacher”.

From there, the next assumption is the one Kate Beaton made. If academics aren’t paid to teach, are they paid to write?

Academia is often described as publish-or-perish, and research doesn’t really “count” until it’s made it to a journal. It would be reasonable to assume that academics are like writers, paid when someone buys our content. As mentioned, though, that’s just not how it works: if anything, sometimes we are the ones who pay the publishers!

It’s probably more accurate (though still not the full story) to say that academics are paid to research.

Research universities expect professors not only to teach, but to do novel and interesting research. Publications are important not because we get paid to write them, but because they give universities an idea of how productive we are. Promotions and the like, at least at research universities, are mostly based on those sorts of metrics.

Professors get some of their money from their universities, for teaching and research. The rest comes from grants. Usually, these come from governments, though private donors are a longstanding and increasingly important group. In both cases, someone decides that a certain general sort of research ought to be done and solicits applications from people interested in doing it. Different people apply with specific proposals, which are assessed with a wide range of esoteric criteria (but yes, publications are important), and some people get funding. That funding includes not just equipment, but contributions to salaries as well. Academics really are, in many cases, paid by grants.

This is really pretty dramatically different from any other job. There’s no “customer” in the normal sense, and even the people in charge of paying us are more concerned that a certain sort of work be done than that they have control over it. It’s completely understandable that the public rounds that off to “teaching” or “writing”. It’s certainly more familiar.


A Response from Nielsen, Guffanti and Sarkar

I have been corresponding with Subir Sarkar, one of the authors of the paper I mentioned a few weeks ago arguing that the evidence for cosmic acceleration was much weaker than previously thought. He believes that the criticisms of Rubin and Hayden (linked to in my post) are deeply flawed. Since he and his coauthors haven’t responded publicly to Rubin and Hayden yet, they graciously let me post a summary of their objections.

Dear Matt,

This concerns the discussion on your blog of our recent paper showing that the evidence for cosmic acceleration from supernovae is only 3 sigma. Your obviously annoyed response is in fact to inflated headlines in the media about our work – our paper does just what it says on the can: “Marginal evidence for cosmic acceleration from Type Ia supernovae”. Nevertheless you make a fair assessment of the actual result in our paper and we are grateful for that.

However we feel you are not justified in going on further to state: “In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence”. If you were as expert in cosmology as you evidently are concerning amplitudes you would know that much of the reasoning you allude to is circular. There are also other instances (which we are looking into) of using statistical methods that assume the answer to shore up the ‘standard model’ of cosmology. Does it not worry you that the evidence from supernovae – which is widely believed to be compelling – turns out to be less so when examined closely? There is a danger of confirmation bias in that cosmologists making poor measurements with large systematic uncertainties nevertheless keep finding the ‘right answer’. See e.g. Croft & Dailey, who noted: “… of the 28 measurements of Omega_Lambda in our sample published since 2003, only 2 are more than 1 sigma from the WMAP results. Wider use of blind analyses in cosmology could help to avoid this”. Unfortunately the situation has not improved in subsequent years.

You are of course entitled to air your personal views on your blog. But please allow us to point out that you are being unfair to us by uncritically stating in the second part of your sentence: “EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions” in which you link to the arXiv eprint by Rubin & Hayden.

These authors make a claim similar to Riess & Scolnic, that we “assume that the mean properties of supernovae from each of the samples used to measure the expansion history are the same, even though they have been shown to be different and past analyses have accounted for these differences”. In fact we are using exactly the same dataset (called JLA) as Adam Riess and co. have done in their own analysis (Betoule et al.). They found stronger evidence for acceleration because of using a flawed statistical method (“constrained \chi^2”). The reason why we find weaker evidence is that we use the Maximum Likelihood Estimator – it is not because of making “dodgy assumptions”. We show our results in the same \Omega_\Lambda – \Omega_m plane simply for ease of comparison with the previous result – as seen in the attached plot, the contours move to the right … and now enclose the “no acceleration” line within 3 \sigma. Our analysis is not – as Brian Schmidt tweeted – “at best unorthodox” … even if this too has been uncritically propagated on social media.

In fact the result from our (frequentist) statistical procedure has been confirmed by an independent analysis using a ‘Bayesian Hierarchical Model’ (Shariff et al.). This is a more sophisticated approach because it does not adopt a Gaussian approximation as we did for the distribution of the light curve parameters (x_1 and c); however, their contours are more ragged because of numerical computation limitations.


Rubin & Hayden do not mention this paper (although bizarrely they ascribe to us the ‘Bayesian Hierarchical Model’). Nevertheless they find more-or-less the same result as us, namely 3.1 sigma evidence for acceleration, using the same dataset as we did (left panel of their Fig.2). They argue however that there are selection effects in this dataset – which have not already been corrected for by the JLA collaboration (which incidentally included Adam Riess, Saul Perlmutter and most other supernova experts in the world). To address this Rubin & Hayden introduce a redshift-dependent prior on the x_1 and c distributions. This increases the significance to 4.2 sigma (right panel of their Fig.2). If such a procedure is indeed valid then it does mark progress in the field, but that does not mean that these authors have “demonstrated errors in (our) analysis” as they state in their Abstract. Their result also begs the question of why the significance has increased so little in going from the initial 50 supernovae which yielded 3.9 sigma evidence for acceleration (Riess et al.) to 740 supernovae in JLA. Maybe this is news … at least to anyone interested in cosmology and fundamental physics!

Rubin & Hayden also make the usual criticism that we have ignored evidence from other observations e.g. of baryon acoustic oscillations and the cosmic microwave background. We are of course very aware of these observations but as we say in the paper the interpretation of such data is very model-dependent. For example dark energy has no direct influence on the cosmic microwave background. What is deduced from the data is the spatial curvature (adopting the value of the locally measured Hubble expansion rate H_0) and the fractional matter content of the universe (assuming the primordial fluctuation spectrum to be a close-to-scale-invariant power law). Dark energy is then *assumed* to make up the rest (using the sum rule: 1 = \Omega_m + \Omega_\Lambda for a spatially flat universe as suggested by the data). This need not be correct however if there are in fact other terms that should be added to this sum rule (corresponding to corrections to the Friedmann equation to account e.g. for averaging over inhomogeneities or for non-ideal gas behaviour of the matter content). It is important to emphasise that there is no convincing (i.e. >5 sigma) dynamical evidence for dark energy, e.g. the late integrated Sachs-Wolfe effect which induces subtle correlations between the CMB and large-scale structure. Rubin & Hayden even claim in their Abstract (v1) that “The combined analysis of modern cosmological experiments … indicate 75 sigma evidence for positive Omega_\Lambda” – which is surely a joke! Nevertheless this is being faithfully repeated on newsgroups, presumably by those somewhat challenged in their grasp of basic statistics.

Apologies for the long post but we would like to explain that the technical criticism of our work by Rubin & Hayden and by Riess & Scolnic is rather disingenuous and it is easy to be misled if you are not an expert. You are entitled to rail against the standards of science journalism but please do not taint us by association.

As a last comment, surely we all want to make progress in cosmology but this will be hard if cosmologists are so keen to cling on to their ‘standard model’ instead of subjecting it to critical tests (as particle physicists continually do to their Standard Model). Moreover the fundamental assumptions of the cosmological model (homogeneity, ideal fluids) have not been tested rigorously (unlike the Standard Model which has been tested at the level of quantum corrections). This is all the more important in cosmology because there is simply no physical explanation for why \Lambda should be of order H_0^2.

Best regards,


Jeppe Trøst Nielsen, Alberto Guffanti and Subir Sarkar



On an unrelated note, Perimeter’s PSI program is now accepting applications for 2017. It’s something I wish I’d known about as an undergrad: for those interested in theoretical physics, it can be an enormous jump-start to your career. Here’s their blurb:

Perimeter Scholars International (PSI) is now accepting applications for Perimeter Institute for Theoretical Physics’ unique 10-month Master’s program.

Features of the program include:

  • All student costs (tuition and living) are covered, removing financial and/or geographical barriers to entry
  • Students learn from world-leading theoretical physicists – resident Perimeter researchers and visiting scientists – within the inspiring environment of Perimeter Institute
  • Collaboration is valued over competition; deep understanding and creativity are valued over rote learning and examination
  • PSI recruits worldwide: 85 percent of students come from outside of Canada
  • PSI takes calculated risks, seeking extraordinary talent who may have non-traditional academic backgrounds but have demonstrated exceptional scientific aptitude

Apply online at

Applications are due by February 1, 2017.

What If the Field Is Doomed?

Around Halloween, I have a tradition of exploring the spooky and/or scary side of physics (sometimes rather tenuously). This time, I want to talk about something particle physicists find scary: the future of the field.

For a long time, now, our field has centered around particle colliders. Early colliders confirmed the existence of quarks and gluons, and populated the Standard Model with a wealth of particles, some expected and some not. Now, an enormous amount of effort has poured into the Large Hadron Collider, which found the Higgs…and so far, nothing else.

Plans are being discussed for an even larger collider, in Europe or China, but it’s not clear that either will be funded. Even if the case for new physics at such a collider isn’t as strong, there are properties of the Higgs that the LHC won’t be able to measure, things it’s important to check with a more powerful machine.

That’s the case we’ll have to make to the public, if we want such a collider to be built. But in addition to the scientific reasons, there are selfish reasons to hope for a new collider. Without one, it’s not clear the field can survive in its current form.

By “the field”, here, I don’t just mean those focused on making predictions for collider physics. My work isn’t plugged particularly tightly into the real world; the same is true of most string theorists. Naively, you’d think it wouldn’t matter to us if a new collider gets built.

The trouble is, physics is interconnected. We may not all make predictions about the world, but the purpose of the tools we build and concepts we explore is to eventually make contact. On grant applications, we talk about that future, one that leads not just to understanding the mathematics and models we use but to understanding reality. And for a long while, a major theme in those grant applications has been collider physics.

Different sub-fields are vulnerable to this in different ways. Surprisingly, the people who directly make predictions for the LHC might have it easiest. Many of them can pivot, and make predictions for cosmological observations and cheaper dark matter detection experiments. Quite a few are already doing so.

It’s harder for my field, for amplitudeology. We try to push the calculation techniques of theoretical physics to greater and greater precision…but without colliders, there are fewer experiments that can match that precision. Cosmological observations and dark matter detection won’t need four-loop calculations.

If there isn’t a next big collider, our field won’t dry up overnight. Our work is disconnected enough, at a far enough remove from reality, that it takes time for that sort of change to be reflected in our funding. Optimistically, this gives people enough time to change gears and alter their focus to the less collider-dependent parts of the field. Pessimistically, it means people would be working in a zombie field, one that is already dead but can’t admit it.


Well I had to use some Halloween imagery

My hope is that this won’t happen. Even if the new colliders don’t get approved and collider physics goes dormant, I’d like to think my colleagues are adaptable enough to stay useful as the world’s demands change. But I’m young in this field; I haven’t seen it face these kinds of challenges before. And so, I worry.

“Maybe” Isn’t News

It’s been published several places, but you’ve probably seen this headline:

If you’ve been following me for a while, you know where this is going:

No, these physicists haven’t actually shown that the Universe isn’t expanding at an accelerated rate.

What they did show is that the original type of data used to discover that the universe was accelerating back in the ’90s, measurements of supernovae, doesn’t live up to the rigorous standards that we physicists use to evaluate discoveries. We typically only call something a discovery if the evidence is good enough that, in a world where the discovery wasn’t actually true, we’d only have a one in 3.5 million chance of getting the same evidence (“five sigma” evidence). In their paper, Nielsen, Guffanti, and Sarkar argue that looking at a bigger collection of supernovae leads to a hazier picture: the chance that we could get the same evidence in a universe that isn’t accelerating is closer to one in a thousand, giving “three sigma” evidence.
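(For the curious, the conversion between “sigmas” and those probabilities is just the tail of a Gaussian distribution. Here’s a minimal sketch in Python, assuming the one-sided convention particle physicists typically use; the function name is mine, not from the paper:)

```python
import math

def one_sided_p(n_sigma):
    """Probability of a Gaussian fluctuation at least n_sigma above the mean
    (the one-sided tail convention used for particle physics discoveries)."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (3, 5):
    p = one_sided_p(n)
    print(f"{n} sigma: p = {p:.2e} (about 1 in {1 / p:,.0f})")
```

(Running this gives roughly one in 740 for three sigma and one in 3.5 million for five sigma, matching the numbers above.)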

This might sound like statistical quibbling: one in a thousand is still pretty unlikely, after all. But a one in a thousand chance still happens once in a thousand times, and there’s a long history of three sigma evidence turning out to just be random noise. If the discovery of the accelerating universe was new, this would be an important objection, a reason to hold back and wait for more data before announcing a discovery.

The trouble is, the discovery isn’t new. In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence.

So the objection, that one source of evidence isn’t as strong as people thought, doesn’t kill cosmic acceleration. What it is is a “maybe”, showing that there is at least room in some of the data for a non-accelerating universe.

People publish “maybes” all the time, nothing bad about that. There’s a real debate to be had about how strong the evidence is, and how much it really establishes. (And there are already voices on the other side of that debate.)

But a “maybe” isn’t news. It just isn’t.

Science journalists (and university press offices) have a habit of trying to turn “maybes” into stories. I’ve lost track of the times I’ve seen ideas that were proposed a long time ago (technicolor, MOND, SUSY) get new headlines not for new evidence or new ideas, but just because they haven’t been ruled out yet. “SUSY hasn’t been ruled out yet” is an opinion piece, perhaps a worthwhile one, but it’s no news article.

The thing is, I can understand why journalists do this. So much of science is building on these kinds of “maybes”, working towards the tipping point where a “maybe” becomes a “yes” (or a “no”). And journalists (and university press offices, and to some extent the scientists themselves) can’t just take time off and wait for something legitimately newsworthy. They’ve got pages to fill and careers to advance; they need to say something.

I post once a week. As a consequence, a meaningful fraction of my posts are garbage. I’m sure that if I posted every day, most of my posts would be garbage.

Many science news sites post multiple times a day. They’ve got multiple writers, sure, and wider coverage…but they still don’t have the luxury of skipping a “maybe” when someone hands it to them.

I don’t know if there’s a way out of this. Maybe we need a new model for science journalism, something that doesn’t try to ape the pace of the rest of the news cycle. For the moment, though, it’s publish or perish, and that means lots and lots of “maybes”.

EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions.

EDIT: The paper’s authors respond here.

Four Gravitons in China

I’m in China this week, at the School and Workshop on Amplitudes in Beijing 2016.


It’s a little chilly this time of year, so the dragons have accessorized

A few years back, I mentioned that there didn’t seem to be many amplitudeologists in Asia. That’s changed quite a lot over just the last few years. Song He and Yu-tin Huang went from postdocs in the west to faculty positions in China and Taiwan, respectively, while Bo Feng’s group in China has expanded. As a consequence, there’s now a substantial community here. This is the third “Amplitudes in Asia” conference, with past years meeting in Hong Kong and Taipei.

The “school” part of the conference was last week. I wasn’t here, but the students here seem to have enjoyed it a lot. This week is the “workshop” part, and there have been talks on a variety of parts of amplitudes. Nima showed up on Wednesday and managed to talk for his usual impressively long amount of time, finishing with a public lecture about the future of physics. The talk was ostensibly about why China should build the next big collider, but for the most part it ended up as a more general talk about exciting open questions in high energy physics. The talks were recorded, so they should be online at some point.