Tag Archives: science

Digging up Variations

The best parts of physics research are when I get a chance to push out into the unknown, doing calculations no one has done before. Sometimes, though, research is more…archeological.

[Image: archaeological excavation at the Loropéni ruins]

Pictured: not what I signed up for

Recently, I’ve been digging through a tangle of papers, each of which calculates roughly the same thing in a slightly different way. Like any good archeologist, I need to figure out not just what the authors of these papers were doing, but also why.

(As a physicist, why do I care about “why”? In this case, it’s because I want to know which of the authors’ choices are worth building on. If I can figure out why they made the choices they did, I can decide whether I share their motivations, and thus which aspects of their calculations are useful for mine.)

My first guess at “why” was a deeply cynical one. Why would someone publish slight variations on an old calculation? To get more publications!

This is a real problem in science. In certain countries in particular, promotions and tenure are based not on an honest assessment of someone’s work, but on quick and dirty metrics like the number of papers they’ve published. This motivates scientists to do the smallest amount possible in order to get a paper out.

That wasn’t what was happening in these papers, though. None of the authors lived in those kinds of countries, and most were pretty well-established people: not the sort who worry about padding their publication count.

So I put aside my cynical first guess, and actually looked at the papers. Doing that, I found a more optimistic explanation.

These authors were in the process of building research programs. Each had their own long-term goal, a set of concepts and methods they were building towards. And each stopped along the way, to do another variation on this well-trod calculation. They weren’t doing this just because they needed a paper, or just because they could. They were trying to sift out insights, to debug their nascent research program in a well-understood case.

Thinking about it this way helped untwist the tangle of papers. The confusion of different choices suddenly made sense, as the result of different programs with different goals. And in turn, understanding which goals contributed to which papers helped me sort out which goals I shared, and which ideas would turn out to be helpful.

Would it have been less confusing if some of these people had sat on their calculations, and not published? Maybe at first. But in the end, the variations help, giving me a clearer understanding of the whole.

“Maybe” Isn’t News

It’s been published in several places, but you’ve probably seen this headline:

[Image: news headline about the universe’s accelerated expansion]

If you’ve been following me for a while, you know where this is going:

No, these physicists haven’t actually shown that the Universe isn’t expanding at an accelerated rate.

What they did show is that the original type of data used to discover that the universe was accelerating back in the ’90s, measurements of supernovae, doesn’t live up to the rigorous standards that we physicists use to evaluate discoveries. We typically only call something a discovery if the evidence is good enough that, in a world where the discovery wasn’t actually true, we’d only have a one in 3.5 million chance of getting the same evidence (“five sigma” evidence). In their paper, Nielsen, Guffanti, and Sarkar argue that looking at a bigger collection of supernovae leads to a hazier picture: the chance that we could get the same evidence in a universe that isn’t accelerating is closer to one in a thousand, giving “three sigma” evidence.
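
For the curious, here’s roughly where those odds come from. This is just my own back-of-the-envelope sketch of the usual one-sided conversion between “sigmas” and tail probabilities (the exact numbers depend a bit on conventions), not anything from the paper itself:

```python
# Sketch only: converting "sigma" significance levels into tail probabilities,
# using the one-sided tail of a standard normal distribution.
from scipy.stats import norm

for sigma in (3, 5):
    p = norm.sf(sigma)  # chance of a fluctuation at least this far out
    print(f"{sigma} sigma: p = {p:.2e}, roughly 1 in {1 / p:,.0f}")

# Approximate output:
# 3 sigma: p = 1.35e-03, roughly 1 in 741
# 5 sigma: p = 2.87e-07, roughly 1 in 3,488,556
```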

This might sound like statistical quibbling: one in a thousand is still pretty unlikely, after all. But a one in a thousand chance still happens once in a thousand times, and there’s a long history of three sigma evidence turning out to just be random noise. If the discovery of the accelerating universe was new, this would be an important objection, a reason to hold back and wait for more data before announcing a discovery.

The trouble is, the discovery isn’t new. In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence.

So the objection, that one source of evidence isn’t as strong as people thought, doesn’t kill cosmic acceleration. What it is is a “maybe”, showing that there is at least room in some of the data for a non-accelerating universe.

People publish “maybes” all the time, and there’s nothing wrong with that. There’s a real debate to be had about how strong the evidence is, and how much it really establishes. (And there are already voices on the other side of that debate.)

But a “maybe” isn’t news. It just isn’t.

Science journalists (and university press offices) have a habit of trying to turn “maybes” into stories. I’ve lost track of the times I’ve seen ideas that were proposed a long time ago (technicolor, MOND, SUSY) get new headlines not for new evidence or new ideas, but just because they haven’t been ruled out yet. “SUSY hasn’t been ruled out yet” is an opinion piece, perhaps a worthwhile one, but it’s no news article.

The thing is, I can understand why journalists do this. So much of science is building on these kinds of “maybes”, working towards the tipping point where a “maybe” becomes a “yes” (or a “no”). And journalists (and university press offices, and to some extent the scientists themselves) can’t just take time off and wait for something legitimately newsworthy. They’ve got pages to fill and careers to advance; they need to say something.

I post once a week. As a consequence, a meaningful fraction of my posts are garbage. I’m sure that if I posted every day, most of my posts would be garbage.

Many science news sites post multiple times a day. They’ve got multiple writers, sure, and wider coverage…but they still don’t have the luxury of skipping a “maybe” when someone hands it to them.

I don’t know if there’s a way out of this. Maybe we need a new model for science journalism, something that doesn’t try to ape the pace of the rest of the news cycle. For the moment, though, it’s publish or perish, and that means lots and lots of “maybes”.

EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions.

EDIT: The paper’s authors respond here.

I Don’t Get Crackpots

[Note: not an April fool’s post. Now I’m wishing I wrote one though.]

After the MHV@30 conference, I spent a few days visiting my sister. I hadn’t seen her in a while, and she noticed something new about me.

“You’re not sure about anything. It’s always ‘I get the impression’ or ‘I believe so’ or ‘that seems good’.”

On reflection, she’s right.

It’s a habit I’ve picked up from spending time around scientists. When you’re surrounded by people who are likely to know more than you do about something, it’s usually good to qualify your statements. A little intellectual humility keeps simple corrections from growing into pointless arguments, and makes it easier to learn from your mistakes.

With that kind of mindset, though, I really really don’t get crackpots.

For example, why do they always wear funnels on their heads?

The thing about genuine crackpots (as opposed to just scientists with weird ideas) is that they tend to have almost none of the relevant background for a given field, but nevertheless have extremely strong opinions about it. That basic first step, of assuming that there are people who probably know a lot more about whatever you’re talking about? Typically, they don’t bother with that. The qualifiers, the “typically” and “as far as I know” just don’t show up. And I have a lot of trouble understanding how a person can work that way.

Is some of it the Dunning-Kruger effect? Sure. If you don’t know much about something, you don’t know the limits of your own knowledge, so you think you know more than you really do. But I don’t think it’s just that…there’s a baseline level of doubt, of humility in general, that just isn’t there for most crackpots.

I wonder if some fraction of crackpots are genuinely mentally ill, but if so I’m not sure what the illness would be. Mania is an ok fit some of the time, and the word salad and “everyone but me is crazy” attitude almost seem schizophrenic, but I doubt either is really what’s going on in most cases.

All of this adds up to me just being completely unable to relate to people who display a sufficient level of crackpottery.

The thing is, there are crackpots out there who I kind of wish I could talk to, because if I could maybe I could help them. There are crackpots who seem genuinely willing to be corrected, to be told what they’re doing wrong. But that core of implicit arrogance, the central assumption that it’s possible to make breakthroughs in a field while knowing almost nothing about it, that’s still there, and it makes it impossible for me to deal with them.

I kind of wish there was a website I could link, dedicated to walking crackpots through their mistakes. There used to be something like that for supernatural crackpots, in the form of the James Randi Educational Foundation’s Million Dollar Prize, complete with forums where (basically) helpful people would patiently walk applicants through how to set up a test of their claims. There’s never been anything like that for science, as far as I’m aware, and it seems like it would take a lot more work. Still, it would be nice if there were people out there patient enough to do it.

Science Never Forgets

I’ll just be doing a short post this week; I’ve been busy at a workshop on Flux Tubes here at Perimeter.

If you’ve ever heard someone tell the history of string theory, you’ve probably heard that it was first proposed not as a quantum theory of gravity, but as a way to describe the strong nuclear force. Colliders of the time had discovered particles, called mesons, that seemed to have a key role in the strong nuclear force that held protons and neutrons together. These mesons had an unusual property: the faster they spun, the higher their mass, following a very simple and regular pattern known as a Regge trajectory. Researchers found that they could predict this kind of behavior if, rather than particles, these mesons were short lengths of “string”, and with this discovery they invented string theory.
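
To make that pattern concrete: plot a meson’s spin against its mass squared and you get a straight line. Schematically (the slope value here is approximate, quoted from memory):

```latex
% Regge trajectory: spin J grows linearly with mass squared M^2.
% The slope alpha' is roughly 0.9 GeV^{-2} for the light mesons.
J \approx \alpha(0) + \alpha' M^2
```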

As it turned out, these early researchers were wrong. Mesons are not lengths of string; rather, they are pairs of quarks. The discovery of quarks explained how the strong force acted on protons and neutrons, each made of three quarks, and it also explained why mesons acted a bit like strings: in each meson, the two quarks are linked by a flux tube, a roughly cylindrical region filled with the gluons that carry the strong nuclear force. So rather than strings, mesons turned out to be more like bolas.

Leonin sold separately.

If you’ve heard this story before, you probably think it’s ancient history. We know about quarks and gluons now, and string theory has moved on to bigger and better things. You might be surprised to hear that at this week’s workshop, several presenters have been talking about modeling flux tubes between quarks in terms of string theory!

The thing is, science never forgets a good idea. String theory was superseded by quarks in describing the strong force, but it was only proposed in the first place because it matched the data fairly well. Now, with string theory-inspired techniques, people are calculating the first corrections to the string-like behavior of these flux tubes, comparing them with simulations of quarks and gluons, and finding surprisingly good agreement!
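
To give a flavor of what “first corrections” means here: if I’m remembering the standard effective-string result correctly, the energy of a long flux tube is linear in its length plus a universal 1/R piece (the Lüscher term), and the corrections being computed and compared to simulations sit beyond that in the expansion:

```latex
% Energy of a long flux tube of length R with string tension sigma,
% in D spacetime dimensions. The 1/R piece is the Luscher term;
% the corrections under study start at higher orders in 1/R.
E(R) \approx \sigma R - \frac{\pi (D-2)}{24 R} + \cdots
```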

Science isn’t a linear story, where the past falls away to the shiny new theories of the future. It’s a marketplace. Some ideas are traded more widely, some less…but if a product works, even only sometimes, chances are someone out there will have a reason to buy it.

Who Plagiarizes an Acknowledgements Section?

I’ve got plagiarists on the brain.

Maybe it was running into this interesting discussion about a plagiarized application for the National Science Foundation’s prestigious Graduate Research Fellowship Program. Maybe it’s due to the talk Paul Ginsparg, founder of arXiv, gave this week about, among other things, detecting plagiarism.

Using arXiv’s repository of every paper someone in physics thought was worth posting, Ginsparg has been using statistical techniques to sift out cases of plagiarism. Probably the funniest cases involved people copying a chunk of their thesis acknowledgements section, as excerpted here. Compare:

“I cannot describe how indebted I am to my wonderful girlfriend, Amanda, whose love and encouragement will always motivate me to achieve all that I can. I could not have written this thesis without her support; in particular, my peculiar working hours and erratic behaviour towards the end could not have been easy to deal with!”

“I cannot describe how indebted I am to my wonderful wife, Renata, whose love and encouragement will always motivate me to achieve all that I can. I could not have written this thesis without her support; in particular, my peculiar working hours and erratic behaviour towards the end could not have been easy to deal with!”
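
Seeing those passages side by side, it’s not hard to imagine how this gets caught automatically. As a toy illustration (emphatically not Ginsparg’s actual technique, which is more sophisticated and whose details I don’t know, just the simplest thing that would work here): count how many word-for-word chunks two texts share.

```python
# Toy plagiarism check: count shared word 7-grams between two texts.
# Just an illustration, not the method actually used on arXiv.
def ngrams(text, n=7):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=7):
    """Fraction of a's word n-grams that also appear verbatim in b."""
    grams_a = ngrams(a, n)
    return len(grams_a & ngrams(b, n)) / max(len(grams_a), 1)

acknowledgement_1 = "I cannot describe how indebted I am to my wonderful girlfriend, Amanda, ..."
acknowledgement_2 = "I cannot describe how indebted I am to my wonderful wife, Renata, ..."
print(overlap(acknowledgement_1, acknowledgement_2))  # anything well above zero is suspicious
```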

Why would someone do this? Copying the scientific part of a thesis makes sense, in a twisted way: science is hard! But why would someone copy the fluff at the end, the easy part that’s supposed to be a genuine take on your emotions?

The thing is, the acknowledgements section of a thesis isn’t exactly genuine. It’s very formal: a required section of the thesis, with tacit expectations about what’s appropriate to include and what isn’t. It’s also the sort of thing you only write once in your life: while published papers also have acknowledgements sections, they’re typically much shorter, and have different conventions.

If you ever were forced to write thank-you notes as a kid, you know where I’m going with this.

It’s not that you don’t feel grateful, you do! But when you feel grateful, you express it by saying “thank you” and moving on. Writing a note about it isn’t very intuitive; it’s not a way you’re used to expressing gratitude, so the whole experience feels like you’re just following a template.

Literally in some cases.

That sort of situation, where it doesn’t matter how strongly you feel something, only whether you express it in the right way, is a breeding ground for plagiarism. Aunt Mildred isn’t going to care what you write in your thank-you note, and Amanda/Renata isn’t going to be moved by your acknowledgements section. It’s so easy to decide, in that kind of situation, that it’s better to just grab whatever appropriate text you can than to teach yourself a new style of writing.

In general, plagiarism happens because there’s a disconnect between incentives and what they’re meant to be for. In a world where very few beginning graduate students actually have a solid research plan, the NSF’s fellowship application feels like a demand for creative lying, not an honest way to judge scientific potential. In countries eager for highly-cited faculty but low on preexisting experts able to judge scientific merit, tenure becomes easier to get by faking a series of papers than by doing the actual work.

If we want to get rid of plagiarism, we need to make sure our incentives match our intent. We need a system in which people succeed when they do real work, get fellowships when they honestly have talent, and where we care about whether someone was grateful, not how they express it. If we can’t do that, then there will always be people trying to sneak through the cracks.

The Cycle of Exploration

Science is often described as a journey of exploration. You might imagine scientists carefully planning an expedition, gathering their equipment, then venturing out into the wilds of Nature, traveling as far as they can before returning with tales of the wonders they discovered.

Is it capybaras? Please let it be capybaras.

This misses an important part of the story, though. In science, exploration isn’t just about discovering the true nature of Nature, as important as that is. It’s also about laying the groundwork for future exploration.

Picture our explorers, traveling out into the wilderness with no idea what’s in store. With only a rough idea of the challenges they might face, they must pack for every possibility: warm clothing for mountains, sunscreen for the desert, canoes to cross rivers, cameras in case they encounter capybaras. Since they can only carry so much, they can only travel so far before they run out of supplies.

Once they return, though, the explorers can assess what they did and didn’t need. Maybe they found a jungle, full of capybaras. The next time they travel they’ll make sure to bring canoes and cameras, but they can skip the warm coats. That frees up room for more of the supplies that are actually useful. In the end, this lets them travel farther.

Science is a lot like this. The more we know, the better questions we can ask, and the further we can explore. It’s true not just for experiments, but for theoretical work as well. Here’s a slide from a talk I’m preparing, about how this works in my sub-field of Amplitudeology.

Unfortunately not a capybara.

In theoretical physics, you often start out doing a calculation using the most general methods you have available. Once you’ve done it, you understand a bit more about your results: in particular, you can start figuring out which parts of the general method are actually unnecessary. By paring things down, you can figure out a new method, one that’s more efficient and allows for more complicated calculations. Doing those calculations then reveals new patterns, letting you propose even newer methods and do even more complicated calculations.

It’s the circle of exploration, and it really does move us all, motivating everything we do. With each discovery, we can go further and learn more than we did the last time, keeping science churning long into the future.

Science is Debugging

What do I do, when I get to work in the morning?

I debug programs.

I debug programs literally, in that most of the calculations I do are far too complicated to do by hand. I write programs to do my calculations, and invariably these programs have bugs. So, I debug.

I debug programs in a broader sense, too.

In science, a research program is a broad approach, taken by a number of people, used to make progress on some set of important scientific questions. Someone suggests a way forward (“Let’s try using an ansatz of transcendental functions!” “Let’s try to represent our calculations with a geometrical object!”) and they and others apply the new method to as many problems as they can. Eventually the program loses steam, and a new program is proposed.

The thing about these programs is, they’re pretty much never fully fleshed out at the beginning. There’s a general idea, and a good one, but it usually requires refinement. If you just follow the same steps as the first person in the program you’re bound to fail. Instead, you have to tweak the program, broadening it and adapting it to the problem you’re trying to solve.

It’s a heck of a lot like debugging a computer program, really. You start out with a hastily written script, and you try applying it as-is, hoping that it works. Often it doesn’t, and you have to go back, step by step, and figure out what’s going wrong.

So when I debug computer programs at work, I’m doing it with a broader goal. I’m running a scientific program, looking for bugs in that. If and when I find them, I can write new computer programs to figure out what’s going wrong. Then I have to debug those computer programs…

I’ll just leave this here.