Tag Archives: DoingScience

Grant Roulette

Sometimes, it feels like applying for funding in science is a form of high-stakes gambling. You put in weeks of work assembling a grant application, making sure that it’s exciting and relevant and contains all the obnoxious buzzwords you’re supposed to use…and in the end, it gets approved or rejected for reasons that seem entirely out of your control.

What if, instead, you were actually gambling?

Put all my money on post-Newtonian corrections…

That’s the philosophy behind a 2016 proposal by Ferric Fang and Arturo Casadevall, recently summarized in an article on Vox by Kelsey Piper. The goal is to cut down on the time scientists waste applying for money from various government organizations (for them, the US National Institutes of Health, or NIH) by making part of the process random. Applications would be reviewed to make sure they met a minimum standard, but past that point every grant would have an equal chance of getting funded. That way scientists wouldn’t spend so much time perfecting grant applications, and could focus on the actual science.
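To make the mechanism concrete, here is a minimal sketch of that kind of modified lottery. Everything here (the field names, the threshold, the costs) is a made-up illustration, not anything from the actual Fang/Casadevall proposal: screen out applications below a minimum standard, then fund a random subset until the budget runs out.

```python
import random

def lottery_fund(applications, budget, min_score=3.0, seed=None):
    """Hypothetical funding lottery: applications scoring below
    min_score are screened out; the rest are shuffled and funded
    in random order until the budget is exhausted."""
    rng = random.Random(seed)
    eligible = [a for a in applications if a["score"] >= min_score]
    rng.shuffle(eligible)  # past the bar, every grant has an equal chance
    funded, spent = [], 0
    for app in eligible:
        if spent + app["cost"] <= budget:
            funded.append(app)
            spent += app["cost"]
    return funded

# Illustrative applications: scores from an initial review, costs in k$.
apps = [
    {"id": "A", "score": 4.1, "cost": 50},
    {"id": "B", "score": 2.0, "cost": 40},  # below the bar, never funded
    {"id": "C", "score": 3.5, "cost": 60},
    {"id": "D", "score": 3.9, "cost": 50},
]
print([a["id"] for a in lottery_fund(apps, budget=100, seed=1)])
```

The point of the design is that the score only acts as a gate; among eligible applications, rank plays no role at all, which is exactly what removes the incentive to endlessly polish.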

It’s an idea that seems, on its face, a bit too cute. Yes, grant applications are exhausting, but surely you still want some way to prioritize better ideas over worse ones? For all its flaws, one would hope the grant review process at least does that.

Well, maybe not. The Vox piece argues that, at least in medicine, grants are almost random already. Each grant is usually reviewed by multiple experts. Several studies cited in the piece looked at the variability between these experts: do they usually agree, or disagree? Measuring this in a variety of ways, they came to the same conclusion: there is almost no consistency among ratings by different experts. In effect, the NIH appears to already be using a lottery, one in which grants are randomly accepted or rejected depending on who reviews them.

What encourages me about these studies is that there really is a concrete question to ask. You could argue that physics shouldn’t suffer from the same problems as medicine, that grant review is really doing good work in our field. If you want to argue that, you can test it! Look at old reviews by different people, or get researchers to do “mock reviews”, and test statistical measures like inter-rater reliability. If there really is no consistency between reviews then we have a real problem in need of fixing.
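One standard statistic for that kind of test is Cohen's kappa, which measures how much two raters agree beyond what chance alone would produce (kappa near 1 means strong agreement, near 0 means essentially random). The reviewer scores below are invented for illustration; this is a bare-bones sketch, not the specific measure used in the studies Vox cites.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical scores:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Fraction of items the raters agree on.
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    p_exp = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Two hypothetical reviewers scoring ten grants: "fund" (1) or "reject" (0).
reviewer_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
reviewer_2 = [0, 1, 1, 0, 0, 1, 1, 0, 0, 1]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # -0.2: no real agreement
```

A kappa hovering around zero across many reviewer pairs is what "the review process is effectively a lottery" would look like in the data.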

I genuinely don’t know what to expect from that kind of study in my field. But the way people talk about grants makes me suspicious. Everyone seems to feel like grant agencies are biased against their sub-field. Grant-writing advice is full of weird circumstantial tips. (“I heard so-and-so is reviewing this year, so don’t mention QCD!”) It could all be true…but it’s also the kind of superstition people come up with when they look for patterns in a random process. If all the grant-writing advice in the world boils down to “bet on red”, we might as well admit which game we’re playing.


What Science Would You Do If You Had the Time?

I know a lot of people who worry about the state of academia. They worry that the competition for grants and jobs has twisted scientists’ priorities, that the sort of dedicated research of the past, sitting down and thinking about a topic until you really understand it, just isn’t possible anymore. The timeline varies: there are people who think the last really important development was the Standard Model, or the top quark, or AdS/CFT. Even more optimistic people, who think physics is still just as great as it ever was, often complain that they don’t have enough time.

Sometimes I wonder what physics would be like if we did have the time. If we didn’t have to worry about careers and funding, what would we do? I can speculate, comparing to different communities, but here I’m interested in something more concrete: what, specifically, could we accomplish? I often hear people complain that the incentives of academia discourage deep work, but I don’t often hear examples of the kind of deep work that’s being discouraged.

So I’m going to try an experiment here. I know I have a decent number of readers who are scientists of one field or another. Imagine you didn’t have to worry about funding any more. You’ve got a permanent position, and what’s more, your favorite collaborators do too. You don’t have to care about whether your work is popular, whether it appeals to the university or the funding agencies or any of that. What would you work on? What projects would you personally do, that you don’t have the time for in the current system? What worthwhile ideas has modern academia left out?

Interdisciplinarity Is Good for the Soul

Interdisciplinary research is trendy these days. Grant agencies love it, for one. But talking to people in other fields isn’t just promoted by the authorities: like eating your vegetables, it’s good for you too.

If you talk only to people from your own field, you can lose track of what matters in the wider world. There’s a feedback effect where everyone in a field works on what everyone else in the field finds interesting, and the field spirals inward. “Interesting” starts meaning what everyone else is working on, without fulfilling any other criteria. Interdisciplinary contacts hold that back: not only can they call bullshit when you’re deep in your field’s arcane weirdness, they can also point out things that are more interesting than you expected, ideas that your field has seen so often they look boring but that are actually more surprising or useful than you realize.

Interdisciplinary research is good for self-esteem, too. As a young researcher, you can easily spend all your time talking to people who know more about your field than you do. Branching out reminds you of how much you’ve learned: all that specialized knowledge may be entry-level in your field, but it still puts you ahead of the rest of the world. Even as a grad student, you can be someone else’s guest expert if the right topic comes up.

Pan Narrans Scientificus

As scientists, we want to describe the world as objectively as possible. We try to focus on what we can establish conclusively, to leave out excessive speculation and stick to cold, hard facts.

Then we have to write application letters.

Stick to the raw, unembellished facts, and an application letter would just be a list: these papers in these journals, these talks and awards. Though we may sometimes wish applications worked that way, we don’t live in that kind of world. To apply for a job or a grant, we can’t just stick to the most easily measured facts. We have to tell a story.

The author Terry Pratchett called humans Pan Narrans, the Storytelling Ape. Stories aren’t just for fun, they’re how we see the world, how we organize our perceptions and actions. Without a story, the world doesn’t make sense. And that applies even to scientists.

Applications work best when they tell a story: how did you get here, and where are you going? Scientific papers, similarly, require some sort of narrative: what did you do, and why did you do it? When teaching or writing about science, we almost never just present the facts. We try to fit it into a story, one that presents the facts but also makes sense, in that deliciously human way. A story, more than mere facts, lets us project to the future, anticipating what you’ll do with that grant money or how others will take your research in new directions.

It’s important to remember, though, that stories aren’t actually facts. You can’t get too attached to one story, you have to be willing to shift as new facts come in. Those facts can be scientific measurements, but they can also be steps in your career. You aren’t going to tell the same story when applying to grad school as when you’re trying for tenure, and that’s not just because you’ll have more to tell. The facts of your life will be organized in new ways, rearranging in importance as the story shifts.

Keep your stories in mind as you write or do science. Think about your narrative, the story you’re using to understand the world. Think about what it predicts, how the next step in the story should go. And be ready to start a new story when you need to.

My Other Brain (And My Other Other Brain)

What does a theoretical physicist do all day? We sit and think.

Most of us can’t do all that thinking in our heads, though. Maybe Stephen Hawking could, but the rest of us need to visualize what we’re thinking. Our memories, too, are all too finite, prone to forget what we’re doing midway through a calculation.

So rather than just use our imagination and memory, we use another imagination, another memory: a piece of paper. Writing is the simplest “other brain” we have access to, but even by itself it’s a big improvement, adding weeks of memory and the ability to “see” long calculations at work.

But even augmented by writing, our brains are limited. We can only calculate so fast. What’s more, we get bored: doing the same thing mechanically over and over is not something our brains like to do.

Luckily, in the modern era we have access to other brains: computers.

As I write, the “other brain” sitting on my desk works out a long calculation. Using programs like Mathematica or Maple, or more serious programming languages, I can tell my “other brain” to do something and it will do it, quickly and without getting bored.

My “other brain” is limited too. It has only so much memory, only so much speed, it can only do so many calculations at once. While it’s thinking, though, I can find yet another brain to think at the same time. Sometimes that’s just my desktop, sitting back in my office in Denmark. Sometimes I have access to clusters, blobs of synchronized brains to do my bidding.

While I’m writing this, my “brains” are doing five different calculations (not counting any my “real brain” might be doing). I’m sitting and thinking, as a theoretical physicist should.

Amplitudes in the LHC Era at GGI

I’m at the Galileo Galilei Institute in Florence this week, for a program on Amplitudes in the LHC Era.


I didn’t notice this ceiling decoration last time I was here. These guys really love their Galileo stuff.

I’ll be here for three weeks of the full six-week program, hopefully plenty of time for some solid collaboration. This week was the “conference part”, with a flurry of talks over three days.

I missed the first day, which focused on the “actually useful” side of scattering amplitudes, practical techniques that can be applied to real Standard Model calculations. Luckily the slides are online, and at least some of the speakers are still around to answer questions. I’m particularly curious about Daniel Hulme’s talk, about an approximation strategy I hadn’t heard of before.

The topics of the next two days were more familiar, but the talks still gave me a better appreciation for the big picture behind them. From Johannes Henn’s thoughts about isolating a “conformal part” of general scattering amplitudes to Enrico Herrmann’s roadmap for finding an amplituhedron for supergravity, people seem to be aiming for bigger goals than just the next technical hurdle. It will be nice to settle in over the next couple weeks and get a feeling for what folks are working on next.

A Micrographia of Beastly Feynman Diagrams

Earlier this year, I had a paper about the weird multi-dimensional curves you get when you try to compute trickier and trickier Feynman diagrams. These curves were “Calabi-Yau”, a type of curve string theorists have studied as a way to curl up extra dimensions to preserve something called supersymmetry. At the time, string theorists asked me why Calabi-Yau curves showed up in these Feynman diagrams. Do they also have something to do with supersymmetry?

I still don’t know the general answer. I don’t know if all Feynman diagrams have Calabi-Yau curves hidden in them, or if only some do. But for a specific class of diagrams, I now know the reason. In this week’s paper, with Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, we prove it.

We just needed to look at some more exotic beasts to figure it out.


Like this guy!

Meet the tardigrades. In biology, they’re incredibly tenacious microscopic animals, able to withstand the most extreme of temperatures and the radiation of outer space. In physics, we’re using their name for a class of Feynman diagrams.


A clear resemblance!

There is a long history of physicists using whimsical animal names for Feynman diagrams, from the penguin to the seagull (no relation). We chose to stick with microscopic organisms: in addition to the tardigrades, we have paramecia and amoebas, even a rogue coccolithophore.

The diagrams we look at have one thing in common, which is key to our proof: the number of lines on the inside of the diagram (“propagators”, which represent “virtual particles”) is related to the number of “loops” in the diagram, as well as the dimension. When these three numbers are related in the right way, it becomes relatively simple to show that any curves we find when computing the Feynman diagram have to be Calabi-Yau.

This includes the most well-known case of Calabi-Yaus showing up in Feynman diagrams, in so-called “banana” or “sunrise” graphs. It’s closely related to some of the cases examined by mathematicians, and our argument ended up pretty close to one made back in 2009 by the mathematician Francis Brown for a different class of diagrams. Oddly enough, neither argument works for the “traintrack” diagrams from our last paper. The tardigrades, paramecia, and amoebas are “more beastly” than those traintracks: their Calabi-Yau curves have more dimensions. In fact, we can show they have the most dimensions possible at each loop, provided all of our particles are massless. In some sense, tardigrades are “as beastly as you can get”.

We still don’t know whether all Feynman diagrams have Calabi-Yau curves, or just these. We’re not even sure how much it matters: it could be that the Calabi-Yau property is a red herring here, noticed because it’s interesting to string theorists but not so informative for us. We don’t understand Calabi-Yaus all that well yet ourselves, so we’ve been looking around at textbooks to try to figure out what people know. One of those textbooks was our inspiration for the “bestiary” in our title, an author whose whimsy we heartily approve of.

Like the classical bestiary, we hope that ours conveys a wholesome moral. There are much stranger beasts in the world of Feynman diagrams than anyone suspected.