# Pan Narrans Scientificus

As scientists, we want to describe the world as objectively as possible. We try to focus on what we can establish conclusively, to leave out excessive speculation and stick to cold, hard facts.

Then we have to write application letters.

Stick to the raw, unembellished facts, and an application letter would just be a list: these papers in these journals, these talks and awards. Though we may sometimes wish applications worked that way, we don’t live in that kind of world. To apply for a job or a grant, we can’t just stick to the most easily measured facts. We have to tell a story.

The author Terry Pratchett called humans Pan Narrans, the Storytelling Ape. Stories aren’t just for fun: they’re how we see the world, how we organize our perceptions and actions. Without a story, the world doesn’t make sense. And that applies even to scientists.

Applications work best when they tell a story: how did you get here, and where are you going? Scientific papers, similarly, require some sort of narrative: what did you do, and why did you do it? When teaching or writing about science, we almost never just present the facts. We try to fit them into a story, one that presents the facts but also makes sense, in that deliciously human way. A story, more than mere facts, lets us project to the future, anticipating what you’ll do with that grant money or how others will take your research in new directions.

It’s important to remember, though, that stories aren’t actually facts. You can’t get too attached to one story; you have to be willing to shift as new facts come in. Those facts can be scientific measurements, but they can also be steps in your career. You aren’t going to tell the same story when applying to grad school as when you’re trying for tenure, and that’s not just because you’ll have more to tell. The facts of your life will be organized in new ways, rearranging in importance as the story shifts.

Keep your stories in mind as you write or do science. Think about your narrative, the story you’re using to understand the world. Think about what it predicts, how the next step in the story should go. And be ready to start a new story when you need to.

# My Other Brain (And My Other Other Brain)

What does a theoretical physicist do all day? We sit and think.

Most of us can’t do all that thinking in our heads, though. Maybe Stephen Hawking could, but the rest of us need to visualize what we’re thinking. Our memories, too, are all too finite, prone to forget what we’re doing midway through a calculation.

So rather than just use our imagination and memory, we use another imagination, another memory: a piece of paper. Writing is the simplest “other brain” we have access to, but even by itself it’s a big improvement, adding weeks of memory and the ability to “see” long calculations at work.

But even augmented by writing, our brains are limited. We can only calculate so fast. What’s more, we get bored: doing the same thing mechanically over and over is not something our brains like to do.

Luckily, in the modern era we have access to other brains: computers.

As I write, the “other brain” sitting on my desk works out a long calculation. Using programs like Mathematica or Maple, or more serious programming languages, I can tell my “other brain” to do something and it will do it, quickly and without getting bored.

My “other brain” is limited too. It has only so much memory and so much speed, and it can only do so many calculations at once. While it’s thinking, though, I can find yet another brain to think at the same time. Sometimes that’s just my desktop, sitting back in my office in Denmark. Sometimes I have access to clusters, blobs of synchronized brains to do my bidding.

While I’m writing this, my “brains” are doing five different calculations (not counting any my “real brain” might be doing). I’m sitting and thinking, as a theoretical physicist should.

# When You Shouldn’t Listen to a Distinguished but Elderly Scientist

Of science fiction author Arthur C. Clarke’s sayings, the most famous is “Clarke’s third law”, that “Any sufficiently advanced technology is indistinguishable from magic.” Almost as famous, though, is his first law:

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

Recently Michael Atiyah, an extremely distinguished but also rather elderly mathematician, claimed that something was possible: specifically, he claimed it was possible that he had proved the Riemann hypothesis, one of the longest-standing and most difficult puzzles in mathematics. I won’t go into the details here, but people are, well, skeptical.

This post isn’t really about Atiyah. I’m not close enough to that situation to comment. Instead, it’s about a more general problem.

See, the public seems to mostly agree with Clarke’s law. They trust distinguished, elderly scientists, at least when they’re saying something optimistic. Other scientists know better. We know that scientists are human, that humans age…and that sometimes scientific minds don’t age gracefully.

Some of the time, that means Alzheimer’s, or another form of dementia. Other times, it’s nothing so extreme, just a mind slowing down with age, opinions calcifying and logic getting just a bit more fuzzy.

And the thing is, watching from the sidelines, you aren’t going to know the details. Other scientists in the field will, but this kind of thing is almost never discussed with the wider public. Even here, though specific physicists come to mind as I write this, I’m not going to name them. It feels rude, to point out that kind of all-too-human weakness in someone who accomplished so much. But I think it’s important for the public to keep in mind that these people exist. When an elderly Nobelist claims to have solved a problem that baffles mainstream science, the news won’t tell you they’re mentally ill. All you can do is keep your eyes open, and watch for warning signs:

Be wary of scientists who isolate themselves. Scientists who still actively collaborate and mentor almost never have this kind of problem. There’s a nasty feedback loop when those contacts start to diminish. Being regularly challenged is crucial to test scientific ideas, but it’s also important for mental health, especially in the elderly. As a scientist thinks less clearly, they won’t be able to keep up with their collaborators as much, worsening the situation.

Similarly, beware those famous enough to surround themselves with yes-men. With Nobel prizewinners in particular, many of the worst cases involve someone treated with so much reverence that they forget to question their own ideas. This is especially risky when commenting on an unfamiliar field: often, the Nobelist’s contacts in the new field have a vested interest in holding on to their big-name support, and ignoring signs of mental illness.

Finally, as always, bigger claims require better evidence. If everything someone works on is supposed to revolutionize science as we know it, then likely none of it will. The signs that indicate crackpots apply here as well: heavily invoking historical scientists, emphasis on notation over content, a lack of engagement with the existing literature. Be especially wary if the argument seems easy: deep problems are rarely so simple to solve.

Keep this in mind, and the next time a distinguished but elderly scientist states that something is possible, don’t trust them blindly. Ultimately, we’re still human beings. We don’t last forever.

# Don’t Marry Your Arbitrary

This fall, I’m TAing a course on General Relativity. I haven’t taught in a while, so it’s been a good opportunity to reconnect with how students think.

This week, one problem left several students confused. The problem involved Christoffel symbols, the bane of many a physics grad student, but the trick that they had to use was in the end quite simple. It’s an example of a broader trick, a way of thinking about problems that comes up all across physics.

To see a simplified version of the problem, imagine you start with this sum:

$g(j)=\sum_{i=0}^n ( f(i,j)-f(j,i) )$

Now, imagine you want to sum the function $g(j)$ over $j$. You can write:

$\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n ( f(i,j)-f(j,i) )$

Let’s break this up into two terms, for later convenience:

$\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n f(i,j) - \sum_{j=0}^n \sum_{i=0}^n f(j,i)$

Without telling you anything about $f(i,j)$, what do you know about this sum?

Well, one thing you know is that $i$ and $j$ are arbitrary.

$i$ and $j$ are letters you happened to use. You could have used different letters, $x$ and $y$, or $\alpha$ and $\beta$. You could even use different letters in each term, if you wanted to. In particular, you could pick just one term, say the second, and swap $i$ and $j$ there:

$\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n f(i,j) - \sum_{i=0}^n \sum_{j=0}^n f(i,j) = 0$

After the swap, both terms are the same sum of $f(i,j)$ over every pair of indices, so they cancel. And now, without knowing anything about $f(i,j)$, you know that $\sum_{j=0}^n g(j)$ is zero.
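The relabeling trick is easy to check numerically. Here’s a minimal sketch in Python, with an arbitrarily chosen $f$ (any two-argument function would do, which is the whole point):

```python
def f(i, j):
    # An arbitrary two-argument function; the result doesn't depend on this choice.
    return i**2 * j + 3 * i - j

def g(j, n):
    # g(j) = sum over i from 0 to n of ( f(i, j) - f(j, i) )
    return sum(f(i, j) - f(j, i) for i in range(n + 1))

n = 10
total = sum(g(j, n) for j in range(n + 1))
print(total)  # 0: after relabeling, the two double sums are identical and cancel
```

Swapping `f` for any other function of two integers leaves `total` at zero, since the full double sum is antisymmetric under exchanging the dummy indices.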

In physics, it’s extremely important to keep track of what could be really physical, and what is merely your arbitrary choice. In general relativity, your choice of polar versus spherical coordinates shouldn’t affect your calculation. In quantum field theory, your choice of gauge shouldn’t matter, and neither should your scheme for regularizing divergences.

Ideally, you’d do your calculation without making any of those arbitrary choices: no coordinates, no choice of gauge, no regularization scheme. In practice, sometimes you can do this, sometimes you can’t. When you can’t, you need to keep that arbitrariness in the back of your mind, and not get stuck assuming your choice was the only one. If you’re careful with arbitrariness, it can be one of the most powerful tools in physics. If you’re not, you can stare at a mess of Christoffel symbols for hours, and nobody wants that.
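As a toy illustration of that principle, here’s a sketch in Python (the vector and angles below are my own arbitrary choices): the length of a 2D vector, a physical quantity, comes out the same no matter which rotated set of axes we happen to describe it in.

```python
import math

def length(v):
    # The physical quantity: the vector's length.
    return math.sqrt(sum(x * x for x in v))

def rotate(v, theta):
    # The same vector, expressed in axes rotated by theta: an arbitrary choice.
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] + s * v[1], -s * v[0] + c * v[1])

v = (3.0, 4.0)
for theta in (0.0, 0.7, math.pi / 3):
    w = rotate(v, theta)
    print(round(length(w), 10))  # 5.0 every time: the length doesn't care about our axes
```

The components of the vector change with every rotation, but the length never does. The arbitrary choice only shows up in the intermediate numbers, never in the answer.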

# Current Themes 2018

I’m at Current Themes in High Energy Physics and Cosmology this week, the yearly conference of the Niels Bohr International Academy. (I talked about their trademark eclectic mix of topics last year.)

This year, the “current theme” was broadly gravitational (though with plenty of exceptions!).

(Photo caption: for example, almost getting kicked out of the Botanical Garden.)

There were talks on phenomena we observe gravitationally, like dark matter. There were talks on calculating amplitudes in gravity theories, both classical and quantum. There were talks about black holes, and the overall shape of the universe. Subir Sarkar talked about his suspicion that the expansion of the universe isn’t actually accelerating, and while I still think the news coverage of it was overblown I sympathize a bit more with his point. He’s got a fairly specific worry, that we’re in a region that’s moving unusually with respect to the surrounding universe, that hasn’t really been investigated in much detail before. I don’t think he’s found anything definitive yet, but it will be interesting to see what happens as more data accumulates.

Of course, current themes can’t stick to just one theme, so there were non-gravitational talks as well. Nima Arkani-Hamed’s talk covered some results he’s talked about in the past, a geometric picture for constraining various theories, but with an interesting new development: while most of the constraints he found restrict things to be positive, one type of constraint he investigated allowed for a very small negative region, around thirty orders of magnitude smaller than the positive part. The extremely small size of the negative region was the most surprising part of the story, as it’s quite hard to get that kind of extremely small scale out of the math we typically invoke in physics (a similar sense of surprise motivates the idea of “naturalness” in particle physics).

There were other interesting talks, which I might talk about later. They should have slides up online soon in case any of you want to have a look.

# Conferences Are Work! Who Knew?

I’ve been traveling for over a month now, from conference to conference, with a bit of vacation thrown in at the end.

(As such, I haven’t had time to read up on the recent announcement of the detection of neutrinos and high-energy photons from a blazar; Matt Strassler has a nice piece on it.)

One thing I didn’t expect was how exhausting going to three conferences in a row would be. I didn’t give any talks this time around, so I thought I was skipping the “work” part. But sitting in a room for talk after talk, listening and taking notes, turns out to still be work! There’s effort involved in paying attention, especially in a scientific talk where the details matter. You assess the talks in your head, turning concepts around and thinking about what you might do with them. It’s the kind of thing you don’t notice for a seminar or two, but at a conference, after a while, it really builds up. After three, let’s just say I’ve really needed this vacation. I’ll be back at work next week, and maybe I’ll have a longer blog post for you folks. Until then, I ought to get some rest!

# By Any Other Author Would Smell as Sweet

I was chatting with someone about this paper (which probably deserves a post in its own right, once I figure out an angle that isn’t just me geeking out about how much I could do with their new setup), and I referred to it as “Claude’s paper”. This got me chided a bit: the paper has five authors, experts on Feynman diagrams and elliptic integrals. It’s not just “Claude’s paper”. So why do I think of it that way?

Part of it, I think, comes from the experience of reading a paper. We want to think of a paper as a speech act: someone talking to us, explaining something, leading us through a calculation. Our brain models that as a conversation with a single person, so we naturally try to put a single face to a paper. With a collaborative paper, this is almost never how it was written: different sections are usually written by different people, who then edit each other’s work. But unless you know the collaborators well, you aren’t going to know who wrote which section, so it’s easier to just picture one author for the whole thing.

Another element comes from how I think about the field. Just as it’s easier to think of a paper as the speech of one person, it’s easier to think of new developments as continuations of a story. I at least tend to think about the field in terms of specific programs: these people worked on this, which is a continuation of that. You can follow those kinds of threads through the field, but in reality they’re tangled together: collaborations are an opportunity for two programs to meet. In other fields you might have a “first author” to default to, but in theoretical physics we normally write authors alphabetically. For “Claude’s paper”, it just feels like the sort of thing I’d expect Claude Duhr to write, like a continuation of the other things he’s known for, even if it couldn’t have existed without the other four authors.

You might worry that associating papers with people like this takes away deserved credit. I don’t think it’s quite that simple, though. In an older post I described this paper as the work of Anastasia Volovich and Mark Spradlin. On some level, that’s still how I think about it. Nevertheless, when I heard that Cristian Vergu was going to be at the Niels Bohr Institute next year, I was excited: we’re hiring one of the authors of GSVV! Even if I don’t think of him immediately when I think of the paper, I think of the paper when I think of him.

That, I think, is more important for credit. If you’re a hiring committee, you’ll start out by seeing names of applicants. It’s important, at that point, that you know what they did, that the authors of important papers stand out, that you assign credit where it’s due. It’s less necessary on the other end, when you’re reading a paper and casually classify it in your head.

Nevertheless, I should be more careful about credit. It’s important to remember that “Claude Duhr’s paper” is also “Johannes Broedel’s paper” and “Falko Dulat’s paper”, “Brenda Penante’s paper” and “Lorenzo Tancredi’s paper”. It gives me more of an appreciation of where it comes from, so I can get back to having fun applying it.