
Different Fields, Different Worlds

My grandfather is a molecular biologist. When we meet, we swap stories about the state of my field and his: different methods and focuses, but often a surprising amount of common ground.

Recently he forwarded me an article by Raymond Goldstein, a biological physicist, arguing that biologists ought to be more comfortable with physical reasoning. The article is interesting in its own right, contrasting how physicists and biologists think about the relationship between models, predictions, and experiments. But what struck me most about the article wasn’t the content, but the context.

Goldstein’s article focuses on a question that seemed to me oddly myopic: should physical models be in the Results section, or the Discussion section?

As someone who has never written a paper with either a Results section or a Discussion section, I wondered why anyone would care. In my field, paper formats are fairly flexible. We usually have an Introduction and a Conclusion, yes, but in between we use however many sections we need to explain what we need to. In contrast, biology papers seem to have a very fixed structure: after the Introduction, there’s a Results section, a Discussion section, and a Materials and Methods section at the end.

At first blush, this seemed incredibly bizarre. Why describe your results before the methods you used to get them? How do you talk about your results without discussing them, but still take a full section to do it? And why do reviewers care how you divide things up in the first place?

It made a bit more sense once I thought about how biology differs from theoretical physics. In theoretical physics, the “methods” are most of the result: unsolved problems are usually unsolved because existing methods don’t solve them, and we need to develop new methods to make progress. Our “methods”, in turn, are often the part of the paper experts are most eager to read. In biology, in contrast, the methods are much more standardized. While papers will occasionally introduce new methods, there are so many unexplored biological phenomena that most of the time researchers don’t need to invent a new method: just asking a question no-one else has asked can be enough for a discovery. In that environment, the “results” matter a lot more: they’re the part that takes the most scrutiny, that needs to stand up on its own.

I can even understand the need for a fixed structure. Biology is a much bigger field than theoretical physics. My field is small enough that we all pretty much know each other. If a paper is hard to read, we’ll probably get a chance to ask the author what they meant. Biology, in contrast, is huge. An important result could come from anywhere, and anyone. Having a standardized format makes it a lot easier to scan through an unfamiliar paper and find what you need, especially when there might be hundreds of relevant papers.

The problem with a standardized system, as always, is the existence of exceptions. A more “physics-like” biology paper is more readable with “physics-like” conventions, even if the rest of the field needs to stay “biology-like”. Because of that, I have a lot of sympathy for Goldstein’s argument, but I can’t help but feel that he should be asking for more. If creating new mathematical models and refining them with observation is at the heart of what Goldstein is doing, then maybe he shouldn’t have to use Results/Discussion/Methods in the first place. Maybe he should be allowed to write biology papers that look more like physics papers.


Adversarial Collaborations for Physics

Sometimes physics debates get ugly. For the scientists reading this, imagine your worst opponents. Think of the people who always misinterpret your work while using shoddy arguments to prop up their own, the ones with whom every question at a talk becomes a screaming match, until you just stop going to the same conferences altogether.

Now, imagine writing a paper with those people.

Adversarial collaborations, the subject of a recent contest on the blog Slate Star Codex, are a proposed method for resolving scientific debates. Two scientists on opposite sides of an argument commit to writing a paper together, describing the overall state of knowledge on the topic. For the paper to get published, both sides have to sign off on it: they both have to agree that everything in the paper is true. This prevents either side from cheating, or from coming back later with made-up objections: if a point in the paper is wrong, one side or the other is bound to catch it.

This won’t work for the most vicious debates, where one side (or both) isn’t interested in common ground. But for some ongoing debates in physics, I think this approach could actually help.

One advantage of adversarial collaborations is that they prevent accusations of bias. The debate between dark matter and MOND-like proposals is filled with such accusations: claims that one group or another is ignoring important data, being dishonest about the parameters they need to fit, or applying standards of proof they would never require of their own pet theory. An adversarial collaboration cuts these accusations off: whatever comes out of it, each side will have made sure the other didn’t bias the result.

Another advantage of adversarial collaborations is that they make it much harder for one side to move the goalposts, or to accuse the other side of doing so. From the sidelines, one thing that frustrates me watching string theorists debate whether the theory can describe de Sitter space is that they rarely articulate what it would take to decisively show that a particular model gives rise to de Sitter. Any conclusion of an adversarial collaboration between de Sitter skeptics and optimists would at least guarantee that both parties agreed on the criteria. Similarly, many debates about interpretations of quantum mechanics seem to bog down when one side claims a new experiment has closed off a loophole, only for the other to respond that it wasn’t the loophole they were actually using. That, too, could be avoided if both sides were involved in the experiment from the beginning.

It’s possible, even likely, that no-one will try adversarial collaboration for these debates. Even if they did, it’s quite possible the collaborations wouldn’t be able to agree on anything! Still, I have to hope that someone takes the plunge and tries writing a paper with their enemies. At minimum, it’ll be an interesting read!

Conferences Are Work! Who Knew?

I’ve been traveling for over a month now, from conference to conference, with a bit of vacation thrown in at the end.

(As such, I haven’t had time to read up on the recent announcement of the detection of neutrinos and high-energy photons from a blazar; Matt Strassler has a nice piece on it.)

One thing I didn’t expect was how exhausting going to three conferences in a row would be. I didn’t give any talks this time around, so I thought I was skipping the “work” part. But sitting in a room for talk after talk, listening and taking notes, turns out to still be work! There’s effort involved in paying attention, especially in a scientific talk where the details matter. You assess the talks in your head, turning concepts around and thinking about what you might do with them. It’s the kind of thing you don’t notice for a seminar or two, but at a conference, after a while, it really builds up. After three, let’s just say I’ve really needed this vacation. I’ll be back at work next week, and maybe I’ll have a longer blog post for you folks. Until then, I ought to get some rest!

The Amplitudes Long View

Occasionally, other physicists ask me what the goal of amplitudes research is. What’s it all about?

I want to give my usual answer: we’re calculating scattering amplitudes! We’re trying to compute them more efficiently, taking advantage of simplifications and using a big toolbox of different approaches, and…

Usually by this point in the conversation, it’s clear that this isn’t what they were asking.

When physicists ask me about the goal of amplitudes research, they’ve got a longer view in mind. Maybe they’ve seen a talk by Nima Arkani-Hamed, declaring that spacetime is doomed. Maybe they’ve seen papers arguing that everything we know about quantum field theory can be derived from a few simple rules. Maybe they’ve heard slogans, like “on-shell good, off-shell bad”. Maybe they’ve heard about the conjecture that N=8 supergravity is finite, or maybe they’ve just heard someone praise the field as “demoting the sacred cows like fields, Lagrangians, and gauge symmetry”.

Often, they’ve heard a little bit of all of these. Sometimes they’re excited, sometimes they’re skeptical, but either way, they’re usually more than a little confused. They’re asking how all of these statements fit into a larger story.

The glib answer is that they don’t. Amplitudes has always been a grab-bag of methods: different people with different backgrounds, united by their interest in a particular kind of calculation.

With that said, I think there is a shared philosophy, even if each of us approaches it a little differently. There is an overall principle that unites the amplituhedron and color-kinematics duality, the CHY string and bootstrap methods, BCFW and generalized unitarity.

If I had to describe that principle in one word, I’d call it minimality. Quantum field theory involves hugely complicated mathematical machinery: Lagrangians and path integrals, Feynman diagrams and gauge fixing. At the end of the day, if you want to answer a concrete question, you’re computing a few specific kinds of things: mostly, scattering amplitudes and correlation functions. Amplitudes tries to start from the other end, and ask what outputs of this process are allowed. The idea is to search for something minimal: a few principles that, when applied to a final answer in a particular form, specify it uniquely. The form in question varies: it can be a geometric picture like the amplituhedron, or a string-like worldsheet, or a constructive approach built up from three-particle amplitudes. The goal, in each case, is the same: to skip the usual machinery, and understand the allowed form for the answer.

From this principle, where do the slogans come from? How could minimality replace spacetime, or solve quantum gravity?

It can’t…if we stick to only matching quantum field theory. As long as each calculation matches one someone else could do with known theories, even if we’re more efficient, these minimal descriptions won’t really solve these kinds of big-picture mysteries.

The hope (and for the most part, it’s a long-term hope) is that we can go beyond that. By exploring minimal descriptions, we hope to find not only known theories, but unknown ones as well: theories that weren’t expected in the old understanding of quantum field theory. The amplituhedron doesn’t need spacetime, so it might lead the way to a theory that doesn’t have spacetime. If N=8 supergravity is finite, it could suggest new theories that are finite. The story repeats, with variations, whenever amplitudeologists explore the outlook of our field. If we know the minimal requirements for an amplitude, we could find amplitudes that nobody expected.

I’m not claiming we’re the only field like this: I feel like the conformal bootstrap could tell a similar story. And I’m not saying everyone thinks about our field this way: there’s a lot of deep mathematics in just calculating amplitudes, and it fascinated people long before the field caught on with the Princeton set.

But if you’re asking what the story is for amplitudes, the weird buzz you catch bits and pieces of and can’t quite put together…well, if there’s any unifying story, I think it’s this one.

Citations Are Reblogs

Last week we had a seminar from Nadav Drukker, a physicist who commemorates his papers with pottery.

At the speaker dinner we got to chatting about physics outreach, and one of my colleagues told an amusing story. He was explaining the idea of citations to someone at a party, and the other person latched on to the idea of citations as “likes” on Facebook. She was then shocked when he told her that a typical paper of his got around fifty citations.

“Only fifty likes???”

Ok, clearly the metaphor of citations as “likes” is more than a little silly. Liking a post is easy and quick, while citing a paper requires a full paper of your own. Obviously, citations are not “likes”.

No, citations are reblogs.

Citations are someone engaging with your paper, your “post” in this metaphor, and building on it, making it part of their own work. That’s much closer to a “reblog” (or in Facebook terms a “share”) than a “like”. More specifically, it’s a “reblog-with-commentary”, taking someone’s content and adding your own, in a way that acknowledges where the original idea came from. And while fifty “likes” on a post may seem low, fifty reblogs with commentary (not just “LOL SMH”, but actual discussion) is pretty reasonable.

The average person doesn’t know much about academia, but there are a lot of academia-like communities out there. People who’ve never written a paper might know what it’s like to use characters from someone else’s fanfiction, or sew a quilt based on a friend’s pattern. Small communities of creative people aren’t so different from each other, whether they’re writers or gamers or scientists. Each group has traditions of building on each other’s work, acknowledging where your inspiration came from, and using that to build standing in the community. Citations happen to be ours.

By Any Other Author Would Smell as Sweet

I was chatting with someone about this paper (which probably deserves a post in its own right, once I figure out an angle that isn’t just me geeking out about how much I could do with their new setup), and I referred to it as “Claude’s paper”. This got me chided a bit: the paper has five authors, experts on Feynman diagrams and elliptic integrals. It’s not just “Claude’s paper”. So why do I think of it that way?

Part of it, I think, comes from the experience of reading a paper. We want to think of a paper as a speech act: someone talking to us, explaining something, leading us through a calculation. Our brain models that as a conversation with a single person, so we naturally try to put a single face to a paper. With a collaborative paper, this is almost never how it was written: different sections are usually written by different people, who then edit each other’s work. But unless you know the collaborators well, you aren’t going to know who wrote which section, so it’s easier to just picture one author for the whole thing.

Another element comes from how I think about the field. Just as it’s easier to think of a paper as the speech of one person, it’s easier to think of new developments as continuations of a story. I at least tend to think about the field in terms of specific programs: these people worked on this, which is a continuation of that. You can follow those kinds of threads through the field, but in reality they’re tangled together: collaborations are an opportunity for two programs to meet. In other fields you might have a “first author” to default to, but in theoretical physics we normally write authors alphabetically. For “Claude’s paper”, it just feels like the sort of thing I’d expect Claude Duhr to write, like a continuation of the other things he’s known for, even if it couldn’t have existed without the other four authors.

You might worry that associating papers with people like this takes away deserved credit. I don’t think it’s quite that simple, though. In an older post I described this paper as the work of Anastasia Volovich and Mark Spradlin. On some level, that’s still how I think about it. Nevertheless, when I heard that Cristian Vergu was going to be at the Niels Bohr Institute next year, I was excited: we’re hiring one of the authors of GSVV! Even if I don’t think of him immediately when I think of the paper, I think of the paper when I think of him.

That, I think, is more important for credit. If you’re a hiring committee, you’ll start out by seeing names of applicants. It’s important, at that point, that you know what they did, that the authors of important papers stand out, that you assign credit where it’s due. It’s less necessary on the other end, when you’re reading a paper and casually classify it in your head.

Nevertheless, I should be more careful about credit. It’s important to remember that “Claude Duhr’s paper” is also “Johannes Broedel’s paper” and “Falko Dulat’s paper”, “Brenda Penante’s paper” and “Lorenzo Tancredi’s paper”. It gives me more of an appreciation of where it comes from, so I can get back to having fun applying it.

A Paper About Ranking Papers

If you’ve ever heard someone list problems in academia, citation-counting is usually near the top. Hiring and tenure committees want easy numbers to judge applicants with: number of papers, number of citations, or related statistics like the h-index. Unfortunately, these metrics can be gamed, leading to a host of bad practices that get blamed for pretty much everything that goes wrong in science. In physics, it’s not even clear that these statistics tell us anything: papers in our field have been including more citations over time, and for thousand-person experimental collaborations the number of citations and papers don’t really reflect any one person’s contribution.

It’s pretty easy to find people complaining about this. It’s much rarer to find a proposed solution.

That’s why I quite enjoyed Alessandro Strumia and Riccardo Torre’s paper last week, on Biblioranking fundamental physics.

Some of their suggestions are quite straightforward. With papers citing more references over time, it makes sense to divide each citation by the number of references in the citing paper: it means more to get cited by a paper with ten references than by a paper with one hundred. Similarly, you could divide credit for a paper among its authors, rather than giving each author full credit.
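To make that normalization concrete, here’s a minimal Python sketch. It’s my own illustration rather than code from the paper, and the papers and reference lists in `refs` are invented:

```python
# Toy citation data: each paper maps to the list of papers it cites.
refs = {
    "A": ["B", "C"],
    "B": [],
    "C": ["B"],
    "D": ["A", "B", "C"],
}

def normalized_citations(refs):
    """Count citations received, weighting a citation from a paper
    with n references as 1/n rather than 1."""
    score = {paper: 0.0 for paper in refs}
    for citing, cited in refs.items():
        for paper in cited:
            score[paper] += 1.0 / len(cited)
    return score

print(normalized_citations(refs))
# {'A': 0.33..., 'B': 1.83..., 'C': 0.83..., 'D': 0.0}
# B is cited three times, but each citation carries a different weight.
```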

Some are more elaborate. They suggest using a variant of Google’s PageRank algorithm to rank papers and authors. Essentially, the algorithm imagines someone wandering from paper to paper and tries to figure out which papers are more central to the network. This is apparently an old idea, but by combining it with their normalization by number of citations they eke a bit more mileage from it. (I also found their treatment a bit clearer than the older papers they cite. There are a few more elaborate setups in the literature as well, but they seem to have a lot of free parameters so Strumia and Torre’s setup looks preferable on that front.)
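Here’s what that wandering-reader picture can look like as a toy power iteration, reusing the `refs` dictionary above. The damping factor of 0.85 is the standard PageRank default, not necessarily the paper’s choice; note how splitting each paper’s rank evenly among its references builds in the same normalization as before:

```python
def pagerank(refs, damping=0.85, iterations=100):
    """Rank papers by simulating a reader who follows a random
    reference with probability `damping` and otherwise jumps to a
    random paper. Papers citing nothing count as linking to all."""
    papers = list(refs)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in papers}
        for citing, cited in refs.items():
            targets = cited if cited else papers
            for paper in targets:
                new_rank[paper] += damping * rank[citing] / len(targets)
        rank = new_rank
    return rank

print(pagerank(refs))  # "B" ends up most central in this toy network
```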

One final problem they consider is that of self-citations, and citation cliques. In principle, you could boost your citation count by citing yourself. While that’s easy to correct for, you could also be one of a small number of authors who cite each other a lot. To keep the system from being gamed in this way, they propose a notion of a “CitationCoin” that counts (normalized) citations received minus (normalized) citations given. The idea is that, just as you can’t make anyone richer just by passing money between your friends without doing anything with it, so a small community can’t earn “CitationCoins” without getting the wider field interested.
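Under the simplest reading of that idea (my own toy version, reusing the function above; Strumia and Torre’s precise definition may differ), any paper with references gives away exactly one unit in total, so the scores sum to zero and a closed clique can’t enrich itself:

```python
def citation_coins(refs):
    """Normalized citations received minus normalized citations given.
    A paper with n references gives 1/n per citation, n times over,
    so citing anything at all costs exactly one unit in total."""
    received = normalized_citations(refs)
    return {paper: received[paper] - (1.0 if refs[paper] else 0.0)
            for paper in refs}

print(citation_coins(refs))
# Sums to zero over the whole (closed) toy network: citations only
# redistribute credit, they don't create it.
```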

There are still likely problems with these ideas. Dividing each paper by its number of authors seems like overkill: a thousand-person paper is not typically going to get a thousand times as many citations. I also don’t know whether there are ways to game this system: since the metrics are based in part on citations given, not just citations received, I worry there are situations where it would be to someone’s advantage to cite others less. I think they manage to avoid this by normalizing by number of citations given, and they emphasize that PageRank itself is estimating something we directly care about: how often people read a paper. Still, it would be good to see more rigorous work probing the system for weaknesses.

In addition to the proposed metrics, Strumia and Torre’s paper is full of interesting statistics about the arXiv and InSpire databases, both using more traditional metrics and their new ones. Whether or not the methods they propose work out, the paper is definitely worth a look.