Category Archives: Life as a Physicist

Current Themes 2018

I’m at Current Themes in High Energy Physics and Cosmology this week, the yearly conference of the Niels Bohr International Academy. (I talked about their trademark eclectic mix of topics last year.)

This year, the “current theme” was broadly gravitational (though with plenty of exceptions!).

[Photo: for example, almost getting kicked out of the Botanical Garden]

There were talks on phenomena we observe gravitationally, like dark matter. There were talks on calculating amplitudes in gravity theories, both classical and quantum. There were talks about black holes, and the overall shape of the universe. Subir Sarkar talked about his suspicion that the expansion of the universe isn’t actually accelerating, and while I still think the news coverage of it was overblown, I sympathize a bit more with his point now. He’s got a fairly specific worry, that we’re in a region moving unusually with respect to the surrounding universe, a possibility that hasn’t really been investigated in much detail before. I don’t think he’s found anything definitive yet, but it will be interesting to see what happens as more data accumulates.

Of course, current themes can’t stick to just one theme, so there were non-gravitational talks as well. Nima Arkani-Hamed’s talk covered some results he’s talked about in the past, a geometric picture for constraining various theories, but with an interesting new development: while most of the constraints he found restrict things to be positive, one type of constraint he investigated allowed for a very small negative region, around thirty orders of magnitude smaller than the positive part. The extremely small size of the negative region was the most surprising part of the story, as it’s quite hard to get that kind of extremely small scale out of the math we typically invoke in physics (a similar sense of surprise motivates the idea of “naturalness” in particle physics).

There were other interesting talks, which I might cover in later posts. The slides should be up online soon, in case any of you want to have a look.


Conferences Are Work! Who Knew?

I’ve been traveling for over a month now, from conference to conference, with a bit of vacation thrown in at the end.

(As such, I haven’t had time to read up on the recent announcement of the detection of neutrinos and high-energy photons from a blazar; Matt Strassler has a nice piece on it.)

One thing I didn’t expect was how exhausting going to three conferences in a row would be. I didn’t give any talks this time around, so I thought I was skipping the “work” part. But sitting in a room for talk after talk, listening and taking notes, turns out to still be work! There’s effort involved in paying attention, especially in a scientific talk where the details matter. You assess the talks in your head, turning concepts around and thinking about what you might do with them. It’s the kind of thing you don’t notice for a seminar or two, but at a conference, after a while, it really builds up. After three, let’s just say I’ve really needed this vacation. I’ll be back at work next week, and maybe I’ll have a longer blog post for you folks. Until then, I ought to get some rest!

By Any Other Author Would Smell as Sweet

I was chatting with someone about this paper (which probably deserves a post in its own right, once I figure out an angle that isn’t just me geeking out about how much I could do with their new setup), and I referred to it as “Claude’s paper”. This got me chided a bit: the paper has five authors, experts on Feynman diagrams and elliptic integrals. It’s not just “Claude’s paper”. So why do I think of it that way?

Part of it, I think, comes from the experience of reading a paper. We want to think of a paper as a speech act: someone talking to us, explaining something, leading us through a calculation. Our brain models that as a conversation with a single person, so we naturally try to put a single face to a paper. With a collaborative paper, this is almost never how it was written: different sections are usually written by different people, who then edit each other’s work. But unless you know the collaborators well, you aren’t going to know who wrote which section, so it’s easier to just picture one author for the whole thing.

Another element comes from how I think about the field. Just as it’s easier to think of a paper as the speech of one person, it’s easier to think of new developments as continuations of a story. I at least tend to think about the field in terms of specific programs: these people worked on this, which is a continuation of that. You can follow those kinds of threads through the field, but in reality they’re tangled together: collaborations are an opportunity for two programs to meet. In other fields you might have a “first author” to default to, but in theoretical physics we normally write authors alphabetically. For “Claude’s paper”, it just feels like the sort of thing I’d expect Claude Duhr to write, like a continuation of the other things he’s known for, even if it couldn’t have existed without the other four authors.

You’d worry that associating papers with people like this takes away deserved credit. I don’t think it’s quite that simple, though. In an older post I described this paper as the work of Anastasia Volovich and Mark Spradlin. On some level, that’s still how I think about it. Nevertheless, when I heard that Cristian Vergu was going to be at the Niels Bohr Institute next year, I was excited: we’re hiring one of the authors of GSVV! Even if I don’t think of him immediately when I think of the paper, I think of the paper when I think of him.

That, I think, is more important for credit. If you’re a hiring committee, you’ll start out by seeing names of applicants. It’s important, at that point, that you know what they did, that the authors of important papers stand out, that you assign credit where it’s due. It’s less necessary on the other end, when you’re reading a paper and casually classify it in your head.

Nevertheless, I should be more careful about credit. It’s important to remember that “Claude Duhr’s paper” is also “Johannes Broedel’s paper” and “Falko Dulat’s paper”, “Brenda Penante’s paper” and “Lorenzo Tancredi’s paper”. Keeping that in mind gives me more of an appreciation of where the work comes from, so I can get back to having fun applying it.

A Paper About Ranking Papers

If you’ve ever heard someone list problems in academia, citation-counting is usually near the top. Hiring and tenure committees want easy numbers to judge applicants with: number of papers, number of citations, or related statistics like the h-index. Unfortunately, these metrics can be gamed, leading to a host of bad practices that get blamed for pretty much everything that goes wrong in science. In physics, it’s not even clear that these statistics tell us anything: papers in our field have been including more citations over time, and for thousand-person experimental collaborations the number of citations and papers don’t really reflect any one person’s contribution.
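
For anyone who hasn’t met these statistics: the h-index is the largest number h such that an author has h papers with at least h citations each. Here’s a minimal sketch in Python, with citation counts invented purely for illustration:

def h_index(citation_counts):
    # Sort per-paper citation counts from most-cited to least-cited, then find
    # the largest rank whose paper still has at least that many citations.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # prints 4: four papers with at least 4 citations each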

It’s pretty easy to find people complaining about this. It’s much rarer to find a proposed solution.

That’s why I quite enjoyed Alessandro Strumia and Riccardo Torre’s paper last week, on Biblioranking fundamental physics.

Some of their suggestions are quite straightforward. With the number of citations per paper increasing, it makes sense to normalize: divide the value of each citation by the number of citations the citing paper gives out, so that it means more to get cited by a paper with ten citations than by a paper with one hundred. Similarly, you could divide credit for a paper among its authors, rather than giving each author full credit.
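
To make that concrete, here’s a toy sketch of this kind of normalization in Python. The three-paper “database” and the author names are invented purely for illustration, and the weighting is my own simplified version of the idea, not the paper’s exact prescription:

papers = {
    "A": {"authors": ["Alice"], "references": ["B", "C"]},
    "B": {"authors": ["Bob", "Carol"], "references": ["C"]},
    "C": {"authors": ["Carol"], "references": []},
}

# Each citation is worth 1 divided by the number of citations the citing paper gives out.
paper_score = {name: 0.0 for name in papers}
for citing, data in papers.items():
    refs = data["references"]
    for cited in refs:
        paper_score[cited] += 1.0 / len(refs)

# Split each paper's score evenly among its authors.
author_score = {}
for name, data in papers.items():
    share = paper_score[name] / len(data["authors"])
    for author in data["authors"]:
        author_score[author] = author_score.get(author, 0.0) + share

print(paper_score)   # {'A': 0.0, 'B': 0.5, 'C': 1.5}
print(author_score)  # {'Alice': 0.0, 'Bob': 0.25, 'Carol': 1.75}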

Some are more elaborate. They suggest using a variant of Google’s PageRank algorithm to rank papers and authors. Essentially, the algorithm imagines someone wandering from paper to paper and tries to figure out which papers are more central to the network. This is apparently an old idea, but by combining it with their normalization by number of citations they eke out a bit more mileage from it. (I also found their treatment a bit clearer than the older papers they cite. There are a few more elaborate setups in the literature as well, but they seem to have a lot of free parameters, so Strumia and Torre’s setup looks preferable on that front.)
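
Here’s a rough sketch of the PageRank part, a plain power iteration over the toy citation graph from the sketch above rather than Strumia and Torre’s exact prescription. The imagined reader either follows one of the current paper’s citations or jumps to a random paper, and papers visited more often rank higher:

def pagerank(papers, damping=0.85, iterations=100):
    # Start every paper with equal rank, then repeatedly let each paper
    # pass its rank along to the papers it cites.
    names = list(papers)
    n = len(names)
    rank = {name: 1.0 / n for name in names}
    for _ in range(iterations):
        new_rank = {name: (1.0 - damping) / n for name in names}
        for citing in names:
            refs = papers[citing]["references"]
            if refs:
                for cited in refs:
                    new_rank[cited] += damping * rank[citing] / len(refs)
            else:
                # A paper with no references: spread its rank uniformly.
                for name in names:
                    new_rank[name] += damping * rank[citing] / n
        rank = new_rank
    return rank

print(pagerank(papers))  # paper "C", cited by both of the others, comes out on top

Their normalization changes how much weight each individual citation carries, but the basic picture of a wandering reader is the same.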

One final problem they consider is that of self-citations, and citation cliques. In principle, you could boost your citation count by citing yourself. While that’s easy to correct for, you could also be one of a small number of authors who cite each other a lot. To keep the system from being gamed in this way, they propose a notion of a “CitationCoin” that counts (normalized) citations received minus (normalized) citations given. The idea is that, just as passing money back and forth among your friends doesn’t make anyone richer, a small community can’t earn “CitationCoins” without getting the wider field interested.
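
A toy version of that bookkeeping, reusing the invented papers dictionary from the sketches above (and, again, my own simplified normalization, not the paper’s actual definition):

received = {name: 0.0 for name in papers}
given = {name: 0.0 for name in papers}
for citing, data in papers.items():
    refs = data["references"]
    for cited in refs:
        received[cited] += 1.0 / len(refs)  # normalized citation received
        given[citing] += 1.0 / len(refs)    # the same citation, counted as "spent" by the citer

citation_coin = {name: received[name] - given[name] for name in papers}
print(citation_coin)  # {'A': -1.0, 'B': -0.5, 'C': 1.5}

In this toy version the coins sum to zero over the whole graph, so citations passed around inside a closed clique cancel out, which is exactly the money analogy above.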

There are still likely problems with these ideas. Dividing each paper by its number of authors seems like overkill: a thousand-person paper is not typically going to get a thousand times as many citations. I also don’t know whether there are ways to game this system: since the metrics are based in part on citations given, not just citations received, I worry there are situations where it would be to someone’s advantage to cite others less. I think they manage to avoid this by normalizing by number of citations given, and they emphasize that PageRank itself is estimating something we directly care about: how often people read a paper. Still, it would be good to see more rigorous work probing the system for weaknesses.

In addition to the proposed metrics, Strumia and Torre’s paper is full of interesting statistics about the arXiv and InSpire databases, both using more traditional metrics and their new ones. Whether or not the methods they propose work out, the paper is definitely worth a look.

Why Physicists Leave Physics

It’s an open secret that many physicists end up leaving physics. How many depends on how you count things, but for a representative number, this report has 31% of US physics PhDs in the private sector after one year. I’d expect that number to grow with time post-PhD. While some of these people might still be doing physics, in certain sub-fields that isn’t really an option: it’s not like there are companies that do R&D in particle physics, astrophysics, or string theory. Instead, these physicists get hired in data science, or quantitative finance, or machine learning. Others stay in academia, but stop doing physics: either transitioning to another field, or taking teaching-focused jobs that don’t leave time for research.

There’s a standard economic narrative for why this happens. The number of students grad schools accept and graduate is much higher than the number of professor jobs. There simply isn’t room for everyone, so many people end up doing something else instead.

That narrative is probably true, if you zoom out far enough. On the ground, though, the reasons people leave academia don’t feel quite this “economic”. While they might be indirectly based on a shortage of jobs, the direct reasons matter. Physicists leave physics for a wide variety of reasons, and many of them are things the field could improve on. Others are factors that will likely be present regardless of how many students graduate, or how many jobs there are. I worry that an attempt to address physics attrition on a purely economic level would miss these kinds of details.

I thought I’d talk in this post about a few reasons why physicists leave physics. Most of this won’t be new information to anyone, but I hope some of it is at least a new perspective.

First, to get it out of the way: almost no-one starts a physics PhD with the intention of going into industry. I’ve met a grand total of one person who did, and he’s rather unusual. Almost always, leaving physics represents someone’s dreams not working out.

Sometimes, that just means realizing you aren’t suited for physics. These are people who feel like they aren’t able to keep up with the material, or people who find they aren’t as interested in it as they expected. In my experience, people realize this sort of thing pretty early. They leave in the middle of grad school, or they leave once they have their PhD. In some sense, this is the healthy sort of attrition: without the ability to perfectly predict our interests and abilities, there will always be people who start a career and then decide it’s not for them.

I want to distinguish this from a broader reason to leave, disillusionment. These are people who can do physics, and want to do physics, but encounter a system that seems bent on making them do anything but. Sometimes this means disillusionment with the field itself: phenomenologists sick of tweaking models to lie just beyond the latest experimental bounds, or theorists who had hoped to address the real world but begin to see that they can’t. This kind of motivation lay behind several great atomic physicists going into biology after the Second World War, to work on “life rather than death”. Sometimes instead it’s disillusionment with academia: people who have been bludgeoned by academic politics or bureaucracy, who despair of getting the academic system to care about real research or teaching instead of its current screwed-up priorities, or who just don’t want to face that kind of abuse again.

When those people leave, it’s at every stage in their career. I’ve seen grad students disillusioned into leaving without a PhD, and successful tenured professors who feel like the field no longer has anything to offer them. While occasionally these people just have a difference of opinion, a lot of the time they’re pointing out real problems with the system, problems that actually should be fixed.

Sometimes, life intervenes. The classic example is the two-body problem, where you and your spouse have trouble finding jobs in the same place. There aren’t all that many places in the world that hire theoretical physicists, and still fewer with jobs open. One or both partners end up needing to compromise, and that can mean switching to a career with a bit more choice in location. People also move to take care of their parents, or because of other connections.

This seems closer to the economic picture, but I don’t think it quite lines up. Even if there were a lot fewer physicists applying for the same number of jobs, it’s still not certain that there’s a job where you want to live, specifically. You’d still end up with plenty of people leaving the field.

A commenter here frequently asks why physicists have to travel so much. Especially for a theorist, why can’t we just work remotely? With current technology, shouldn’t that be pretty easy to do?

I’ve done a lot of remote collaboration; it’s not impossible. But there really isn’t a substitute for working in the same place, for being able to meet someone in the hall and strike up a conversation around a blackboard. Remote collaborations are an okay way to keep a project going, but a rough way to start one. Institutes realize this, which is part of why most of the time they’ll only pay you a salary if they think you’re actually going to show up.

Could I imagine this changing? Maybe. The technology doesn’t exist right now, but maybe someday someone will design a social network with the right features, one where you can strike up and work on collaborations as naturally as you can in person. Then again, maybe I’m silly for imagining a technological solution to the problem in the first place.

What about more direct economic reasons? What about when people leave because of the academic job market itself?

This certainly happens. In my experience, though, a lot of the time it’s pre-emptive. You’d think that people would apply for academic jobs, get rejected, and quit the field. More often, I’ve seen people notice the competition for jobs and decide at the outset that it’s not worth it for them. Sometimes this happens right out of grad school. Other times it’s later. In the latter case, these are often people who are “keeping up”, in that their career is moving roughly as fast as everyone else’s. Rather, it’s the stress, of keeping ahead of the field and marketing themselves and applying for every grant in sight and worrying that it could come crashing down any moment, that ends up being too much to deal with.

What about the people who do get rejected over and over again?

Physics, like life in Jurassic Park, finds a way. Surprisingly often, these people manage to stick around. Without faculty positions they scrabble up postdoc after postdoc, short-term position after short-term position. They fund their way piece by piece, grant by grant. Often they get depressed, and cynical, and pissed off, and insist that this time they’re just going to quit the field altogether. But from what I’ve seen, once someone is that far in, they often don’t go through with it.

If fewer people went to physics grad school, or more professors were hired, would fewer people leave physics? Yes, absolutely. But there’s enough going on here, enough different causes and different motivations, that I suspect things wouldn’t work out quite as predicted. Some attrition is here to stay, some is independent of the economics. And some, perhaps, is due to problems we ought to actually solve.

Grad School Changes You

Occasionally, you’ll see people argue that PhD degrees are unnecessary. Sometimes they’re non-scientists who don’t know what they’re talking about, sometimes they’re Freeman Dyson.

With the wide range of arguers comes a wide range of arguments, and I don’t pretend to be able to address them all. But I do think that PhD programs, or something like them, are necessary. Grad school performs a task that almost nothing else can: it turns students into researchers.

The difference between studying a subject and researching it is a bit like the difference between swimming laps in a pool and being a fish. You can get pretty good at swimming, to the point where you can go back and forth with no real danger of screwing up. But a fish lives there.

To do research in a subject, you really have to be able to “live there”. It doesn’t have to be your whole life, or even the most important part of your life. But it has to be somewhere you’re comfortable, where you can immerse yourself and interact with it naturally. You have to have “fluency”, in the same sort of sense you can be fluent in a language. And just as you can learn a language much faster by immersion than by just taking classes, most people find it a lot easier to become a researcher if they’re in an environment built around research.

Does that have to be grad school? Not necessarily. Some people get immersed in real research from an early age (Dyson certainly fell into that category). But even (especially) for a curious person, it’s easy to get immersed in something else instead. As a kid, I would probably happily have become a Dungeons and Dragons researcher if that was a real thing.

Grad school is a choice, to immerse yourself in something specific. You want to become a physicist? You can go somewhere where everyone cares about physics. A mathematician? Same deal. They even pay you, so you don’t need to try to fit research in between a bunch of part-time jobs. They have classes for those who learn better from classes, libraries for those who learn better from books, and, for those who learn from conversation, colleagues down the hall: knock on a door and learn something new. You get the opportunity to surround yourself with a topic, to work it into your bones.

And the crazy thing? It really works. You go in with a student’s knowledge of a subject, often decades out of date, and you end up giving talks in front of the world’s experts. In most cases, you end up genuinely shocked by how much you’ve changed, how much you’ve grown. I know I was.

I’m not saying that all aspects of grad school are necessary. The thesis doesn’t make sense in every field, there’s a reason why theoretical physicists usually just staple their papers together and call it a day. Different universities have quite different setups for classes and teaching experience, so it’s unlikely that there’s one true way to arrange those. Even the concept of a single advisor might be more of an administrative convenience than a real necessity. But the core idea, of a place that focuses on the transformation from student to researcher, that pays you and gives you access to what you need…I don’t think that’s something we can do without.

Writing the Paper Changes the Results

You spent months on your calculation, but finally it’s paid off. Now you just have to write the paper. That’s the easy part, right?

Not quite. Even if writing itself is easy for you, writing a paper is never just writing. To write a paper, you have to make your results as clear as possible, to fit them into one cohesive story. And often, doing that requires new calculations.

It’s something that first really struck me when talking to mathematicians, who may be the most extreme case. For them, a paper needs to be a complete, rigorous proof. Even when they have a result solidly plotted out in their head, when they’re sure they can prove something and they know what the proof needs to “look like”, actually getting the details right takes quite a lot of work.

Physicists don’t have quite the same standards of rigor, but we have a similar paper-writing experience. Often, trying to make our work clear raises novel questions. As we write, we try to put ourselves in the mind of a potential reader. Sometimes our imaginary reader is content and quiet. Other times, though, they object:

“Does this really work for all cases? What about this one? Did you make sure you can’t do this, or are you just assuming? Where does that pattern come from?”

Addressing those objections requires more work, more calculations. Sometimes, it becomes clear we don’t really understand our results at all! The paper takes a new direction, flows with new work to a new, truer message, one we wouldn’t have discovered if we hadn’t sat down and tried to write it out.