Why Physicists Leave Physics

It’s an open secret that many physicists end up leaving physics. How many depends on how you count, but for a representative number, this report finds 31% of US physics PhDs in the private sector one year after graduation. I’d expect that number to grow with time post-PhD. While some of these people might still be doing physics, in certain sub-fields that isn’t really an option: it’s not like there are companies that do R&D in particle physics, astrophysics, or string theory. Instead, these physicists get hired in data science, or quantitative finance, or machine learning. Others stay in academia, but stop doing physics: either transitioning to another field, or taking teaching-focused jobs that don’t leave time for research.

There’s a standard economic narrative for why this happens. The number of students grad schools accept and graduate is much higher than the number of professor jobs. There simply isn’t room for everyone, so many people end up doing something else instead.

That narrative is probably true, if you zoom out far enough. On the ground, though, the reasons people leave academia don’t feel quite this “economic”. While they might be indirectly based on a shortage of jobs, the direct reasons matter. Physicists leave physics for a wide variety of reasons, and many of them are things the field could improve on. Others are factors that will likely be present regardless of how many students graduate, or how many jobs there are. I worry that an attempt to address physics attrition on a purely economic level would miss these kinds of details.

I thought I’d talk in this post about a few reasons why physicists leave physics. Most of this won’t be new information to anyone, but I hope some of it is at least a new perspective.

First, to get it out of the way: almost no-one starts a physics PhD with the intention of going into industry. I’ve met a grand total of one person who did, and he’s rather unusual. Almost always, leaving physics represents someone’s dreams not working out.

Sometimes, that just means realizing you aren’t suited for physics. These are people who feel like they aren’t able to keep up with the material, or people who find they aren’t as interested in it as they expected. In my experience, people realize this sort of thing pretty early. They leave in the middle of grad school, or they leave once they have their PhD. In some sense, this is the healthy sort of attrition: without the ability to perfectly predict our interests and abilities, there will always be people who start a career and then decide it’s not for them.

I want to distinguish this from a broader reason to leave: disillusionment. These are people who can do physics, and want to do physics, but encounter a system that seems bent on making them do anything but. Sometimes this means disillusionment with the field itself: phenomenologists sick of tweaking models to lie just beyond the latest experimental bounds, or theorists who had hoped to address the real world but begin to see that they can’t. This kind of motivation led several great atomic physicists to move into biology after the Second World War, to work on “life rather than death”. Sometimes instead it’s disillusionment with academia: people who have been bludgeoned by academic politics or bureaucracy, who despair of getting the academic system to care about real research or teaching instead of its current screwed-up priorities, or who just don’t want to face that kind of abuse again.

When those people leave, it’s at every stage in their career. I’ve seen grad students disillusioned into leaving without a PhD, and successful tenured professors who feel like the field no longer has anything to offer them. While occasionally these people just have a difference of opinion, a lot of the time they’re pointing out real problems with the system, problems that actually should be fixed.

Sometimes, life intervenes. The classic example is the two-body problem, where you and your spouse have trouble finding jobs in the same place. There aren’t all that many places in the world that hire theoretical physicists, and still fewer with jobs open. One or both partners end up needing to compromise, and that can mean switching to a career with a bit more choice in location. People also move to take care of their parents, or because of other connections.

This seems closer to the economic picture, but I don’t think it quite lines up. Even if there were a lot fewer physicists applying for the same number of jobs, it’s still not certain that there’s a job where you want to live, specifically. You’d still end up with plenty of people leaving the field.

A commenter here frequently asks why physicists have to travel so much. Especially for a theorist, why can’t we just work remotely? With current technology, shouldn’t that be pretty easy to do?

I’ve done a lot of remote collaboration; it’s not impossible. But there really isn’t a substitute for working in the same place, for being able to meet someone in the hall and strike up a conversation around a blackboard. Remote collaborations are an OK way to keep a project going, but a rough way to start one. Institutes realize this, which is part of why, most of the time, they’ll only pay you a salary if they think you’re actually going to show up.

Could I imagine this changing? Maybe. The technology doesn’t exist right now, but maybe someday someone will design a social network with the right features, one where you can strike up and work on collaborations as naturally as you can in person. Then again, maybe I’m silly for imagining a technological solution to the problem in the first place.

What about more direct economic reasons? What about when people leave because of the academic job market itself?

This certainly happens. In my experience, though, a lot of the time it’s pre-emptive. You’d think that people would apply for academic jobs, get rejected, and quit the field. More often, I’ve seen people notice the competition for jobs and decide at the outset that it’s not worth it for them. Sometimes this happens right out of grad school. Other times it’s later. In the latter case, these are often people who are “keeping up”, in that their career is moving roughly as fast as everyone else’s. Rather, it’s the stress of keeping ahead of the field, marketing themselves, applying for every grant in sight, and worrying that it could all come crashing down at any moment, that ends up being too much to deal with.

What about the people who do get rejected over and over again?

Physics, like life in Jurassic Park, finds a way. Surprisingly often, these people manage to stick around. Without faculty positions they scrabble up postdoc after postdoc, short-term position after short-term position. They fund their way piece by piece, grant by grant. Often they get depressed, and cynical, and pissed off, and insist that this time they’re just going to quit the field altogether. But from what I’ve seen, once someone is that far in, they often don’t go through with it.

If fewer people went to physics grad school, or more professors were hired, would fewer people leave physics? Yes, absolutely. But there’s enough going on here, enough different causes and different motivations, that I suspect things wouldn’t work out quite as predicted. Some attrition is here to stay, some is independent of the economics. And some, perhaps, is due to problems we ought to actually solve.


Path Integrals and Loop Integrals: Different Things!

When talking science, we need to be careful with our words. It’s easy for people to see a familiar word and assume something totally different from what we intend. And if we use the same word twice, for two different things…

I’ve noticed this problem with the word “integral”. When physicists talk about particle physics, there are two kinds of integrals we mention: path integrals, and loop integrals. I’ve seen plenty of people get confused, and assume that these two are the same thing. They’re not, and it’s worth spending some time explaining the difference.

Let’s start with path integrals (also referred to as functional integrals, or Feynman integrals). Feynman promoted a picture of quantum mechanics in which a particle travels along many different paths, from point A to point B.

[Figure: three paths from point A to point B]

You’ve probably seen a picture like this. Classically, a particle would just take one path, the shortest one (more precisely, the path of least action), from A to B. In quantum mechanics, you have to add up all possible paths. The contributions of most of the longer paths cancel, so the short, classical path is the most important one, but the others do contribute, and have observable, quantum effects. The sum over all paths is what we call a path integral.
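In symbols, this is the standard quantum-mechanical path integral (the textbook form, with S the classical action and the endpoints fixed at A and B):

\langle B|e^{-iHT/\hbar}|A\rangle = \int_{x(0)=A}^{x(T)=B} \mathcal{D}x(t)\, e^{iS[x]/\hbar}, \qquad S[x]=\int_0^T L(x,\dot{x})\, dt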

It’s easy enough to draw this picture for a single particle. When we do particle physics, though, we aren’t usually interested in just one particle: we want to look at a bunch of different quantum fields, and figure out how they will interact.

We still use a path integral to do that, but it doesn’t look like a bunch of lines from point A to B, and there isn’t a convenient image I can steal from Wikipedia for it. The quantum field theory path integral adds up, not all the paths a particle can travel, but all the ways a set of quantum fields can interact.
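Schematically, it looks like the single-particle formula with the paths swapped out for field configurations (again, just the standard textbook form):

Z = \int \mathcal{D}\phi\, e^{iS[\phi]/\hbar}

where the integral now runs over every possible configuration of the fields \phi.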

How do we actually calculate that?

One way is with Feynman diagrams, and (often, but not always) loop integrals.

[Figure: a two-loop Feynman diagram]

I’ve talked about Feynman diagrams before. Each one is a picture of one possible way that particles can travel, or that quantum fields can interact. In some (loose) sense, each one is a single path in the path integral.

Each diagram serves as instructions for a calculation. We take information about the particles, their momenta and energy, and end up with a number. To calculate a path integral exactly, we’d have to add up all the diagrams we could possibly draw, to get a sum over all possible paths.

(There are ways to avoid this in special cases, which I’m not going to go into here.)

Sometimes, getting a number out of a diagram is fairly simple. If the diagram has no closed loops in it (if it’s what we call a tree diagram) then knowing the properties of the incoming and outgoing particles is enough to know the rest. If there are loops, though, there’s uncertainty: you have to add up every possible momentum of the particles in the loops. You do that with a different integral, and that’s the one we sometimes refer to as a loop integral. (Perhaps confusingly, these are also often called Feynman integrals: Feynman did a lot of stuff!)
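For a concrete example of “adding up every possible momentum”, here is the simplest case, the standard one-loop “bubble”: two propagators, one loop, with the loop momentum k integrated over (the general parametrized form appears below):

\int \frac{d^d k}{(2\pi)^d}\, \frac{1}{(k^2-m^2)\left((k+p)^2-m^2\right)}

Here p is the momentum flowing into the loop and m is the mass of the particles running around it.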

In general, trading the loop momenta for one parameter \alpha_i per propagator puts a loop integral in the form

\frac{i^{a+l(1-d/2)}\pi^{ld/2}}{\prod_i \Gamma(a_i)}\int_0^\infty\cdots\int_0^\infty \prod_i\alpha_i^{a_i-1}\, U^{-d/2}\, e^{iF/U-i\sum_i m_i^2\alpha_i}\, d\alpha_1\cdots d\alpha_n

where U and F are polynomials in the \alpha_i determined by the shape of the diagram.

Loop integrals can be pretty complicated, but at heart they’re the same sort of thing you might have seen in a calculus class. Mathematicians are pretty comfortable with them, and they give rise to numbers that mathematicians find very interesting.
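To make that concrete, here’s a toy numerical sketch (my own illustration, with conventions and overall constants that vary from textbook to textbook): below the particle-production threshold, the finite part of the one-loop bubble above reduces to an ordinary one-dimensional integral, the kind a calculus student could set up.

import numpy as np
from scipy.integrate import quad

def bubble_finite_part(s, m2, mu2=1.0):
    # Finite part of the one-loop bubble, up to scheme-dependent constants:
    #   -integral_0^1 dx log[(m^2 - x(1-x) s) / mu^2]
    # The integrand is real below threshold, i.e. for s < 4*m2.
    integrand = lambda x: -np.log((m2 - x * (1.0 - x) * s) / mu2)
    value, _error = quad(integrand, 0.0, 1.0)
    return value

# Example: safely below threshold (s = m^2 < 4 m^2), so the answer is real
print(bubble_finite_part(s=1.0, m2=1.0))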

Path integrals are very different. In some sense, they’re an “integral over integrals”, adding up every loop integral you could write down. Mathematicians can define path integrals in special cases, but it’s still not clear that the general case, the overall path integral picture we use, actually makes rigorous mathematical sense.

So if you see physicists talking about integrals, it’s worth taking a moment to figure out which one we mean. Path integrals and loop integrals are both important, but they’re very, very different things.

We Didn’t Deserve Hawking

I don’t usually do obituaries. I didn’t do one when Joseph Polchinski died, though his textbook is sitting within arm’s reach of me right now. I never collaborated with Polchinski, I never met him, and others were much better at telling his story.

I never met Stephen Hawking, either. When I was at Perimeter, I’d often get asked if I had. Visitors would see his name on the Perimeter website, and I’d have to disappoint them by explaining that he hadn’t visited the institute in quite some time. His health, while exceptional for a septuagenarian with ALS, wasn’t up to the travel.

Was his work especially relevant to mine? Only because of its relevance to everyone who does gravitational physics. The universality of singularities in general relativity, black hole thermodynamics, Hawking radiation: these sharpened the questions around quantum gravity. Without his work, string theory wouldn’t have tried to answer the questions Hawking posed, and it wouldn’t have become the field it is today.

Hawking was unique, though, not necessarily because of his work, but because of his recognizability. Those visitors to Perimeter were a cross-section of the Canadian public. Some of them didn’t know the name of the speaker for the lecture they came to see. Some, arriving after reading Lee Smolin’s book, could only refer to him as “that older fellow who thinks about quantum gravity”. But Hawking? They knew Hawking. Without exception, they knew Hawking.

Who was the last physicist the public knew, like that? Feynman, at the height of his popularity, might have been close. You’d have to go back to Einstein to find someone who was really solidly known like that, who you could mention in homes across the world and expect recognition. And who else has that kind of status? Bohr might have it in Denmark. Go further back, and you’ll find people know Newton, they know Galileo.

Einstein changed our picture of space and time irrevocably. Newton invented physics as we know it. Galileo and Copernicus pointed up to the sky and shouted that the Earth moves!

Hawking asked questions. He told us what did and didn’t make sense, he showed us what we had to take into account. He laid out the rules of engagement, and the rest of quantum gravity came and asked alongside him.

We live in an age of questions now. We’re starting to glimpse the answers, we have candidates and frameworks and tools, and if we’re feeling very optimistic we might already be sitting on a theory of everything. But we haven’t turned that corner yet, from asking questions to changing the world.

These ages don’t usually get a household name. Normally, you need an Einstein, a Newton, a Galileo, you need to shake the foundations of the world.

Somehow, Hawking gave us one anyway. Somehow, in our age of questions, we put a face in everyone’s mind, a figure huddled in a wheelchair with a snarky, computer-generated voice. Somehow Hawking reached out and reminded the world that there were people out there asking, that there was a big beautiful puzzle that our field was trying to solve.

Deep down, I’m not sure we deserved that. I hope we deserve it soon.

Grad School Changes You

Occasionally, you’ll see people argue that PhD degrees are unnecessary. Sometimes they’re non-scientists who don’t know what they’re talking about, sometimes they’re Freeman Dyson.

With the wide range of arguers comes a wide range of arguments, and I don’t pretend to be able to address them all. But I do think that PhD programs, or something like them, are necessary. Grad school performs a task that almost nothing else can: it turns students into researchers.

The difference between studying a subject and researching it is a bit like the difference between swimming laps in a pool and being a fish. You can get pretty good at swimming, to the point where you can go back and forth with no real danger of screwing up. But a fish lives there.

To do research in a subject, you really have to be able to “live there”. It doesn’t have to be your whole life, or even the most important part of your life. But it has to be somewhere you’re comfortable, where you can immerse yourself and interact with it naturally. You have to have “fluency”, in the same sort of sense you can be fluent in a language. And just as you can learn a language much faster by immersion than by just taking classes, most people find it a lot easier to become a researcher if they’re in an environment built around research.

Does that have to be grad school? Not necessarily. Some people get immersed in real research from an early age (Dyson certainly fell into that category). But even (especially) for a curious person, it’s easy to get immersed in something else instead. As a kid, I would probably have happily become a Dungeons and Dragons researcher if that were a real thing.

Grad school is a choice, to immerse yourself in something specific. You want to become a physicist? You can go somewhere where everyone cares about physics. A mathematician? Same deal. They even pay you, so you don’t need to try to fit research in between a bunch of part-time jobs. They have classes for those who learn better from classes, libraries for those who learn better from books, and for those who learn from conversation you can walk down the hall, knock on a door, and learn something new. You get the opportunity to surround yourself with a topic, to work it into your bones.

And the crazy thing? It really works. You go in with a student’s knowledge of a subject, often decades out of date, and you end up giving talks in front of the world’s experts. In most cases, you end up genuinely shocked by how much you’ve changed, how much you’ve grown. I know I was.

I’m not saying that all aspects of grad school are necessary. The thesis doesn’t make sense in every field, there’s a reason why theoretical physicists usually just staple their papers together and call it a day. Different universities have quite different setups for classes and teaching experience, so it’s unlikely that there’s one true way to arrange those. Even the concept of a single advisor might be more of an administrative convenience than a real necessity. But the core idea, of a place that focuses on the transformation from student to researcher, that pays you and gives you access to what you need…I don’t think that’s something we can do without.

Writing the Paper Changes the Results

You spent months on your calculation, but finally it’s paid off. Now you just have to write the paper. That’s the easy part, right?

Not quite. Even if writing itself is easy for you, writing a paper is never just writing. To write a paper, you have to make your results as clear as possible, to fit them into one cohesive story. And often, doing that requires new calculations.

It’s something that first really struck me when talking to mathematicians, who may be the most extreme case. For them, a paper needs to be a complete, rigorous proof. Even when they have a result solidly plotted out in their head, when they’re sure they can prove something and they know what the proof needs to “look like”, actually getting the details right takes quite a lot of work.

Physicists don’t have quite the same standards of rigor, but we have a similar paper-writing experience. Often, trying to make our work clear raises novel questions. As we write, we try to put ourselves in the mind of a potential reader. Sometimes our imaginary reader is content and quiet. Other times, though, they object:

“Does this really work for all cases? What about this one? Did you make sure you can’t do this, or are you just assuming? Where does that pattern come from?”

Addressing those objections requires more work, more calculations. Sometimes, it becomes clear we don’t really understand our results at all! The paper takes a new direction, flows with new work to a new, truer message, one we wouldn’t have discovered if we hadn’t sat down and tried to write it out.

At Least One Math Term That Makes Sense

I’ve complained before about how mathematicians name things. Mathematicians seem to have a knack for taking an ordinary, bland word that’s almost indistinguishable from the other ordinary, bland words they’ve used before and assigning it an incredibly specific mathematical concept. Varieties and forms, motives and schemes: in each case you end up wishing they’d picked a word that was just a little more descriptive.

Sometimes, though, a word that seems completely out of place actually has a fairly reasonable explanation. Such is the case for the word “period”.

Suppose you want to classify numbers. You have the integers, and the rational numbers. A bigger class of numbers is the “algebraic” numbers, the ones you can get “from algebra”: more specifically, as solutions of polynomial equations with rational coefficients (\sqrt{2}, for example, solves x^2-2=0). Numbers that aren’t algebraic are “transcendental”, a popular example being \pi.

Periods lie in between: a set that contains the algebraic numbers, but also many of the transcendental numbers. They’re numbers you can get, not from algebra, but from calculus: integrals of rational functions, over regions carved out by polynomial inequalities, with all coefficients rational. These numbers were popularized by Kontsevich and Zagier, and they’ve led to a lot of fruitful inquiry in both math and physics.
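Two standard examples, of the sort Kontsevich and Zagier give, with rational integrands over domains cut out by rational polynomial inequalities:

\pi = \iint_{x^2+y^2\leq 1} dx\, dy, \qquad \ln 2 = \int_1^2 \frac{dx}{x}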

But why the heck are they called periods?

Think about e^{i x}.

[Figure: the function e^{i x}]

Or if you prefer, think about a circle.

e^{i x} is a periodic function, with period 2\pi. Take x from 0 to 2\pi and the function repeats: you’ve traveled in a circle.

Thought of another way, 2\pi is the “volume” of the circle: its circumference. Up to a factor of i, it’s the integral, around the circle, of \frac{dz}{z}. And that integral nicely matches Kontsevich and Zagier’s definition of a period.

The idea of a period, then, comes from generalizing this. What happens when you only go partway around the circle, to some point z in the complex plane? Then you need to go to the point x=-i \ln z. So a logarithm can also be thought of as measuring a period of e^{i x}. And indeed, since logarithms can be expressed as \int\frac{dz}{z}, they count as periods in the Kontsevich-Zagier sense.
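Spelled out, the computation behind this (standard complex analysis, nothing exotic): going all the way around the circle gives the full period, while stopping partway at z = e^{i x} gives a logarithm.

\oint_{|w|=1} \frac{dw}{w} = 2\pi i, \qquad \int_1^{z} \frac{dw}{w} = \ln z = i x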

Starting there, you can loosely think about the polylogarithm functions I like to work with as collections of logs, measuring periods of interlocking circles.

And if you need to go beyond polylogarithms, when you can’t just go circle by circle?

Then you need to think about functions with two periods, like Weierstrass’s elliptic function. Just as you can think about e^{i x} as a circle, you can think of Weierstrass’s function in terms of a torus.

[Figure: a torus]

Obligatory donut joke here

The torus has two periods, corresponding to the two circles you can draw around it. The periods of Weierstrass’s function are transcendental numbers, and they fit Kontsevich and Zagier’s definition of periods. And if you take the inverse of Weierstrass’s function, you get an elliptic integral, just like taking the inverse of e^{i x} gives a logarithm.
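In standard notation, with the invariants g_2 and g_3 that pin down the torus, that inversion reads

z = \int_{\wp(z)}^{\infty} \frac{dt}{\sqrt{4t^3 - g_2 t - g_3}}

and the two periods of \wp come from carrying the same integral around the two circles of the torus.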

So mathematicians, I apologize. Periods, at least, make sense.

I’m still mad about “varieties” though.

Valentine’s Day Physics Poem 2018

Valentine’s Day was this week, so long-time readers should know what to expect. To continue this blog’s tradition, I’m posting another one of my old physics poems.

 

Winding Number One

 

When you feel twisted up inside, you may be told to step back

That after a long time, from a long distance

All things fall off.

 

So I stepped back.

 

But looking in from a distance

On the border (at infinity)

A shape remained

Etched deep

In the equation of my being

 

A shape that wouldn’t fall off

Even at infinity.

 

And they may tell you to wait and see,

That you will evolve in time

That all things change, continuously.

 

So I let myself change.

 

But no matter how long I waited

How much I evolved

I could not return

My new state cannot be deformed

To what I was before.

 

The shape at my border

Is basic, immutable.

 

Faced with my thoughts

I try to draw a map

And run out of space.

 

I need two selves

Two lives

To map my soul.

 

A double cover.

 

And now, faced by my dual

Tracing each index

Integrated over manifold possibilities

We do not vanish

We have winding number one.