Occasionally, you’ll see people argue that PhD degrees are unnecessary. Sometimes they’re non-scientists who don’t know what they’re talking about, sometimes they’re Freeman Dyson.

With the wide range of arguers comes a wide range of arguments, and I don’t pretend to be able to address them all. But I do think that PhD programs, or something like them, are necessary. Grad school performs a task that almost nothing else can: it turns students into researchers.

The difference between studying a subject and researching it is a bit like the difference between swimming laps in a pool and being a fish. You can get pretty good at swimming, to the point where you can go back and forth with no real danger of screwing up. But a fish lives there.

To do research in a subject, you really have to be able to “live there”. It doesn’t have to be your whole life, or even the most important part of your life. But it has to be somewhere you’re comfortable, where you can immerse yourself and interact with it naturally. You have to have “fluency”, in the same sort of sense you can be fluent in a language. And just as you can learn a language much faster by immersion than by just taking classes, most people find it a lot easier to become a researcher if they’re in an environment built around research.

Does that have to be grad school? Not necessarily. Some people get immersed in real research from an early age (Dyson certainly fell into that category). But even (especially) for a curious person, it’s easy to get immersed in something else instead. As a kid, I would probably happily have become a Dungeons and Dragons researcher if that were a real thing.

Grad school is a choice, to immerse yourself in something specific. You want to become a physicist? You can go somewhere where everyone cares about physics. A mathematician? Same deal. They even pay you, so you don’t need to try to fit research in between a bunch of part-time jobs. They have classes for those who learn better from classes, libraries for those who learn better from books, and for those who learn from conversation you can walk down the hall, knock on a door, and learn something new. You get the opportunity to surround yourself with a topic, to work it into your bones.

And the crazy thing? It really works. You go in with a student’s knowledge of a subject, often decades out of date, and you end up giving talks in front of the world’s experts. In most cases, you end up genuinely shocked by how much you’ve changed, how much you’ve grown. I know I was.

I’m not saying that all aspects of grad school are necessary. The thesis doesn’t make sense in every field; there’s a reason theoretical physicists usually just staple their papers together and call it a day. Different universities have quite different setups for classes and teaching experience, so it’s unlikely that there’s one true way to arrange those. Even the concept of a single advisor might be more of an administrative convenience than a real necessity. But the core idea, of a place that focuses on the transformation from student to researcher, that pays you and gives you access to what you need…I don’t think that’s something we can do without.

# Writing the Paper Changes the Results

You spent months on your calculation, but finally it’s paid off. Now you just have to write the paper. That’s the easy part, right?

Not quite. Even if writing itself is easy for you, writing a paper is never just writing. To write a paper, you have to make your results as clear as possible, to fit them into one cohesive story. And often, doing that requires new calculations.

This first really struck me when talking to mathematicians, who may be the most extreme case. For them, a paper needs to be a complete, rigorous proof. Even when they have a result solidly plotted out in their head, when they’re sure they can prove something and they know what the proof needs to “look like”, actually getting the details right takes quite a lot of work.

Physicists don’t have quite the same standards of rigor, but we have a similar paper-writing experience. Often, trying to make our work clear raises novel questions. As we write, we try to put ourselves in the mind of a potential reader. Sometimes our imaginary reader is content and quiet. Other times, though, they object:

“Does this really work for all cases? What about this one? Did you make sure you can’t do this, or are you just assuming? Where does that pattern come from?”

Addressing those objections requires more work, more calculations. Sometimes, it becomes clear we don’t really understand our results at all! The paper takes a new direction, flowing with new work toward a new, truer message, one we wouldn’t have discovered if we hadn’t sat down and tried to write it out.

By A. Physicist

…because it disagrees with precision electroweak measurements

…………………………………..with bounds from ATLAS and CMS

…………………………………..with the power spectrum of the CMB

…………………………………..with Eötvös experiments

…because it isn’t gauge invariant

………………………….Lorentz invariant

………………………….diffeomorphism invariant

………………………….background-independent, whatever that means

…because it violates unitarity

…………………………………locality

…………………………………causality

…………………………………observer-independence

…………………………………technical naturalness

…………………………………international treaties

…………………………………cosmic censorship

…because you screwed up the calculation

…because you didn’t actually do the calculation

…because I don’t understand the calculation

…because you predict too many magnetic monopoles

……………………………………too many proton decays

……………………………………too many primordial black holes

…………………………………..remnants, at all

…because it’s fine-tuned

…because it’s suspiciously finely-tuned

…because it’s finely tuned to be always outside of experimental bounds

…because you’re misunderstanding quantum mechanics

…………………………………………………………..black holes

………………………………………………………….effective field theory

…………………………………………………………..thermodynamics

…………………………………………………………..the scientific method

…because Condensed Matter would contribute more to Chinese GDP

…because the approximation you’re making is unjustified

…………………………………………………………………………is not valid

…………………………………………………………………………is wildly overoptimistic

………………………………………………………………………….is just kind of lazy

…because there isn’t a plausible UV completion

…because you care too much about the UV

…because it only works in polynomial time

…………………………………………..exponential time

…………………………………………..factorial time

…because even if it’s fast it requires more memory than any computer on Earth

…because it requires more bits of memory than atoms in the visible universe

…because it has no meaningful advantages over current methods

…because it has meaningful advantages over my own methods

…because it can’t just be that easy

…because it’s not the kind of idea that usually works

…because it’s not the kind of idea that usually works in my field

…because it isn’t canonical

…because it’s ugly

…because it’s baroque

…because it ain’t baroque, and thus shouldn’t be fixed

…because only a few people work on it

…because far too many people work on it

…because clearly it will only work for the first case

……………………………………………………………….the first two cases

……………………………………………………………….the first seven cases

……………………………………………………………….the cases you’ve published and no more

…because I know you’re wrong

…because I strongly suspect you’re wrong

…because I strongly suspect you’re wrong, but saying I know you’re wrong looks better on a grant application

…….in a blog post

…because I’m just really pessimistic about something like that ever actually working

…because I’d rather work on my own thing, that I’m much more optimistic about

…because if I’m clear about my reasons

……and what I know

…….and what I don’t

……….then I’ll convince you you’re wrong.

……….or maybe you’ll convince me?

# Of Grad Students and Money

I usually avoid talking politics on this blog. In part, that’s because I usually don’t have something worth saying.

When the US House of Representatives voted on a tax bill that included a tax on grad student tuition waivers, though, I was tempted. Grad school wasn’t so long ago for me, and combining my friends’ experiences with mine I thought I knew enough for a post.

In the end, the tax on tuition waivers was dropped from the bill. I’m not going to comment on the rest of the bill; I really don’t have any relevant expertise there.

I do want to say a bit about what I learned, though.

First, the basics:

In the US, PhD students don’t typically pay tuition. Instead, they get paid a stipend, which gets taxed just like any other income. In exchange, they work for their department at the university, as Teaching Assistants and Research Assistants.

PhD tuition isn’t zero, though. Their tuition (often comparable to undergraduate tuition at the same university) is waived, but someone still pays it. Sometimes that “someone” is the department, paying tuition alongside wages as part of the cost of a Teaching Assistant. Sometimes it’s a grant held by a professor, as part of the cost of that professor hiring a Research Assistant. Sometimes it’s another organization: the National Science Foundation or the Fulbright Program, paying for a student who showed their worth in an application process.

How is that tuition actually set? I know a fair number of professors, many of whom have worked with university administrations, so I thought this would be a simple question to answer. Then I started asking people, and everyone I asked said something different.

Some thought it was mostly set by comparing to other universities. Others had the impression it was tied to undergrad tuition, that the university had a standard price it charges per course. Others pointed out that at many places, the cost of funding a grad student is the same as the cost of a postdoc. Since postdoc salaries are at least somewhat competitive, this implies that the total of grad student tuition plus stipend is set by the postdoc market, and then the university takes as much of it for tuition as they can before the stipend becomes unreasonably low.

What no one claimed, even after I asked them directly, was that grad student tuition represented the cost of educating a grad student. Grad education does cost money, in professor salaries and campus resources. But I couldn’t find anyone who would claim that this cost was anywhere near what universities charged in PhD tuition.

Rather, grad tuition seems to be part of the bulk of mysterious “overhead” that universities take out of grants. “Overhead” varies from grant to grant and situation to situation, with universities taking less out of some places and more out of others. Either way, it isn’t really overhead in the conventional sense: rather than being the cost to the university of administering that grant or educating that grad student, it’s treated as a source of money for the university to funnel elsewhere, to fund everything else they do.

If grad tuition waivers had ended up taxed, couldn’t universities just pay their grad students’ tuition some other way?

Yes, but you probably wouldn’t like it.

Waiving tuition is only one way to let grad students go tuition-free. Another way, which would not have been taxed under the proposed bill, is scholarships.

There are already some US universities that cover grad student tuition with scholarships, and I get the impression it’s a common setup in Canada. But from what I’ve seen, it doesn’t work very well.

The problem, as far as I can tell, is that once a university decides that something is a “scholarship”, it wants to pay it like a scholarship. For some reason, this appears to mean randomly, over the course of the year, rather than at the beginning of the year. This isn’t a huge problem when it’s just tuition, since usually universities are sensible enough to wait until you’ve gotten your scholarship to charge you. But often, universities that are already covering tuition with a scholarship will cover a significant chunk of stipend with it too.

The end result, as I’ve seen happen in several places, is that students show up and are told they’ll be paid a particular stipend. They sign rental contracts, they make plans assuming that money will be there. And then several months pass, and it turns out most of the stipend they were promised is a “scholarship”, and that scholarship won’t actually be paid until the university feels like it. So for the first few months, those students have to hope they have forgiving landlords, because it’s not like they can get the university to pay them on time just because they said they were going to.

Of course, I should mention that even without scholarships, there are universities that pay their students late, which leads into my overall point: this system is a huge mess. Grad students are in a weird in-between place, treated like employees part of the time and students part of the time, with the actual rationale in each case frustratingly opaque. In some places, with attentive departments or savvy grad student unions, the mess gets kept to a minimum. Others aren’t so lucky. What’s worse is that this kind of system is often the sort where, if you put it under any pressure, it shuffles the problem around until it ends up with someone who can’t complain. And chances are, that person is a grad student.

I don’t know how to fix this. It seems like the sort of thing where you have to just reform the system all in one go, in a way that takes everything into account. I don’t know of any proposed plans that do that.

One final note: I usually have a ban on politics in the comments. That would be more than a little hypocritical to enforce here. I’d still like to prevent the more vicious arguments, to keep the discussion civil and informative. As such, the following rules are intended as conversational speed bumps, with the hope that in writing around them you take a bit more time to think about what you have to say.

For the comments here, please: do not mention specific politicians, political parties, or ideologies. Please avoid personal insults, especially towards your fellow commenters. Please try to avoid speculation about peoples’ motives, and focus as much as possible on specifics: specific experiences you’ve had, specific rules and regulations, specific administrative practices, specific economic studies. If at all possible, try to inform, not just vent, and maybe we can learn something from each other.

Anyone will tell you that academia is broken.

The why varies, of course: some blame publication pressure, or greedy journals. Some think it’s the fault of grant committees, or tenure committees, or grad admission committees. Some argue we’re driving away the wrong people, others that we’re letting in the wrong people. Some place the fault with the media, or administrators, or the government, or the researchers themselves. Some believe the problem is just a small group of upstarts, others want to tear the whole system down.

If there’s one common theme to every “academia is broken” take, it’s limited resources. There are only so many people who can make a living doing research. Academia has to pick and choose who these people are and what they get to do, and anyone who thinks the system is broken thinks those choices could be made better.

As I was writing my version of the take, I started wondering. What if we didn’t have to choose? What would academia look like in a world without limited resources, where no-one needed to work for a living? Can we imagine what that world might look like?

Then I realized I didn’t need to imagine it. I’d already seen it.

And it was glorious.

Let me tell you a bit about Dungeons and Dragons.

Dungeons and Dragons doesn’t have “pro gamers”; nobody makes money playing it. It isn’t even really the kind of game you can win or lose. It’s collaborative storytelling, backed up with a pile of dice and rulebooks. Nonetheless, Dungeons and Dragons has an active community dedicated to thinking about the game. They call themselves “optimizers”, and they focus on figuring out the best way the rules allow to do what they want to do.

Sometimes, the goal is practical: “what’s the best archer I can make?” “how can I make a character that has something useful to do no matter what?” Sometimes it’s more farfetched: “can I deal infinite damage?” “how can I make a god at level one?” Optimizing for these goals requires seeking out obscure rules, debating loopholes and the meaning of the text, and calculating probabilities.
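To give a flavor of the probability side of that work, here is a minimal sketch of the kind of arithmetic an optimizer might do for the “best archer” question, using the shared d20 attack mechanic. The bonuses, dice, and armor class below are made-up numbers for illustration, not from any particular build.

```python
from fractions import Fraction

def hit_chance(attack_bonus, armor_class):
    """Chance a d20 attack roll meets or beats the target's armor class.
    A natural 1 always misses and a natural 20 always hits."""
    hits = sum(1 for roll in range(1, 21)
               if roll == 20 or (roll != 1 and roll + attack_bonus >= armor_class))
    return Fraction(hits, 20)

def expected_damage(attack_bonus, armor_class, dice_sides, dice_count, damage_bonus):
    """Average damage per attack: hit chance times average damage on a hit
    (ignoring critical hits, which would add a small correction)."""
    avg_on_hit = dice_count * (dice_sides + 1) / 2 + damage_bonus
    return float(hit_chance(attack_bonus, armor_class)) * avg_on_hit

# A hypothetical archer with +7 to hit, dealing 1d8+3, against armor class 15:
p = hit_chance(7, 15)                   # needs an 8 or better on the die
dmg = expected_damage(7, 15, 8, 1, 3)
```

Comparing numbers like `dmg` across candidate builds is exactly the sort of case-by-case calculation that fills optimization threads.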

I like to joke that Dungeons and Dragons was my first academic community, and that isn’t too far from the truth. These are people obsessed with understanding a complex system, who “publish” their research in forum posts, who collaborate and compete and care about finding the truth. While these people do have day jobs, that wasn’t a real limit. Dungeons and Dragons, I am forced to admit, is easier than theoretical physics. Even with day jobs or school, most of the D&D optimization community had plenty of time to do all the “research” they wanted. In a very real sense, they’re a glimpse at a post-scarcity academia.

There’s another parallel, one relevant to the current situation in theoretical physics. When I was most active in optimization, we played an edition of the game that was out of print. Normally there’s a sort of feedback between game designers and optimizers. As new expansions and errata are released, debates in the optimization community get resolved or re-ignited. With an out-of-print edition, though, that feedback isn’t available. The optimization community was left by itself, examining whatever evidence it already had. This feels a lot like the current situation in physics, when so many experiments are just confirming the Standard Model. Without much feedback, the community has to evolve on its own.

So what did post-scarcity academia look like?

First, the good: this was a community highly invested in education. The best way to gain status wasn’t to build the strongest character, or discover a new trick. Instead, the most respected members of the community were the handbook writers, people who wrote long, clearly written forum posts summarizing optimization knowledge for newer players. I’m still not at the point where I read physics textbooks for fun, but back when I was active I would absolutely read optimization handbooks for fun. For those who wanted to get involved, the learning curve was about as well-signposted as it could be.

It was a community that could display breathtaking creativity, as well as extreme diligence. Some optimization was off-the-cuff and easy, but a lot of it took real work or real insight, and it showed. People would write short stories about the characters they made, or spend weeks cataloging every book that mentioned a particular rule. Despite not having to do their “research” for a living, motivation was never in short supply.

All that said, I think people yearning for a post-scarcity academia would be disappointed. If you think people do derivative, unoriginal work just because of academic careers, then I regret to inform you that a lot of optimization was unoriginal. There were a lot of posts that were just remixes of old ideas, packaged into a “new” build. There were also plenty of repetitive, pointless arguments, to the point that we’d joke about “Monkday” and “Wizard Wednesday”.

There was also a lot of attention-seeking behavior. There’s no optimization media, no optimization jobs that look for famous candidates, but people still cared about being heard, and pitched their work accordingly. We’d get a lot of overblown posts: “A Fighter that can beat any Wizard!” (because he’s been transformed by a spell into an all-powerful shapeshifter), “A Sorcerer that can beat any Wizard!” (using houserules which change every time someone points out a flaw in the idea).

(Wizards, as you may be noticing, were kind of the String Theory of that community.)

Some problems in academia are caused by bad incentives, by the structure of academic careers. Some, though, are caused because academics are human beings. If we didn’t have to work for a living, academics would probably have different priorities, and we might work on a wider range of projects. But I suspect we’d still have good days and bad, that we’d still puff ourselves up for attention and make up dubious solutions to famous problems.

Of course, Dungeons and Dragons optimizers aren’t the only example of “post-scarcity academia”, or even a perfect example. They’ve got their own pressures, due to the structure of the community, that shape them in particular ways. I’d be interested to learn about other “amateur academics”, and how they handle things. My guess is that the groups whose work is closer to “real academia” (for example, the Society for Creative Anachronism) are more limited by their day jobs, but otherwise might be more informative. If there’s a “post-scarcity academia” you’re familiar with, mention it in the comments!

# Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics at all!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t achieve this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.

The Hunchback of Notre Science

It wasn’t always this way. Up until the nineteenth century, “scientist” was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a $1/r^2$ force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As curator of experiments for the Royal Society, it was his responsibility to set up demonstrations for each Royal Society meeting. Later, he also handled correspondence for the Royal Society Journal. These responsibilities took up much of his time, and as a result, even if he was capable of following up on the consequences of $1/r^2$ he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke, expectations have changed, and real original research is no longer something we have to fit into our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until they’ve either learned something productive or their old topic becomes useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just number of papers and citations, if we manage all of that then yes we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.

*A physicist lazing about unproductively under an apple tree*

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object, isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time, there have been plenty of cases in which people have done that, where a number of fiddly hand-made models have been summarized in one parameter space. Once that happens, the rule of originality kicks in: now, no-one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that, they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.
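As a toy illustration of what “writing the recipe down” could look like: if a family of models is really just generated by sliding parameters past each new experimental bound, a few lines of code can cover the whole family at once. Everything below, the parameter names, the bound, and the numbers, is invented for illustration; it is not a real exclusion from any experiment.

```python
def surviving_models(masses, couplings, excluded):
    """Return the (mass, coupling) points not yet ruled out, where `excluded`
    is a function encoding the current (here, made-up) experimental limits."""
    return [(m, g) for m in masses for g in couplings if not excluded(m, g)]

# Hypothetical limit, purely for illustration: light, strongly-coupled
# models are ruled out; everything heavier or weaker-coupled survives.
bound = lambda m, g: g > 0.1 and m < 1000  # masses in GeV, made-up numbers

viable = surviving_models(masses=[500, 1000, 2000],
                          couplings=[0.05, 0.2],
                          excluded=bound)
```

Once the surviving region is written out this mechanically, proposing one more hand-made point inside it stops counting as original work, which is exactly the pressure described above.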

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend and focusing not on covering the “next big thing”, but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms-race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.

# Thoughts on Polchinski’s Memoir

I didn’t get a chance to meet Joseph Polchinski when I was visiting Santa Barbara last spring. At the time, I heard his health was a bit better, but he still wasn’t feeling well enough to come in to campus. Now that I’ve read his memoir, I almost feel like I have met him. There’s a sense of humor, a diffidence, and a passion for physics that shines through the pages.

The following are some scattered thoughts inspired by the memoir:

A friend of mine once complained to me that in her field grad students all brag about the colleges they went to. I mentioned that in my field your undergrad never comes up…unless it was Caltech. For some reason, everyone I’ve met who went to Caltech is full of stories about the place, and Polchinski is no exception. Speaking as someone who didn’t go there, it seems like Caltech has a profound effect on its students that other places don’t.

Polchinski mentions hearing stories about geniuses of the past, and how those stories helped temper some of his youthful arrogance. There’s an opposite effect that’s also valuable: hearing stories like Polchinski’s, his descriptions of struggling with anxiety and barely publishing and “not really accomplishing anything” till age 40, can be a major comfort to those of us who worry we’ve fallen behind in the academic race. That said, it’s important not to take these things too far: times have changed, you’re not Polchinski, and much like his door-stealing trick at Caltech, getting a postdoc without any publications is something you shouldn’t try at home. Even Witten’s students need at least one.

Last week I was a bit puzzled by nueww’s comment, a quote from Polchinski’s memoir which distinguishes “math of the equations” from “math of the solutions”, attributing the former to physicists and the latter to mathematicians. Reading the context in the memoir and the phrase’s origin in a remark by Susskind cleared things up a bit, but still left me uneasy. I only figured out why after Lubos Motl posted about it: it doesn’t match my experience of mathematicians at all!

If anything, I think physicists usually care more about the “solutions” than mathematicians do. In my field, often a mathematician will construct some handy basis of functions and then frustrate everyone by providing no examples of how to use them. In the wider math community I’ve met graph theorists who are happy to prove something is true for all graphs of size $10^{10^{10}}$ and larger, not worrying about the vast number of graphs where it fails because it’s just a finite number of special cases. And I don’t think this is just my experience: a common genre of jokes revolves around mathematicians proving a solution exists and then not bothering to do anything with it (for example, see the joke with the hotel fire here).

I do think there’s a meaningful sense in which mathematicians care about details that we’re happy to ignore, but “solutions” versus “equations” isn’t really the right axis. It’s something more like “rigor” versus “principles”. Mathematicians will often begin a talk by defining a series of maps between different spaces, carefully describing where they are and aren’t valid. A physicist might just write down a function. That sort of thing is dangerous in mathematics: there are always special, pathological cases that make careful definitions necessary. In physics, those cases rarely come up, and when they do there’s often a clear physical problem that brings them to the forefront. We have a pretty good sense of when we need rigor, and when we don’t we’re happy to lay things out without filling in the details, putting a higher priority on moving forward and figuring out the basic principles underlying reality.

Polchinski talks a fair bit about his role in the idea of the multiverse, from hearing about Weinberg’s anthropic argument to coming to terms with the string landscape. One thing his account makes clear is how horrifying the concept seemed at first: how the idea that the parameters of our universe might just be random could kill science and discourage experimentalists. This touches on something that I think gets lost in arguments about the multiverse: even the people most involved in promoting the multiverse in public aren’t happy about it.

It also sharpened my thinking about the multiverse a bit. I’ve talked before about how I don’t think the popularity of the multiverse is actually going to hurt theoretical physics as a field. Polchinski’s worries made me think about the experimental side of the equation: why do experiments if the world might just be random? I think I have a clearer answer to this now, but it’s a bit long, so I’ll save it for a future post.

One nice thing about these long-term accounts is you get to see how much people shift between fields over time. Polchinski didn’t start out working in string theory, and most of the big names in my field, like Lance Dixon and David Kosower, didn’t start out in scattering amplitudes. Academic careers are long, and however specialized we feel at any one time we can still get swept off in a new direction.

I’m grateful for this opportunity to “meet” Polchinski, if only through his writing. His is a window on the world of theoretical physics that is all too rare, and valuable as a result.