
The Quantum Kids

I gave a pair of public talks at the Niels Bohr International Academy this week on “The Quest for Quantum Gravity” as part of their “News from the NBIA” lecture series. The content should be familiar to long-time readers of this blog: I talked about renormalization, and gravitons, and the whole story leading up to them.

(I wanted to title the talk “How I Learned to Stop Worrying and Love Quantum Gravity”, like my blog post, but was told Danes might not get the Dr. Strangelove reference.)

I also managed to work in some history, which made its way into the talk after Poul Damgaard, the director of the NBIA, told me I should ask the Niels Bohr Archive about Gamow’s Thought Experiment Device.

“What’s a Thought Experiment Device?”

[Image: Gamow’s Thought Experiment Device]

This, apparently

If you’ve heard of George Gamow, you’ve probably heard of the Alpher-Bethe-Gamow paper, his work with grad student Ralph Alpher on the origin of atomic elements in the Big Bang, where he added Hans Bethe to the paper purely for an alpha-beta-gamma pun.

As I would learn, Gamow’s sense of humor was evident quite early on. As a research fellow at the Niels Bohr Institute (essentially a postdoc) he played with Bohr’s kids, drew physics cartoons…and made Thought Experiment Devices. These devices were essentially toy experiments, apparatuses that couldn’t actually work but that symbolized some physical argument. The one I used in my talk, pictured above, commemorated Bohr’s triumph over one of Einstein’s objections to quantum theory.

Learning more about the history of the institute, I kept noticing the young researchers, the postdocs and grad students.

[Image]

Lev Landau, George Gamow, Edward Teller. The kids are Aage and Ernest Bohr. Picture from the Niels Bohr Archive.

We don’t usually think about historical physicists as grad students. The only exception I can think of is Feynman, with his stories about picking locks on the Manhattan Project. But in some sense, Feynman was always a grad student.

This was different. This was Lev Landau, patriarch of Russian physics, crowning name in a dozen fields and author of a series of textbooks of legendary rigor…goofing off with Gamow. This was Edward Teller, father of the Hydrogen Bomb, skiing on the institute lawn.

These were the children of the quantum era. They came of age when the laws of physics were being rewritten, when everything was new. Starting there, they could do anything, from Gamow’s cosmology to Landau’s superconductivity, spinning off whole fields in the new reality.

On one level, I envy them. It’s possible they were the last generation to be on the ground floor of a change quite that vast: a shift that touched all of physics and gave each of them the opportunity to become gods of their own academic realms.

I’m glad to know about them too, though, to see them as rambunctious grad students. It’s all too easy to feel like there’s an unbridgeable gap between postdocs and professors, to worry that the only people who make it through seem to have always been professors at heart. Seeing Gamow and Landau and Teller as “quantum kids” dispels that: these are all-too-familiar grad students and postdocs, joking around in all-too-familiar ways, who somehow matured into some of the greatest physicists of their era.


Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that, between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics altogether!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t meet this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.

[Image: portrait of Robert Hooke]

The Hunchback of Notre Science

It wasn’t always this way. Up until the nineteenth century, being a “scientist” was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a 1/r^2 force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As curator of experiments for the Royal Society, it was his responsibility to set up demonstrations for each Royal Society meeting. Later, he also handled correspondence for the Royal Society’s journal. These responsibilities took up much of his time, and as a result, even if he was capable of following up on the consequences of 1/r^2, he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke, expectations have changed, and real original research is no longer something we have to fit into our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until they’ve either learned something productive or their old topic becomes useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just the number of papers and citations, if we manage all of that, then yes, we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.

[Image]

A physicist lazing about unproductively under an apple tree

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than those of many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object, isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time, there have been plenty of cases in which people have done that, where a number of fiddly hand-made models have been summarized in one parameter space. Once that happens, the rule of originality kicks in: now, no-one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that, they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.
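To make the “recipe” idea concrete, here’s a minimal toy sketch in Python (every parameter, bound, and number below is invented for illustration, not any real analysis): once a family of hand-made models has been summarized in a parameter space, a single scan covers all of them at once.

import numpy as np

# Hypothetical two-parameter family of dark matter models:
# a mass (in GeV) and a coupling strength. Both axes are made up.
masses = np.logspace(0, 3, 30)
couplings = np.logspace(-12, -6, 30)

def excluded(mass, coupling):
    """Stand-in for an experimental bound; the formula is invented."""
    return coupling > 1e-9 * np.sqrt(mass / 100.0)

# Each grid point is one "model". Scanning the grid covers the whole
# family in one go, so no one needs to propose them paper by paper.
surviving = [(m, g) for m in masses for g in couplings if not excluded(m, g)]
print(f"{len(surviving)} of {masses.size * couplings.size} toy models not yet falsified")

The point isn’t the numbers, which are fake: it’s that once a scan like this exists, proposing one more point in the grid no longer counts as original.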

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

 

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend and focusing not on covering the “next big thing”, but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms-race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

 

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.

Congratulations to Rainer Weiss, Barry Barish, and Kip Thorne!

The Nobel Prize in Physics was announced this week, awarded to Rainer Weiss, Kip Thorne, and Barry Barish for their work on LIGO, the gravitational wave detector.


Many expected the Nobel to go to LIGO last year, but the Nobel committee waited. At the time, it was expected the prize would be awarded to Rainer Weiss, Kip Thorne, and Ronald Drever, the three founders of the LIGO project, but there were advocates for Barry Barish as well. Traditionally, the Nobel is awarded to at most three people, so the argument got fairly heated, with opponents arguing Barish was “just an administrator” and advocates pointing out that he was “just the administrator without whom the project would have been cancelled in the 90’s”.

All of this ended up being irrelevant when Drever died last March. The Nobel isn’t awarded posthumously, so the list of obvious candidates (or at least obvious candidates who worked on LIGO) was down to three, which simplified things considerably for the committee.

LIGO’s work is impressive and clearly Nobel-worthy, but I would be remiss if I didn’t mention that there is some controversy around it. In June, several of my current colleagues at the Niels Bohr Institute uploaded a paper arguing that if you subtract the gravitational wave signal that LIGO claims to have found then the remaining data, the “noise”, is still correlated between LIGO’s two detectors, which it shouldn’t be if it were actually just noise. LIGO hasn’t released an official response yet, but a LIGO postdoc responded with a guest post on Sean Carroll’s blog, and the team at NBI had responses of their own.

I’d usually be fairly skeptical of this kind of argument: it’s easy for an outsider looking at the data from a big experiment like this to miss important technical details that make the collaboration’s analysis work. That said, having seen some conversations between these folks, I’m a bit more sympathetic. LIGO hadn’t been communicating very clearly initially, and it led to a lot of unnecessary confusion on both sides.

One thing that I don’t think has been emphasized enough is that there are two claims LIGO is making: that they detected gravitational waves, and that they detected gravitational waves from black holes of specific masses at a specific distance. The former claim could be supported by the existence of correlated events between the detectors, without many assumptions as to what the signals should look like. The team at NBI seem to have found a correlation of that sort, but I don’t know if they still think the argument in that paper holds given what they’ve said elsewhere.
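To give a rough sense of the kind of test involved, here’s a toy sketch with entirely made-up data (nothing like the NBI group’s actual analysis): compare two detectors’ data streams at a range of time lags, since pure uncorrelated noise should show no preferred lag.

import numpy as np

rng = np.random.default_rng(1)
n = 8192
shared = rng.normal(size=n)  # a common signal hiding in both detectors

# Toy data streams: independent noise plus a weak shared piece,
# delayed by 7 samples in the second detector. All of this is invented.
det1 = rng.normal(size=n) + 0.2 * shared
det2 = rng.normal(size=n) + 0.2 * np.roll(shared, 7)

# Correlate at each lag; a peak at a nonzero lag is the kind of
# correlated structure that pure noise shouldn't have.
lags = np.arange(-20, 21)
corr = [np.corrcoef(det1, np.roll(det2, -k))[0, 1] for k in lags]
print("peak correlation at lag:", lags[int(np.argmax(corr))])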

The second claim, that the waves were from a collision of black holes with specific masses, requires more work. LIGO compares the signal to various models, or “templates”, of black hole events, trying to find one that matches well. This is what the group at NBI subtracts to get the noise contribution. There’s a lot of potential for error in this sort of template-matching. If two templates are quite similar, it may be that the experiment can’t tell the difference between them. At the same time, the individual template predictions have their own sources of uncertainty, coming from numerical simulations and “loops” in particle physics-style calculations. I haven’t yet found a clear explanation from LIGO of how they take these various sources of error into account. It could well be that even if they definitely saw gravitational waves, they don’t actually have clear evidence for the specific black hole masses they claim to have seen.
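Here’s an equally toy sketch of what template matching means (the “chirp” formula and the tiny template bank below are invented; LIGO’s real pipeline uses carefully modeled waveforms and noise-weighted inner products):

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4096)

def template(f0, chirp):
    """Toy rising-frequency 'chirp'; real templates come from
    solving Einstein's equations, not from this formula."""
    return np.sin(2 * np.pi * (f0 + chirp * t) * t) * np.exp(-4 * (1 - t))

# Fake data: a weak chirp buried in Gaussian noise.
data = 0.3 * template(30.0, 40.0) + rng.normal(size=t.size)

# Try a small bank of templates and keep the best-matching one.
bank = [(f0, c) for f0 in (20.0, 30.0, 40.0) for c in (20.0, 40.0, 60.0)]

def overlap(params):
    w = template(*params)
    return abs(np.dot(data, w)) / np.linalg.norm(w)

best = max(bank, key=overlap)
print("best-matching template (f0, chirp):", best)

With a signal this weak, nearby templates can score almost as well as the true one, which is exactly the ambiguity between similar templates described above.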

I’m sure we’ll hear more about this in the coming months, as both groups continue to talk through their disagreement. Hopefully we’ll get a clearer picture of what’s going on. In the meantime, though, Weiss, Barish, and Thorne have accomplished something impressive regardless, and should enjoy their Nobel.

On the Care and Feeding of Ideas

I read Zen and the Art of Motorcycle Maintenance in high school. It’s got a reputation for being obnoxiously mystical, but one of its points seemed pretty reasonable: the claim that the hard part of science, and the part we understand the least, is coming up with hypotheses.

In some sense, theoretical physics is all about hypotheses. By this I don’t mean that we just say “what if?” all the time. I mean that in theoretical physics most of the work is figuring out the right way to ask a question. Phrase your question in the right way and the answer becomes obvious (or at least, obvious after a straightforward calculation). Because our questions are mathematical, the right question can logically imply its own solution.

From the point of view of “Zen and the Art”, as well as most non-scientists I’ve met, this part is utterly mysterious. The ideas you need here seem like they can’t come from hard work or careful observation. In order to ask the right questions, you just need to be “smart”.

In practice, I’ve noticed there’s more to it than that. We can’t just sit around and wait for an idea to show up. Instead, as physicists we develop a library of tricks, often unstated, that let us work towards the ideas we need.

Sometimes, this involves finding simpler cases, working with them until we understand the right questions to ask. Sometimes it involves doing numerics, or using crude guesses, not because either method will give the final answer but because it will show what the answer should look like. Sometimes we need to rephrase the problem many times, in many different contexts, before we happen on one that works. Most of this doesn’t end up in published papers, so in the end we usually have to pick it up from experience.

Along the way, we often find tricks to help us think better. Mostly this is straightforward stuff: reminders to keep us on-task, keeping our notes organized and our code commented so we have a good idea of what we were doing when we need to go back to it. Everyone has their own personal combination of these things in the background, and they’re rarely discussed.

The upshot is that coming up with ideas is hard work. We need to be smart, sure, but that’s not enough by itself: there are a lot of smart people who aren’t physicists after all.

With all that said, some geniuses really do seem to come up with ideas out of thin air. It’s not the majority of the field: we’re not the idiosyncratic Sheldon Coopers everyone seems to imagine. But for a few people, it really does feel like there’s something magical about where they get their ideas. I’ve had the privilege of working with a couple people like this, and the way they think sometimes seems qualitatively different from our usual way of building ideas. I can’t see any of the standard trappings, the legacy of partial results and tricks of thought, that would lead to where they end up. That doesn’t mean they don’t use tricks just like the rest of us, in the end. But I think genius, if it means anything at all, is thinking in a novel enough way that from the outside it looks like magic.

Most of the time, though, we just need to hone our craft. We build our methods and shape our minds as best we can, and we get better and better at the central mystery of science: asking the right questions.

We’re Weird

Preparing to move to Denmark, it strikes me just how strange what I’m doing would seem to most people. I’m moving across the ocean to a place where I don’t know the language. (Or at least, don’t know more than half a Duolingo lesson.) I’m doing this just three years after another international move. And while I’m definitely nervous, this isn’t the big life-changing shift it would be for many people. It’s just how academic careers are expected to work.

At borders, I’m often asked why I am where I am. Why be an American working in Canada? Why move to Denmark? And in general, the answer is just that it’s where I need to be to do what I want to do, because it’s where the other people who do what I want to do are. A few people seed this process by managing to find faculty jobs in their home countries, and others sort themselves out by their interests. In the end, we end up with places like Perimeter, an institute in the middle of Canada with barely any Canadians.

This is more pronounced for smaller fields than for larger ones. A chemist or biologist might just manage to have their whole career in the same state of the US, or the same country in Europe. For a theoretical physicist, this is much less likely. I also suspect it’s more true of more “universal” fields: I’d guess that most professors of Portuguese literature are in Portugal or Brazil, for example, while theoretical physics has no comparable home.

For theoretical physics, the result is an essentially random mix of people around the world. This works, in part, because essentially everyone does science in English. Occasionally, a group of collaborators happens to speak the same non-English language, so you sometimes hear people talking science in Russian or Spanish or French. But even then there are times people will default to English anyway, because they’re used to it. We publish in English, we chat in English. And as a result, wherever we end up we can at least talk to our colleagues, even if the surrounding world is trickier.

Communities this international, with four different accents in every conversation, are rare, and I occasionally forget that. Before grad school, the closest I came to this was on the internet. On Dungeons and Dragons forums, much like in academia, everyone was drawn together by shared interests and expertise. We had Australians logging on in the middle of everyone else’s night to argue with the Germans, and Brazilians pointing out how the game’s errata was implemented differently in Portuguese.

It’s fun to be in that sort of community in the real world. There’s always something to learn from each other, even on completely mundane topics. Lunch often turns into a discussion of different countries’ cuisines. As someone who became an academic because I enjoy learning, it’s great to have the wheels constantly spinning like that. I should remember, though, that most of the world doesn’t live like this: we’re currently a pretty weird bunch.

The Way You Think Everything Is Connected Isn’t the Way Everything Is Connected

I hear it from older people, mostly.

“Oh, I know about quantum physics, it’s about how everything is connected!”

“String theory: that’s the one that says everything is connected, right?”

“Carl Sagan said we are all stardust. So really, everything is connected.”

[Image]

It makes Connect Four a lot easier anyway

I always cringe a little when I hear this. There’s a misunderstanding here, but it’s not a nice clean one I can clear up in a few sentences. It’s a bunch of interconnected misunderstandings, mixing some real science with a lot of confusion.

To get it out of the way first, no, string theory is not about how “everything is connected”. String theory describes the world in terms of strings, yes, but don’t picture those strings as links connecting distant places: string theory’s proposed strings are very, very short, much smaller than the scales we can investigate with today’s experiments. The reason they’re thought to be strings isn’t that they connect distant things; it’s that being strings lets them wiggle (counteracting some troublesome wiggles in quantum gravity) and wind (curling up in six extra dimensions in a multitude of ways, giving us what looks like a lot of different particles).

(Also, for technical readers: yes, strings also connect branes, but that’s not the sort of connection these people are talking about.)

What about quantum mechanics?

Here’s where it gets trickier. In quantum mechanics, there’s a phenomenon called entanglement. Entanglement really does connect things in different places…for a very specific definition of “connect”. And there’s a real (but complicated) sense in which these connections end up connecting everything, which you can read about here. There’s even speculation that these sorts of “connections” in some sense give rise to space and time.

You really have to be careful here, though. These are connections of a very specific sort. Specifically, they’re the sort that you can’t do anything through.

Connect two cans with a length of string, and you can send messages between them. Connect two particles with entanglement, though, and you can’t send messages between them…at least not any faster than between two non-entangled particles. Even in a quantum world, physics still respects locality: the principle that you can only affect the world where you are, and that any changes you make can’t travel faster than the speed of light. Ansibles, science-fiction devices that communicate faster than light, can’t actually exist according to our current knowledge.

What kind of connection is entanglement, then? That’s a bit tricky to describe in a short post. One way to think about entanglement is as a connection of logic.

Imagine someone takes a coin and cuts it along the rim into a heads half and a tails half. They put the two halves in two envelopes, and randomly give you one. You don’t know whether you have heads or tails…but you know that if you open your envelope and it shows heads, the other envelope must have tails.

[Image]

Unless they’re a spy. Then it could contain something else.

Entanglement starts out with connections like that. Instead of a coin, take a particle that isn’t spinning and “split” it into two particles spinning in different directions, “spin up” and “spin down”. Like the coin, the two particles are “logically connected”: you know if one of them is “spin up” the other is “spin down”.

What makes a quantum coin different from a classical coin is that there’s no way to figure out the result in advance. If you watch carefully, you can see which coin gets put into which envelope, but no matter how carefully you look you can’t predict which particle will be spin up and which will be spin down. There’s no “hidden information” in the quantum case, nowhere nearby you can look to figure it out.

That makes the connection seem a lot weirder than a regular logical connection. It also has slightly different implications, weirdness in how it interacts with the rest of quantum mechanics, things you can exploit in various ways. But none of those ways, none of those connections, allow you to change the world faster than the speed of light. In a way, they’re connecting things in the same sense that “we are all stardust” is connecting things: tied together by logic and cause.
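For readers who’d like the math, here’s the standard textbook version of that logic, a sketch using the usual spin-singlet example:

|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|\uparrow\rangle_A |\downarrow\rangle_B - |\downarrow\rangle_A |\uparrow\rangle_B\big), \qquad \rho_A = \mathrm{Tr}_B\, |\psi\rangle\langle\psi| = \tfrac{1}{2}\,\mathbb{1}

The outcomes are perfectly anti-correlated, our quantum coin. But everything one side can observe locally is captured by the reduced state \rho_A, which stays the same fifty-fifty mixture no matter what anyone does to the other particle: the no-signaling point in one line.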

So as long as this is all you mean by “everything is connected” then sure, everything is connected. But often, people seem to mean something else.

Sometimes, they mean something explicitly mystical. They’re people who believe in dowsing rods and astrology, in sympathetic magic, rituals you can do in one place to affect another. There is no support for any of this in physics. Nothing in quantum mechanics, in string theory, or in big bang cosmology has any support for altering the world with the power of your mind alone, or the stars influencing your day to day life. That’s just not the sort of connection we’re talking about.

Sometimes, “everything is connected” means something a bit more loose, the idea that someone’s desires guide their fate, that you could “know” something happened to your kids the instant it happens from miles away. This has the same problem, though, in that it’s imagining connections that let you act faster than light, where people play a special role. And once again, these just aren’t that sort of connection.

Sometimes, finally, it’s entirely poetic. “Everything is connected” might just mean a sense of awe at the deep physics in mundane matter, or a feeling that everyone in the world should get along. That’s fine: if you find inspiration in physics then I’m glad it brings you happiness. But poetry is personal, so don’t expect others to find the same inspiration. Your “everything is connected” might not be someone else’s.

Where Grants Go on the Ground

I’ve seen several recent debates about grant funding, arguments about whether this or that scientist’s work is “useless” and shouldn’t get funded. Wading into the specifics is a bit more political than I want to get on this blog right now, and if you’re looking for a general defense of basic science there are plenty to choose from. I’d like to focus on a different part, one where I think the sort of people who want to de-fund “useless” research are wildly overoptimistic.

People who call out “useless” research act as if government science funding works in a simple, straightforward way: scientists say what they want to work on, the government chooses which projects it thinks are worth funding, and the scientists the government chooses get paid.

This may be a (rough) picture of how grants are assigned. For big experiments and grants with very specific purposes, it’s reasonably accurate. But for the bulk of grants distributed among individual scientists, it ignores what happens to the money on the ground, after the scientists get it.

The simple fact of the matter is that what a grant is “for” doesn’t have all that much influence on what it gets spent on. In most cases, scientists work on what they want to, and find ways to pay for it.

Sometimes, this means getting grants for applied work, doing some of that, but also fitting in more abstract theoretical projects during downtime. Sometimes this means sharing grant money, if someone has a promising grad student they can’t fund at the moment and needs the extra help. (When I first got research funding as a grad student, I had to talk to the particle physics group’s secretary, and I’m still not 100% sure why.) Sometimes this means being funded to look into something specific and finding a promising spinoff that takes you in an entirely different direction. Sometimes you can get quite far by telling a good story, like a mathematician I know who gets defense funding to study big abstract mathematical systems because some related systems happen to have practical uses.

Is this unethical? Some of it, maybe. But from what I’ve seen of grant applications, it’s understandable.

The problem is that if scientists are too loose with what they spend grant money on, grant agencies tend to be far too specific about what they ask for. I’ve heard of grants that ask you to give a timeline, over the next five years, of each discovery you’re planning to make. That sort of thing just isn’t possible in science: we can lay out a rough direction to go, but we don’t know what we’ll find.

The end result is a bit like complaints about job interviews, where everyone is expected to say they love the company even though no-one actually does. It creates an environment where everyone has to twist the truth just to keep up with everyone else.

The other thing to keep in mind is that there really isn’t any practical way to enforce any of this. Sure, you can require receipts for equipment and the like, but once you’re paying for scientists’ time you don’t have a good way to monitor how they spend it. The best you can do is have experts around to evaluate the scientists’ output…but if those experts understand enough to do that, they’re going to be part of the scientific community, like grant committees usually already are. They’ll have the same expectations as the scientists, and give similar leeway.

So if you want to kill off some “useless” area of research, you can’t do it by picking and choosing who gets grants for what. There are advocates of more drastic actions of course, trying to kill whole agencies or fields, and that’s beyond the scope of this post. But if you want science funding to keep working the way it does, and just have strong opinions about what scientists should do with it, then calling out “useless” research doesn’t do very much: if the scientists in question think it’s useful, they’ll find a way to keep working on it. You’ve slowed them down, but you’ll still end up paying for research you don’t like.

Final note: The rule against political discussion in the comments is still in effect. For this post, that means no specific accusations of one field or another as being useless, or one politician/political party/ideology or another of being the problem here. Abstract discussions and discussions of how the grant system works should be fine.