
On the Care and Feeding of Ideas

I read Zen and the Art of Motorcycle Maintenance in high school. It’s got a reputation for being obnoxiously mystical, but one of its points seemed pretty reasonable: the claim that the hard part of science, and the part we understand the least, is coming up with hypotheses.

In some sense, theoretical physics is all about hypotheses. By this I don’t mean that we just say “what if?” all the time. I mean that in theoretical physics most of the work is figuring out the right way to ask a question. Phrase your question in the right way and the answer becomes obvious (or at least, obvious after a straightforward calculation). Because our questions are mathematical, the right question can logically imply its own solution.

From the point of view of “Zen and the Art”, as well as most non-scientists I’ve met, this part is utterly mysterious. The ideas you need here seem like they can’t come from hard work or careful observation. In order to ask the right questions, you just need to be “smart”.

In practice, I’ve noticed there’s more to it than that. We can’t just sit around and wait for an idea to show up. Instead, as physicists we develop a library of tricks, often unstated, that let us work towards the ideas we need.

Sometimes, this involves finding simpler cases, working with them until we understand the right questions to ask. Sometimes it involves doing numerics, or using crude guesses, not because either method will give the final answer but because it will show what the answer should look like. Sometimes we need to rephrase the problem many times, in many different contexts, before we happen on one that works. Most of this doesn’t end up in published papers, so in the end we usually have to pick it up from experience.

Along the way, we often find tricks to help us think better. Mostly this is straightforward stuff: reminders that keep us on-task, notes kept organized and code kept commented so we have a good idea of what we were doing when we need to go back to it. Everyone has their own personal combination of these things working in the background, and they’re rarely discussed.

The upshot is that coming up with ideas is hard work. We need to be smart, sure, but that’s not enough by itself: there are a lot of smart people who aren’t physicists after all.

With all that said, some geniuses really do seem to come up with ideas out of thin air. It’s not the majority of the field: we’re not the idiosyncratic Sheldon Coopers everyone seems to imagine. But for a few people, it really does feel like there’s something magical about where they get their ideas. I’ve had the privilege of working with a couple people like this, and the way they think sometimes seems qualitatively different from our usual way of building ideas. I can’t see any of the standard trappings, the legacy of partial results and tricks of thought, that would lead to where they end up. That doesn’t mean they don’t use tricks just like the rest of us, in the end. But I think genius, if it means anything at all, is thinking in a novel enough way that from the outside it looks like magic.

Most of the time, though, we just need to hone our craft. We build our methods and shape our minds as best we can, and we get better and better at the central mystery of science: asking the right questions.


We’re Weird

As I prepare to move to Denmark, it strikes me just how strange what I’m doing would seem to most people. I’m moving across the ocean to a place where I don’t know the language. (Or at least, don’t know more than half a Duolingo lesson.) I’m doing this just three years after another international move. And while I’m definitely nervous, this isn’t the big, life-changing shift it would be for many people. It’s just how academic careers are expected to work.

At borders, I’m often asked why I am where I am. Why be an American working in Canada? Why move to Denmark? And in general, the answer is just that it’s where I need to be to do what I want to do, because it’s where the other people who do what I want to do are. A few people seed this process by managing to find faculty jobs in their home countries, and others sort themselves out by their interests. In the end, we end up with places like Perimeter, an institute in the middle of Canada with barely any Canadians.

This is more pronounced for smaller fields than for larger ones. A chemist or biologist might just manage to have their whole career in the same US state, or the same country in Europe. For a theoretical physicist, this is much less likely. I also suspect it’s more pronounced for more “universal” fields: I’d guess that most professors of Portuguese literature, by contrast, stay in Portugal or Brazil.

For theoretical physics, the result is an essentially random mix of people around the world. This works, in part, because essentially everyone does science in English. Occasionally, a group of collaborators happens to speak the same non-English language, so you sometimes hear people talking science in Russian or Spanish or French. But even then there are times people will default to English anyway, because they’re used to it. We publish in English, we chat in English. And as a result, wherever we end up we can at least talk to our colleagues, even if the surrounding world is trickier.

Communities this international, with four different accents in every conversation, are rare, and I occasionally forget that. Before grad school, the closest I came to this was on the internet. On Dungeons and Dragons forums, much like in academia, everyone was drawn together by shared interests and expertise. We had Australians logging on in the middle of everyone else’s night to argue with the Germans, and Brazilians pointing out how the game’s errata was implemented differently in Portuguese.

It’s fun to be in that sort of community in the real world. There’s always something to learn from each other, even on completely mundane topics. Lunch often turns into a discussion of different countries’ cuisines. As someone who became an academic because I enjoy learning, it’s great to have the wheels constantly spinning like that. I should remember, though, that most of the world doesn’t live like this: we’re currently a pretty weird bunch.

The Way You Think Everything Is Connected Isn’t the Way Everything Is Connected

I hear it from older people, mostly.

“Oh, I know about quantum physics, it’s about how everything is connected!”

“String theory: that’s the one that says everything is connected, right?”

“Carl Sagan said we are all stardust. So really, everything is connected.”

(Image: a Connect Four game. Caption: It makes Connect Four a lot easier anyway.)

I always cringe a little when I hear this. There’s a misunderstanding here, but it’s not a nice clean one I can clear up in a few sentences. It’s a bunch of interconnected misunderstandings, mixing some real science with a lot of confusion.

To get it out of the way first, no, string theory is not about how “everything is connected”. String theory describes the world in terms of strings, yes, but don’t picture those strings as links connecting distant places: string theory’s proposed strings are very, very short, much smaller than the scales we can investigate with today’s experiments. The reason they’re thought to be strings isn’t that they connect distant things; it’s that being string-shaped lets them wiggle (counteracting some troublesome wiggles in quantum gravity) and wind (curling up in six extra dimensions in a multitude of ways, giving us what looks like a lot of different particles).

(Also, for technical readers: yes, strings also connect branes, but that’s not the sort of connection these people are talking about.)

What about quantum mechanics?

Here’s where it gets trickier. In quantum mechanics, there’s a phenomenon called entanglement. Entanglement really does connect things in different places…for a very specific definition of “connect”. And there’s a real (but complicated) sense in which these connections end up connecting everything, which you can read about here. There’s even speculation that these sorts of “connections” in some sense give rise to space and time.

You really have to be careful here, though. These are connections of a very specific sort. Specifically, they’re the sort that you can’t do anything through.

Connect two cans with a length of string, and you can send messages between them. Connect two particles with entanglement, though, and you can’t send messages between them…at least not any faster than between two non-entangled particles. Even in a quantum world, physics still respects locality: the principle that you can only affect the world where you are, and that any changes you make can’t travel faster than the speed of light. Ansibles, science-fiction devices that communicate faster than light, can’t actually exist according to our current knowledge.

What kind of connection is entanglement, then? That’s a bit tricky to describe in a short post. One way to think about entanglement is as a connection of logic.

Imagine someone takes a coin and cuts it along the rim into a heads half and a tails half. They put the two halves in two envelopes, and randomly give you one. You don’t know whether you have heads or tails…but you know that if you open your envelope and it shows heads, the other envelope must have tails.

(Image: a nickel. Caption: Unless they’re a spy. Then it could contain something else.)

Entanglement starts out with connections like that. Instead of a coin, take a particle that isn’t spinning and “split” it into two particles spinning in different directions, “spin up” and “spin down”. Like the coin, the two particles are “logically connected”: you know if one of them is “spin up” the other is “spin down”.

What makes a quantum coin different from a classical coin is that there’s no way to figure out the result in advance. If you watch carefully, you can see which half of the coin gets put into which envelope, but no matter how carefully you look you can’t predict which particle will be spin up and which will be spin down. There’s no “hidden information” in the quantum case, nowhere nearby you can look to figure it out.

That makes the connection seem a lot weirder than a regular logical connection. It also has slightly different implications, weirdness in how it interacts with the rest of quantum mechanics, things you can exploit in various ways. But none of those ways, none of those connections, allow you to change the world faster than the speed of light. In a way, they’re connecting things in the same sense that “we are all stardust” is connecting things: tied together by logic and cause.
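
If it helps to make this concrete, here’s a minimal toy sketch in Python of the bookkeeping above. It is emphatically not a simulation of real quantum mechanics: measured along the same axis, an entangled pair behaves just like the cut-up coin, which is exactly the point about not being able to send messages. (The genuinely quantum part, the lack of hidden information, only shows up when the two sides measure along different axes, which this classical-looking sketch doesn’t capture.)

```python
import random

def split_pair():
    """Toy model: a non-spinning particle 'split' into two spinning ones.

    Measured along the same axis, one side is always "up" and the other
    "down", but which is which is random.
    """
    a = random.choice(["up", "down"])   # your particle's result: a fair coin flip
    b = "down" if a == "up" else "up"   # the far-away particle: always the opposite
    return a, b

results = [split_pair() for _ in range(10_000)]

# Looking only at your own particle, you see plain 50/50 randomness,
# so no message can be hiding in it...
fraction_up = sum(a == "up" for a, _ in results) / len(results)
print(f"your side is 'up' {fraction_up:.1%} of the time")

# ...but the two sides are perfectly anti-correlated: a connection of logic.
print(all(a != b for a, b in results))
```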

So as long as this is all you mean by “everything is connected” then sure, everything is connected. But often, people seem to mean something else.

Sometimes, they mean something explicitly mystical. They’re people who believe in dowsing rods and astrology, in sympathetic magic, rituals you can do in one place to affect another. There is no support for any of this in physics. Nothing in quantum mechanics, string theory, or big bang cosmology lends any support to altering the world with the power of your mind alone, or to the stars influencing your day-to-day life. That’s just not the sort of connection we’re talking about.

Sometimes, “everything is connected” means something a bit more loose, the idea that someone’s desires guide their fate, that you could “know” something happened to your kids the instant it happens from miles away. This has the same problem, though, in that it’s imagining connections that let you act faster than light, where people play a special role. And once again, these just aren’t that sort of connection.

Sometimes, finally, it’s entirely poetic. “Everything is connected” might just mean a sense of awe at the deep physics in mundane matter, or a feeling that everyone in the world should get along. That’s fine: if you find inspiration in physics then I’m glad it brings you happiness. But poetry is personal, so don’t expect others to find the same inspiration. Your “everything is connected” might not be someone else’s.

Where Grants Go on the Ground

I’ve seen several recent debates about grant funding, arguments about whether this or that scientist’s work is “useless” and shouldn’t get funded. Wading into the specifics is a bit more political than I want to get on this blog right now, and if you’re looking for a general defense of basic science there are plenty to choose from. I’d like to focus on a different part, one where I think the sort of people who want to de-fund “useless” research are wildly overoptimistic.

People who call out “useless” research act as if government science funding works in a simple, straightforward way: scientists say what they want to work on, the government chooses which projects it thinks are worth funding, and the scientists the government chooses get paid.

This may be a (rough) picture of how grants are assigned. For big experiments and grants with very specific purposes, it’s reasonably accurate. But for the bulk of grants distributed among individual scientists, it ignores what happens to the money on the ground, after the scientists get it.

The simple fact of the matter is that what a grant is “for” doesn’t have all that much influence on what it gets spent on. In most cases, scientists work on what they want to, and find ways to pay for it.

Sometimes, this means getting grants for applied work, doing some of that, but also fitting in more abstract theoretical projects during downtime. Sometimes this means sharing grant money, if someone has a promising grad student they can’t fund at the moment and needs the extra help. (When I first got research funding as a grad student, I had to talk to the particle physics group’s secretary, and I’m still not 100% sure why.) Sometimes this means being funded to look into something specific and finding a promising spinoff that takes you in an entirely different direction. Sometimes you can get quite far by telling a good story, like a mathematician I know who gets defense funding to study big abstract mathematical systems because some related systems happen to have practical uses.

Is this unethical? Some of it, maybe. But from what I’ve seen of grant applications, it’s understandable.

The problem is that, while scientists can be too loose with what they spend grant money on, grant agencies’ requests tend to be far too specific. I’ve heard of grants that ask you to give a timeline, over the next five years, of each discovery you’re planning to make. That sort of thing just isn’t possible in science: we can lay out a rough direction to go, but we don’t know what we’ll find.

The end result is a bit like complaints about job interviews, where everyone is expected to say they love the company even though no-one actually does. It creates an environment where everyone has to twist the truth just to keep up with everyone else.

The other thing to keep in mind is that there really isn’t any practical way to enforce any of this. Sure, you can require receipts for equipment and the like, but once you’re paying for scientists’ time you don’t have a good way to monitor how they spend it. The best you can do is have experts around to evaluate the scientists’ output…but if those experts understand enough to do that, they’re going to be part of the scientific community, like grant committees usually already are. They’ll have the same expectations as the scientists, and give similar leeway.

So if you want to kill off some “useless” area of research, you can’t do it by picking and choosing who gets grants for what. There are advocates of more drastic actions of course, trying to kill whole agencies or fields, and that’s beyond the scope of this post. But if you want science funding to keep working the way it does, and just have strong opinions about what scientists should do with it, then calling out “useless” research doesn’t do very much: if the scientists in question think it’s useful, they’ll find a way to keep working on it. You’ve slowed them down, but you’ll still end up paying for research you don’t like.

Final note: The rule against political discussion in the comments is still in effect. For this post, that means no specific accusations of one field or another as being useless, or one politician/political party/ideology or another of being the problem here. Abstract discussions and discussions of how the grant system works should be fine.

The Many Worlds of Condensed Matter

Physics is the science of the very big and the very small. We study the smallest scales, the fundamental particles that make up the universe, and the largest, stars on up to the universe as a whole.

We also study the world in between, though.

That’s the domain of condensed matter, the study of solids, liquids, and other medium-sized arrangements of stuff. And while it doesn’t make the news as often, it’s arguably the biggest field in physics today.

(In case you’d like some numbers, the American Physical Society has divisions dedicated to different sub-fields. Condensed Matter Physics is almost twice the size of the next biggest division, Particles & Fields. Add in other groups that focus on medium-sized stuff, like those working on solid state physics, optics, or biophysics, and you get a majority of physicists focused on the middle of the distance scale.)

When I started grad school, I didn’t pay much attention to condensed matter and related fields. Beyond the courses in quantum field theory and string theory, my “breadth” courses were on astrophysics and particle physics. But over and over again, from people in every sub-field, I kept hearing the same recommendation:

“You should take Solid State Physics. It’s a really great course!”

At the time, I never understood why. It was only later, once I had some research under my belt, that I realized:

Condensed matter uses quantum field theory!

The same basic framework, describing the world in terms of rippling quantum fields, doesn’t just work for fundamental particles. It also works for materials. Rather than describing the material in terms of its fundamental parts, condensed matter physicists “zoom out” and talk about overall properties, like sound waves and electric currents, treating them as if they were the particles of quantum field theory.
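
For a concrete taste of what “zooming out” means, here’s a small sketch using the standard textbook example of a one-dimensional chain of atoms joined by springs (my choice of illustration, not something from the discussion above). The chain’s collective vibrations, phonons, are exactly the kind of “metaphorical particle” condensed matter physicists work with.

```python
import numpy as np

# Toy setup: a 1D chain of atoms of mass m and spacing a, joined by springs
# of stiffness K. Zooming out from the individual atoms, the vibrations
# organize into waves labeled by a wavevector k, with frequency
# omega(k) = 2*sqrt(K/m)*|sin(k*a/2)|. The quanta of these waves are phonons:
# "particles" of the material, not of the LHC.

m, K, a = 1.0, 1.0, 1.0                       # arbitrary toy units
k = np.linspace(-np.pi / a, np.pi / a, 201)   # wavevectors in the first Brillouin zone
omega = 2 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2))

# At small k the relation is linear, omega ~ a*sqrt(K/m)*|k|: ordinary sound
# waves, with the slope playing the role that the speed of light plays for
# fundamental particles.
sound_speed = a * np.sqrt(K / m)
print(f"long-wavelength sound speed ~ {sound_speed:.2f} (toy units)")
```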

This tends to confuse the heck out of journalists. Not used to covering condensed matter (and sometimes egged on by hype from the physicists), they mix up the metaphorical particles of these systems with the sort of particles made by the LHC, with predictably dumb results.

Once you get past the clumsy journalism, though, this kind of analogy has a lot of value.

Occasionally, you’ll see an article about string theory providing useful tools for condensed matter. This happens, but it’s less widespread than some of the articles make it out to be: condensed matter is a huge and varied field, and string theory applications tend to be of interest to only a small piece of it.

It doesn’t get talked about much, but the dominant trend is actually in the other direction: increasingly, string theorists need to have at least a basic background in condensed matter.

String theory’s curse/triumph is that it can give rise not just to one quantum field theory, but many: a vast array of different worlds obtained by twisting extra dimensions in different ways. Particle physicists tend to study a fairly small range of such theories, looking for worlds close enough to ours that they still fit the evidence.

Condensed matter, in contrast, creates its own worlds. Pick the right material, take the right slice, and you get quantum field theories of almost any sort you like. While you can’t go to higher dimensions than our usual four, you can certainly look at lower ones, at the behavior of currents on a sheet of metal or atoms arranged in a line. This has led some condensed matter theorists to examine a wide range of quantum field theories with one strange behavior or another, theories that wouldn’t have occurred to particle physicists but that, in many cases, are part of the cornucopia of theories you can get out of string theory.

So if you want to explore the many worlds of string theory, the many worlds of condensed matter offer a useful guide. Increasingly, tools from that community, like integrability and tensor networks, are migrating over to ours.

It’s gotten to the point where I genuinely regret ignoring condensed matter in grad school. Parts of it are ubiquitous enough, and useful enough, that some of it is an expected part of a string theorist’s background. The many worlds of condensed matter, as it turned out, were well worth a look.

Pop Goes the Universe and Other Cosmic Microwave Background Games

(With apologies to whoever came up with this “book”.)

Back in February, Ijjas, Steinhardt, and Loeb wrote an article for Scientific American titled “Pop Goes the Universe” criticizing cosmic inflation, the proposal that the universe underwent a period of rapid expansion early in its life, smoothing it out to achieve the (mostly) uniform universe we see today. Recently, Scientific American published a response by Guth, Kaiser, Linde, Nomura, and 29 co-signers. This was followed by a counter-response, which is the usual number of steps for this sort of thing before it dissipates harmlessly into the blogosphere.

In general, string theory, supersymmetry, and inflation tend to be criticized in very similar ways. Each gets accused of being unverifiable, able to be tuned to match any possible experimental result. Each has been claimed to be unfairly dominant, its position as “default answer” more due to the bandwagon effect than the idea’s merits. All three tend to get discussed in association with the multiverse, and blamed for dooming physics as a result. And all are frequently defended with one refrain: “If you have a better idea, what is it?”

It’s probably tempting (on both sides) to view this as just another example of that argument. In reality, though, string theory, supersymmetry, and inflation are all in very different situations. The details matter. And I worry that in this case both sides are too ready to assume the other is just making the “standard argument”, and end up talking past each other.

When people say that string theory makes no predictions, they’re correct in a sense, but off topic: the majority of string theorists aren’t making the sort of claims that require successful predictions. When people say that inflation makes no predictions, if you assume they mean the same thing that people mean when they accuse string theory of making no predictions, then they’re flat-out wrong. Unlike string theorists, most people who work on inflation care a lot about experiment. They write papers filled with predictions, consequences for this or that model if this or that telescope sees something in the near future.

I don’t think Ijjas, Steinhardt, and Loeb were making that kind of argument.

When people say that supersymmetry makes no predictions, there’s some confusion of scope. (Low-energy) supersymmetry isn’t one specific proposal that needs defending on its own. It’s a class of different models, each with its own predictions. Given a specific proposal, one can see if it’s been ruled out by experiment, and predict what future experiments might say about it. Ruling out one model doesn’t rule out supersymmetry as a whole, but it doesn’t need to, because any given researcher isn’t arguing for supersymmetry as a whole: they’re arguing for their particular setup. The right “scope” is between specific supersymmetric models and specific non-supersymmetric models, not both as general principles.

Guth, Kaiser, Linde, and Nomura’s response follows similar lines in defending inflation. They point out that the wide variety of models are subject to being ruled out in the face of observation, and compare to the construction of the Standard Model in particle physics, with many possible parameters under the overall framework of Quantum Field Theory.

Ijjas, Steinhardt, and Loeb’s article certainly looked like it was making this sort of mistake. But as they clarify in the FAQ of their counter-response, they’ve got a more serious objection. They’re arguing that, unlike in the case of supersymmetry or the Standard Model, specific inflation models do not lead to specific predictions. They’re arguing that, because inflation typically leads to a multiverse, any specific model will in fact lead to a wide variety of possible observations. In effect, they’re arguing that the multitude of people busily making predictions based on inflationary models are missing a step in their calculations, underestimating their errors by a huge margin.

This is where I really regret that these arguments usually end after three steps (article, response, counter-response). Here Ijjas, Steinhardt, and Loeb are making what is essentially a technical claim, one that Guth, Kaiser, Linde, and Nomura could presumably respond to with a technical response, after which the rest of us would actually learn something. As-is, I certainly don’t have the background in inflation to know whether or not this point makes sense, and I’d love to hear from someone who does.

One aspect of this exchange that baffled me was the “accusation” that Ijjas, Steinhardt, and Loeb were just promoting their own work on bouncing cosmologies. (I put “accusation” in quotes because while Ijjas, Steinhardt, and Loeb seem to treat it as if it were an accusation, Guth, Kaiser, Linde, and Nomura don’t obviously mean it as one.)

“Bouncing cosmology” is Ijjas, Steinhardt, and Loeb’s answer to the standard “If you have a better idea, what is it?” response. It wasn’t the focus of their article, but while they seem to think this speaks well of them (hence their treatment of “promoting their own work” as if it were an accusation), I don’t. I read a lot of Scientific American growing up, and the best articles focused on explaining a positive vision: some cool new idea, mainstream or not, that could capture the public’s interest. That kind of article could still have included criticism of inflation; you’d want it in there to justify the use of a bouncing cosmology. But by going beyond criticism to that positive vision, it would have avoided falling into the standard back and forth these arguments tend to devolve into, and maybe we would have actually learned something from the exchange.

What Space Can Tell Us about Fundamental Physics

Back when LIGO announced its detection of gravitational waves, there was one question people kept asking me: “what does this say about quantum gravity?”

The answer, each time, was “nothing”. LIGO’s success told us nothing about quantum gravity, and very likely LIGO will never tell us anything about quantum gravity.

The sheer volume of questions made me think, though. Astronomy, astrophysics, and cosmology fascinate people. They capture the public’s imagination in a way that makes them expect breakthroughs about fundamental questions. Especially now, with the LHC so far seeing nothing new since the Higgs, people are turning to space for answers.

Is that a fair expectation? Well, yes and no.

Most astrophysicists aren’t concerned with finding new fundamental laws of nature. They’re interested in big systems like stars and galaxies, where we know most of the basic rules but can’t possibly calculate all their consequences. Like most physicists, they’re doing the vital work of “physics of decimals”.

At the same time, there’s a decent chunk of astrophysics and cosmology that does matter for fundamental physics. Just not all of it. Here are some of the key areas where space has something important to say about the fundamental rules that govern our world:

 

1. Dark Matter:

Galaxies rotate at different speeds than their visible stars and gas alone would account for. Clusters of galaxies bend light that passes by, and do so more than their visible mass would suggest. And when scientists try to model the evolution of the universe, from early images to its current form, the models require an additional piece: extra matter that cannot interact with light. All of this suggests that there is some extra “dark” matter in the universe, not described by our standard model of particle physics.
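
As a rough illustration of the first point, here’s a minimal sketch (with a made-up mass profile, purely for demonstration) of the Newtonian expectation: if a galaxy’s visible mass is concentrated toward its center, stars far out should orbit more and more slowly, while measured rotation curves instead tend to stay roughly flat.

```python
import numpy as np

G = 4.30e-6  # Newton's constant in kpc * (km/s)^2 / solar mass

def circular_velocity(r_kpc, enclosed_mass_msun):
    """Newtonian circular speed: v(r) = sqrt(G * M(<r) / r)."""
    return np.sqrt(G * enclosed_mass_msun / r_kpc)

# Hypothetical toy galaxy: about 5e10 solar masses of visible matter,
# most of it within the innermost few kiloparsecs.
radii = np.array([2.0, 5.0, 10.0, 20.0, 30.0])     # kpc
visible_mass = 5e10 * (1 - np.exp(-radii / 2.5))   # toy enclosed-mass profile

v_expected = circular_velocity(radii, visible_mass)
print(np.round(v_expected))  # falls off at large radii once the visible mass
                             # runs out, whereas observed curves flatten out,
                             # pointing to unseen mass
```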

If we want to understand this dark matter, we need to know more about its properties, and much of that can be learned from astronomy. If it turns out dark matter isn’t really matter after all, if it can be explained by a modification of gravity or better calculations of gravity’s effects, then it still will have important implications for fundamental physics, and astronomical evidence will still be key to finding those implications.

2. Dark Energy (/Cosmological Constant/Inflation/…):

The universe is expanding, and its expansion appears to be accelerating. It also seems more smooth and uniform than expected, suggesting that it had a period of much greater acceleration early on. Both of these suggest some extra ingredient: something changing the expansion’s acceleration, a “dark energy”, the sort of thing that can often be modeled with a new scalar field like the Higgs.
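
To put a standard textbook equation behind that reasoning (nothing here is specific to any particular dark energy model): in the usual cosmological framework, the acceleration of the expansion is set by the energy density ρ and pressure p of whatever fills the universe,

\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right),
\]

where a is the scale factor tracking the universe’s size. Ordinary matter and radiation have ρ + 3p > 0 and can only slow the expansion down; getting acceleration requires something with strongly negative pressure, p < −ρ/3, which is exactly what a cosmological constant or a slowly varying scalar field can provide.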

Again, the specifics matter: how (and perhaps whether) the universe is expanding now, and what kinds of early expansion (if any) the shape of the universe suggests, will almost certainly have implications for fundamental physics.

3. Limits on stable stuff:

Let’s say you have a new proposal for particle physics. You’ve predicted a new particle, but it can’t interact with anything else, or interacts so weakly we’d never detect it. If your new particle is stable, then you can still say something about it, because its mass would have an effect on the early universe. Too many such particles and they would throw off cosmologists’ models, ruling your proposal out.

Alternatively, you might predict something that could be detected, but hasn’t, like a magnetic monopole. Then cosmologists can tell you how many such particles would have been produced in the early universe, and thus how likely we would be to detect them today. If you predict too many particles and we don’t see them, then that becomes evidence against your proposal.

4. “Cosmological Collider Physics”:

A few years back, Nima Arkani-Hamed and Juan Maldacena suggested that the early universe could be viewed as an extremely high energy particle collider. While this collider performed only one experiment, the results from that experiment are spread across the sky, and observed patterns in the early universe should tell us something about the particles produced by the cosmic collider.

People are still teasing out the implications of this idea, but it looks promising, and could mean we have a lot more to learn from examining the structure of the universe.

5. Big Weird Space Stuff:

If you suspect we live in a multiverse, you might want to look for signs of other universes brushing up against our own. If your model of the early universe predicts vast cosmic strings, maybe a gravitational wave detector like LIGO will be able to see them.

6. Unexpected weirdness:

In all likelihood, nothing visibly “quantum” happens at the event horizons of astrophysical black holes. If you think there’s something to see though, the Event Horizon Telescope might be able to see it. There’s a grab bag of other predictions like this: situations where we probably won’t see anything, but where at least one person thinks there’s a question worth asking.

 

I’ve probably left something out here, but this should give you a general idea. There is a lot that fundamental physics can learn from astronomy, from the overall structure and origins of the universe to unexplained phenomena like dark matter. But not everything in astronomy has these sorts of implications: for the most part, astronomy is interesting not because it tells us something about the fundamental laws of nature, but because it tells us how the vast space above us actually happens to work.