Category Archives: General QFT

The Many Worlds of Condensed Matter

Physics is the science of the very big and the very small. We study the smallest scales, the fundamental particles that make up the universe, and the largest, stars on up to the universe as a whole.

We also study the world in between, though.

That’s the domain of condensed matter, the study of solids, liquids, and other medium-sized arrangements of stuff. And while it doesn’t make the news as often, it’s arguably the biggest field in physics today.

(In case you’d like some numbers, the American Physical Society has divisions dedicated to different sub-fields. Condensed Matter Physics is almost twice the size of the next biggest division, Particles & Fields. Add in other sub-fields that focus on medium-sized stuff, like solid state physics, optics, or biophysics, and you get a majority of physicists working on the middle of the distance scale.)

When I started grad school, I didn’t pay much attention to condensed matter and related fields. Beyond the courses in quantum field theory and string theory, my “breadth” courses were on astrophysics and particle physics. But over and over again, from people in every sub-field, I kept hearing the same recommendation:

“You should take Solid State Physics. It’s a really great course!”

At the time, I never understood why. It was only later, once I had some research under my belt, that I realized:

Condensed matter uses quantum field theory!

The same basic framework, describing the world in terms of rippling quantum fields, doesn’t just work for fundamental particles. It also works for materials. Rather than describing the material in terms of its fundamental parts, condensed matter physicists “zoom out” and talk about overall properties, like sound waves and electric currents, treating them as if they were the particles of quantum field theory.

This tends to confuse the heck out of journalists. Not used to covering condensed matter (and sometimes egged on by hype from the physicists), they mix up the metaphorical particles of these systems with the sort of particles made by the LHC, with predictably dumb results.

Once you get past the clumsy journalism, though, this kind of analogy has a lot of value.

Occasionally, you’ll see an article about string theory providing useful tools for condensed matter. This happens, but it’s less widespread than some of the articles make it out to be: condensed matter is a huge and varied field, and string theory applications tend to be of interest to only a small piece of it.

It doesn’t get talked about much, but the dominant trend is actually in the other direction: increasingly, string theorists need to have at least a basic background in condensed matter.

String theory’s curse/triumph is that it can give rise not just to one quantum field theory, but many: a vast array of different worlds obtained by twisting extra dimensions in different ways. Particle physicists tend to study a fairly small range of such theories, looking for worlds close enough to ours that they still fit the evidence.

Condensed matter, in contrast, creates its own worlds. Pick the right material, take the right slice, and you get quantum field theories of almost any sort you like. While you can’t go to higher dimensions than our usual four, you can certainly look at lower ones, at the behavior of currents on a sheet of metal or atoms arranged in a line. This has led some condensed matter theorists to examine a wide range of quantum field theories with one strange behavior or another, theories that wouldn’t have occurred to particle physicists but that, in many cases, are part of the cornucopia of theories you can get out of string theory.

So if you want to explore the many worlds of string theory, the many worlds of condensed matter offer a useful guide. Increasingly, tools from that community, like integrability and tensor networks, are migrating over to ours.

It’s gotten to the point where I genuinely regret ignoring condensed matter in grad school. Parts of it are ubiquitous enough, and useful enough, that some of it is an expected part of a string theorist’s background. The many worlds of condensed matter, as it turned out, were well worth a look.

What Makes Light Move?

Light always moves at the speed of light.

It’s not alone in this: anything that lacks mass moves at the speed of light. Gluons, if they weren’t constantly interacting with each other, would move at the speed of light. Neutrinos, back when we thought they were massless, were thought to move at the speed of light. Gravitational waves, and by extension gravitons, move at the speed of light.

This is, on the face of it, a weird thing to say. If I say a jet moves at the speed of sound, I don’t mean that it always moves at the speed of sound. Find it in its hangar and hopefully it won’t be moving at all.

And so, people occasionally ask me, why can’t we find light in its hangar? Why does light never stand still? What makes light move?

(For the record, you can make light “stand still” in a material, but that’s because the material is absorbing and reflecting it, so it’s not the “same” light traveling through. Compare the speed of a wave of hands in a stadium versus the speed you could run past the seats.)

This is surprisingly tricky to explain without math. Some people point out that if you want to see light at rest you need to speed up to catch it, but you can’t accelerate enough unless you too are massless. This probably sounds a bit circular. Some people talk about how, from light’s perspective, no time passes at all. This is true, but it seems to confuse more than it helps. Some people say that light is “made of energy”, but I don’t like that metaphor. Nothing is “made of energy”, nor is anything “made of mass” either. Mass and energy are properties things can have.

I do like game metaphors though. So, imagine that each particle (including photons, particles of light) is a character in an RPG.

[Image: Light Yagami]

For bonus points, play Light in an RPG.

You can think of energy as the particle’s “character points”. When the particle builds its character it gets a number of points determined by its energy. It can spend those points increasing its “stats”: mass and momentum, via the lesser-known big brother of E=mc^2, E^2=p^2c^2+m^2c^4.

Maybe the particle chooses to play something heavy, like a Higgs boson. Then they spend a lot of points on mass, and don’t have as much to spend on momentum. If they picked something lighter, like an electron, they’d have more to spend, so they could go faster. And if they spent nothing at all on mass, like light does, they could use all of their energy “points” boosting their speed.

Now, it turns out that these “energy points” don’t boost speed one for one, which is why low-energy light isn’t any slower than high-energy light. Instead, speed is determined by the ratio between energy and momentum. When that ratio is exactly the speed of light, when E^2=p^2c^2, the particle is moving at the speed of light.

(Why this is the case is trickier to explain. You’ll have to trust me, or Wikipedia, that the math works out.)
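
If a concrete check helps, here’s a minimal numerical sketch of that bookkeeping (my own illustration, not from the original post). It takes a particle’s total energy and mass, solves E^2=p^2c^2+m^2c^4 for the momentum, and uses the standard relation v = pc^2/E for the speed:

```python
# Sketch: speed from energy and mass (illustration only).
# Units: energies in MeV, masses in MeV/c^2, with c = 1 so speeds come out as fractions of c.
import math

def speed_fraction_of_c(energy, mass):
    """Speed of a particle, as a fraction of c, from its total energy and mass.

    Solves E^2 = (pc)^2 + (mc^2)^2 for the momentum, then uses v = p c^2 / E.
    """
    pc = math.sqrt(energy**2 - mass**2)  # momentum times c, in MeV
    return pc / energy                   # v / c

# A photon (zero mass) moves at c whatever its energy;
# an electron only creeps up toward c as its energy grows.
for name, energy, mass in [("photon", 1.0, 0.0),
                           ("photon", 1000.0, 0.0),
                           ("electron", 1.0, 0.511),
                           ("electron", 1000.0, 0.511)]:
    print(f"{name:8s} E = {energy:7.1f} MeV  ->  v = {speed_fraction_of_c(energy, mass):.7f} c")
```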

Some of you may be happy with this explanation, but others will accuse me of passing the buck. Ok, a photon with any energy will move at the speed of light. But why do photons have any energy at all? And even if they must move at the speed of light, what determines which direction?

Here I think part of the problem is an old physics metaphor, probably dating back to Newton, of a pool table.

[Image: a racked set of pool balls]

A pool table is a decent metaphor for classical physics. You have moving objects following predictable paths, colliding off each other and the walls of the table.

Where people go wrong is in projecting this metaphor back to the beginning of the game. At the beginning of a game of pool, the balls are at rest, racked in the center. Then one of them is hit with the pool cue, and they’re set into motion.

In physics, we don’t tend to have such neat and tidy starting conditions. In particular, things don’t have to start at rest before something whacks them into motion.

A photon’s “start” might come from an unstable Higgs boson produced by the LHC. The Higgs decays, and turns into two photons. Since energy is conserved, these two must each carry half of the energy of the original Higgs (in the Higgs’s own frame of reference), including the energy that was “spent” on its mass. This process is quantum mechanical, and with no preferred direction the photons will emerge in a random one.
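
Here’s a toy version of that decay (my own sketch, using the measured Higgs mass of roughly 125 GeV, and nothing resembling real LHC code): with the Higgs at rest, momentum conservation sends the two photons off back to back, energy conservation gives each one half the Higgs’s rest energy, and the direction itself is random.

```python
# Toy sketch of a Higgs decaying to two photons in its own rest frame (illustration only).
import math
import random

HIGGS_MASS = 125.0  # GeV/c^2, roughly the measured value

def decay_higgs_at_rest():
    photon_energy = HIGGS_MASS / 2.0          # each photon gets half the rest energy (c = 1 units)
    theta = math.acos(random.uniform(-1, 1))  # random direction, uniform over the sphere
    phi = random.uniform(0.0, 2.0 * math.pi)
    direction = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
    opposite = tuple(-x for x in direction)   # back to back, so the total momentum stays zero
    return (photon_energy, direction), (photon_energy, opposite)

photon_1, photon_2 = decay_higgs_at_rest()
print("Photon 1:", photon_1)
print("Photon 2:", photon_2)
print("Total energy:", photon_1[0] + photon_2[0], "GeV (the Higgs's rest energy)")
```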

Photons in the LHC may seem like an artificial example, but in general whenever light is produced it’s due to particles interacting, and conservation of energy and momentum will send the light off in one direction or another.

(For the experts, there is of course the possibility of very low energy soft photons, but that’s a story for another day.)

Not even the beginning of the universe resembles that racked set of billiard balls. The question of what “initial conditions” make sense for the whole universe is a tricky one, but there isn’t a way to set it up where you start with light at rest. It’s not just that it’s not the default option: it isn’t even an available option.

Light moves at the speed of light, no matter what. That isn’t because light started at rest, and something pushed it. It’s because light has energy, and a particle has to spend its “character points” on something.


Mass Is Just Energy You Haven’t Met Yet

How can colliding two protons give rise to more massive particles? Why do vibrations of a string have mass? And how does the Higgs work anyway?

There is one central misunderstanding that makes each of these topics confusing. It’s something I’ve brought up before, but it really deserves its own post. It’s people not realizing that mass is just energy you haven’t met yet.

It’s quite intuitive to think of mass as some sort of “stuff” that things can be made out of. In our everyday experience, that’s how it works: combine this mass of flour and this mass of sugar, and get this mass of cake. Historically, it was the dominant view in physics for quite some time. However, once you get to particle physics it starts to break down.

It’s probably most obvious for protons. A proton has a mass of 938 MeV/c², or 1.67×10⁻²⁷ kg in less physicist-specific units. Protons are each made of three quarks, two up quarks and a down quark. Naively, you’d think that the quarks would have to be around 300 MeV/c² each. They’re not, though: up and down quarks both have masses less than 10 MeV/c². Those three quarks account for less than a fiftieth of a proton’s mass.
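
A quick back-of-the-envelope check (my numbers: rough current-quark masses of about 2 MeV for the up and 5 MeV for the down, which is all the conclusion needs):

```python
# Back-of-the-envelope check: how much of the proton's mass do its quarks account for?
PROTON_MASS = 938.0  # MeV/c^2
UP_QUARK = 2.2       # MeV/c^2, approximate
DOWN_QUARK = 4.7     # MeV/c^2, approximate

quark_total = 2 * UP_QUARK + DOWN_QUARK  # a proton is two ups plus one down
print(f"Quark masses add up to {quark_total:.1f} MeV "
      f"of a {PROTON_MASS:.0f} MeV proton: about {quark_total / PROTON_MASS:.1%}.")
```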

The “extra” mass is because a proton is not just three quarks. It’s three quarks interacting. The forces between those quarks, the strong nuclear force that binds them together, involves a heck of a lot of energy. And from a distance, that energy ends up looking like mass.

This isn’t unique to protons. In some sense, it’s just what mass is.

The quarks themselves get their mass from the Higgs field. Far enough away, this looks like the quarks having a mass. However, zoom in and it’s energy again, the energy of interaction between quarks and the Higgs. In string theory, mass comes from the energy of vibrating strings. And so on. Every time we run into something that looks like a fundamental mass, it ends up being just another energy of interaction.

If mass is just energy, what about gravity?

When you’re taught about gravity, the story is all about mass. Mass attracts mass. Mass bends space-time. What gets left out, until you actually learn the details of General Relativity, is that energy gravitates too.

Normally you don’t notice this, because mass contributes so much more to energy than anything else. That’s really what E=mc² is about: it’s a unit conversion formula. It tells you that if you want to know how much energy a given mass “really is”, you multiply it by the speed of light squared. And that’s a large enough number that most of the time, when you notice energy gravitating, it’s because that energy looks like a big chunk of mass. (It’s also why physicists like silly units like MeV/c² for mass: we can just multiply by c² and get an energy!)
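
To make the “unit conversion” reading concrete (my own illustrative numbers):

```python
# E = m c^2 read as a unit-conversion formula (illustrative numbers).
C = 299_792_458.0  # speed of light, meters per second

mass_kg = 1.0e-3                 # one gram of anything
energy_joules = mass_kg * C**2   # how much energy that mass "really is"
print(f"1 g of mass  <->  {energy_joules:.2e} J")

# The same bookkeeping is behind units like MeV/c^2:
# multiply a mass in MeV/c^2 by c^2 and you get an energy in MeV, with no extra arithmetic.
electron_mass = 0.511  # MeV/c^2
print(f"electron: {electron_mass} MeV/c^2 of mass  <->  {electron_mass} MeV of rest energy")
```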

It’s really tempting to think about mass as a substance, of mass as always conserved, of mass as fundamental. But in physics we often have to toss aside our everyday intuitions, and this is no exception. Mass really is just energy. It’s just energy that we’ve “zoomed out” enough not to notice.

Those Wacky 60’s Physicists

The 60’s were a weird time in academia. Psychologists were busy experimenting with LSD, seeing if they could convince people to electrocute each other, and otherwise doing the sorts of shenanigans that ended up saddling them with Institutional Review Boards so that nowadays they can’t even hand out surveys without a ten page form attesting that it won’t have adverse effects on pregnant women.

We don’t have IRBs in theoretical physics. We didn’t get quite as wacky as the psychologists did. But the 60’s were still a time of utopian dreams and experimentation, even in physics. We may not have done unethical experiments on people…but we did have the Analytic S-Matrix Program.

The Analytic S-Matrix Program was an attempt to rebuild quantum field theory from the ground up. The “S” in S-Matrix stands for “scattering”: the S-Matrix is an enormous matrix that tells you, for each set of incoming particles, the probability that they scatter into some new set of outgoing particles. Normally, this gets calculated piece by piece with what are called Feynman diagrams. The goal of the Analytic S-Matrix Program was a loftier one: to derive the S-Matrix from first principles, without building it out of quantum field theory pieces. Without Feynman diagrams’ reliance on space and time, people like Geoffrey Chew, Stanley Mandelstam, Tullio Regge, and Lev Landau hoped to reach a deeper understanding of fundamental physics.

If this sounds familiar, it should. Amplitudeologists like me view the physicists of the Analytic S-Matrix Program as our spiritual ancestors. Like us, they tried to skip the mess of Feynman diagrams, looking for mathematical tricks and unexpected symmetries to show them the way forward.

Unfortunately, they didn’t have the tools we do now. They didn’t understand the mathematical functions they needed, nor did they have novel ways of writing down their results like the amplituhedron. Instead, they had to work with what they knew, which in practice usually meant going back to Feynman diagrams.

Paradoxically then, much of the lasting impact of the Analytic S-Matrix Program has been on how we understand the results of Feynman diagram calculations. Just as psychologists learn about the Milgram experiment in school, we learn about Mandelstam variables and Regge trajectories. Recently, we’ve been digging up old concepts from those days and finding new applications, like the recent work on Landau singularities, or some as-yet unpublished work I’ve been doing.

Of course, this post wouldn’t be complete without mentioning the Analytic S-Matrix Program’s most illustrious child, String Theory. Some of the mathematics cooked up by the physicists of the 60’s, while dead ends for the problems they were trying to solve, ended up revealing a whole new world of potential.

The physicists of the 60’s were overly optimistic. Nevertheless, their work opened up questions that are still worth asking today. Much as psychologists can’t ignore what they got up to in the 60’s, it’s important for physicists to be aware of our history. You never know what you might dig up.

[Image: book cover]

And as Levar Burton would say, you don’t have to take my word for it.

A Collider’s Eye View

When it detected the Higgs, what did the LHC see, exactly?

[Image: the CMS detector]

What do you see with your detector-eyes, CMS?

The first problem is that the Higgs, like most particles produced in particle colliders, is unstable. In a very short amount of time the Higgs transforms into two or more lighter particles. Often, these particles will decay in turn, possibly many more times.  So when the LHC sees a Higgs boson, it doesn’t really “see the Higgs”.

The second problem is that you can’t “see” the lighter particles either. They’re much too small for that. Instead, the LHC has to measure their properties.

Does the particle have a charge? Then its path will curve in a magnetic field, and it will send electrical signals in silicon. So the LHC can “see” charge.

Can the particle be stopped, absorbed by some material? Getting absorbed releases energy, lighting up a detector. So the LHC can “see” energy, and what it takes for a particle to be absorbed.
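
To put a rough number on the curvature measurement above (a standard rule of thumb, not something from the post): a charged particle’s momentum is read off from how tightly its track curves in the detector’s magnetic field.

```python
# Rule of thumb for track curvature in a collider magnet (illustration, not real detector software).
# For a particle of unit charge, transverse momentum (GeV) ~ 0.3 * B (tesla) * R (meters),
# so measuring the radius of curvature R of the track gives the momentum.
B_FIELD = 3.8  # tesla, roughly the field of the CMS solenoid

def track_radius_m(momentum_gev, charge=1):
    """Radius of the curved track, in meters, for a given transverse momentum."""
    return momentum_gev / (0.3 * abs(charge) * B_FIELD)

for p in (1.0, 10.0, 60.0):  # GeV
    print(f"p = {p:5.1f} GeV  ->  track radius ~ {track_radius_m(p):6.2f} m")
```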

[Image: diagram of a collider’s “eye”]

And that’s…pretty much it. When the LHC “sees” the Higgs, what it sees is a set of tracks in a magnetic field, indicating charge, and energy in its detectors, caused by absorption at different points. Everything else has to be inferred: what exactly the particles were, where they decayed, and from what. Some of it can be figured out in real-time, some is only understood later once we can add up everything and do statistics.

On the face of it, this sounds about as impossible as astrophysics. Like astrophysics, it works in part because what the colliders see is not the whole story. The strong force has to be consistent both with our observations of hadrons and with nuclear physics. Neutrinos aren’t just mysterious missing energy that we can’t track; they’re an important part of cosmology. And so on.

So in the sense of that massive, interconnected web of ideas, the LHC sees the Higgs. It sees patterns of charges and energies, binned into histograms and analyzed with statistics and cross-checked, implicitly or explicitly, against all of the rest of physics at every scale we know. All of that, together, is the collider’s eye view of the universe.

GUTs vs ToEs: What Are We Unifying Here?

“Grand Unified Theory” and “Theory of Everything” may sound like meaningless grandiose titles, but they mean very different things.

In particular, Grand Unified Theory, or GUT, is a technical term, referring to a specific way to unify three of the fundamental interactions: electromagnetism, the weak force, and the strong force.

[Image: anatomy of the small intestine]

In contrast, guts unify the two fundamental intestines.

Those three forces are called Yang-Mills forces, and they can all be described in the same basic way. In particular, each has a strength (the coupling constant) and a mathematical structure that determines how it interacts with itself, called a group.

The core idea of a GUT, then, is pretty simple: to unite the three Yang-Mills forces, they need to have the same strength (the same coupling constant) and be part of the same group.

But wait! (You say, still annoyed at the pun in the above caption.) These forces don’t have the same strength at all! One of them’s strong, one of them’s weak, and one of them is electromagnetic!

As it turns out, this isn’t as much of a problem as it seems. While the three Yang-Mills forces seem to have very different strengths on an everyday scale, that’s not true at very high energies. Let’s steal a plot from Sweden’s Royal Institute of Technology:

[Image: plot of the running coupling strengths]

Why Sweden? Why not!

What’s going on in this plot?

Here, each α represents the strength of a fundamental force. As the force gets stronger, α gets bigger (and so α⁻¹ gets smaller). The variable on the x-axis is the energy scale. The grey lines represent a world without supersymmetry, while the black lines show the world in a supersymmetric model.

So based on this plot, it looks like the strengths of the fundamental forces change based on the energy scale. That’s true, but if you find that confusing there’s another, mathematically equivalent way to think about it.

You can think about each force as having some sort of ultimate strength, the strength it would have if the world weren’t quantum. Without quantum mechanics, each force would interact with particles in only the simplest of ways, corresponding to the simplest diagram here.

However, our world is quantum mechanical. Because of that, when we try to measure the strength of a force, we’re not really measuring its “ultimate strength”. Rather, we’re measuring it alongside a whole mess of other interactions, corresponding to the other diagrams in that post. These extra contributions mean that what looks like the strength of the force gets stronger or weaker depending on the energy of the particles involved.

(I’m sweeping several things under the rug here, including a few infinities and electroweak unification. But if you just want a general understanding of what’s going on, this should be a good starting point.)

If you look at the plot, you’ll see the forces meet up somewhere around 10^16 GeV. They miss each other for the faint, non-supersymmetric lines, but they meet fairly cleanly for the supersymmetric ones.
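
If you’d like to see roughly where a plot like that comes from, here’s a one-loop sketch (heavily simplified, and my own: it runs each set of couplings with one-loop beta coefficients from the Z mass upward, uses the supersymmetric coefficients even at low energies for the “supersymmetric world”, ignores thresholds and higher loops, and rounds the measured starting values):

```python
# One-loop sketch of how the three coupling strengths change with energy (simplified illustration).
# Caveats: MSSM beta coefficients are used all the way down to the Z mass, thresholds and
# higher loops are ignored, and the starting values are rounded.
import math

M_Z = 91.19  # GeV
# Approximate inverse couplings at the Z mass (with GUT-normalized hypercharge):
ALPHA_INV_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}
# One-loop coefficients b_i, with alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i/(2*pi) * ln(mu/M_Z):
BETA = {
    "SM":   {"U(1)": 41 / 10, "SU(2)": -19 / 6, "SU(3)": -7},
    "MSSM": {"U(1)": 33 / 5,  "SU(2)": 1,       "SU(3)": -3},
}

def alpha_inv(model, group, mu):
    """Inverse coupling strength of a gauge group at energy scale mu (in GeV), at one loop."""
    return ALPHA_INV_MZ[group] - BETA[model][group] / (2 * math.pi) * math.log(mu / M_Z)

for mu in (1e3, 1e10, 1e16):
    sm = ", ".join(f"{alpha_inv('SM', g, mu):5.1f}" for g in ALPHA_INV_MZ)
    mssm = ", ".join(f"{alpha_inv('MSSM', g, mu):5.1f}" for g in ALPHA_INV_MZ)
    print(f"mu = {mu:.0e} GeV   SM: {sm}   MSSM: {mssm}")
# Near 10^16 GeV the three MSSM numbers come out almost equal, while the SM ones don't.
```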

So (at least if supersymmetry is true), making the Yang-Mills forces have the same strength is not so hard. Putting them in the same mathematical group is where things get trickier. This is because any group that contains the groups of the fundamental forces will be “bigger” than just the sum of those forces: it will contain “extra forces” that we haven’t observed yet, and these forces can do unexpected things.

In particular, the “extra forces” predicted by GUTs usually make protons unstable. As far as we can tell, protons are very long-lasting: if protons decayed too fast, we wouldn’t have stars. So if protons decay at all, they must do it only very rarely, detectable only with very precise experiments. These experiments are powerful enough to rule out most of the simplest GUTs. The more complicated GUTs still haven’t been ruled out, but this has been enough to dampen interest in GUTs as a research topic.

What about Theories of Everything, or ToEs?

While GUT is a technical term, ToE is very much not. Instead, it’s a phrase that journalists have latched onto because it sounds cool. As such, it doesn’t really have a clear definition. Usually it means uniting gravity with the other fundamental forces, but occasionally people use it to refer to a theory that also unifies the various Standard Model particles into some sort of “final theory”.

Gravity is very different from the other fundamental forces, different enough that it’s kind of silly to group them as “fundamental forces” in the first place. Thus, while GUT models are the kind of thing one can cook up and tinker with, any ToE has to be based on some novel insight, one that lets you express gravity and Yang-Mills forces as part of the same structure.

So far, string theory is the only such insight we have access to. This isn’t just me being arrogant: while there are other attempts at theories of quantum gravity, aside from some rather dubious claims, none of them even try to unify gravity with the other forces.

This doesn’t mean that string theory is necessarily right. But it does mean that if you want a different “theory of everything”, telling physicists to go out and find a new one isn’t going to be very productive. “Find a theory of everything” is a hope, not a research program, especially if you want people to throw out the one structure we have that even looks like it can do the job.

The Higgs Solution

My grandfather is a molecular biologist. Over the holidays I had many opportunities to chat with him, and our conversations often revolved around explaining some aspect of our respective fields. While talking to him, I came up with a chemistry-themed description of the Higgs field, and how it leads to electro-weak symmetry breaking. Very few of you are likely to be chemists, but I think you still might find the metaphor worthwhile.

Picture the Higgs as a mixture of ions, dissolved in water.

In this metaphor, the Higgs field is a sort of “Higgs solution”. Overall, this solution should be uniform: if you have more ions of a certain type in one place than another, over time they will dissolve until they reach a uniform mixture again. In this metaphor, the Higgs particle detected by the LHC is like a brief disturbance in the fluid: by stirring the solution at high energy, we’ve managed to briefly get more of one type of ion in one place than the average concentration.

What determines the average concentration, though?

Essentially, it’s arbitrary. If this were really a chemistry experiment, it would depend on the initial conditions: which ions we put into the mixture in the first place. In physics, quantum mechanics plays a role, randomly selecting one option out of the many possibilities.


[Image: Nile red dye solutions]

Choose wisely

(Note that this metaphor doesn’t explain why there has to be a solution, why the water can’t just be “pure”. A setup that required this would probably be chemically complicated enough to confuse nearly everybody, so I’m leaving that feature out. Just trust that “no ions” isn’t one of our options.)

Up till now, the choice of mixture didn’t matter very much. But different ions interact with other chemicals in different ways, and this has some interesting implications.

Suppose we have a tube filled with our Higgs solution. We want to shoot some substance through the tube, and collect it on the other side. This other substance is going to represent a force.

If our force substance doesn’t react with the ions in our Higgs solution, it will just go through to the other side. If it does react, though, then it will be slowed down, and only some of it will get to the other side, possibly none at all.

You can think of the electro-weak force as a mixture of these sorts of substances. Normally, there is no way to tell the different substances apart. Just as the choice of Higgs solution is arbitrary, so is the division of the electro-weak force into different parts.

However, once we’ve chosen a Higgs solution, things change. Now, different parts of our electro-weak substance will behave differently. The parts that react with the ions in our Higgs solution will slow down, and won’t make it through the tube, while the parts that don’t interact will just flow on through.

We call the part that gets through the tube electromagnetism, and the part that doesn’t the weak nuclear force. Electromagnetism is long-range: its waves (light) can travel great distances. The weak nuclear force is short-range, and doesn’t have an effect outside of the scale of atoms.
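
One standard way to put a number on “short-range” (my addition, outside the chemistry metaphor): the range of a force carried by a massive particle is roughly the Compton wavelength of that carrier, ħc divided by its rest energy. For the W and Z bosons that comes out far smaller than an atom.

```python
# Rough force ranges from the carrier particle's mass (standard Compton-wavelength estimate,
# not part of the post's solution metaphor). Range ~ hbar*c / (m*c^2).
HBAR_C = 197.3  # MeV * femtometers, a convenient combination of constants

carriers = [("photon (electromagnetism)", 0.0),
            ("W boson (weak force)", 80_400.0),   # MeV/c^2, approximate
            ("Z boson (weak force)", 91_200.0)]   # MeV/c^2, approximate

for name, mass_mev in carriers:
    if mass_mev == 0.0:
        print(f"{name}: massless carrier  ->  unlimited range")
    else:
        range_fm = HBAR_C / mass_mev  # femtometers; an atom is around 100,000 fm across
        print(f"{name}: range ~ {range_fm:.4f} fm")
```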

The important thing to take away from this is that the division between electromagnetism and the weak nuclear force is totally arbitrary. Taken by themselves, they’re equivalent parts of the same, electro-weak force. It’s only because some of them interact with the Higgs, while others don’t, that we distinguish those parts from each other. If the Higgs solution were a different mixture (if the Higgs field had different charges) then a different part of the electroweak force would be long-range, and a different part would be short-range.

We wouldn’t be able to tell the difference, though. We’d see a long-range force, and a short-range force, and a Higgs field. In the end, our world would be completely the same, just based on a different, arbitrary choice.