Tag Archives: gravity

More Travel

I’m visiting the Niels Bohr Institute this week, on my way back from Amplitudes.


You might recognize the place from old conference photos.

Amplitudes itself was nice. There weren’t any surprising new developments, but a lot of little “aha” moments when one of the speakers explained something I’d heard vague rumors about. I figured I’d mention a few of the things that stood out. Be warned, this is going to be long and comparatively jargon-heavy.

The conference organizers were rather daring in scheduling Nima Arkani-Hamed for the first talk, as Nima has a tendency to arrive at the last minute and talk for twice as long as you ask him to. Miraculously, though, things worked out, if only barely: Nima arrived at the wrong campus and ran most of the way back, showing up within five minutes of the start of the conference. He also stuck to his allotted time, possibly out of courtesy to his student, Yuntao Bai, who was speaking next.

Between the two of them, Nima and Yuntao covered an interesting development, tying the Amplituhedron together with the string theory-esque picture of scattering amplitudes pioneered by Freddy Cachazo, Song He, and Ellis Ye Yuan (or CHY). There’s a simpler (and older) Amplituhedron-like object called the associahedron that can be thought of as what the Amplituhedron looks like on the surface of a string, and CHY’s setup can be thought of as a sophisticated map that takes this object and turns it into the Amplituhedron. Nima’s talks are often high on motivation but low on detail, so it was great that Yuntao was up next to fill in the blanks.

Anastasia Volovich talked about Landau singularities, a topic I’ve mentioned before. What I hadn’t appreciated was how much they can do with them at this point. Originally, Juan Maldacena had suggested that these singularities, mathematical points first investigated by Landau around 1960 that determine the behavior of amplitudes, might explain some of the simplicity we’ve observed in N=4 super Yang-Mills. They ended up not being enough by themselves, but what Volovich and collaborators are discovering is that with a bit of help from the Amplituhedron they explain quite a lot. In particular, if they start with the Amplituhedron and follow a procedure similar to Landau’s, they can find the simpler set of singularities allowed by N=4 super Yang-Mills, at least for the examples they’ve calculated. It’s still a bit unclear how this links to their previous investigations of these things in terms of cluster algebras, but it sounds like they’re making progress.

Dmitry Chicherin gave me one of those minor “aha” moments. One big useful fact about scattering amplitudes in N=4 super Yang-Mills is that they’re “dual” to different mathematical objects called Wilson loops, a fact which allows us to compare to the “POPE” approach of Basso, Sever, and Vieira. Chicherin asked the question: “What if you’re not calculating a scattering amplitude or a Wilson loop, but something halfway in between?” Interestingly, this has an answer, with the “halfway between” objects having a similar duality among themselves.

Yorgos Papathanasiou talked about work I’ve been involved with. I’ll probably cover it in detail in another post, so for now I’ll just mention that we’re up to six loops!

Andy Strominger talked about soft theorems. It’s always interesting seeing people who don’t traditionally work on amplitudes giving talks at Amplitudes. There’s a range of responses, from integrability people (who are basically welcomed like family) to people working on fairly unrelated areas with some “amplitudes” connection (met with yawns, except from the few people interested in the connection). The response to Strominger was neither welcome nor boredom, but lively debate. He’s clearly doing something interesting, but many specialists worried he was ignorant of important no-go results in the field that could hamstring some of his bolder conjectures.

The second day focused on methods for more practical calculations, and had the overall effect of making me really want to clean up my code. Tiziano Peraro’s finite field methods in particular look like they could be quite useful. Two competing bases of integrals were on display: von Manteuffel’s finite integrals, and Rutger Boels’s uniformly transcendental integrals later in the conference. Both seem to have their own virtues, and I ended up asking Rob Schabinger if it was possible to combine the two, with the result that he’s apparently now looking into it.

The more practical talks that day had a clear focus on calculations with two loops, which are becoming increasingly viable for LHC-relevant calculations. From talking to people who work on this, I get the impression that the goal of these calculations isn’t so much to find new physics as to confirm and investigate new physics found via other methods. Things are complicated enough at two loops that for the moment it isn’t feasible to describe what all the possible new particles might do at that order, and instead the goal is to understand the standard model well enough that if new physics is noticed (likely based on one-loop calculations) then the details can be pinned down by two-loop data. But this picture could conceivably change as methods improve.

Wednesday was math-focused. We had a talk by Francis Brown on his conjecture of a cosmic Galois group. This is a topic I knew a bit about already, since it’s involved in something I’ve been working on. Brown’s talk cleared up some things, but also shed light on the vagueness of the proposal. As with Yorgos’s talk, I’ll probably cover more about this in a future post, so I’ll skip the details for now.

There was also a talk by Samuel Abreu on a much more physical picture of the “symbols” we calculate with. This is something I’ve seen presented before by Ruth Britto, and it’s a setup I haven’t looked into as much as I ought to. It does seem at the moment that they’re limited to one loop, which is a definite downside. Other talks discussed elliptic integrals, the bogeyman that we still can’t deal with by our favored means but that people are at least understanding better.

The last talk on Wednesday before the hike was by David Broadhurst, who’s quite a character in his own right. Broadhurst sat in the front row and asked a question after nearly every talk, usually bringing up papers at least fifty years old, if not one hundred and fifty. At the conference dinner he was exactly the right person to read the Address to the Haggis, resurrecting a thick Scottish accent from his youth. Broadhurst’s techniques for handling high-loop elliptic integrals are quite impressively powerful, leaving me wondering if the approach can be generalized.

Thursday focused on gravity. Radu Roiban gave a better idea of where he and his collaborators are on the road to seven-loop supergravity and what the next bottlenecks are along the way. Oliver Schlotterer’s talk was another one of those “aha” moments, helping me understand a key difference between two senses in which gravity is Yang-Mills squared (the Kawai-Lewellen-Tye relations and BCJ). In particular, the latter is much more dependent on specifics of how you write the scattering amplitude, so to the extent that you can prove something more like the former at higher loops (the original KLT relations were only for trees, unlike BCJ), it’s quite valuable. Schlotterer has managed to do this at one loop, using the “Q-cut” method I’ve (briefly) mentioned before. The next day’s talk by Emil Bjerrum-Bohr focused more heavily on these Q-cuts, including a more detailed example at two loops than I’d seen that group present before.

There was also a talk by Walter Goldberger about using amplitudes methods for classical gravity, a subject I’ve looked into before. It was nice to see a more thorough presentation of those ideas, including a more honest appraisal of which amplitudes techniques are really helpful there.

There were other interesting topics, but I’m already way over my usual post length, so I’ll sign off for now. Videos from all but a few of the talks are now online, so if you’re interested you should watch them on the conference page.

You Can’t Smooth the Big Bang

As a kid, I was fascinated by cosmology. I wanted to know how the universe began, possibly disproving gods along the way, and I gobbled up anything that hinted at the answer.

At the time, I had to be content with vague slogans. As I learned more, I could match the slogans to the physics, to see what phrases like “the Big Bang” actually meant. A large part of why I went into string theory was to figure out what all those documentaries are actually about.

In the end, I didn’t end up working on cosmology, due to my ignorance of a few key facts while in college (mostly, who Vilenkin was). Thus, while I could match some of the old popularization stories to the science, there were a few I never really understood. In particular, there were two claims I never quite saw fleshed out: “The universe emerged from nothing via quantum tunneling” and “According to Hawking, the big bang was not a singularity, but a smooth change with no true beginning.”

As a result, I’m delighted that I’ve recently learned the physics behind these claims, in the context of a spirited take-down of both by Perimeter’s Director Neil Turok.


My boss

Neil held a surprise string group meeting this week to discuss the paper I linked above, “No smooth beginning for spacetime” with Job Feldbrugge and Jean-Luc Lehners, as well as earlier work with Steffen Gielen. In it, he talked about problems in the two proposals I mentioned: Hawking’s suggestion that the big bang was smooth with no true beginning (really, the Hartle-Hawking no boundary proposal) and the idea that the universe emerged from nothing via quantum tunneling (really, Vilenkin’s tunneling from nothing proposal).

In popularization-speak, these two proposals sound completely different. In reality, though, they’re quite similar (and as Neil argues, they end up amounting to the same thing). I’ll steal a picture from his paper to illustrate:


The picture on the left depicts the universe under the Hartle-Hawking proposal, with time increasing upwards on the page. As the universe gets older, it looks like the expanding (de Sitter) universe we live in. At the beginning, though, there’s a cap, one on which time ends up being treated not in the usual way (Lorentzian space) but on the same footing as the other dimensions (Euclidean space). This lets space be smooth, rather than bunching up in a big bang singularity. After treating time in this way the result is reinterpreted (via a quantum field theory trick called Wick rotation) as part of normal space-time.
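For the curious, the Wick rotation trick itself is simple to state (this is a standard quantum field theory fact, not anything specific to the Hartle-Hawking calculation): you substitute an imaginary time coordinate, which trades the oscillating Lorentzian weight in the path integral for a damped Euclidean one.

```latex
% Wick rotation: substitute t -> -i*tau.
% The oscillatory Lorentzian path-integral weight becomes a damped Euclidean one:
\begin{aligned}
  t &\;\to\; -i\tau, \\
  e^{\,iS_{\text{Lorentzian}}} &\;\to\; e^{-S_{\text{Euclidean}}}.
\end{aligned}
```

That damping is what makes Euclidean calculations so much better behaved than Lorentzian ones, which is why the trick is so widely used.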

What’s the connection to Vilenkin’s tunneling picture? Well, when we talk about quantum tunneling, we also end up describing it with Euclidean space. Saying that the universe tunneled from nothing and saying it has a Euclidean “cap” then end up being closely related claims.

Before Neil’s work these two proposals weren’t thought of as the same because they were thought to give different results. What Neil is arguing is that this is due to a fundamental mistake on Hartle and Hawking’s part. Specifically, Neil is arguing that the Wick rotation trick that Hartle and Hawking used doesn’t work in this context, when you’re trying to calculate small quantum corrections for gravity. In normal quantum field theory, it’s often easier to go to Euclidean space and use Wick rotation, but for quantum gravity Neil is arguing that this technique stops being rigorous. Instead, you should stay in Lorentzian space, and use a more powerful mathematical technique called Picard-Lefschetz theory.

Using this technique, Neil found that Hartle and Hawking’s nicely behaved result was mistaken, and the real result of what Hartle and Hawking were proposing looks more like Vilenkin’s tunneling proposal.

Neil then tried to see what happens when there’s some small perturbation from a perfect de Sitter universe. In general in physics, if you want to trust a result it ought to be stable: small changes should stay small. Otherwise, you’re not really starting from the right point, and you should instead be looking at wherever the changes end up taking you. What Neil found was that the Hartle-Hawking and Vilenkin proposals weren’t stable. If you start with a small wiggle in your no-boundary universe you get, not the purple middle drawing with small wiggles, but the red one with wiggles that rapidly grow unstable. The implication is that the Hartle-Hawking and Vilenkin proposals aren’t just secretly the same: neither can describe a stable state of the universe.

Neil argues that this problem is quite general, and happens under the following conditions:

  1. A universe that begins smoothly and semi-classically (where quantum corrections are small) with no sharp boundary,
  2. with a positive cosmological constant (the de Sitter universe mentioned earlier),
  3. under which the universe expands many times, allowing the small fluctuations to grow large.

If the universe avoids one of those conditions (maybe the cosmological constant changes in the future and the universe stops expanding, for example) then you might be able to avoid Neil’s argument. But if not, you can’t have a smooth semi-classical beginning and still have a stable universe.

Now, no debate in physics ends just like that. Hartle (and collaborators) don’t dispute Neil’s insistence on Picard-Lefschetz theory, but they argue there’s still a way to make their proposal work. Neil mentioned at the group meeting that he thinks even the new version of Hartle’s proposal doesn’t solve the problem; he’s been working out the calculation with his collaborators to make sure.

Often, one hears about an idea from science popularization and then it never gets mentioned again. The public hears about a zoo of proposals without ever knowing which ones worked out. I think child-me would appreciate hearing what happened to Hawking’s proposal for a universe with no boundary, and to Vilenkin’s proposal for a universe emerging from nothing. Adult-me certainly does. I hope you do too.

Thoughts from the Winter School

There are two things I’d like to talk about this week.

First, as promised, I’ll talk about what I worked on at the PSI Winter School.

Freddy Cachazo and I study what are called scattering amplitudes. At first glance, these are probabilities that two subatomic particles scatter off each other, relevant for experiments like the Large Hadron Collider. In practice, though, they can be used to calculate much more.

For example, let’s say you have two black holes circling each other, like the ones LIGO detected. Zoom out far enough, and you can think of each one as a particle. The two particle-black holes exchange gravitons, and those exchanges give rise to the force of gravity between them.


In the end, it’s all just particle physics.


Based on that, we can use our favorite scattering amplitudes to make predictions for gravitational wave telescopes like LIGO.

There’s a bit of weirdness to this story, though, because these amplitudes don’t line up with predictions in quite the way we’re used to. The way we calculate amplitudes involves drawing diagrams, and those diagrams have loops. Normally, each “loop” makes the amplitude more quantum-mechanical. Only the diagrams with no loops (“tree diagrams”) come from classical physics alone.

(Here “classical physics” just means “not quantum”: I’m calling general relativity “classical”.)

For this problem, we only care about classical physics: LIGO isn’t sensitive enough to see quantum effects. The weird thing is, despite that, we still need loops.

(Why? This is a story I haven’t figured out how to tell in a non-technical way. The technical explanation has to do with the fact that we’re calculating a potential, not an amplitude, so there’s a Fourier transformation, and keeping track of the dimensions entails tossing around some factors of Planck’s constant. But I feel like this still isn’t quite the full story.)

So if we want to make predictions for LIGO, we want to compute amplitudes with loops. And as amplitudeologists, we should be pretty good at that.

As it turns out, plenty of other people have already had that idea, but there’s still room for improvement.

Our time with the students at the Winter School was limited, so our goal was fairly modest. We wanted to understand those other people’s calculations, and perhaps to think about them in a slightly cleaner way. In particular, we wanted to understand why “loops” are really necessary, and whether there was some way of understanding what the “loops” were doing in a more purely classical picture.

At this point, we feel like we’ve got the beginning of an idea of what’s going on. Time will tell whether it works out, and I’ll update you guys when we have a more presentable picture.


Unfortunately, physics wasn’t the only thing I was thinking about last week, which brings me to my other topic.

This blog has a fairly strong policy against talking politics. This is for several reasons. Partly, it’s because politics simply isn’t my area of expertise. Partly, it’s because talking politics tends to lead to long arguments in which nobody manages to learn anything. Despite this, I’m about to talk politics.

Last week, citizens of Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen were barred from entering the US. This included not only new visa applicants, but also those who already have visas or green cards. The latter group includes long-term residents of the US, many of whom were detained in airports and threatened with deportation when their flights arrived shortly after the ban was announced. Among those was the president of the Graduate Student Organization at my former grad school.

A federal judge has blocked parts of the order, and the Department of Homeland Security has announced that there will be case-by-case exceptions. Still, plenty of people are stuck: either abroad if they didn’t get in in time, or in the US, afraid that if they leave they won’t be able to return.

Politics isn’t in my area of expertise. But…

I travel for work pretty often. I know how terrifying and arbitrary border enforcement can be. I know how it feels to risk thousands of dollars and months of planning because some consulate or border official is having a bad day.

I also know how essential travel is to doing science. When there’s only one expert in the world who does the sort of work you need, you can’t just find a local substitute.

And so for this, I don’t need to be an expert in politics. I don’t need a detailed case about the risks of terrorism. I already know what I need to, and I know that this is cruel.

And so I stand in solidarity with the people who were trapped in airports, and with those still trapped, whether abroad or in the US. You have been treated cruelly, and you shouldn’t have been. Hopefully, that sort of message can transcend politics.


One final thing: I’m going to be a massive hypocrite and continue to ban political comments on this blog. If you want to talk to me about any of this (and you think one or both of us might actually learn something from the exchange) please contact me in private.

Fun with Misunderstandings

Perimeter had its last Public Lecture of the season this week, with Mario Livio giving some highlights from his book Brilliant Blunders. The lecture should be accessible online, either here or on Perimeter’s YouTube page.

These lectures tend to attract a crowd of curious science-fans. To give them something to do while they’re waiting, a few local researchers walk around with T-shirts that say “Ask me, I’m a scientist!” Sometimes we get questions about the upcoming lecture, but more often people just ask us what they’re curious about.

Long-time readers will know that I find this one of the most fun parts of the job. In particular, there’s a unique challenge in figuring out just why someone asked a question. Often, there’s a hidden misunderstanding they haven’t recognized.

The fun thing about these misunderstandings is that they usually make sense, provided you’re working from the person in question’s sources. They heard a bit of this and a bit of that, and they come to the most reasonable conclusion they can given what’s available. For those of us who have heard a more complete story, this often leads to misunderstandings we would never have thought of, but that in retrospect are completely understandable.

One of the simpler ones I ran into was someone who was confused by people claiming that we were running out of water. How could there be a water shortage, he asked, if the Earth is basically a closed system? Where could the water go?

The answer is that when people talk about a water shortage, they’re not talking about water itself running out. Rather, they’re talking about a lack of safe drinking water. Maybe the water is polluted, or stuck in the ocean, unusable without expensive desalination. This seems like the sort of thing that would be extremely obvious, but if you just hear people complaining that water is running out, without the right context, you might never hear that part of the story.

A more involved question had to do with time dilation in general relativity. The guy had heard that atomic clocks run faster if you’re higher up, and that this was because time itself runs faster in lower gravity.

Given that, he asked, what happens if someone travels to an area of low gravity and then comes back? If more time has passed for them, then they’d be in the future, so wouldn’t they be at the “wrong time” compared to other people? Would they even be able to interact with them?

This guy’s misunderstanding came from hearing what happens, but not why. While he got that time passes faster in lower gravity, he was still thinking of time as universal: there is some past, and some future, and if time passes faster for one person and slower for another that just means that one person is “skipping ahead” into the other person’s future.

What he was missing was the explanation that time dilation comes from space and time bending. Rather than “skipping ahead”, a person for whom time passes faster just experiences more time getting to the same place, because they’re traveling on a curved path through space-time.

As usual, this is easier to visualize in space than in time. I ended up drawing a picture like this:


Imagine person A and person B live on a circle. If person B stays the same distance from the center while person A goes out further, they can both travel the same angle around the circle and end up in the same place, but A will have traveled further, even ignoring the trips up and down.
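The circle picture can even be checked with a few lines of arithmetic (a toy sketch of my drawing, not anything from general relativity): the distance along an arc is just the radius times the angle, so at a bigger radius the same angle means a longer path.

```python
import math

def arc_length(radius: float, angle: float) -> float:
    """Distance along a circular arc: radius times angle (in radians)."""
    return radius * angle

# Person B stays at radius 1, person A moves out to radius 2.
# Both sweep the same quarter-circle angle around the center.
angle = math.pi / 2
path_B = arc_length(1.0, angle)
path_A = arc_length(2.0, angle)

# A and B end up at the same angle, but A has covered twice the distance.
print(f"B travels {path_B:.3f}, A travels {path_A:.3f}")
```

Doubling the radius doubles the distance even though both people arrive at the same point, which is the sense in which A “experiences more travel” getting to the same place.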

What’s completely intuitive in space ends up quite a bit harder to visualize in time. But if you at least know what you’re trying to think about, that there’s bending involved, then it’s easier to avoid this guy’s kind of misunderstanding. Run into the wrong account, though, and even if it’s perfectly correct (this guy had heard some of Hawking’s popularization work on the subject), you can come away with the wrong impression if it doesn’t emphasize the right aspects.

Misunderstandings are interesting because they reveal how people learn. They’re windows into different thought processes, into what happens when you only have partial evidence. And because of that, they’re one of the most fascinating parts of science popularization.

Mass Is Just Energy You Haven’t Met Yet

How can colliding two protons give rise to more massive particles? Why do vibrations of a string have mass? And how does the Higgs work anyway?

There is one central misunderstanding that makes each of these topics confusing. It’s something I’ve brought up before, but it really deserves its own post. It’s people not realizing that mass is just energy you haven’t met yet.

It’s quite intuitive to think of mass as some sort of “stuff” that things can be made out of. In our everyday experience, that’s how it works: combine this mass of flour and this mass of sugar, and get this mass of cake. Historically, it was the dominant view in physics for quite some time. However, once you get to particle physics it starts to break down.

It’s probably most obvious for protons. A proton has a mass of 938 MeV/c², or 1.6×10⁻²⁷ kg in less physicist-specific units. Protons are each made of three quarks, two up quarks and a down quark. Naively, you’d think that the quarks would have to be around 300 MeV/c². They’re not, though: up and down quarks both have masses less than 10 MeV/c². Those three quarks account for less than a fiftieth of a proton’s mass.
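To make the bookkeeping explicit, here’s that arithmetic as a quick sketch (the quark masses are rough current values in MeV/c², an assumption for illustration; the precise numbers depend on how they’re defined):

```python
# Rough check that the quarks supply only a tiny fraction of the proton's mass.
proton_mass = 938.0          # MeV/c^2
up, down = 2.2, 4.7          # approximate up and down quark masses, MeV/c^2

quark_total = 2 * up + down  # a proton is two up quarks and a down quark
fraction = quark_total / proton_mass

print(f"Quarks account for {quark_total:.1f} of {proton_mass} MeV/c^2 "
      f"({fraction:.1%} of the proton's mass)")
```

With these numbers the quarks come out to about one percent of the total, comfortably “less than a fiftieth”.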

The “extra” mass is because a proton is not just three quarks. It’s three quarks interacting. The forces between those quarks, the strong nuclear force that binds them together, involves a heck of a lot of energy. And from a distance, that energy ends up looking like mass.

This isn’t unique to protons. In some sense, it’s just what mass is.

The quarks themselves get their mass from the Higgs field. Far enough away, this looks like the quarks having a mass. However, zoom in and it’s energy again, the energy of interaction between quarks and the Higgs. In string theory, mass comes from the energy of vibrating strings. And so on. Every time we run into something that looks like a fundamental mass, it ends up being just another energy of interaction.

If mass is just energy, what about gravity?

When you’re taught about gravity, the story is all about mass. Mass attracts mass. Mass bends space-time. What gets left out, until you actually learn the details of General Relativity, is that energy gravitates too.

Normally you don’t notice this, because mass contributes so much more to energy than anything else. That’s really what E=mc² is about: it’s a unit conversion formula. It tells you that if you want to know how much energy a given mass “really is”, you multiply it by the speed of light squared. And that’s a large enough number that most of the time, when you notice energy gravitating, it’s because that energy looks like a big chunk of mass. (It’s also why physicists like silly units like MeV/c² for mass: we can just multiply by c² and get an energy!)
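Here’s that unit conversion spelled out (a minimal sketch using the standard SI constants; the proton mass value is the familiar one from earlier in the post):

```python
# Convert a mass quoted in MeV/c^2 to kilograms via E = mc^2.
c = 299_792_458.0                 # speed of light, m/s (exact by definition)
MEV_IN_JOULES = 1.602176634e-13   # 1 MeV in joules (exact in the 2019 SI)

def mev_over_c2_to_kg(mass_mev: float) -> float:
    """m = E / c^2: convert the energy to joules, then divide by c squared."""
    return mass_mev * MEV_IN_JOULES / c**2

proton_kg = mev_over_c2_to_kg(938.272)
print(f"A 938.272 MeV/c^2 proton is about {proton_kg:.4e} kg")
```

Running it recovers the familiar 1.67×10⁻²⁷ kg, which is why the MeV/c² convention is harmless: the factor of c² is pure bookkeeping.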

It’s really tempting to think about mass as a substance, of mass as always conserved, of mass as fundamental. But in physics we often have to toss aside our everyday intuitions, and this is no exception. Mass really is just energy. It’s just energy that we’ve “zoomed out” enough not to notice.

Source Your Common Sense

When I wrote that post on crackpots, one of my inspirations was a particularly annoying Twitter conversation. The guy I was talking to had convinced himself that general relativity was a mistake. He was especially pissed off by the fact that, in GR, energy is not always conserved. Screw Einstein, energy conservation is just common sense! Right?

Think a little bit about why you believe in energy conservation. Is it because you run into a lot of energy in your day-to-day life, and it’s always been conserved? Did you grow up around something that was obviously energy? Or maybe someone had to explain it to you?


Maybe you learned about it…from a physics teacher?

A lot of the time, things that seem obvious only got that way because you were taught them. “Energy” isn’t an intuitive concept, however much it’s misused that way. It’s something defined by physicists because it plays a particular role, as a consequence of symmetries in nature. When you learn about energy conservation in school, that’s because it’s one of the simpler ways to explain a much bigger concept, so you shouldn’t be surprised if there are some inaccuracies. If you know where your “common sense” is coming from, you can anticipate when and how it might go awry.
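For anyone who wants the more precise statement behind “a consequence of symmetries”: this is Noether’s theorem from classical mechanics (a standard result, not something specific to this post). If the laws don’t change with time, a particular combination of quantities is conserved, and that combination is what we call energy.

```latex
% Noether's theorem, time-translation case: if the Lagrangian has no
% explicit time dependence, the energy function is conserved.
\frac{\partial L}{\partial t} = 0
\quad\Longrightarrow\quad
E \;=\; \dot{q}\,\frac{\partial L}{\partial \dot{q}} \;-\; L,
\qquad
\frac{dE}{dt} = 0
\;\;\text{(along solutions of the equations of motion).}
```

In general relativity the relevant time-translation symmetry isn’t always available, which is exactly why energy conservation gets subtle there.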

Similarly, if, like one of the commenters on my crackpot post, you’re uncomfortable with countable and uncountable infinities, remember that infinity isn’t “common sense” either. It’s something you learned about in a math class, from a math teacher. And just like energy conservation, it’s a simplification of a more precise concept, with epsilons and deltas and all that jazz.

It’s not possible to teach all the nuances of every topic, so naturally most people will hear a partial story. What’s important is to recognize that you heard a partial story, and not enshrine it as “common sense” when the real story comes knocking.

Don’t physicists use common sense, though? What about “physical intuition”?

Physical intuition has a lot of mystique behind it, and is often described as what separates us from the mathematicians. As such, different people mean different things by it…but under no circumstances should it be confused with pure “common sense”. Physical intuition uses analogy and experience. It involves seeing a system and anticipating the sorts of things you can do with it, like playing a game and assuming there’ll be a save button. These analogies generally aren’t built around everyday objects or experiences, though. Instead, they use physical systems that are “similar” to the one under scrutiny in important ways, while being better understood in others. Crucially, physical intuition involves working in context. It’s not just uncritical acceptance of what one would naively expect.

So when your common sense is tingling, see if you can provide a source. Is that source relevant, experience with a similar situation? Or is it in fact a half-remembered class from high school?

Things You Don’t Know about the Power of the Dark Side

Last Wednesday, Katherine Freese gave a Public Lecture at Perimeter on the topic of Dark Matter and Dark Energy. The talk should be on Perimeter’s YouTube page by the time this post is up.

Answering twitter questions during the talk made me realize that there’s a lot the average person finds confusing about Dark Matter and Dark Energy. Freese addressed much of this pretty well in her talk, but I felt like there was room for improvement. Rather than try to tackle it myself, I decided to interview an expert on the Dark Side of the universe.


Twitter doesn’t know the power of the dark side!

Lord Vader, some people have a hard time distinguishing Dark Matter and Dark Energy. What do you have to say to them?

Fools! Light side astronomers call “dark” that which they cannot observe and cannot understand. “Fear” and “anger” are different heights of emotion, but to the Jedi they are only the path to the Dark Side. Dark Energy and Dark Matter are much the same: both distinct, both essential to the universe, and both “dark” to the telescopes of the light.

Let’s start with Dark Matter. Is it really matter?

You ask an empty question. “Matter” has been defined in many ways. When we on the Dark Side refer to Dark Matter, we merely mean to state that it behaves much like the matter you know: it is drawn to and fro by gravity, sloshing about.

It is distinct from your ordinary matter in that two of the forces of nature, the strong nuclear force and electromagnetism, do not concern it. Ordinary matter is bound together in the nuclei of atoms by the strong force, or woven into atoms and molecules by electromagnetism. This makes it subject to all manner of messy collisions.

Dark Matter, in contrast, is pure, partaking neither of nuclear nor chemical reactions. It passes through each of us with no notice. Only the weak nuclear force and gravity affect it. The latter has brought it slowly into clumps and threads through the universe, each one a vast nest for groupings of stars. Truly, Dark Matter surrounds us, penetrates us, and binds the galaxy together.

Could Dark Matter be something we’re more familiar with, like neutrinos or black holes? What about a modification of gravity?

Many wondered as much, when the study of the Dark Side was young. They were wrong.

The matter you are accustomed to composes merely a twentieth of the universe, while Dark Matter is more than a quarter. There is simply not enough of these minor contributions, neutrinos and black holes, to account for the vast darkness that surrounds the galaxy, and with each astronomer’s investigation we grow more assured.

As for modifying gravity, do you seek to modify a fundamental Force?

If so, you should be wary. Forces, by their nature, are accompanied by particles, and gravity is no exception. Take care that your tinkering does not result in a new sort of particle. If so, you may be unknowingly walking the path of the Dark Side, for your modification may be just another form of Dark Matter.

What sort of things could Dark Matter be? Can Dark Matter decay into ordinary matter? Could there be anti-Dark Matter?

As of yet, your scientists are still baffled by the nature of Dark Matter. Still, there are limits. Since only rare events could produce it from ordinary matter, the universe’s supply of Dark Matter must be ancient, dating back to the dawn of the cosmos. In that case, it must decay only slowly, if at all. Similarly, if Dark Matter had antimatter forms then its interactions must be so weak that it has not simply annihilated with its antimatter half across the universe. So while either is possible, it may be simpler for your theorists if Dark Matter did not decay, and was its own antimatter counterpart. On the other hand, if Dark Matter did undergo such reactions, your kind may one day be able to detect it.

Of course, as a master of the Dark Side I know the true nature of Dark Matter. However, I could only impart it to a loyal apprentice…

Yeah, I think I’ll pass on that. They say you can only get a job in academia when someone dies, but unlike the Sith they don’t mean it literally.

Let’s move on to Dark Energy. What can you tell us about it?

Dark “Energy”, like Dark Matter, is named for what people on your Earth cannot comprehend. Nothing, not even Dark Energy, is “made of energy”. Dark Energy is “energy” merely because it behaves unlike matter.

Matter, even Dark Matter, is drawn together by the force of gravity. Under its yoke, the universe would slow down in its expansion and eventually collapse into a crunch, like the throat of an incompetent officer.

However, the universe is not collapsing, but accelerating, galaxies torn away from each other by a force that must compose more than two thirds of the universe. It is rather like the Yuuzhan Vong, a mysterious force from outside the galaxy that scouts persistently under- or over-estimate.

Umm, I’m pretty sure the Yuuzhan Vong don’t exist anymore, since Disney got rid of the Expanded Universe.

That perfidious Mouse!

Well folks, Vader is now on a rampage of revenge in the Disney offices, so I guess we’ll have to end the interview. Tune in next week, and until then, may the Force be with you!