Tag Archives: gravity

A LIGO in the Darkness

For the few of you who haven’t yet heard: LIGO has detected gravitational waves from a pair of colliding neutron stars, and that detection has been confirmed by observations of the light from those stars.

[Image: the GW170817 fact sheet]

They also provide a handy fact sheet.

This is a big deal! On a basic level, it means that we now have confirmation from other instruments and sources that LIGO is really detecting gravitational waves.

The implications go quite a bit further than that, though. You wouldn’t think that just one observation could tell you very much, but this is an observation of an entirely new type, the first time an event has been seen in both gravitational waves and light.

That, it turns out, means that this one observation clears up a whole pile of mysteries in one blow. It shows that at least some gamma ray bursts are caused by colliding neutron stars, and that neutron star collisions can give rise to the high-power “kilonovas” capable of forming heavy elements like gold…well, I’m not going to be able to do justice to the full implications in this post. Matt Strassler has a pair of quite detailed posts on the subject, and Quanta magazine’s article has a really great account of the effort that went into the detection, including coordinating the network of telescopes that made it possible.

I’ll focus here on a few aspects that stood out to me.

One fun part of the story behind this detection was how helpful “failed” observations were. VIRGO (the European gravitational wave experiment) was running alongside LIGO at the time, but VIRGO didn’t see the event (or saw it so faintly it couldn’t be sure it saw it). This was actually useful, because VIRGO has a blind spot, and VIRGO’s non-observation told them the event had to have happened in that blind spot. That narrowed things down considerably, and allowed telescopes to close in on the actual merger. IceCube, the neutrino observatory that is literally a cubic kilometer chunk of Antarctica filled with sensors, also failed to detect the event, and this was also useful: along with evidence from other telescopes, it suggests that the “jet” of particles emitted by the merged neutron stars is tilted away from us.

One thing brought up at LIGO’s announcement was that seeing gravitational waves and electromagnetic light at roughly the same time puts limits on any difference between the speed of light and the speed of gravity. At the time I wondered if this was just a throwaway line, but it turns out a variety of proposed modifications of gravity predict that gravitational waves will travel slower than light. This event rules out many of those models, and tightly constrains others.
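To get a feel for just how tight that constraint is, here’s a back-of-the-envelope version in Python. The numbers are rough, round values I’m plugging in myself (a delay of about two seconds after roughly 130 million years of travel), not the collaborations’ official analysis, which has to account for how much of the delay comes from the emission itself:

```python
# Rough bound on any difference between the speed of gravity and the speed of light,
# using approximate numbers for the neutron star merger (illustrative values only).

SECONDS_PER_YEAR = 3.15e7
travel_time = 130e6 * SECONDS_PER_YEAR  # ~130 million years of travel, in seconds
observed_delay = 1.7                    # gamma rays arrived roughly 1.7 s after the GW signal

# Even attributing the entire delay to a difference in speeds (most of it is
# probably just the time it took the collision to launch the gamma ray burst),
# the fractional difference between the two speeds can be at most about:
fractional_difference = observed_delay / travel_time
print(f"|v_gravity - v_light| / c  <~  {fractional_difference:.0e}")  # ~4e-16
```

Any model of modified gravity that predicts a bigger speed difference than something of that order is in trouble.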

The announcement from LIGO was screened at NBI, but they didn’t show the full press release. Instead, they cut to a discussion for local news featuring NBI researchers from the various telescope collaborations that observed the event. Some of this discussion was in Danish, so it was only later that I heard about the possibility of using the simultaneous measurement of gravitational waves and light to measure the expansion of the universe. While this event by itself didn’t result in a very precise measurement, as more collisions are observed the statistics will get better, which will hopefully clear up a discrepancy between two previous measures of the expansion rate.
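The logic of that measurement (often called a “standard siren”) is simple enough to sketch, even if the real analysis is much more careful: the gravitational wave signal tells you how far away the collision was, the host galaxy’s light tells you how fast it’s receding, and the ratio estimates the expansion rate. Here’s a toy version with round, illustrative numbers of the same order as those reported for this event:

```python
# Toy "standard siren" estimate of the Hubble constant (round, illustrative numbers).
# The real measurement carefully accounts for the galaxy's peculiar velocity,
# the binary's inclination, and the detectors' calibration.

distance_mpc = 40.0        # distance inferred from the gravitational wave amplitude, in megaparsecs
recession_speed = 3000.0   # host galaxy's recession speed from its redshift, in km/s

hubble_constant = recession_speed / distance_mpc
print(f"H0 ~ {hubble_constant:.0f} km/s per Mpc")  # ~75, with big error bars from a single event
```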

A few news sources made it sound like observing the light from the kilonova has let scientists see directly which heavy elements were produced by the event. That isn’t quite true, as stressed by some of the folks I talked to at NBI. What is true is that the light was consistent with patterns observed in past kilonovas, which are estimated to be powerful enough to produce these heavy elements. However, actually pointing out the lines corresponding to these elements in the spectrum of the event hasn’t been done yet, though it may be possible with further analysis.

A few posts back, I mentioned a group at NBI who had been critical of LIGO’s data analysis and raised doubts of whether they detected gravitational waves at all. There’s not much I can say about this until they’ve commented publicly, but do keep an eye on the arXiv in the next week or two. Despite the optimistic stance I take in the rest of this post, the impression I get from folks here is that things are far from fully resolved.


Congratulations to Rainer Weiss, Barry Barish, and Kip Thorne!

The Nobel Prize in Physics was announced this week, awarded to Rainer Weiss, Kip Thorne, and Barry Barish for their work on LIGO, the gravitational wave detector.


Many expected the Nobel to go to LIGO last year, but the Nobel committee waited. At the time, it was expected the prize would be awarded to Rainer Weiss, Kip Thorne, and Ronald Drever, the three founders of the LIGO project, but there were advocates for Barry Barish as well. The Nobel is awarded to at most three people, so the argument got fairly heated, with opponents arguing Barish was “just an administrator” and advocates pointing out that he was “just the administrator without whom the project would have been cancelled in the 90’s”.

All of this ended up being irrelevant when Drever died last March. The Nobel isn’t awarded posthumously, so the list of obvious candidates (or at least obvious candidates who worked on LIGO) was down to three, which simplified things considerably for the committee.

LIGO’s work is impressive and clearly Nobel-worthy, but I would be remiss if I didn’t mention that there is some controversy around it. In June, several of my current colleagues at the Niels Bohr Institute uploaded a paper arguing that if you subtract the gravitational wave signal that LIGO claims to have found then the remaining data, the “noise”, is still correlated between LIGO’s two detectors, which it shouldn’t be if it were actually just noise. LIGO hasn’t released an official response yet, but a LIGO postdoc responded with a guest post on Sean Carroll’s blog, and the team at NBI had responses of their own.

I’d usually be fairly skeptical of this kind of argument: it’s easy for an outsider looking at the data from a big experiment like this to miss important technical details that make the collaboration’s analysis work. That said, having seen some conversations between these folks, I’m a bit more sympathetic. LIGO wasn’t communicating very clearly early on, which led to a lot of unnecessary confusion on both sides.

One thing that I don’t think has been emphasized enough is that there are two claims LIGO is making: that they detected gravitational waves, and that they detected gravitational waves from black holes of specific masses at a specific distance. The former claim could be supported by the existence of correlated events between the detectors, without many assumptions as to what the signals should look like. The team at NBI seem to have found a correlation of that sort, but I don’t know if they still think the argument in that paper holds given what they’ve said elsewhere.

The second claim, that the waves were from a collision of black holes with specific masses, requires more work. LIGO compares the signal to various models, or “templates”, of black hole events, trying to find one that matches well. This is what the group at NBI subtracts to get the noise contribution. There’s a lot of potential for error in this sort of template-matching. If two templates are quite similar, it may be that the experiment can’t tell the difference between them. At the same time, the individual template predictions have their own sources of uncertainty, coming from numerical simulations and “loops” in particle physics-style calculations. I haven’t yet found a clear explanation from LIGO of how they take these various sources of error into account. It could well be that even if they definitely saw gravitational waves, they don’t actually have clear evidence for the specific black hole masses they claim to have seen.
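For those wondering what “template matching” looks like in practice, the core ingredient is a matched filter: a (noise-weighted) overlap between the data and each template, where a large overlap signals a good match. Here’s a bare-bones sketch of that idea; the function and toy signal below are my own stand-ins, and the real pipelines work in the frequency domain, weight by the detector’s noise spectrum, and maximize over arrival time and phase:

```python
import numpy as np

def overlap(data, template):
    """Normalized overlap between a data stream and a template (toy version)."""
    norm = np.sqrt(np.dot(data, data) * np.dot(template, template))
    return np.dot(data, template) / norm

# A "chirp" whose frequency sweeps upward, loosely like an inspiral, buried in noise.
t = np.linspace(0.0, 1.0, 4000)
signal = np.sin(2 * np.pi * (30 * t + 40 * t**2))
data = signal + 0.5 * np.random.default_rng(0).normal(size=t.size)

template_right = np.sin(2 * np.pi * (30 * t + 40 * t**2))  # same shape as the buried signal
template_wrong = np.sin(2 * np.pi * 30 * t)                # constant frequency, wrong shape

print(overlap(data, template_right))  # large (~0.8 here): the template "finds" the signal
print(overlap(data, template_wrong))  # much smaller
```

Distinguishing two templates that both overlap well with the data, and propagating the uncertainties in the templates themselves, is exactly where the subtlety lies.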

I’m sure we’ll hear more about this in the coming months, as both groups continue to talk through their disagreement. Hopefully we’ll get a clearer picture of what’s going on. In the meantime, though, Weiss, Barish, and Thorne have accomplished something impressive regardless, and should enjoy their Nobel.

Visiting Uppsala

I’ve been in Uppsala this week, visiting Henrik Johansson‘s group.


The Ångström Laboratory here is substantially larger than an ångström, a clear example of false advertising.

As such, I haven’t had time to write a long post about the recent announcement by the LIGO and VIRGO collaborations. Luckily, Matt Strassler has written one of his currently all-too-rare posts on the subject, so if you’re curious you should check out what he has to say.

Looking at the map of black hole collisions in that post, I’m struck by how quickly things have improved. The four old detections are broad slashes across the sky, the newest is a small patch. Now that there are enough detectors to triangulate, all detections will be located that precisely, or better. A future map might be dotted with precise locations of black hole collisions, but it would still be marred by those four slashes: relics of the brief time when only two machines in the world could detect gravitational waves.

Textbook Review: Exploring Black Holes

I’m bringing a box of textbooks with me to Denmark. Most of them are for work: a few Quantum Field Theory texts I might use, a Complex Analysis book for when I inevitably forget how to do contour integration.

One of the books, though, is just for fun.


Exploring Black Holes is an introduction to general relativity for undergraduates. The book came out of a collaboration between Edwin F. Taylor, known for his contributions to physics teaching, and John Archibald Wheeler, who among a long list of achievements was responsible for popularizing the term “black hole”. The result is something quite unique: a general relativity course that requires no math more advanced than calculus, and no physics more advanced than special relativity.

It does this by starting, not with the full tensor-riddled glory of Einstein’s equations, but with specialized solutions to those equations, mostly the Schwarzschild solution that describes space around spherical objects (including planets, stars, and black holes). From there, it manages to introduce curved space in a way that is both intuitive and naturally grows out of what students learn about special relativity. It really is the kind of course a student can take right after their first physics course, and indeed as an undergrad that’s exactly what I did.

With just the Schwarzschild solution and its close relatives, you can already answer most of the questions young students have about general relativity. In a series of “projects”, the book explores the corrections GR demands of GPS satellites, the process of falling into a black hole, the famous measurement of the advance of the perihelion of Mercury, the behavior of light in a strong gravitational field, and even a bit of cosmology. In the end the students won’t know the full power of the theory, but they’ll get a taste while building valuable physical intuition.
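To give a flavor of the GPS project, here’s a rough version of the standard estimate, with round numbers I’m supplying myself rather than taking from the book: satellite clocks run fast because they sit higher in Earth’s gravity, and slightly slow because they’re moving, and the gravitational effect wins.

```python
# Rough daily relativistic clock drift for a GPS satellite (round numbers, for illustration).
GM_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
C = 2.998e8           # speed of light, m/s
R_GROUND = 6.371e6    # radius of a ground clock (Earth's surface), m
R_ORBIT = 2.657e7     # GPS orbital radius, m
SECONDS_PER_DAY = 86400

# Gravitational time dilation: clocks higher in the potential run faster.
gravitational = GM_EARTH / C**2 * (1 / R_GROUND - 1 / R_ORBIT)

# Special-relativistic time dilation: the moving satellite clock runs slower.
orbital_speed_squared = GM_EARTH / R_ORBIT   # circular orbit
velocity = -orbital_speed_squared / (2 * C**2)

drift = (gravitational + velocity) * SECONDS_PER_DAY
print(f"{drift * 1e6:.1f} microseconds per day")  # about +38: the drift GPS has to correct for
```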

Still, I wouldn’t bring this book with me if it was just an excellent undergraduate textbook. Exploring Black Holes is a great introduction to general relativity, but it also has a hilarious not-so-hidden agenda: inspiring future astronauts to jump into black holes.

“Nowhere could life be simpler or more relaxed than in a free-float frame, such as an unpowered spaceship falling toward a black hole.” – pg. 2-31

The book is full of quotes like this. One of the book’s “projects” involves computing what happens to an astronaut who falls into a black hole. The book takes special care to have students calculate that “spaghettification”, the process by which the tidal forces of a black hole stretch infalling observers into spaghetti, is, surprisingly, completely painless: the amount of time you experience it is always less than the amount of time it takes light (and thus also pain) to go from your feet to your head, for any (sufficiently calm) black hole.

Why might Taylor and Wheeler want people of the future to jump into black holes? As the discussion on page B-3 of the book describes, the reason is on one level an epistemic one. As theorists, we’d like to reason about what lies inside the event horizon of black holes, but we face a problem: any direct test would be trapped inside, and we would never know the result, which some would argue makes such speculation unscientific. What Taylor and Wheeler point out is that it’s not quite true that no-one would know the results of such a test: if someone jumped into a black hole, they would be able to test our reasoning. If a whole scientific community jumped in, then the question of what is inside a black hole is from their perspective completely scientific.

Of course, I don’t think Taylor and Wheeler seriously thought their book would convince its readers to jump into black holes. For one, it’s unlikely anyone reading the book will get a chance. Still, I suspect that the idea that future generations might explore black holes gave Taylor and Wheeler some satisfaction, and a nice clean refutation of those who think physics inside the horizon is unscientific. Seeing as the result was an excellent textbook full of hilarious prose, I can’t complain.

More Travel

I’m visiting the Niels Bohr Institute this week, on my way back from Amplitudes.


You might recognize the place from old conference photos.

Amplitudes itself was nice. There weren’t any surprising new developments, but a lot of little “aha” moments when one of the speakers explained something I’d heard vague rumors about. I figured I’d mention a few of the things that stood out. Be warned, this is going to be long and comparatively jargon-heavy.

The conference organizers were rather daring in scheduling Nima Arkani-Hamed for the first talk, as Nima has a tendency to arrive at the last minute and talk for twice as long as you ask him to. Miraculously, though, things worked out, if only barely: Nima arrived at the wrong campus and ran most of the way back, showing up within five minutes of the start of the conference. He also stuck to his allotted time, possibly out of courtesy to his student, Yuntao Bai, who was speaking next.

Between the two of them, Nima and Yuntao covered an interesting development, tying the Amplituhedron together with the string theory-esque picture of scattering amplitudes pioneered by Freddy Cachazo, Song He, and Ellis Ye Yuan (or CHY). There’s a simpler (and older) Amplituhedron-like object called the associahedron that can be thought of as what the Amplituhedron looks like on the surface of a string, and CHY’s setup can be thought of as a sophisticated map that takes this object and turns it into the Amplituhedron. It was nice to hear from both Nima and his student on this topic, because Nima’s talks are often high on motivation but low on detail, so it was great that Yuntao was up next to fill in the blanks.

Anastasia Volovich talked about Landau singularities, a topic I’ve mentioned before. What I hadn’t appreciated was how much they can do with them at this point. Originally, Juan Maldacena had suggested that these singularities, mathematical points that determine the behavior of amplitudes first investigated by Landau in the 60’s, might explain some of the simplicity we’ve observed in N=4 super Yang-Mills. They ended up not being enough by themselves, but what Volovich and collaborators are discovering is that with a bit of help from the Amplituhedron they explain quite a lot. In particular, if they start with the Amplituhedron and do a procedure similar to Landau’s, they can find the simpler set of singularities allowed by N=4 super Yang-Mills, at least for the examples they’ve calculated. It’s still a bit unclear how this links to their previous investigations of these things in terms of cluster algebras, but it sounds like they’re making progress.

Dmitry Chicherin gave me one of those minor “aha” moments. One big useful fact about scattering amplitudes in N=4 super Yang-Mills is that they’re “dual” to different mathematical objects called Wilson loops, a fact which allows us to compare to the “POPE” approach of Basso, Sever, and Vieira. Chicherin asks the question: “What if you’re not calculating a scattering amplitude or a Wilson loop, but something halfway in between?” Interestingly, this has an answer, with the “halfway between” objects having a similar duality among themselves.

Yorgos Papathanasiou talked about work I’ve been involved with. I’ll probably cover it in detail in another post, so for now I’ll just mention that we’re up to six loops!

Andy Strominger talked about soft theorems. It’s always interesting seeing people who don’t traditionally work on amplitudes giving talks at Amplitudes. There’s a range of responses, from integrability people (who are basically welcomed like family) to people working on fairly unrelated areas with some “amplitudes” connection (met with yawns except from the few people interested in the connection). The response to Strominger was neither welcome nor boredom, but lively debate. He’s clearly doing something interesting, but many specialists worried he was ignorant of important no-go results in the field that could hamstring some of his bolder conjectures.

The second day focused on methods for more practical calculations, and had the overall effect of making me really want to clean up my code. Tiziano Peraro’s finite field methods in particular look like they could be quite useful. There were two competing bases of integrals on display: von Manteuffel’s finite integrals, and Rutger Boels’s uniform transcendental integrals later in the conference. Both seem to have their own virtues, and I ended up asking Rob Schabinger if it was possible to combine the two, with the result that he’s apparently now looking into it.

The more practical talks that day had a clear focus on calculations with two loops, which are becoming increasingly viable for LHC-relevant calculations. From talking to people who work on this, I get the impression that the goal of these calculations isn’t so much to find new physics as to confirm and investigate new physics found via other methods. Things are complicated enough at two loops that for the moment it isn’t feasible to describe what all the possible new particles might do at that order, and instead the goal is to understand the standard model well enough that if new physics is noticed (likely based on one-loop calculations) then the details can be pinned down by two-loop data. But this picture could conceivably change as methods improve.

Wednesday was math-focused. We had a talk by Francis Brown on his conjecture of a cosmic Galois group. This is a topic I knew a bit about already, since it’s involved in something I’ve been working on. Brown’s talk cleared up some things, but also shed light on the vagueness of the proposal. As with Yorgos’s talk, I’ll probably cover more about this in a future post, so I’ll skip the details for now.

There was also a talk by Samuel Abreu on a much more physical picture of the “symbols” we calculate with. This is something I’ve seen presented before by Ruth Britto, and it’s a setup I haven’t looked into as much as I ought to. It does seem at the moment that they’re limited to one loop, which is a definite downside. Other talks discussed elliptic integrals, the bogeyman that we still can’t deal with by our favored means but that people are at least understanding better.

The last talk on Wednesday before the hike was by David Broadhurst, who’s quite a character in his own right. Broadhurst sat in the front row and asked a question after nearly every talk, usually bringing up papers at least fifty years old, if not one hundred and fifty. At the conference dinner he was exactly the right person to read the Address to the Haggis, resurrecting a thick Scottish accent from his youth. Broadhurst’s techniques for handling high-loop elliptic integrals are quite impressively powerful, leaving me wondering if the approach can be generalized.

Thursday focused on gravity. Radu Roiban gave a better idea of where he and his collaborators are on the road to seven-loop supergravity and what the next bottlenecks are along the way. Oliver Schlotterer’s talk was another one of those “aha” moments, helping me understand a key difference between two senses in which gravity is Yang-Mills squared (the Kawai-Lewellen-Tye relations and BCJ). In particular, the latter is much more dependent on specifics of how you write the scattering amplitude, so to the extent that you can prove something more like the former at higher loops (the original was only for trees, unlike BCJ) it’s quite valuable. Schlotterer has managed to do this at one loop, using the “Q-cut” method I’ve (briefly) mentioned before. The next day’s talk by Emil Bjerrum-Bohr focused more heavily on these Q-cuts, including a more detailed example at two loops than I’d seen that group present before.

There was also a talk by Walter Goldberger about using amplitudes methods for classical gravity, a subject I’ve looked into before. It was nice to see a more thorough presentation of those ideas, including a more honest appraisal of which amplitudes techniques are really helpful there.

There were other interesting topics, but I’m already way over my usual post length, so I’ll sign off for now. Videos from all but a few of the talks are now online, so if you’re interested you should watch them on the conference page.

You Can’t Smooth the Big Bang

As a kid, I was fascinated by cosmology. I wanted to know how the universe began, possibly disproving gods along the way, and I gobbled up anything that hinted at the answer.

At the time, I had to be content with vague slogans. As I learned more, I could match the slogans to the physics, to see what phrases like “the Big Bang” actually meant. A large part of why I went into string theory was to figure out what all those documentaries are actually about.

In the end, I didn’t end up working on cosmology, due to my ignorance of a few key facts while in college (mostly, who Vilenkin was). Thus, while I could match some of the old popularization stories to the science, there were a few I never really understood. In particular, there were two claims I never quite saw fleshed out: “The universe emerged from nothing via quantum tunneling” and “According to Hawking, the big bang was not a singularity, but a smooth change with no true beginning.”

As a result, I’m delighted that I’ve recently learned the physics behind these claims, in the context of a spirited take-down of both by Perimeter’s Director Neil Turok.

[Photo of Neil Turok, credit Jens Langen]

My boss

Neil held a surprise string group meeting this week to discuss the paper I linked above, “No smooth beginning for spacetime” with Job Feldbrugge and Jean-Luc Lehners, as well as earlier work with Steffen Gielen. In it, he talked about problems in the two proposals I mentioned: Hawking’s suggestion that the big bang was smooth with no true beginning (really, the Hartle-Hawking no boundary proposal) and the idea that the universe emerged from nothing via quantum tunneling (really, Vilenkin’s tunneling from nothing proposal).

In popularization-speak, these two proposals sound completely different. In reality, though, they’re quite similar (and as Neil argues, they end up amounting to the same thing). I’ll steal a picture from his paper to illustrate:

[Figure from the paper: the smooth no-boundary geometry (left), alongside perturbed versions (middle and right)]

The picture on the left depicts the universe under the Hartle-Hawking proposal, with time increasing upwards on the page. As the universe gets older, it looks like the expanding (de Sitter) universe we live in. At the beginning, though, there’s a cap, one on which time ends up being treated not in the usual way (Lorentzian space) but on the same footing as the other dimensions (Euclidean space). This lets space be smooth, rather than bunching up in a big bang singularity. After treating time in this way the result is reinterpreted (via a quantum field theory trick called Wick rotation) as part of normal space-time.

What’s the connection to Vilenkin’s tunneling picture? Well, when we talk about quantum tunneling, we also end up describing it with Euclidean space. Saying that the universe tunneled from nothing and saying it has a Euclidean “cap” then end up being closely related claims.

Before Neil’s work these two proposals weren’t thought of as the same because they were thought to give different results. What Neil is arguing is that this is due to a fundamental mistake on Hartle and Hawking’s part. Specifically, Neil is arguing that the Wick rotation trick that Hartle and Hawking used doesn’t work in this context, when you’re trying to calculate small quantum corrections for gravity. In normal quantum field theory, it’s often easier to go to Euclidean space and use Wick rotation, but for quantum gravity Neil is arguing that this technique stops being rigorous. Instead, you should stay in Lorentzian space, and use a more powerful mathematical technique called Picard-Lefschetz theory.
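For the curious, the Wick rotation in question is, schematically, the substitution of imaginary time. Hedging on conventions (and keeping in mind that whether this step is legitimate for gravity is exactly what’s under dispute), the idea is

$$ t \;\to\; -i\tau \qquad \Rightarrow \qquad e^{\,i S_{\mathrm{Lorentzian}}/\hbar} \;\to\; e^{-S_{\mathrm{Euclidean}}/\hbar}, $$

turning the wildly oscillating weight in the path integral into a damped one that is much easier to make sense of.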

Using this technique, Neil found that Hartle and Hawking’s nicely behaved result was mistaken, and the real result of what Hartle and Hawking were proposing looks more like Vilenkin’s tunneling proposal.

Neil then tried to see what happens when there’s some small perturbation from a perfect de Sitter universe. In general in physics, if you want to trust a result it ought to be stable: small changes should stay small. Otherwise, you’re not really starting from the right point, and you should instead be looking at wherever the changes end up taking you. What Neil found was that the Hartle-Hawking and Vilenkin proposals weren’t stable. If you start with a small wiggle in your no-boundary universe you get, not the purple middle drawing with small wiggles, but the red one with wiggles that rapidly grow unstable. The implication is that the Hartle-Hawking and Vilenkin proposals aren’t just secretly the same: neither of them can describe a stable universe.

Neil argues that this problem is quite general, and happens under the following conditions:

  1. A universe that begins smoothly and semi-classically (where quantum corrections are small) with no sharp boundary,
  2. with a positive cosmological constant (the de Sitter universe mentioned earlier),
  3. under which the universe expands many times, allowing the small fluctuations to grow large.

If the universe avoids one of those conditions (maybe the cosmological constant changes in the future and the universe stops expanding, for example) then you might be able to avoid Neil’s argument. But if not, you can’t have a smooth semi-classical beginning and still have a stable universe.

Now, no debate in physics ends just like that. Hartle (and collaborators) don’t disagree with Neil’s insistence on Picard-Lefschetz theory, but they argue there’s still a way to make their proposal work. Neil mentioned at the group meeting that he thinks even the new version of Hartle’s proposal doesn’t solve the problem, but he’s been working out the calculation with his collaborators to make sure.

Often, one hears about an idea from science popularization and then it never gets mentioned again. The public hears about a zoo of proposals without ever knowing which ones worked out. I think child-me would appreciate hearing what happened to Hawking’s proposal for a universe with no boundary, and to Vilenkin’s proposal for a universe emerging from nothing. Adult-me certainly does. I hope you do too.

Thoughts from the Winter School

There are two things I’d like to talk about this week.

First, as promised, I’ll talk about what I worked on at the PSI Winter School.

Freddy Cachazo and I study what are called scattering amplitudes. At first glance, these are probabilities that two subatomic particles scatter off each other, relevant for experiments like the Large Hadron Collider. In practice, though, they can calculate much more.

For example, let’s say you have two black holes circling each other, like the ones LIGO detected. Zoom out far enough, and you can think of each one as a particle. The two particle-black holes exchange gravitons, and those exchanges give rise to the force of gravity between them.

[LIGO illustration: two black holes merging]

In the end, it’s all just particle physics.

 

Based on that, we can use our favorite scattering amplitudes to make predictions for gravitational wave telescopes like LIGO.

There’s a bit of weirdness to this story, though, because these amplitudes don’t line up with predictions in quite the way we’re used to. The way we calculate amplitudes involves drawing diagrams, and those diagrams have loops. Normally, each “loop” makes the amplitude more quantum-mechanical. Only the diagrams with no loops (“tree diagrams”) come from classical physics alone.

(Here “classical physics” just means “not quantum”: I’m calling general relativity “classical”.)

For this problem, we only care about classical physics: LIGO isn’t sensitive enough to see quantum effects. The weird thing is, despite that, we still need loops.

(Why? This is a story I haven’t figured out how to tell in a non-technical way. The technical explanation has to do with the fact that we’re calculating a potential, not an amplitude, so there’s a Fourier transformation, and keeping track of the dimensions entails tossing around some factors of Planck’s constant. But I feel like this still isn’t quite the full story.)
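For those who want at least a schematic version of the technical story: at leading order, the potential between the two objects comes from Fourier transforming the amplitude for them to exchange a graviton, going from the momentum transferred between the objects to their separation. Dropping all the constants (and hedging on conventions), it looks like

$$ V(r) \sim \int \frac{d^3 q}{(2\pi)^3}\, e^{i\vec q\cdot\vec r}\, \mathcal{M}(\vec q\,), \qquad \mathcal{M}(\vec q\,) \sim -\frac{G\, m_1 m_2}{\vec q^{\,2}} \quad\Rightarrow\quad V(r) \sim -\frac{G\, m_1 m_2}{r}, $$

recovering Newton’s potential from the simplest tree diagram; the factors of Planck’s constant hiding in this dictionary are part of what lets certain loop diagrams sneak back into the classical answer.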

So if we want to make predictions for LIGO, we want to compute amplitudes with loops. And as amplitudeologists, we should be pretty good at that.

As it turns out, plenty of other people have already had that idea, but there’s still room for improvement.

Our time with the students at the Winter School was limited, so our goal was fairly modest. We wanted to understand those other peoples’ calculations, and perhaps to think about them in a slightly cleaner way. In particular, we wanted to understand why “loops” are really necessary, and whether there was some way of understanding what the “loops” were doing in a more purely classical picture.

At this point, we feel like we’ve got the beginning of an idea of what’s going on. Time will tell whether it works out, and I’ll update you guys when we have a more presentable picture.


 

Unfortunately, physics wasn’t the only thing I was thinking about last week, which brings me to my other topic.

This blog has a fairly strong policy against talking politics. This is for several reasons. Partly, it’s because politics simply isn’t my area of expertise. Partly, it’s because talking politics tends to lead to long arguments in which nobody manages to learn anything. Despite this, I’m about to talk politics.

Last week, citizens of Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen were barred from entering the US. This included not only new visa applicants, but also those who already have visas or green cards. The latter group includes long-term residents of the US, many of whom were detained in airports and threatened with deportation when their flights arrived shortly after the ban was announced. Among those was the president of the Graduate Student Organization at my former grad school.

A federal judge has blocked parts of the order, and the Department of Homeland Security has announced that there will be case-by-case exceptions. Still, plenty of people are stuck: either abroad if they didn’t get in in time, or in the US, afraid that if they leave they won’t be able to return.

Politics isn’t in my area of expertise. But…

I travel for work pretty often. I know how terrifying and arbitrary border enforcement can be. I know how it feels to risk thousands of dollars and months of planning because some consulate or border official is having a bad day.

I also know how essential travel is to doing science. When there’s only one expert in the world who does the sort of work you need, you can’t just find a local substitute.

And so for this, I don’t need to be an expert in politics. I don’t need a detailed case about the risks of terrorism. I already know what I need to, and I know that this is cruel.

And so I stand in solidarity with the people who were trapped in airports, and with those still trapped, whether abroad or in the US. You have been treated cruelly, and you shouldn’t have been. Hopefully, that sort of message can transcend politics.

 

One final thing: I’m going to be a massive hypocrite and continue to ban political comments on this blog. If you want to talk to me about any of this (and you think one or both of us might actually learn something from the exchange) please contact me in private.