Tag Archives: science communication

Shades of Translation

I was playing Codenames with some friends, a game about giving one-word clues that point at multiple answer words. I wanted to hint at “undertaker” and “march”, so I figured I’d do “funeral march”. Since that’s two words, I needed one word that meant something similar. I went with dirge, then immediately regretted it as my teammates spent the better part of two minutes trying to figure out what it meant. In the end they went with “slug”.


A dirge in its natural habitat.

If I had gone for requiem instead, we would have won. Heck, if I had just used “funeral”, we would have had a fighting chance. I had assumed my team knew the same words I did: they were also native English speakers, also nerds, etc. But the words they knew were still a shade different from the words I knew, and that made the difference.

When communicating science, you have to adapt to your audience. Knowing this, it’s still tempting to go for a shortcut. You list a few possible audiences, like “physicists” or “children”, and then just make a standard explanation for each. This works pretty well…until it doesn’t, and your audience assumes a “dirge” is a type of slug.

In reality, each audience is different. Rather than just memorizing “translations” for a few specific groups, you need to pay attention to the shades of understanding in between.

On Wednesdays, Perimeter holds an Interdisciplinary Lunch. They cover a table with brown paper (for writing on) and impose one rule: you can’t sit next to someone in the same field.

This week, I sat next to an older fellow I hadn’t met before. He asked me what I did, and I gave my “standard physicist explanation”. This tends to be pretty heavy on jargon: while I don’t go too deep into my sub-field’s lingo, I don’t want to risk “talking down” to a physicist I don’t know. The end result is that I have to notice those “shades” of understanding as I go, hoping to get enough questions to change course if I need to.

Then I asked him what he did, and he patiently walked me through it. His explanation was more gradual: less worried about talking down to me, he was able to build up the background around his work, and the history of who worked on what. It was a bit humbling, to see the sort of honed explanation a person can build after telling variations on the same story for years.

In the end, we both had to adapt to what the other understood, to change course when our story wasn’t getting through. Neither of us could stick with the “standard physicist explanation” all the way to the end. Both of us had to shift from one shade to another, improving our translation.

Popularization as News, Popularization as Signpost

Lubos Motl has responded to my post from last week about the recent Caltech short, Quantum is Calling. His response is pretty much exactly what you’d expect, including the cameos by Salma Hayek and Kaley Cuoco.

The only surprise was his lack of concern for accuracy. Quantum is Calling got the conjecture it was trying to popularize almost precisely backwards. I was expecting that to bother him, at least a little.

Should it bother you?

That depends on what you think Quantum is Calling is trying to do.

Science popularization, even good science popularization, tends to get things wrong. Some of that is inevitable, a result of translating complex concepts to a wider audience.

Sometimes, though, you can’t really chalk it up to translation. Interstellar had some extremely accurate visualizations of black holes, but it also had an extremely silly love-powered tesseract. That wasn’t their attempt to convey some subtle scientific truth, it was just meant to sound cool.

And the thing is, that’s not a bad thing to do. For a certain kind of piece, sounding cool really is the point.

Imagine being an explorer. You travel out into the wilderness and find a beautiful waterfall.


Example:

How do you tell people about it?

One option is the press. The news can cover your travels, so people can stay up to date with the latest in waterfall discoveries. In general, you’d prefer this sort of thing to be fairly accurate: the goal here is to inform people, to give them a better idea of the world around them.

Alternatively, you can advertise. You put signposts up around town pointing toward the waterfall, complete with vivid pictures. Here, accuracy matters a lot less: you’re trying to get people excited, knowing that as they get closer they can get more detailed information.

In science popularization, the “news” here isn’t just news. It’s also blog posts, press releases, and public lectures. It’s the part of science popularization that’s supposed to keep people informed, and it’s one that we hope is mostly accurate, at least as far as possible.

The “signposts”, meanwhile, are things like Interstellar. Their audience is as wide as it can possibly be, and we don’t expect them to get things right. They’re meant to excite people, to get them interested in science. The expectation is that a few students will find the imagery interesting enough to go further, at which point they can learn the full story and clear up any remaining misconceptions.

Quantum is Calling is pretty clearly meant to be a signpost. The inaccuracy is one way to tell, but it should be clear just from the context. We’re talking about a piece with Hollywood stars here. The relative stardom of Zoe Saldana and Keanu Reeves doesn’t matter: the presence of any mainstream film stars whatsoever means they’re going for the broadest possible audience.

(Of course, the fact that it’s set up to look like an official tie-in to the Star Trek films doesn’t hurt matters either.)

They’re also quite explicit about their goals. The piece’s predecessor has Keanu Reeves send a message back in time, with the goal of inspiring a generation of young scientists to build a future paradise. They’re not subtle about this.

Ok, so what’s the problem? Signposts are allowed to be inaccurate, so the inaccuracy shouldn’t matter. Eventually people will climb up to the waterfall and see it for themselves, right?

What if the waterfall isn’t there?


Like so:

The evidence for ER=EPR (the conjecture that Quantum is Calling is popularizing) isn’t like seeing a waterfall. It’s more like finding it via surveying. By looking at the slope of nearby terrain and following the rivers, you can get fairly confident that there should be a waterfall there, even if you can’t yet see it over the next ridge. You can then start sending scouts, laying in supplies, and getting ready for a push to the waterfall. You can alert the news, telling journalists of the magnificent waterfall you expect to find, so the public can appreciate the majesty of your achievement.

What you probably shouldn’t do is put up a sign for tourists.

As I hope I made clear in my last post, ER=EPR has some decent evidence. It hasn’t shown that it can handle “foot traffic”, though. The number of researchers working on it is still small. (For a fun but not especially rigorous exercise, try typing “ER=EPR” and “AdS/CFT” into physics database INSPIRE.) Conjectures at this stage are frequently successful, but they often fail, and ER=EPR still has a decent chance of doing so. Tying your inspiring signpost to something that may well not be there risks sending tourists up to an empty waterfall. They won’t come down happy.

As such, I’m fine with “news-style” popularizations of ER=EPR. And I’m fine with “signposts” for conjectures that have shown they can handle some foot traffic. (A piece that sends Zoe Saldana to the holodeck to learn about holography could be fun, for example.) But making this sort of high-profile signpost for ER=EPR feels irresponsible and premature. There will be plenty of time for a Star Trek tie-in to ER=EPR once it’s clear the idea is here to stay.

What’s in a Conjecture? An ER=EPR Example

A few weeks back, Caltech’s Institute of Quantum Information and Matter released a short film titled Quantum is Calling. It’s the second in what looks likely to become a series of pieces featuring Hollywood actors popularizing ideas in physics. The first used the game of Quantum Chess to talk about superposition and entanglement. This one, featuring Zoe Saldana, is about a conjecture by Juan Maldacena and Leonard Susskind called ER=EPR. The conjecture speculates that pairs of entangled particles (as investigated by Einstein, Podolsky, and Rosen) are in some sense secretly connected by wormholes (or Einstein-Rosen bridges).

The film is fun, but I’m not sure ER=EPR is established well enough to deserve this kind of treatment.

At this point, some of you are nodding your heads for the wrong reason. You’re thinking I’m saying this because ER=EPR is a conjecture.

I’m not saying that.

The fact of the matter is, conjectures play a very important role in theoretical physics, and “conjecture” covers a wide range. Some conjectures are supported by incredibly strong evidence, just short of mathematical proof. Others are wild speculations of the “wouldn’t it be convenient if…” variety. ER=EPR is, well…somewhere in the middle.

Most popularizers don’t spend much effort distinguishing things in this middle ground. I’d like to talk a bit about the different sorts of evidence conjectures can have, using ER=EPR as an example.


Our friendly neighborhood space octopus

The first level of evidence is motivation.

At its weakest, motivation is the “wouldn’t it be convenient if…” line of reasoning. Some conjectures never get past this point. Hawking’s chronology protection conjecture, for instance, points out that physics (and to some extent logic) has a hard time dealing with time travel, and wouldn’t it be convenient if time travel was impossible?

For ER=EPR, this kind of motivation comes from the black hole firewall paradox. Without going into it in detail, arguments suggested that the event horizons of older black holes would resemble walls of fire, incinerating anything that fell in, in contrast with Einstein’s picture in which passing the horizon has no obvious effect at the time. ER=EPR provides one way to avoid this argument, making event horizons subtle and smooth once more.

Motivation isn’t just “wouldn’t it be convenient if…” though. It can also include stronger arguments: suggestive comparisons that, while they could be coincidental, when put together draw a stronger picture.

In ER=EPR, this comes from certain similarities between the type of wormhole Maldacena and Susskind were considering, and pairs of entangled particles. Both connect two different places, but both do so in an unusually limited way. The wormholes of ER=EPR are non-traversable: you cannot travel through them. Entangled particles can’t be traveled through (as you would expect), but more generally can’t be communicated through: there are theorems to prove it. This is the kind of suggestive similarity that can begin to motivate a conjecture.
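That “can’t be communicated through” claim is the no-communication theorem, and a toy calculation makes it concrete. Here’s a minimal sketch in Python (the setup and names are my own illustration, not anything from Maldacena and Susskind’s paper): however Alice chooses to measure her half of an entangled Bell pair, the statistics Bob sees on his half don’t change.

```python
from math import cos, sin, pi

def bob_marginal_p0(theta):
    """Probability that Bob's qubit reads 0, given the Bell state
    (|00> + |11>)/sqrt(2), with Alice measuring in a basis rotated
    by angle theta and Bob measuring in the computational basis."""
    r = 2 ** -0.5
    # Joint amplitudes <Alice outcome a, Bob outcome b | psi>:
    amp = {
        (0, 0): cos(theta) * r,
        (0, 1): sin(theta) * r,
        (1, 0): -sin(theta) * r,
        (1, 1): cos(theta) * r,
    }
    # Bob's marginal: sum the joint probabilities over Alice's outcomes
    return sum(abs(amp[a, 0]) ** 2 for a in (0, 1))

# Whatever basis Alice picks, Bob's statistics don't budge:
for theta in (0, pi / 7, pi / 3):
    print(round(bob_marginal_p0(theta), 6))  # 0.5 each time
```

Rotating Alice’s basis shuffles the joint outcomes, but Bob’s marginal stays exactly 1/2 (cos²θ/2 + sin²θ/2), which is why entanglement alone can’t carry a message.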

(Amusingly, the plot of the film breaks this in both directions. Keanu Reeves can neither steal your cat through a wormhole, nor send you coded messages with entangled particles.)


Nor live forever as the portrait in his attic withers away

Motivation is a good reason to investigate something, but a bad reason to believe it. Luckily, conjectures can have stronger forms of evidence. Many of the strongest conjectures are correspondences, supported by a wealth of non-trivial examples.

In science, the gold standard has always been experimental evidence. There’s a reason for that: when you do an experiment, you’re taking a risk. Doing an experiment gives reality a chance to prove you wrong. In a good experiment (a non-trivial one) the result isn’t obvious from the beginning, so that success or failure tells you something new about the universe.

In theoretical physics, there are things we can’t test with experiments, either because they’re far beyond our capabilities or because the claims are mathematical. Despite this, the overall philosophy of experiments is still relevant, especially when we’re studying a correspondence.

“Correspondence” is a word we use to refer to situations where two different theories are unexpectedly computing the same thing. Often, these are very different theories, living in different dimensions with different sorts of particles. With the right “dictionary”, though, you can translate between them, doing a calculation in one theory that matches a calculation in the other one.

Even when we can’t do non-trivial experiments, then, we can still have non-trivial examples. When the result of a calculation isn’t obvious from the beginning, showing that it matches on both sides of a correspondence takes the same sort of risk as doing an experiment, and gives the same sort of evidence.

Some of the best-supported conjectures in theoretical physics have this form. AdS/CFT is technically a conjecture: a correspondence between string theory in a hyperbola-shaped space and my favorite theory, N=4 super Yang-Mills. Despite being a conjecture, the wealth of nontrivial examples is so strong that it would be extremely surprising if it turned out to be false.

ER=EPR is also a correspondence, between entangled particles on the one hand and wormholes on the other. Does it have nontrivial examples?

Some, but not enough. Originally, it was based on one core example, an entangled state that could be cleanly matched to the simplest wormhole. Now, new examples have been added, covering wormholes with electric fields and higher spins. The full “dictionary” is still unclear, with some pairs of entangled particles being harder to describe in terms of wormholes. So while this kind of evidence is being built, it isn’t as solid as our best conjectures yet.

I’m fine with people popularizing this kind of conjecture. It deserves blog posts and press articles, and it’s a fine idea to have fun with. I wouldn’t be uncomfortable with the Bohemian Gravity guy doing a piece on it, for example. But for the second installment of a star-studded series like the one Caltech is doing…it’s not really there yet, and putting it there gives people the wrong idea.

I hope I’ve given you a better idea of the different types of conjectures, from the most fuzzy to those just shy of certain. I’d like to do this kind of piece more often, though in future I’ll probably stick with topics in my sub-field (where I actually know what I’m talking about 😉 ). If there’s a particular conjecture you’re curious about, ask in the comments!

Have You Given Your Kids “The Talk”?

If you haven’t seen it yet, I recommend reading this delightful collaboration between Scott Aaronson (of Shtetl-Optimized) and Zach Weinersmith (of Saturday Morning Breakfast Cereal). As explanations of a concept beyond the standard popular accounts go, this one is pretty high quality, correcting some common misconceptions about quantum computing.

I especially liked the following exchange:


I’ve complained before about people trying to apply ontology to physics, and I think this gets at the root of one of my objections.

People tend to think that the world should be describable with words. From that perspective, mathematics is just a particular tool, a system we’ve created. If you look at the world in that way, mathematics looks unreasonably effective: its ability to describe the real world seems like a miraculous coincidence.

Mathematics isn’t just one tool though, or just one system. It’s all of them: not just numbers and equations, but knots and logic and everything else. Deep down, mathematics is just a collection of all the ways we’ve found to state things precisely.

Because of that, it shouldn’t surprise you that we “put complex numbers in our ontologies”. Complex numbers are just one way we’ve found to make precise statements about the world, one that comes in handy when talking about quantum mechanics. There doesn’t need to be a “correct” description in words: the math is already stating things as precisely as we know how.

That doesn’t mean that ontology is a useless project. It’s worthwhile to develop new ways of talking about things. I can understand the goal of building up a philosophical language powerful enough to describe the world in terms of words, and if such a language was successful it might well inspire us to ask new scientific questions.

But it’s crucial to remember that there’s real work to be done there. There’s no guarantee that the project will work, that words will end up sufficient. When you put aside our best tools to make precise statements, you’re handicapping yourself, making the problem harder than it needed to be. It’s your responsibility to make sure you’re getting something worthwhile out of it.

Words, Words, Words

If there’s one thing the Center for Communicating Science drummed into me at Stony Brook, it’s to be careful with words. You can teach your audience new words, but only a few: effectively, you have a vocabulary budget.

Sometimes, the risk is that your audience will misunderstand you. If you’re a biologist who talks about treating disease in a model, be careful: the public is more likely to think of mannequins than mice.


NOT what you’re talking about

Sometimes, though, the risk is subtler. Even if the audience understands you, you might still be using up your vocabulary budget.

Recently, Perimeter’s monthly Public Lecture was given by an expert on regenerative medicine. When talking about trying to heal eye tissue, she mentioned looking for a “pupillary response”.

Now, “pupillary response” isn’t exactly hard to decipher. It’s pretty clearly a response by the pupil of the eye. From there, you can think about how eyes respond to bright light, or to darkness, and have an idea of what she’s talking about.

So nobody is going to misunderstand “pupillary response”. Nonetheless, that chain of reasoning? It takes time, and it takes effort. People do have to stop and think, if only for a moment, to know what you mean.

That adds up. Every time your audience has to take a moment to think back and figure out what you just said? That eats into your vocabulary budget. Enough moments like that, and your audience won’t have the energy to follow what you’re saying: you’ll lose them.

The last few Public Lectures haven’t had as much online engagement as they used to. Lots of people still watch them, but fewer have been asking questions on Twitter, for example. I have a few guesses about why this is…but I wonder if this kind of thing is part of it. The last few speakers have been more free with technical terms, more lax with their vocabulary budget. I worry that, while people still show up for the experience, they aren’t going away with any understanding.

We don’t need to dumb things down to be understood. (Or not very much anyway.) We do need to be careful with our words. Use our vocabulary budget sparingly, and we can really teach people. Spend it too fast…and we lose them.

“Maybe” Isn’t News

It’s been published in several places, but you’ve probably seen this headline:

If you’ve been following me for a while, you know where this is going:

No, these physicists haven’t actually shown that the Universe isn’t expanding at an accelerated rate.

What they did show is that the original type of data used to discover that the universe was accelerating back in the ’90s, measurements of supernovae, doesn’t live up to the rigorous standards that we physicists use to evaluate discoveries. We typically only call something a discovery if the evidence is good enough that, in a world where the discovery wasn’t actually true, we’d only have a one in 3.5 million chance of getting the same evidence (“five sigma” evidence). In their paper, Nielsen, Guffanti, and Sarkar argue that looking at a bigger collection of supernovae leads to a hazier picture: the chance that we could get the same evidence in a universe that isn’t accelerating is closer to one in a thousand, giving “three sigma” evidence.
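For the curious, these “sigma” figures are just tail probabilities of a normal distribution, and you can reproduce the rough numbers with nothing but the Python standard library (the function name here is mine, and I’m using the one-sided convention):

```python
from math import erfc, sqrt

def one_sided_p(sigma):
    """Chance of a standard normal fluctuation at least
    `sigma` standard deviations above the mean."""
    return 0.5 * erfc(sigma / sqrt(2))

for n in (3, 5):
    p = one_sided_p(n)
    print(f"{n} sigma: about 1 in {1 / p:,.0f}")
```

This gives roughly 1 in 740 for three sigma and 1 in 3.5 million for five sigma; whether three sigma comes out nearer “one in a thousand” depends on the exact convention (one-sided versus two-sided) in play.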

This might sound like statistical quibbling: one in a thousand is still pretty unlikely, after all. But a one in a thousand chance still happens once in a thousand times, and there’s a long history of three sigma evidence turning out to just be random noise. If the discovery of the accelerating universe was new, this would be an important objection, a reason to hold back and wait for more data before announcing a discovery.

The trouble is, the discovery isn’t new. In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence.

So the objection, that one source of evidence isn’t as strong as people thought, doesn’t kill cosmic acceleration. What it is is a “maybe”, showing that there is at least room in some of the data for a non-accelerating universe.

People publish “maybes” all the time, nothing bad about that. There’s a real debate to be had about how strong the evidence is, and how much it really establishes. (And there are already voices on the other side of that debate.)

But a “maybe” isn’t news. It just isn’t.

Science journalists (and university press offices) have a habit of trying to turn “maybes” into stories. I’ve lost track of the times I’ve seen ideas that were proposed a long time ago (technicolor, MOND, SUSY) get new headlines not for new evidence or new ideas, but just because they haven’t been ruled out yet. “SUSY hasn’t been ruled out yet” is an opinion piece, perhaps a worthwhile one, but it’s no news article.

The thing is, I can understand why journalists do this. So much of science is building on these kinds of “maybes”, working towards the tipping point where a “maybe” becomes a “yes” (or a “no”). And journalists (and university press offices, and to some extent the scientists themselves) can’t just take time off and wait for something legitimately newsworthy. They’ve got pages to fill and careers to advance, they need to say something.

I post once a week. As a consequence, a meaningful fraction of my posts are garbage. I’m sure that if I posted every day, most of my posts would be garbage.

Many science news sites post multiple times a day. They’ve got multiple writers, sure, and wider coverage…but they still don’t have the luxury of skipping a “maybe” when someone hands it to them.

I don’t know if there’s a way out of this. Maybe we need a new model for science journalism, something that doesn’t try to ape the pace of the rest of the news cycle. For the moment, though, it’s publish or perish, and that means lots and lots of “maybes”.

EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions.

EDIT: The paper’s authors respond here.

Ingredients of a Good Talk

It’s one of the hazards of physics that occasionally we have to attend talks about other people’s sub-fields.

Physics is a pretty heavily specialized field. It’s specialized enough that an otherwise perfectly reasonable talk can be totally incomprehensible to someone just a few sub-fields over.

I went to a talk this week on someone else’s sub-field, and was pleasantly surprised by how much I could follow. I thought I’d say a bit about what made it work.

In my experience, a good talk tells me why I should care, what was done, and what we know now.

Most talks start with a Motivation section, covering the why I should care part. If a talk doesn’t provide any motivation, it’s assuming that everyone finds the point of the research self-evident, and that’s a risky assumption.

Even for talks with a Motivation section, though, there’s a lot of variety. I’ve been to plenty of talks where the motivation presented is very sketchy: “this sort of thing is important in general, so we’re going to calculate one”. While that’s technically a motivation, all it does for an outsider is to tell them which sub-field you’re part of. Ideally, a motivation section does more: for a good talk, the motivation should not only say why you’re doing the work, but what question you’re asking and how your work can answer it.

The bulk of any talk covers what was done, but here there’s also varying quality. Bad talks often make it unclear how much was done by the presenter versus how much was done before. This is important not just to make sure the right people get credit, but because it can be hard to tell how much progress has been made. A good talk makes it clear not only what was done, but why it wasn’t done before. The whole point of a talk is to show off something new, so it should be clear what the new thing is.

If those two parts are done well, it becomes a lot easier to explain what we know now. If you’re clear on what question you were asking and what you did to answer it, then you’ve already framed things in those terms, and the rest is just summarizing. If not, you have to build it up from scratch, ending up with the important information packed into the last few minutes.

This isn’t everything you need for a good talk, but it’s important, and far too many people neglect it. I’ll be giving a few talks next week, and I plan to keep this structure in mind.