Category Archives: Science Communication

Book Review: We Have No Idea

I have no idea how I’m going to review this book.

Ok fine, I have some idea.

Jorge Cham writes Piled Higher and Deeper, a webcomic with possibly the most accurate depiction of grad school available. Daniel Whiteson is a professor at the University of California, Irvine, and a member of the ATLAS collaboration (one of the two big groups that make measurements at the Large Hadron Collider). Together, they’ve written a popular science book covering everything we don’t know about fundamental physics.

Writing a book about what we don’t know is an unusual choice, and there was a real risk it would end up as just a superficial gimmick. The pie chart on the cover features the most famous “things physicists don’t know”: dark matter and dark energy. If they had just stuck to those, this would have been a pretty ordinary popular physics book.

Refreshingly, they don’t do that. After blazing through dark matter and dark energy in the first three chapters, the rest of the book focuses on a variety of other scientific mysteries.

The book contains a mix of problems that get serious research attention (matter-antimatter asymmetry, high-energy cosmic rays) and more blue-sky “what if” questions (does matter have to be made out of particles?). As a theorist, I’m not sure that all of these questions are actually mysterious (we do have some explanation of the weird “1/3” charges of quarks, and I’d like to think we understand why mass includes binding energy), but even in these cases what we really know is that they follow from “sensible assumptions”, and one could just as easily ask “what if” about those assumptions instead. Overall, these “what if” questions make the book unique, and it would be a much weaker book without them.

“We Have No Idea” is strongest when the authors actually have some idea, i.e. when Whiteson is discussing experimental particle physics. It gets weaker on other topics, where the authors seem to rely more on others’ popular treatments (their discussion of “pixels of space-time” motivated me to write this post). Still, they at least seem to have asked the right people, and their accounts are on the more accurate end of typical pop science. (Closer to Quanta than IFLScience.)

The book’s humor really ties it together, often in surprisingly subtle ways. Each chapter has its own running joke, starting as a throwaway line and growing into a metaphor for everything the chapter discusses. It’s a great way to help the audience visualize without introducing too many new concepts at once. If there’s one thing cartoonists can teach science communicators, it’s the value of repetition.

I liked “We Have No Idea”. It could have been more daring, or more thorough, but it was still charming and honest and fun. If you’re looking for a Christmas present to explain physics to your relatives, you won’t go wrong with this book.


Underdetermination of Theory by Metaphor

Sometimes I explain science in unconventional ways. I’ll talk about quantum mechanics without ever using the word “measurement”, or write the action of the Standard Model in legos.

Whenever I do this, someone asks me why. Why use a weird, unfamiliar explanation? Why not just stick to the tried and true, metaphors that have been tested and honed in generations of popular science books?

It’s not that I have a problem with the popular explanations, most of the time. It’s that, even when the popular explanation does a fine job, there can be good reason to invent a new metaphor. To demonstrate my point, here’s a new metaphor to explain why:

In science, we sometimes talk about underdetermination of a theory by the data. We want to find a theory whose math matches the experimental results, but sometimes the experiments just don’t tell us enough. If multiple theories match the data, we say that the theory is underdetermined, and we go looking for more data to resolve the problem.
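
(If a toy example helps: here’s a sketch in Python, entirely made up and not tied to any real experiment, of two “theories” that both fit the same sparse data. Until a new, independent measurement comes in, the data alone can’t tell them apart.)

```python
import numpy as np

# Two made-up "theories" for how y depends on x (purely illustrative):
def theory_linear(x):
    return x       # theory A: y = x

def theory_cubic(x):
    return x**3    # theory B: y = x^3

# The "data" we happen to have: measurements at x = -1, 0, 1.
x_data = np.array([-1.0, 0.0, 1.0])
y_data = np.array([-1.0, 0.0, 1.0])

# Both theories reproduce every data point exactly, so the data alone
# can't tell them apart: the theory is underdetermined.
print(np.allclose(theory_linear(x_data), y_data))  # True
print(np.allclose(theory_cubic(x_data), y_data))   # True

# A new, independent measurement at x = 2 would break the tie:
print(theory_linear(2.0))  # 2.0
print(theory_cubic(2.0))   # 8.0
```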

What if you’re not a scientist, though? Often, that means you hear about theories secondhand, from some science popularizer. You’re not hearing the full math of the theory, you’re not seeing the data. You’re hearing metaphors and putting together your own picture of the theory. Metaphors are your data, in some sense. And just as scientists can find their theories underdetermined by the experimental data, you can find them underdetermined by the metaphors.

This can happen if a metaphor is consistent with two very different interpretations. If you hear that time runs faster in lower gravity, maybe you picture space and time as curved…or maybe you think low gravity makes you skip ahead, so you end up in the “wrong timeline”. Even if the popularizer you heard it from was perfectly careful, you base your understanding of the theory on the metaphor, and you can end up with the wrong understanding.

In science, the only way out of underdetermination of a theory is new, independent data. In science popularization, it’s new, independent metaphors. New metaphors shake you out of your comfort zone. If you misunderstood the old metaphor, now you’ll try to fit that misunderstanding with the new metaphor too. Often, that won’t work: different metaphors lead to different misunderstandings. With enough different metaphors, your picture of the theory won’t be underdetermined anymore: there will be only one picture, one understanding, that’s consistent with every metaphor.

That’s why I experiment with metaphors, why I try new, weird explanations. I want to wake you up, to make sure you aren’t sticking to the wrong understanding. I want to give you more data to determine your theory.

Journalists Need to Adapt to Preprints, Not Ignore Them

Nature has an article making the rounds this week, decrying the dangers of preprints.

On the surface, this is a bit like an article by foxes decrying the dangers of henhouses. There’s a pretty big conflict of interest when a journal like Nature, which makes huge amounts of money off research that scientists would be happy to publish for free, gets snippy about scientists sharing their work elsewhere. I was expecting an article about how “important” the peer review process is, how we can’t just “let anyone” publish, and the like.

Instead, I was pleasantly surprised. The article is about a real challenge, the weakening of journalistic embargoes. While I still think it’s a problem journalists can work their way around, it’s a bit subtler than the usual argument.

For the record, peer review is usually presented as much more important than it actually is. When a scientific article gets submitted to a journal, it gets sent to two or three experts in the field for comment. In the best cases, these experts read the paper carefully and send criticism back. They don’t replicate the experiments, they don’t even (except for a few heroic souls) reproduce the calculations. That kind of careful reading is important, but it’s hardly unique: it’s something scientists do on their own when they want to build off of someone else’s paper, and it’s what good journalists get when they send a paper to experts for comments before writing an article. If peer review in a journal is important, it’s to ensure that this careful reading happens at least once, a sort of minimal evidence that the paper is good enough to appear on a scientist’s CV.

The Nature article points out that peer review serves another purpose, specifically one of delay. While a journal is preparing to publish an article they can send it out to journalists, after making them sign an agreement (an embargo) that they won’t tell the public until the journal publishes. This gives the journalists a bit of lead time, so the more responsible ones can research and fact-check before publishing.

Open-access preprints cut out the lead time. If the paper just appears online with no warning and no embargoes, journalists can write about it immediately. The unethical journalists can skip fact-checking and publish first, and the ethical ones have to follow soon after, or risk publishing “old news”. Nobody gets the time to properly vet, or understand, a new paper.

There’s a simple solution I’ve seen from a few folks on Twitter: “Don’t be an unethical journalist!” That doesn’t actually solve the problem though. The question is, if you’re an ethical journalist, but other people are unethical journalists, what do you do?

Apparently, what some ethical journalists do is to carry on as if preprints didn’t exist. The Nature article describes journalists who, after a preprint has been covered extensively by others, wait until a journal publishes it and then cover it as if nothing had happened. The article frames this as virtuous, but doomed: journalists sticking to their ethics even if it means publishing “old news”.

To be 100% clear here, this is not virtuous. If you present a paper’s publication in a journal as news, when it was already released as a preprint, you are actively misleading the public. I can’t count the number of times I’ve gotten messages from readers, confused because they saw a scientific result covered again months later and thought it was new. It leads to a sort of mental “double-counting”, where the public assumes that the scientific result was found twice, and therefore that it’s more solid. Unless the publication itself is unexpected (something that wasn’t expected to pass peer review, or something controversial like Mochizuki’s proof of the ABC conjecture) mere publication in a journal of an already-public result is not news.

What science journalists need to do here is to step back, and think about how their colleagues cover stories. Current events these days don’t have embargoes, they aren’t fed through carefully managed press releases. There’s a flurry of initial coverage, and it gets things wrong and misses details and misleads people, because science isn’t the only thing that’s complicated: real life is complicated. Journalists have adapted to this schedule, mostly, by specializing. Some journalists and news outlets cover breaking news as it happens, others cover it later with more in-depth analysis. Crucially, the latter journalists don’t present the topic as new. They write explicitly in the light of previous news, as a response to existing discussion. That way, the public isn’t misled, and their existing misunderstandings can be corrected.

The Nature article brings up public health, and other topics where misunderstandings can do lasting damage, as areas where embargoes are useful. While I agree, I would hope many of these areas would figure out embargoes on their own. My field certainly does: the big results of scientific collaborations aren’t just put online as preprints, they’re released only after the collaboration sets up its own journalistic embargoes, and prepares its own press releases. In a world of preprints, this sort of practice needs to happen for important controversial public health and environmental results as well. Unethical scientists might still release too fast, to keep journalists from fact-checking, but they could do that anyway, without preprints. You don’t need a preprint to call a journalist on the phone and claim you cured cancer.

As open-access preprints become the norm, journalists will have to adapt. I’m confident they will be able to, but only if they stop treating science journalism as unique, and start treating it as news. Science journalism isn’t teaching, you’re not just passing down facts someone else has vetted. You’re asking the same questions as any other journalist: who did what? And what really happened? If you can do that, preprints shouldn’t be scary.

Citations Are Reblogs

Last week we had a seminar from Nadav Drukker, a physicist who commemorates his papers with pottery.

At the speaker dinner we got to chatting about physics outreach, and one of my colleagues told an amusing story. He was explaining the idea of citations to someone at a party, and the other person latched on to the idea of citations as “likes” on Facebook. She was then shocked when he told her that a typical paper of his got around fifty citations.

“Only fifty likes???”

Ok, clearly the metaphor of citations as “likes” is more than a little silly. Liking a post is easy and quick, while citing a paper requires a full paper of your own. Obviously, citations are not “likes”.

No, citations are reblogs.

Citations are someone engaging with your paper, your “post” in this metaphor, and building on it, making it part of their own work. That’s much closer to a “reblog” (or in Facebook terms a “share”) than a “like”. More specifically, it’s a “reblog-with-commentary”, taking someone’s content and adding your own, in a way that acknowledges where the original idea came from. And while fifty “likes” on a post may seem low, fifty reblogs with commentary (not just “LOL SMH”, but actual discussion) is pretty reasonable.

The average person doesn’t know much about academia, but there are a lot of academia-like communities out there. People who’ve never written a paper might know what it’s like to use characters from someone else’s fanfiction, or sew a quilt based on a friend’s pattern. Small communities of creative people aren’t so different from each other, whether they’re writers or gamers or scientists. Each group has traditions of building on each other’s work, acknowledging where your inspiration came from, and using that to build standing in the community. Citations happen to be ours.

Seeing the Wires in Science Communication

Recently, I’ve been going to Science and Cocktails, a series of popular science lectures in Freetown Christiania. The atmosphere is great fun, but I’ve been finding the lectures themselves a bit underwhelming. It’s mostly my fault, though.

There’s a problem, common to all types of performing artists. Once you know the tricks that make a performance work, you can’t un-see them. Do enough theater and you can’t help but notice how an actor interprets their lines, or how they handle Shakespeare’s dirty jokes. Play an instrument, and you think about how they made that sound, or when they pause for breath. Work on action movies, and you start to see the wires.

This has been happening to me with science communication. Going to the Science and Cocktails lectures, I keep seeing the tricks the speaker used to make the presentation easier. I notice the slides that were probably copied from the speaker’s colloquiums, sometimes without adapting them to the new audience. I notice when an example doesn’t really fit the narrative, but is wedged in there anyway because the speaker wants to talk about it. I notice filler, like a recent speaker who spent several slides on the history of electron microscopes, starting with Hooke!

I’m not claiming I’m a better speaker than these people. The truth is, I notice these tricks because I’ve been guilty of them myself! I reuse slides, I insert pet topics, I’ve had talks that were too short until I added a long historical section.

And overall, it doesn’t seem to matter. The audience doesn’t notice our little shortcuts, just like they didn’t notice the wires in old kung-fu movies. They’re there for the magic of the performance, they want to be swept away by a good story.

I need to reconnect with that. It’s still important to avoid using blatant tricks, to cover up the wires and make things that much more seamless. But in the end, what matters is whether the audience learned something, and whether they had a good time. I need to watch not just the tricks, but the magic: what makes the audience’s eyes light up, what makes them laugh, what makes them think. I need to stop griping about the wires, and start seeing the action.

We Didn’t Deserve Hawking

I don’t usually do obituaries. I didn’t do one when Joseph Polchinski died, though his textbook is sitting within arm’s reach of me right now. I never collaborated with Polchinski, I never met him, and others were much better at telling his story.

I never met Stephen Hawking, either. When I was at Perimeter, I’d often get asked if I had. Visitors would see his name on the Perimeter website, and I’d have to disappoint them by explaining that he hadn’t visited the institute in quite some time. His health, while exceptional for a septuagenarian with ALS, wasn’t up to the travel.

Was his work especially relevant to mine? Only because of its relevance to everyone who does gravitational physics. The universality of singularities in general relativity, black hole thermodynamics and Hawking radiation, these sharpened the questions around quantum gravity. Without his work, string theory wouldn’t have tried to answer the questions Hawking posed, and it wouldn’t have become the field it is today.

Hawking was unique, though, not necessarily because of his work, but because of his recognizability. Those visitors to Perimeter were a cross-section of the Canadian public. Some of them didn’t know the name of the speaker for the lecture they came to see. Some, arriving after reading Lee Smolin’s book, could only refer to him as “that older fellow who thinks about quantum gravity”. But Hawking? They knew Hawking. Without exception, they knew Hawking.

Who was the last physicist the public knew, like that? Feynman, at the height of his popularity, might have been close. You’d have to go back to Einstein to find someone who was really solidly known like that, who you could mention in homes across the world and expect recognition. And who else has that kind of status? Bohr might have it in Denmark. Go further back, and you’ll find people know Newton, they know Galileo.

Einstein changed our picture of space and time irrevocably. Newton invented physics as we know it. Galileo and Copernicus pointed up to the sky and shouted that the Earth moves!

Hawking asked questions. He told us what did and didn’t make sense, he showed what we had to take into account. He laid down the rules of engagement, and the rest of quantum gravity came and asked alongside him.

We live in an age of questions now. We’re starting to glimpse the answers, we have candidates and frameworks and tools, and if we’re feeling very optimistic we might already be sitting on a theory of everything. But we haven’t turned that corner yet, from asking questions to changing the world.

These ages don’t usually get a household name. Normally, you need an Einstein, a Newton, a Galileo, you need to shake the foundations of the world.

Somehow, Hawking gave us one anyway. Somehow, in our age of questions, we put a face in everyone’s mind, a figure huddled in a wheelchair with a snarky, computer-generated voice. Somehow Hawking reached out and reminded the world that there were people out there asking, that there was a big beautiful puzzle that our field was trying to solve.

Deep down, I’m not sure we deserved that. I hope we deserve it soon.

Shades of Translation

I was playing Codenames with some friends, a game about giving one-word clues to multi-word answers. I wanted to hint at “undertaker” and “march”, so I figured I’d do “funeral march”. Since that’s two words, I needed one word that meant something similar. I went with “dirge”, then immediately regretted it as my teammates spent the better part of two minutes trying to figure out what it meant. In the end they went with “slug”.

[Image of a slug, captioned: “A dirge in its natural habitat.”]

If I had gone for “requiem” instead, we would have won. Heck, if I had just used “funeral”, we would have had a fighting chance. I had assumed my team knew the same words I did: they were also native English speakers, also nerds, etc. But the words they knew were still a shade different from the words I knew, and that made the difference.

When communicating science, you have to adapt to your audience. Knowing this, it’s still tempting to go for a shortcut. You list a few possible audiences, like “physicists”, or “children”, and then just make a standard explanation for each. This works pretty well…until it doesn’t, and your audience assumes a “dirge” is a type of slug.

In reality, each audience is different. Rather than just memorizing “translations” for a few specific groups, you need to pay attention to the shades of understanding in between.

On Wednesdays, Perimeter holds an Interdisciplinary Lunch. They cover a table with brown paper (for writing on) and impose one rule: you can’t sit next to someone in the same field.

This week, I sat next to an older fellow I hadn’t met before. He asked me what I did, and I gave my “standard physicist explanation”. This tends to be pretty heavy on jargon: while I don’t go too deep into my sub-field’s lingo, I don’t want to risk “talking down” to a physicist I don’t know. The end result is that I have to notice those “shades” of understanding as I go, hoping to get enough questions to change course if I need to.

Then I asked him what he did, and he patiently walked me through it. His explanation was more gradual: less worried about talking down to me, he was able to build up the background around his work, and the history of who worked on what. It was a bit humbling, to see the sort of honed explanation a person can build after telling variations on the same story for years.

In the end, we both had to adapt to what the other understood, to change course when our story wasn’t getting through. Neither of us could stick with the “standard physicist explanation” all the way to the end. Both of us had to shift from one shade to another, improving our translation.