Tag Archives: science communication

A Newtonmas Present of Internet Content

I’m lazy this Newtonmas, so instead of writing a post of my own I’m going to recommend a few other people who do excellent work.

Quantum Frontiers is a shared blog updated by researchers connected to Caltech’s Institute for Quantum Information and Matter. While the whole blog is good, I’m going to be more specific and recommend the posts by Nicole Yunger Halpern. Nicole is a genuinely great writer, and her posts are full of vivid imagery and fun analogies. If she’s not better known, it’s only because she lacks the attention-grabbing habit of getting into stupid arguments with other bloggers. Definitely worth a follow.

Recommending Slate Star Codex feels a bit strange, because it seems like everyone I’ve met who would enjoy the blog already reads it. It’s not a physics blog by any stretch, so it’s also an unusual recommendation to give here. Slate Star Codex writes about a wide variety of topics, and while the author isn’t an expert in most of them he does a lot more research than you or I would. If you’re interested in up-to-date meta-analyses on psychology, social science, and policy, pored over by someone with scrupulous intellectual honesty and an inexplicably large amount of time to indulge it, then Slate Star Codex is the blog for you.

I mentioned Piled Higher and Deeper a few weeks back, when I reviewed the author’s popular science book We Have No Idea. Piled Higher and Deeper is a webcomic about life in grad school. Humor is all about exaggeration, and it’s true that Piled Higher and Deeper exaggerates just how miserable and dysfunctional grad school can be…but not by as much as you’d think. I recommend that anyone considering grad school read Piled Higher and Deeper, and take it seriously. Grad school can really be like that, and if you don’t think you can deal with spending five or six years in the world of that comic you should take that into account.

This Week, at Scientific American

I’ve written an article for Scientific American! It went up online this week; the print version goes out on the 25th. The online version is titled “Loopy Particle Math” and the print one “The Particle Code”, but they’re the same article.

For those who don’t subscribe to Scientific American, sorry about the paywall!

“The Particle Code” covers material that will be familiar to regulars on this blog. I introduce Feynman diagrams, and talk about the “amplitudeologists” who try to find ways around them. I focus on my corner of the amplitudes field: how the work of Goncharov, Spradlin, Vergu, and Volovich introduced us to “symbology”, a set of tricks for taking apart more complicated integrals (or “periods”) into simple logarithmic building blocks. I talk about how my collaborators and I use symbology, assembling these building blocks to compute amplitudes that would have been impossible with other techniques. Finally, I talk about the frontier of the field, the still-mysterious “elliptic polylogarithms” that are becoming increasingly well-understood.
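If you’re curious what “symbology” actually looks like, here’s a minimal example, standard in the amplitudes literature rather than taken from the article itself: the symbol map strips a polylogarithm down to a tensor of its logarithmic building blocks. For the dilogarithm,

```latex
% A standard example from the amplitudes literature, not from the
% Scientific American article: the symbol map sends the dilogarithm
% to a two-slot tensor of logarithmic building blocks, one slot per d-log.
\[
  \mathrm{Li}_2(x) = -\int_0^x \frac{\log(1-t)}{t}\,dt,
  \qquad
  \mathcal{S}\left(\mathrm{Li}_2(x)\right) = -(1-x)\otimes x .
\]
```

Manipulations that are painful at the level of integrals, like spotting identities between different polylogarithms, become simple bookkeeping at the level of these tensors, which is what makes the trick so useful.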

(I don’t talk about the even more mysterious “Calabi-Yau polylogarithms”…another time for those!)

Working with Scientific American was a fun experience. I got to see how the professionals do things. They got me to clarify and explain, pointing out terms I needed to define and places I should pause to summarize. They took my rough gel-pen drawings and turned them into polished graphics. While I’m still a little miffed about them removing all the contractions, overall I learned a lot, and I think they did a great job of bringing the article to the printed page.

Book Review: We Have No Idea

I have no idea how I’m going to review this book.

Ok fine, I have some idea.

Jorge Cham writes Piled Higher and Deeper, a webcomic with possibly the most accurate depiction of grad school available. Daniel Whiteson is a professor at the University of California, Irvine, and a member of the ATLAS collaboration (one of the two big groups that make measurements at the Large Hadron Collider). Together, they’ve written a popular science book covering everything we don’t know about fundamental physics.

Writing a book about what we don’t know is an unusual choice, and there was a real risk it would end up as just a superficial gimmick. The pie chart on the cover presents the most famous “things physicists don’t know”, dark matter and dark energy. If they had just stuck to those, this would have been a pretty ordinary popular physics book.

Refreshingly, they don’t do that. After blazing through dark matter and dark energy in the first three chapters, the rest of the book focuses on a variety of other scientific mysteries.

The book contains a mix of problems that get serious research attention (matter-antimatter asymmetry, high-energy cosmic rays) and more blue-sky “what if” questions (does matter have to be made out of particles?). As a theorist, I’m not sure that all of these questions are actually mysterious (we do have some explanation of the weird “1/3” charges of quarks, and I’d like to think we understand why mass includes binding energy), but even in these cases what we really know is that they follow from “sensible assumptions”, and one could just as easily ask “what if” about those assumptions instead. Overall, these “what if” questions make the book unique, and it would be a much weaker book without them.

“We Have No Idea” is strongest when the authors actually have some idea, i.e. when Whiteson is discussing experimental particle physics. It gets weaker on other topics, where the authors seem to rely more on others’ popular treatments (their discussion of “pixels of space-time” motivated me to write this post). Still, they at least seem to have asked the right people, and their accounts are on the more accurate end of typical pop science. (Closer to Quanta than IFLScience.)

The book’s humor really ties it together, often in surprisingly subtle ways. Each chapter has its own running joke, initially a throwaway line, that grows into a metaphor for everything the chapter discusses. It’s a great way to help the audience visualize without introducing too many new concepts at once. If there’s one thing cartoonists can teach science communicators, it’s the value of repetition.

I liked “We Have No Idea”. It could have been more daring, or more thorough, but it was still charming and honest and fun. If you’re looking for a Christmas present to explain physics to your relatives, you won’t go wrong with this book.

Pan Narrans Scientificus

As scientists, we want to describe the world as objectively as possible. We try to focus on what we can establish conclusively, to leave out excessive speculation and stick to cold, hard facts.

Then we have to write application letters.

Stick to the raw, unembellished facts, and an application letter would just be a list: these papers in these journals, these talks and awards. Though we may sometimes wish applications worked that way, we don’t live in that kind of world. To apply for a job or a grant, we can’t just stick to the most easily measured facts. We have to tell a story.

The author Terry Pratchett called humans Pan Narrans, the Storytelling Ape. Stories aren’t just for fun, they’re how we see the world, how we organize our perceptions and actions. Without a story, the world doesn’t make sense. And that applies even to scientists.

Applications work best when they tell a story: how did you get here, and where are you going? Scientific papers, similarly, require some sort of narrative: what did you do, and why did you do it? When teaching or writing about science, we almost never just present the facts. We try to fit them into a story, one that presents the facts but also makes sense, in that deliciously human way. A story, more than mere facts, lets us project into the future, anticipating what you’ll do with that grant money or how others will take your research in new directions.

It’s important to remember, though, that stories aren’t actually facts. You can’t get too attached to one story, you have to be willing to shift as new facts come in. Those facts can be scientific measurements, but they can also be steps in your career. You aren’t going to tell the same story when applying to grad school as when you’re trying for tenure, and that’s not just because you’ll have more to tell. The facts of your life will be organized in new ways, rearranging in importance as the story shifts.

Keep your stories in mind as you write or do science. Think about your narrative, the story you’re using to understand the world. Think about what it predicts, how the next step in the story should go. And be ready to start a new story when you need to.

Underdetermination of Theory by Metaphor

Sometimes I explain science in unconventional ways. I’ll talk about quantum mechanics without ever using the word “measurement”, or write the action of the Standard Model in legos.

Whenever I do this, someone asks me why. Why use a weird, unfamiliar explanation? Why not just stick to the tried and true, metaphors that have been tested and honed in generations of popular science books?

It’s not that I have a problem with the popular explanations, most of the time. It’s that, even when the popular explanation does a fine job, there can be good reason to invent a new metaphor. To demonstrate my point, here’s a new metaphor to explain why:

In science, we sometimes talk about underdetermination of a theory by the data. We want to find a theory whose math matches the experimental results, but sometimes the experiments just don’t tell us enough. If multiple theories match the data, we say that the theory is underdetermined, and we go looking for more data to resolve the problem.
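To make this concrete, here’s a toy sketch (my own illustration, with invented numbers, not anything from an actual experiment): two “theories” that agree on every measurement we have, so the data alone can’t tell them apart.

```python
# A toy illustration of underdetermination: two different "theories"
# that agree perfectly on all the data we happen to have.
import numpy as np

# The "experimental results": three measurements (invented for illustration).
x_data = np.array([0.0, 1.0, 2.0])
y_data = np.array([0.0, 1.0, 2.0])

def theory_one(x):
    """Theory 1: y = x."""
    return x

def theory_two(x):
    """Theory 2: y = x + x(x - 1)(x - 2), built to agree at x = 0, 1, 2."""
    return x + x * (x - 1) * (x - 2)

# Both theories match every data point exactly...
assert np.allclose(theory_one(x_data), y_data)
assert np.allclose(theory_two(x_data), y_data)

# ...but they disagree where we have no data, so only a new,
# independent measurement can break the tie.
print(theory_one(0.5), theory_two(0.5))  # 0.5 versus 0.875
```

Both theories pass every test the data provides; only a new, independent measurement, say at x = 0.5, could tell you which one to keep.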

What if you’re not a scientist, though? Often, that means you hear about theories secondhand, from some science popularizer. You’re not hearing the full math of the theory, you’re not seeing the data. You’re hearing metaphors and putting together your own picture of the theory. Metaphors are your data, in some sense. And just as scientists can find their theories underdetermined by the experimental data, you can find them underdetermined by the metaphors.

This can happen if a metaphor is consistent with two very different interpretations. If you hear that time runs faster in lower gravity, maybe you picture space and time as curved…or maybe you think low gravity makes you skip ahead, so you end up in the “wrong timeline”. Even if the popularizer you heard it from was perfectly careful, you base your understanding of the theory on the metaphor, and you can end up with the wrong understanding.
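For reference, the math that metaphor is standing in for is gravitational time dilation, a standard general-relativity formula rather than anything this post derives:

```latex
% Gravitational time dilation outside a mass M (Schwarzschild metric):
% a clock at radius r ticks slower, relative to a far-away clock,
% by the factor
\[
  \frac{d\tau}{dt} = \sqrt{1 - \frac{2GM}{r c^{2}}}
\]
% Smaller r (deeper in the gravitational well) means slower ticking;
% nothing skips ahead into a different timeline.
```

The formula picks out the curved-spacetime picture and rules out the “wrong timeline” one, which is exactly the kind of disambiguating data a non-expert never gets to see.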

In science, the only way out of underdetermination of a theory is new, independent data. In science popularization, it’s new, independent metaphors. New metaphors shake you out of your comfort zone. If you misunderstood the old metaphor, now you’ll try to fit that misunderstanding with the new metaphor too. Often, that won’t work: different metaphors lead to different misunderstandings. With enough different metaphors, your picture of the theory won’t be underdetermined anymore: there will be only one picture, one understanding, that’s consistent with every metaphor.

That’s why I experiment with metaphors, why I try new, weird explanations. I want to wake you up, to make sure you aren’t sticking to the wrong understanding. I want to give you more data to determine your theory.

Journalists Need to Adapt to Preprints, Not Ignore Them

Nature has an article making the rounds this week, decrying the dangers of preprints.

On the surface, this is a bit like an article by foxes decrying the dangers of henhouses. There’s a pretty big conflict of interest when a journal like Nature, which makes huge amounts of money off of research that scientists would be happy to publish for free, gets snippy about scientists sharing their work elsewhere. I was expecting an article about how “important” the peer review process is, how we can’t just “let anyone” publish, and the like.

Instead, I was pleasantly surprised. The article is about a real challenge, the weakening of journalistic embargoes. While this is still a problem I think journalists can think their way around, it’s a bit subtler than the usual argument.

For the record, peer review is usually presented as much more important than it actually is. When a scientific article gets submitted to a journal, it gets sent to two or three experts in the field for comment. In the best cases, these experts read the paper carefully and send criticism back. They don’t replicate the experiments, they don’t even (except for a few heroic souls) reproduce the calculations. That kind of careful reading is important, but it’s hardly unique: it’s something scientists do on their own when they want to build off of someone else’s paper, and it’s what good journalists get when they send a paper to experts for comments before writing an article. If peer review in a journal is important, it’s to ensure that this careful reading happens at least once, a sort of minimal evidence that the paper is good enough to appear on a scientist’s CV.

The Nature article points out that peer review serves another purpose, specifically one of delay. While a journal is preparing to publish an article, it can send the article out to journalists, after making them sign an agreement (an embargo) that they won’t tell the public until the journal publishes. This gives the journalists a bit of lead time, so the more responsible ones can research and fact-check before publishing.

Open-access preprints cut out the lead time. If the paper just appears online with no warning and no embargoes, journalists can write about it immediately. The unethical journalists can skip fact-checking and publish first, and the ethical ones have to follow soon after, or risk publishing “old news”. Nobody gets the time to properly vet, or understand, a new paper.

There’s a simple solution I’ve seen from a few folks on Twitter: “Don’t be an unethical journalist!” That doesn’t actually solve the problem though. The question is, if you’re an ethical journalist, but other people are unethical journalists, what do you do?

Apparently, what some ethical journalists do is to carry on as if preprints didn’t exist. The Nature article describes journalists who, after a preprint has been covered extensively by others, wait until a journal publishes it and then cover it as if nothing had happened. The article frames this as virtuous, but doomed: journalists sticking to their ethics even if it means publishing “old news”.

To be 100% clear here, this is not virtuous. If you present a paper’s publication in a journal as news, when it was already released as a preprint, you are actively misleading the public. I can’t count the number of times I’ve gotten messages from readers, confused because they saw a scientific result covered again months later and thought it was new. It leads to a sort of mental “double-counting”, where the public assumes that the scientific result was found twice, and therefore that it’s more solid. Unless the publication itself is unexpected (something that wasn’t expected to pass peer review, or something controversial like Mochizuki’s proof of the ABC conjecture), mere publication in a journal of an already-public result is not news.

What science journalists need to do here is to step back, and think about how their colleagues cover stories. Current events these days don’t have embargoes; they aren’t fed through carefully managed press releases. There’s a flurry of initial coverage, and it gets things wrong and misses details and misleads people, because science isn’t the only thing that’s complicated: real life is complicated too. Journalists have adapted to this schedule, mostly, by specializing. Some journalists and news outlets cover breaking news as it happens, others cover it later with more in-depth analysis. Crucially, the latter journalists don’t present the topic as new. They write explicitly in the light of previous news, as a response to existing discussion. That way, the public isn’t misled, and their existing misunderstandings can be corrected.

The Nature article brings up public health, and other topics where misunderstandings can do lasting damage, as areas where embargoes are useful. While I agree, I would hope many of these areas would figure out embargoes on their own. My field certainly does: the big results of scientific collaborations aren’t just put online as preprints, they’re released only after the collaboration sets up its own journalistic embargoes, and prepares its own press releases. In a world of preprints, this sort of practice needs to happen for important controversial public health and environmental results as well. Unethical scientists might still release too fast, to keep journalists from fact-checking, but they could do that anyway, without preprints. You don’t need a preprint to call a journalist on the phone and claim you cured cancer.

As open-access preprints become the norm, journalists will have to adapt. I’m confident they will be able to, but only if they stop treating science journalism as unique, and start treating it as news. Science journalism isn’t teaching, you’re not just passing down facts someone else has vetted. You’re asking the same questions as any other journalist: who did what? And what really happened? If you can do that, preprints shouldn’t be scary.

Citations Are Reblogs

Last week we had a seminar from Nadav Drukker, a physicist who commemorates his papers with pottery.

At the speaker dinner we got to chatting about physics outreach, and one of my colleagues told an amusing story. He was explaining the idea of citations to someone at a party, and the other person latched on to the idea of citations as “likes” on Facebook. She was then shocked when he told her that a typical paper of his got around fifty citations.

“Only fifty likes???”

Ok, clearly the metaphor of citations as “likes” is more than a little silly. Liking a post is easy and quick, while citing a paper requires a full paper of your own. Obviously, citations are not “likes”.

No, citations are reblogs.

Citations are someone engaging with your paper, your “post” in this metaphor, and building on it, making it part of their own work. That’s much closer to a “reblog” (or in Facebook terms a “share”) than a “like”. More specifically, it’s a “reblog-with-commentary”, taking someone’s content and adding your own, in a way that acknowledges where the original idea came from. And while fifty “likes” on a post may seem low, fifty reblogs with commentary (not just “LOL SMH”, but actual discussion) is pretty reasonable.

The average person doesn’t know much about academia, but there are a lot of academia-like communities out there. People who’ve never written a paper might know what it’s like to use characters from someone else’s fanfiction, or sew a quilt based on a friend’s pattern. Small communities of creative people aren’t so different from each other, whether they’re writers or gamers or scientists. Each group has traditions of building on each other’s work, acknowledging where your inspiration came from, and using that to build standing in the community. Citations happen to be ours.