An Elliptical Workout

I study scattering amplitudes, the quantities that encode the probabilities for particles to scatter off each other.

In particular, I’ve studied them using polylogarithmic functions. Polylogarithmic functions can be taken apart into “logs”, which obey identities much like logarithms do. They’re convenient and nice, and for my favorite theory of N=4 super Yang-Mills they’re almost all you need.

Well, until ten particles get involved, anyway.

That’s when you start needing elliptic integrals, and elliptic polylogarithms. These functions replace one of the “logs” of a polylogarithm with an integral over an elliptic curve.
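Very roughly (this is a cartoon, not the precise definition), each “log” comes from an integration of the form dt/(t−a), and in the elliptic case one of those integrations instead runs over an elliptic curve:

\int \frac{dt}{t-a} \;\longrightarrow\; \int \frac{dx}{y}, \qquad y^2 = (x-a_1)(x-a_2)(x-a_3)(x-a_4)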

And with Jacob Bourjaily, Andrew McLeod, Marcus Spradlin, and Matthias Wilhelm, I’ve now computed one.

[Image: the ten-point elliptic double-box integral]

This one, to be specific

Our paper, The Elliptic Double-Box Integral, went up on the arXiv last night.

The last few weeks have been a frenzy of work, finishing up our calculations and writing the paper. It’s the fastest I’ve ever gotten a paper out, which has been a unique experience.

Computing this integral required new, as-yet-unpublished tricks by Jacob Bourjaily, as well as some rather powerful software and Marcus Spradlin’s extensive expertise in simplifying polylogarithms. In the end, we got the integral into a “canonical” form, one that other papers had proposed as the right way to represent it, with the elliptic curve in a form standardized by Weierstrass.
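For the curious: the Weierstrass form is the standard normalization of an elliptic curve, trading whatever cubic or quartic you started with for

y^2 = 4x^3 - g_2\,x - g_3

with all the information about the curve packed into the two invariants g_2 and g_3.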

One of the advantages of fixing a “canonical” form is that it should make identities obvious. If two integrals are actually the same, then writing them according to the same canonical rules should make that clear. This is one of the nice things about polylogarithms, where these identities are really just identities between logs and the right form is comparatively easy to find.
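For polylogarithms those identities bottom out in familiar log facts. Euler’s reflection formula for the dilogarithm is the classic example, relating two functions that look quite different at first glance:

\mathrm{Li}_2(z) = -\int_0^z \frac{\log(1-t)}{t}\,dt, \qquad \mathrm{Li}_2(z) + \mathrm{Li}_2(1-z) = \frac{\pi^2}{6} - \log(z)\log(1-z)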

Surprisingly, the form we found doesn’t do this. We can write down an integral in our “canonical” form that looks different, but really is the same as our original integral. The form other papers had suggested, while handy, can’t be the final canonical form.

What the final form should be, we don’t yet know. We have some ideas, but we’re also curious what other groups are thinking. We’re relatively new to elliptic integrals, and there are other groups with much more experience with them, some with papers coming out soon. As far as we know they’re calculating slightly different integrals, ones more relevant for the real world than for N=4 super Yang-Mills. It’s going to be interesting seeing what they come up with. So if you want to follow this topic, don’t just watch for our names on the arXiv: look for Claude Duhr and Falko Dulat, Luise Adams and Stefan Weinzierl. In the elliptic world, big things are coming.


Interesting Work at the IAS

I’m visiting the Institute for Advanced Study this week, on the outskirts of Princeton’s impressively Gothic campus.

[Image]

A typical Princeton reading room

The IAS was designed as a place for researchers to work with minimal distraction, and we’re taking full advantage of it. (Though I wouldn’t mind a few more basic distractions…dinner closer than thirty minutes away, for example.)

The amplitudes community seems to be busily working as well, with several interesting papers going up on the arXiv this week, four with some connection to the IAS.

Carlos Mafra and Oliver Schlotterer’s paper about one-loop string amplitudes mentions visiting the IAS in the acknowledgements. Mafra and Schlotterer have found a “double-copy” structure in the one-loop open string. Loosely, “double-copy” refers to situations in which one theory can be described as two theories “multiplied together”, like how “gravity is Yang-Mills squared”. Normally, open strings would be the “Yang-Mills” in that equation, with their “squares”, closed strings, giving gravity. Here though, open strings themselves are described as a “product” of two different pieces: a Yang-Mills part, and one that takes care of the “stringiness”. You may remember me talking about something like this and calling it “Z theory”. That was at “tree level”, for the simplest string diagrams. This paper updates the technology to one loop, where the part taking care of the “stringiness” has a more sophisticated mathematical structure. It’s pretty nontrivial for this kind of structure to survive at one loop, and it suggests something deeper is going on.
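To give a flavor of what “multiplied together” means: at tree level the idea is captured by the Kawai-Lewellen-Tye relations, which for four particles express a graviton amplitude as a product of two gauge-theory amplitudes, schematically (signs and overall factors vary with conventions)

M_4(1,2,3,4) \sim s_{12}\, A_4(1,2,3,4)\, \tilde{A}_4(1,2,4,3), \qquad s_{12} = (p_1+p_2)^2

where A_4 and \tilde{A}_4 are two (possibly different) gauge-theory amplitudes.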

Yvonne Geyer (IAS) and Ricardo Monteiro (non-IAS) work on the ambitwistor string, a string-theory-like setup for calculating particle physics amplitudes. Their paper shows how this setup can be used for one-loop amplitudes in a wide range of theories, in particular theories without supersymmetry. This makes some patterns that were observed before quite a bit clearer, and leads to a fairly concise way of writing the amplitudes.

Nima-watchers will be excited about a paper by Nima Arkani-Hamed and his student Yuntao Bai (IAS) and Song He and his student Gongwang Yan (non-IAS). This paper is one that has been promised for quite some time; Nima talked about it at Amplitudes last summer. Nima is famous for the amplituhedron, an abstract geometrical object that encodes amplitudes in one specific theory, N=4 super Yang-Mills. Song He is known for the Cachazo-He-Yuan (or CHY) string, a string-theory-like picture of particle scattering in a very general class of theories that is closely related to the ambitwistor string. Collaborating, they’ve managed to link the two pictures together, and in doing so take the first step toward generalizing the amplituhedron to other theories. In order to do this they had to think about the amplituhedron not in terms of some abstract space, but in terms of the actual momenta of the particles they’re colliding. This is important because the amplituhedron’s abstract space is very specific to N=4 super Yang-Mills, with supersymmetry in some sense built in, while momenta can be written down for any particles. Once they had mastered this trick, they could encode other things in this space of momenta: colors of quarks, for example. Using this, they’ve managed to find amplituhedron-like structure in the CHY string, and in a few particular theories. They still can’t do everything the amplituhedron can: in particular, the amplituhedron can go to any number of loops, while the structures they’re finding are tree-level. But the core trick they’re using looks very powerful. I’ve been hearing hints about the trick from Nima for so long that I had forgotten they hadn’t published it yet; now that they have, I’m excited to see what the amplitudes community manages to do with it.

Finally, last night a paper by Igor Prlina, Marcus Spradlin, James Stankowicz, Stefan Stanojevic, and Anastasia Volovich went up while three of the authors were visiting the IAS. The paper deals with Landau equations, a method to classify and predict the singularities of amplitudes. By combining this method with the amplituhedron they’ve already made substantial progress, and this paper serves as a fairly thorough proof of principle, using the method to comprehensively catalog the singularities of one-loop amplitudes. In this case I’ve been assured that they have papers at higher loops in the works, so it will be interesting to see how powerful this method ends up being.

Post-Scarcity Academia

Anyone will tell you that academia is broken.

The why varies, of course: some blame publication pressure, or greedy journals. Some think it’s the fault of grant committees, or tenure committees, or grad admission committees. Some argue we’re driving away the wrong people, others that we’re letting in the wrong people. Some place the fault with the media, or administrators, or the government, or the researchers themselves. Some believe the problem is just a small group of upstarts, others want to tear the whole system down.

If there’s one common theme to every “academia is broken” take, it’s limited resources. There are only so many people who can make a living doing research. Academia has to pick and choose who these people are and what they get to do, and anyone who thinks the system is broken thinks those choices could be made better.

As I was writing my version of the take, I started wondering. What if we didn’t have to choose? What would academia look like in a world without limited resources, where no-one needed to work for a living? Can we imagine what that world might look like?

Then I realized I didn’t need to imagine it. I’d already seen it.

[Image: a screenshot from a Dungeons and Dragons optimization forum]

And it was glorious

Let me tell you a bit about Dungeons and Dragons.

Dungeons and Dragons doesn’t have “pro gamers”; nobody makes money playing it. It isn’t even really the kind of game you can win or lose. It’s collaborative storytelling, backed up with a pile of dice and rulebooks. Nonetheless, Dungeons and Dragons has an active community dedicated to thinking about the game. They call themselves “optimizers”, and they focus on figuring out the best ways the rules allow to do what they want to do.

Sometimes, the goal is practical: “what’s the best archer I can make?” “how can I make a character that has something useful to do no matter what?” Sometimes it’s more far-fetched: “can I deal infinite damage?” “how can I make a god at level one?” Optimizing for these goals requires seeking out obscure rules, debating loopholes and the meaning of the text, and calculating probabilities.
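For a taste of the “calculating probabilities” part, here’s a minimal Python sketch of the expected-damage arithmetic optimizers run constantly. The bonuses and armor classes are invented for illustration, not taken from any particular build:

def hit_probability(hit_bonus, target_ac):
    # A natural 20 always hits, a natural 1 always misses; otherwise
    # you hit when roll + bonus meets or beats the armor class.
    hits = sum(1 for roll in range(1, 21)
               if roll == 20 or (roll != 1 and roll + hit_bonus >= target_ac))
    return hits / 20

def expected_damage(hit_bonus, target_ac, dice, sides, flat_bonus):
    average_roll = dice * (sides + 1) / 2  # average of N dice with S sides
    return hit_probability(hit_bonus, target_ac) * (average_roll + flat_bonus)

# Two hypothetical archer builds attacking armor class 18:
print(expected_damage(9, 18, dice=1, sides=8, flat_bonus=4))  # about 5.1
print(expected_damage(7, 18, dice=2, sides=8, flat_bonus=2))  # about 5.5

The real arguments involve far more corner cases than this, which is exactly why the handbooks exist.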

I like to joke that Dungeons and Dragons was my first academic community, and that isn’t too far from the truth. These are people obsessed with understanding a complex system, who “publish” their research in forum posts, who collaborate and compete and care about finding the truth. While these people did have day jobs, that wasn’t a real limit. Dungeons and Dragons, I am forced to admit, is easier than theoretical physics. Even with day jobs or school, most of the D&D optimization community had plenty of time to do all the “research” they wanted. In a very real sense, they’re a glimpse at a post-scarcity academia.

There’s another parallel, one relevant to the current situation in theoretical physics. When I was most active in optimization, we played an edition of the game that was out of print. Normally there’s a sort of feedback between game designers and optimizers. As new expansions and errata are released, debates in the optimization community get resolved or re-ignited. With an out-of-print edition, though, that feedback isn’t available. The optimization community was left by itself, examining whatever evidence it already had. This feels a lot like the current situation in physics, when so many experiments are just confirming the Standard Model. Without much feedback, the community has to evolve on its own.

 

So what did post-scarcity academia look like?

First, the good: this was a community highly invested in education. The best way to gain status wasn’t to build the strongest character, or discover a new trick. Instead, the most respected members of the community were the handbook writers, people who wrote long, clearly written forum posts summarizing optimization knowledge for newer players. I’m still not at the point where I read physics textbooks for fun, but back when I was active I would absolutely read optimization handbooks for fun. For those who wanted to get involved, the learning curve was about as well-signposted as it could be.

It was a community that could display breathtaking creativity, as well as extreme diligence. Some optimization was off-the-cuff and easy, but a lot of it took real work or real insight, and it showed. People would write short stories about the characters they made, or spend weeks cataloging every book that mentioned a particular rule. Despite not having to do their “research” for a living, motivation was never in short supply.

All that said, I think people yearning for a post-scarcity academia would be disappointed. If you think people do derivative, unoriginal work just because of academic careers, then I regret to inform you that a lot of optimization was unoriginal. There were a lot of posts that were just remixes of old ideas, packaged into a “new” build. There were also plenty of repetitive, pointless arguments, to the point that we’d joke about “Monkday” and “Wizard Wednesday”.

There was also a lot of attention-seeking behavior. There’s no optimization media, no optimization jobs that look for famous candidates, but people still cared about being heard, and pitched their work accordingly. We’d get a lot of overblown posts: “A Fighter that can beat any Wizard!” (because he’s been transformed by a spell into an all-powerful shapeshifter), “A Sorcerer that can beat any Wizard!” (using houserules which change every time someone points out a flaw in the idea).

(Wizards, as you may be noticing, were kind of the String Theory of that community.)

 

Some problems in academia are caused by bad incentives, by the structure of academic careers. Some, though, are caused because academics are human beings. If we didn’t have to work for a living, academics would probably have different priorities, and we might work on a wider range of projects. But I suspect we’d still have good days and bad, that we’d still puff ourselves up for attention and make up dubious solutions to famous problems.

Of course, Dungeons and Dragons optimizers aren’t the only example of “post-scarcity academia”, or even a perfect example. They’ve got their own pressures, due to the structure of the community, that shape them in particular ways. I’d be interested to learn about other “amateur academics”, and how they handle things. My guess is that the groups whose work is closer to “real academia” (for example, the Society for Creative Anachronism) are more limited by their day jobs, but otherwise might be more informative. If there’s a “post-scarcity academia” you’re familiar with, mention it in the comments!

The Quantum Kids

I gave a pair of public talks at the Niels Bohr International Academy this week on “The Quest for Quantum Gravity” as part of their “News from the NBIA” lecture series. The content should be familiar to long-time readers of this blog: I talked about renormalization, and gravitons, and the whole story leading up to them.

(I wanted to title the talk “How I Learned to Stop Worrying and Love Quantum Gravity”, like my blog post, but was told Danes might not get the Dr. Strangelove reference.)

I also managed to work in some history, which made its way into the talk after Poul Damgaard, the director of the NBIA, told me I should ask the Niels Bohr Archive about Gamow’s Thought Experiment Device.

“What’s a Thought Experiment Device?”

[Image: one of Gamow’s Thought Experiment Devices]

This, apparently

If you’ve heard of George Gamow, you’ve probably heard of the Alpher-Bethe-Gamow paper, his work with grad student Ralph Alpher on the origin of atomic elements in the Big Bang, where he added Hans Bethe to the paper purely for an alpha-beta-gamma pun.

As I would learn, Gamow’s sense of humor was prominent quite early on. As a research fellow at the Niels Bohr Institute (essentially a postdoc), he played with Bohr’s kids, drew physics cartoons…and made Thought Experiment Devices. These devices were essentially toy experiments, apparatuses that couldn’t actually work but that symbolized some physical argument. The one I used in my talk, pictured above, commemorated Bohr’s triumph over one of Einstein’s objections to quantum theory.

Learning more about the history of the institute, I kept noticing the young researchers, the postdocs and grad students.

[Image]

Lev Landau, George Gamow, Edward Teller. The kids are Aage and Ernest Bohr. Picture from the Niels Bohr Archive.

We don’t usually think about historical physicists as grad students. The only exception I can think of is Feynman, with his stories about picking locks at the Manhattan project. But in some sense, Feynman was always a grad student.

This was different. This was Lev Landau, patriarch of Russian physics, crowning name in a dozen fields and author of a series of textbooks of legendary rigor…goofing off with Gamow. This was Edward Teller, father of the Hydrogen Bomb, skiing on the institute lawn.

These were the children of the quantum era. They came of age when the laws of physics were being rewritten, when everything was new. Starting there, they could do anything, from Gamow’s cosmology to Landau’s superconductivity, spinning off whole fields in the new reality.

On one level, I envy them. It’s possible they were the last generation to be on the ground floor of a change quite that vast, a shift that touched all of physics, the opportunity to each become gods of their own academic realms.

I’m glad to know about them too, though, to see them as rambunctious grad students. It’s all too easy to feel like there’s an unbridgeable gap between postdocs and professors, to worry that the only people who make it through seem to have always been professors at heart. Seeing Gamow and Landau and Teller as “quantum kids” dispels that: these are all-too-familiar grad students and postdocs, joking around in all-too-familiar ways, who somehow matured into some of the greatest physicists of their era.

Our Bargain

Sabine Hossenfelder has a blog post this week chastising particle physicists and cosmologists for following “upside-down Popper”, or assuming a theory is worth working on merely because it’s falsifiable. She describes her colleagues churning out one hypothesis after another, each tweaking an old idea just enough to make it falsifiable in the next experiment, without caring whether the hypothesis is actually likely to be true.

Sabine is much more of an expert in this area of physics (phenomenology) than I am, and I don’t presume to tell her she’s wrong about that community. But the problem she’s describing is part of something bigger, something that affects my part of physics as well.

There’s a core question we’d all like to answer: what should physicists work on? What criteria should guide us?

Falsifiability isn’t the whole story. The next obvious criterion is a sense of simplicity, of Occam’s Razor or mathematical elegance. Sabine has argued against the latter, which prompted a friend of mine to comment that between rejecting falsifiability and elegance, Sabine must want us to stop doing high-energy physics at all!

That’s more than a little unfair, though. I think Sabine has a reasonably clear criterion in mind. It’s the same criterion that most critics of the physics mainstream care about. It’s even the same criterion being used by the “other side”, the sort of people who criticize anything that’s not string/SUSY/inflation.

The criterion is quite a simple one: physics research should be productive. Anything we publish, anything we work on, should bring us closer to understanding the real world.

And before you object that this criterion is obvious, that it’s subjective, that it ignores the very real disagreements between the Sabines and the Luboses of the world…before any of that, please let me finish.

We can’t achieve this criterion. And we shouldn’t.

We can’t demand that all physics be productive without breaking a fundamental bargain, one we made when we accepted that science could be a career.

[Image: a portrait of Robert Hooke]

The Hunchback of Notre Science

It wasn’t always this way. Up until the nineteenth century, being a “scientist” was a hobby, not a job.

After Newton published his theory of gravity, he was famously accused by Robert Hooke of stealing the idea. There’s some controversy about this, but historians agree on a few points: that Hooke did write a letter to Newton suggesting a 1/r^2 force law, and that Hooke, unlike Newton, never really worked out the law’s full consequences.

Why not? In part, because Hooke, unlike Newton, had a job.

Hooke was arguably the first person for whom science was a full-time source of income. As curator of experiments for the Royal Society, it was his responsibility to set up demonstrations for each Royal Society meeting. Later, he also handled correspondence for the Royal Society Journal. These responsibilities took up much of his time, and as a result, even if he was capable of following up on the consequences of 1/r^2, he wouldn’t have had time to focus on it. That kind of calculation wasn’t what he was being paid for.

We’re better off than Hooke today. We still have our responsibilities, to journals and teaching and the like, at various stages of our careers. But in the centuries since Hooke expectations have changed, and real original research is no longer something we have to fit in our spare time. It’s now a central expectation of the job.

When scientific research became a career, we accepted a kind of bargain. On the positive side, you no longer have to be independently wealthy to contribute to science. More than that, the existence of professional scientists is the bedrock of technological civilization. With enough scientists around, we get modern medicine and the internet and space programs and the LHC, things that wouldn’t be possible in a world of rare wealthy geniuses.

We pay a price for that bargain, though. If science is a steady job, then it has to provide steady work. A scientist has to be able to go in, every day, and do science.

And the problem is, science doesn’t always work like that. There isn’t always something productive to work on. Even when there is, there isn’t always something productive for you to work on.

Sabine blames “upside-down Popper” on the current publish-or-perish environment in physics. If physics careers weren’t so cut-throat and the metrics they are judged by weren’t so flawed, then maybe people would have time to do slow, careful work on deeper topics rather than pumping out minimally falsifiable papers as fast as possible.

There’s a lot of truth to this, but I think at its core it’s a bit too optimistic. Each of us only has a certain amount of expertise, and sometimes that expertise just isn’t likely to be productive at the moment. Because science is a job, a person in that position can’t just go work at the Royal Mint like Newton did. (The modern-day equivalent would be working for Wall Street, but physicists rarely come back from that.) Instead, they keep doing what they know how to do, slowly branching out, until they’ve either learned something productive or their old topic becomes useful once more. You can think of it as a form of practice, where scientists keep their skills honed until they’re needed.

So if we slow down the rate of publication, if we create metrics for universities that let them hire based on the depth and importance of work and not just number of papers and citations, if we manage all of that then yes we will improve science a great deal. But Lisa Randall still won’t work on Haag’s theorem.

In the end, we’ll still have physicists working on topics that aren’t actually productive.

[Image]

A physicist lazing about unproductively under an apple tree

So do we have to pay physicists to work on whatever they want, no matter how ridiculous?

No, I’m not saying that. We can’t expect everyone to do productive work all the time, but we can absolutely establish standards to make the work more likely to be productive.

Strange as it may sound, I think our standards for this are already quite good, or at least better than many other fields.

First, there’s falsifiability itself, or specifically our attitude towards it.

Physics’s obsession with falsifiability has one important benefit: it means that when someone proposes a new model of dark matter or inflation that they tweaked to be just beyond the current experiments, they don’t claim to know it’s true. They just claim it hasn’t been falsified yet.

This is quite different from what happens in biology and the social sciences. There, if someone tweaks their study to be just within statistical significance, people typically assume the study demonstrated something real. Doctors base treatments on it, and politicians base policy on it. Upside-down Popper has its flaws, but at least it’s never going to kill anybody, or put anyone in prison.
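To see mechanically why that matters, here’s a toy simulation in Python, invented purely for illustration: run twenty variants of a study of an effect that isn’t there, and on average about one of them will clear p < 0.05 anyway.

import random

random.seed(0)

def fake_study(n=100):
    # Null effect: every outcome is a fair coin flip, so any
    # apparent "signal" is pure noise.
    return sum(random.random() < 0.5 for _ in range(n))

# For 100 fair flips: mean 50, standard deviation 5, so two-sided
# p < 0.05 corresponds roughly to |successes - 50| > 1.96 * 5 = 9.8.
false_positives = sum(abs(fake_study() - 50) > 9.8 for _ in range(20))
print(f"{false_positives} of 20 null studies look 'significant'")

Report only the variant that crossed the line, and you have a “real” effect that was never there.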

Admittedly, that’s a pretty low bar. Let’s try to set a higher one.

Moving past falsifiability, what about originality? We have very strong norms against publishing work that someone else has already done.

Ok, you (and probably Sabine) would object: isn’t that easy to get around? Aren’t all these Popper-flippers pretending to be original, but really just following the same recipe each time, modifying their theory just enough to stay falsifiable?

To some extent. But if they were really following a recipe, you could beat them easily: just write the recipe down.

Physics progresses best when we can generalize, when we skip from case-by-case to understanding whole swaths of cases at once. Over time, there have been plenty of cases in which people have done that, where a number of fiddly hand-made models have been summarized in one parameter space. Once that happens, the rule of originality kicks in: now, no-one can propose another fiddly model like that again. It’s already covered.

As long as the recipe really is just a recipe, you can do this. You can write up what these people are doing in computer code, release the code, and then that’s that, they have to do something else. The problem is, most of the time it’s not really a recipe. It’s close enough to one that they can rely on it, close enough to one that they can get paper after paper when they need to…but it still requires just enough human involvement, just enough genuine originality, to be worth a paper.
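Here’s a caricature of what “writing the recipe down” could look like, as a Python sketch. Every function, bound, and number in it is an invented placeholder, not a real model or experimental limit:

# Toy "recipe": enumerate hypothetical dark matter models that are
# not yet excluded but would be testable by the next experiment.
# All bounds and parameters below are made up for illustration.

def excluded_now(mass_gev, coupling):
    # Pretend the current experiment excludes couplings above
    # some mass-dependent bound.
    return coupling > 1e-3 * (mass_gev / 100)

def testable_next_run(mass_gev, coupling):
    # Pretend the next experiment will probe couplings a factor
    # of ten smaller.
    return coupling > 1e-4 * (mass_gev / 100)

viable_papers = [(m, g)
                 for m in (10, 100, 1000)            # masses in GeV
                 for g in (1e-5, 1e-4, 1e-3, 1e-2)   # couplings
                 if not excluded_now(m, g) and testable_next_run(m, g)]
print(viable_papers)  # each entry: falsifiable, but not yet falsified

Once a family of models can be enumerated like that, the originality norm applies to the whole family at once.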

The good news is that the range of “recipes” we can code up increases with time. Some spaces of theories we might never be able to describe in full generality (I’m glad there are people trying to do statistics on the string landscape, but good grief it looks quixotic). Some of the time though, we have a real chance of putting a neat little bow on a subject, labeled “no need to talk about this again”.

This emphasis on originality keeps the field moving. It means that despite our bargain, despite having to tolerate “practice” work as part of full-time physics jobs, we can still nudge people back towards productivity.

 

One final point: it’s possible you’re completely ok with the idea of physicists spending most of their time “practicing”, but just wish they wouldn’t make such a big deal about it. Maybe you can appreciate that “can I cook up a model where dark matter kills the dinosaurs” is an interesting intellectual exercise, but you don’t think it should be paraded in front of journalists as if it were actually solving a real problem.

In that case, I agree with you, at least up to a point. It is absolutely true that physics has a dysfunctional relationship with the media. We’re too used to describing whatever we’re working on as the most important thing in the universe, and journalists are convinced that’s the only way to get the public to pay attention. This is something we can and should make progress on. An increasing number of journalists are breaking from the trend and focusing not on covering the “next big thing”, but on telling stories about people. We should do all we can to promote those journalists, to spread their work over the hype, to encourage the kind of stories that treat “practice” as interesting puzzles pursued by interesting people, not the solution to the great mysteries of physics. I know that if I ever do anything newsworthy, there are some journalists I’d give the story to before any others.

At the same time, it’s important to understand that some of the dysfunction here isn’t unique to physics, or even to science. Deep down the reason nobody can admit that their physics is “practice” work is the same reason people at job interviews claim to love the company, the same reason college applicants have to tell stirring stories of hardship and couples spend tens of thousands on weddings. We live in a culture in which nothing can ever just be “ok”, in which admitting things are anything other than exceptional is akin to calling them worthless. It’s an arms-race of exaggeration, and it goes far beyond physics.

(I should note that this “culture” may not be as universal as I think it is. If so, it’s possible its presence in physics is due to you guys letting too many of us Americans into the field.)

 

We made a bargain when we turned science into a career. We bought modernity, but the price we pay is subsidizing some amount of unproductive “practice” work. We can negotiate the terms of our bargain, and we should, tilting the field with incentives to get it closer to the truth. But we’ll never get rid of it entirely, because science is still done by people. And sometimes, despite what we’re willing to admit, people are just “ok”.

Amplitudes Papers I Haven’t Had Time to Read

Interesting amplitudes papers seem to come in groups. Several interesting papers went up this week, and I’ve been too busy to read any of them!

Well, that’s not quite true, I did manage to read this paper, by James Drummond, Jack Foster, and Omer Gurdogan. At six pages long, it wasn’t hard to fit in, and the result could be quite useful. The way my collaborators and I calculate amplitudes involves building up a mathematical object called a symbol, described in terms of a string of “letters”. What James and collaborators have found is a restriction on which “letters” can appear next to each other, based on the properties of a mathematical object called a cluster algebra. Oddly, the restriction seems to have the same effect as a more physics-based condition we’d been using earlier. This suggests that the abstract mathematical restriction and the physics-based restriction are somehow connected, but we don’t yet understand how. It also could be useful for letting us calculate amplitudes with more particles: previously we thought the number of “letters” we’d have to consider there was going to be infinite, but with James’s restriction we’d only need to consider a finite number.
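To make “which letters can appear next to each other” concrete, here’s a minimal Python sketch of an adjacency check of this general kind. The alphabet and the forbidden pairs below are invented placeholders, not the actual cluster-algebra conditions from their paper:

# A symbol is (roughly) a word in an alphabet of "letters"; the
# restriction forbids certain letters from sitting side by side.
# The letters and pairs here are invented for illustration.
FORBIDDEN = {("a1", "a4"), ("a2", "a5"), ("a3", "a6")}

def satisfies_restriction(word):
    # Scan each adjacent pair of letters in the symbol word.
    return not any((x, y) in FORBIDDEN or (y, x) in FORBIDDEN
                   for x, y in zip(word, word[1:]))

print(satisfies_restriction(["a1", "a2", "a3"]))  # True
print(satisfies_restriction(["a1", "a4", "a2"]))  # False: a1 beside a4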

I didn’t get a chance to read David Dunbar, John Godwin, Guy Jehu, and Warren Perkins’s paper. They’re computing amplitudes in QCD (which unlike N=4 super Yang-Mills actually describes the real world!) and doing so for fairly complicated arrangements of particles. They claim to get remarkably simple expressions: since that sort of claim was what jump-started our investigations into N=4, I should probably read this if only to see if there’s something there in the real world amenable to our technique.

I also haven’t read Rutger Boels and Hui Lui’s paper yet. From the abstract, I’m still not clear which parts of what they’re describing are new, or how much it improves on existing methods. It will probably take a more thorough reading to find out.

I really ought to read Burkhard Eden, Yunfeng Jiang, Dennis le Plat, and Alessandro Sfondrini’s paper. They’re working on a method referred to as the Hexagon Operator Product Expansion, or HOPE. It’s related to an older method, the Pentagon Operator Product Expansion (POPE), but applicable to trickier cases. I’ve been keeping an eye on the HOPE in part because my collaborators have found the POPE very useful, and the HOPE might enable something similar. It will be interesting to find out how Eden et al.’s paper modifies the HOPE story.

Finally, I’ll probably find the time to read my former colleague Sebastian Mizera’s paper. He’s found a connection between the string-theory-like CHY picture of scattering amplitudes and some unusual mathematical structures. I’m not sure what to make of it until I get a better idea of what those structures are.

The Opposite of Witches

On Halloween I have a tradition of posts about spooky topics, whether traditional Halloween fare or things that spook physicists. This year it’s a little of both.

Mage: The Ascension is a role-playing game set in a world in which belief shapes reality. Players take the role of witches and warlocks, casting spells powered by their personal paradigms of belief. The game allows for pretty much any modern-day magic-user you could imagine, from Wiccans to martial artists.

[Image]

Even stereotypical green witches, probably

Despite all the options, I was always more interested in the game’s villains, the witches’ opposites, the Technocracy.

The Technocracy answers an inevitable problem with any setting involving modern-day magic: why don’t people notice? If reality is powered by belief, why does no-one believe in magic?

In the Technocracy’s case, the answer is a vast conspiracy of mages with a scientific bent, manipulating public belief. Much like the witches and warlocks of Mage are a grab-bag of every occult belief system, the Technocracy combines every oppressive government conspiracy story you can imagine, all with the express purpose of suppressing the supernatural and maintaining scientific consensus.

This quote is from another game by the same publisher, but it captures the attitude of the Technocracy, and the magnitude of what is being claimed here:

Do not believe what the scientists tell you. The natural history we know is a lie, a falsehood sold to us by wicked old men who would make the world a dull gray prison and protect us from the dangers inherent to freedom. They would have you believe our planet to be a lonely starship, hurtling through the void of space, barren of magic and in need of a stern hand upon the rudder.

Close your mind to their deception. The time before our time was not a time of senseless natural struggle and reptilian rage, but a time of myth and sorcery. It was a time of legend, when heroes walked Creation and wielded the very power of the gods. It was a time before the world was bent, a time before the magic of Creation lessened, a time before the souls of men became the stunted, withered things they are today.

It can be a fun exercise to see how far doubt can take you, how much of the scientific consensus you can really be confident of and how much could be due to a conspiracy. Believing in the Technocracy would be the most extreme version of this, but Flat-Earthers come pretty close. Once you’re doubting whether the Earth is round, you have to imagine a truly absurd conspiracy to back it up.

On the other extreme, there are the kinds of conspiracies that barely take a conspiracy at all. Big experimental collaborations, like ATLAS and CMS at the LHC, keep a tight handle on what their members publish. (If you’re curious how tight, here’s a talk by a law professor about, among other things, the Constitution of CMS. Yes, it has one!) An actual conspiracy would still be outed in about five minutes, but you could imagine something subtler: the experiment sticking to “safe” explanations and refusing to publish results that look too unusual, on the basis that they’re “probably” wrong. Worries about that sort of thing can spook actual physicists.

There’s an important dividing line with doubt: too much, and you risk invoking a conspiracy more fantastical than the science you’re doubting in the first place. The Technocracy doesn’t just straddle that line, it hops past it, off into the distance. Science is too vast, and too unpredictable, to be controlled by some shadowy conspiracy.

[Image]

Or maybe that’s just what we want you to think!