Trust Your Notation as Far as You Can Prove It

Calculus contains one of the most famous examples of physicists doing something silly that irritates mathematicians. See, there are two different ways to write down a derivative, both dating back to the invention of calculus: Newton’s notation, and Leibniz’s.

Newton cared a lot about rigor (enough that he actually published his major physics results without calculus, because he didn’t think calculus was rigorous enough, despite having invented it himself). His notation is direct and to the point: if you want the derivative of a function f, you just put a dot over it,

\dot{f}

(The prime notation f'(x) that most of us learn first came later, from Lagrange, but it has the same terse spirit.)

Leibniz cared a lot less about rigor, and a lot more about the scientific community. He wanted his notation to be useful and intuitive, the sort of thing that people would pick up and run with. To take a derivative in Leibniz notation, you write,

\frac{df}{dx}

This looks like a fraction. It’s really, really tempting to treat it like a fraction. And that’s the point: the notation is telling you that treating it like a fraction is often the right thing to do. In particular, you can do something like this,

y=\frac{df}{dx}

y\,dx=df

\int y\,dx=\int df

and what you did actually makes a certain amount of sense.
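One standard way to see why it makes sense: that last line is just the fundamental theorem of calculus dressed up in Leibniz’s notation,

\int y\,dx=\int \frac{df}{dx}\,dx=f+C

so the “cancellation” of the dx’s isn’t really an algebraic step at all, it’s a theorem, and that’s exactly why it’s safe here.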

The tricky thing here is that it doesn’t always make sense. You can do these sorts of tricks up to a point, but you need to remember that they really are just tricks. Take the notation too seriously, and you end up doing things you aren’t really allowed to do. It’s always important to stay aware of what you’re really doing.
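A classic cautionary example (a standard textbook one): partial derivatives look just as fraction-like, but “cancelling” them gives the wrong answer. For three quantities tied together by a constraint, the correct identity is the triple product rule,

\left(\frac{\partial x}{\partial y}\right)_z \left(\frac{\partial y}{\partial z}\right)_x \left(\frac{\partial z}{\partial x}\right)_y = -1

where naive fraction-cancellation would have told you the product is +1.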

There are a lot of examples of this kind of thing in physics. In quantum field theory, we use path integrals. These aren’t really integrals…but a lot of the time, we can treat them as such. Operators in quantum mechanics can be treated like numbers and multiplied…up to a point. A friend of mine was recently getting confused by operator product expansions, where similar issues crop up.
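To make the operator example concrete with a standard textbook fact: position and momentum operators multiply just fine, but unlike ordinary numbers they don’t commute,

\hat{x}\hat{p}-\hat{p}\hat{x}=i\hbar

Treat them as plain numbers and you’d conclude the left-hand side is zero, which is exactly the kind of mistake the notation invites.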

I’ve found two ways to clear up this kind of confusion. One is to unpack your notation: go back to the definitions, and make sure that what you’re doing really makes sense. This can be tedious, but you can be confident that you’re getting the right answer.
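For the derivative itself, going back to the definitions means remembering that Leibniz’s symbol is shorthand for a limit, not a genuine quotient of two quantities,

\frac{df}{dx}=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}

and once it’s written this way you can check directly whether a given “fraction” manipulation actually survives the limit.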

The other option is to stop treating your notation like the familiar thing it resembles, and start treating it like uncharted territory. You’re using this sort of notation to remind you of certain operations you can do, certain rules you need to follow. If you take those rules as basic, you can think about what you’re doing in terms of axioms rather than in terms of the suggestions made by your notation. Follow the right axioms, and you’ll stay within the bounds of what you’re actually allowed to do.
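For the d symbol, one standard way to take the rules as basic (borrowed from the language of differential forms) is to stop asking what dx “is” and just demand that d obey a short list of axioms, for example

d(f+g)=df+dg, \qquad d(fg)=f\,dg+g\,df, \qquad d(c)=0 \text{ for any constant } c

Stick to manipulations licensed by rules like these and you stay within what you’re actually allowed to do, whatever mental picture of infinitesimals you happen to carry around.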

Either way, familiar-looking notation can help your intuition, making calculations more fluid. Just don’t trust it farther than you can prove it.

1 thought on “Trust Your Notation as Far as You Can Prove It”

  1. Lubos Motl

    Well, I think that it’s better to trust it up to the point when you can prove that it no longer works. The extrapolation of such rules into realms where one is “uncertain” whether it works is still an excellent working hypothesis.
