# Nothing New: Productive Reframing

In a comment on my last post, someone deplored the tendency of LW bloggers to rediscover the ideas of famous philosophers and pretend they discovered them first. This made me think of an interesting question in epistemology: is there value in reformulating and/or rediscovering things?

A naive answer would be no: after all, we already have the knowledge. But a look at the history of science brings up many examples showing otherwise.

In physics, the most obvious case comes from the formulations of classical mechanics. Newton gave the core of the theory in his original formulation, but what gets applied in modern science, and what played a part in the revolutions of quantum mechanics and quantum electrodynamics among others, are the other two formulations: Lagrangian and Hamiltonian. And these two, particularly the Lagrangian, pushed and pioneered the least action principle in physics, capturing every system through an action that is stationary (extremized) along the actual trajectory. This principle is one of the core building blocks of modern physics, and it came out of a pure compression and reformulation of Newtonian mechanics!
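To sketch that compression in one line: for a single particle with the standard Lagrangian (kinetic minus potential energy), requiring the action to be stationary recovers exactly Newton's second law.

```latex
S[q] = \int_{t_1}^{t_2} L(q, \dot{q})\, dt,
\qquad L = \tfrac{1}{2} m \dot{q}^2 - V(q)

\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
\;\Longrightarrow\;
m \ddot{q} = -\frac{dV}{dq}
```

Nothing in the Euler-Lagrange equation goes beyond Newton; it is the same physics, repackaged so that it generalizes to coordinates and systems where the force picture becomes unwieldy.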

Similarly, potential energy started as a mathematical trick: Lagrange's potential function, a single function whose partial derivatives gave all the gravitational forces acting on an N-body system. Once again, there was nothing new here; even more than the Lagrangian and the action, this was a computational trick that sped up annoying calculations. And yet. And yet it’s hard to envision modern physics, or even classical mechanics, without the potential.
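In modern notation, the trick amounts to this: one scalar function whose gradient packs all the force components at once.

```latex
V(\mathbf{r}_1, \dots, \mathbf{r}_N)
= -\sum_{1 \le i < j \le N} \frac{G m_i m_j}{\lVert \mathbf{r}_i - \mathbf{r}_j \rVert},
\qquad
\mathbf{F}_i = -\nabla_{\mathbf{r}_i} V
```

One function replaces 3N force components, and conservation of energy falls out of it almost for free.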

In modern evolutionary biology, many advances like exaptation can be traced back to Darwin’s seminal work. And yet they often reveal different aspects of the idea, clarifying and bringing new intuitions to bear that make our models of life and its constraints that much richer.

Again in theoretical computer science, reframings of the same core ideas are at the center of so many productive inquiries:

- all the models of computation unified by the Church–Turing thesis, all equivalent yet providing different angles on questions of computation;

- the non-deterministic complexity classes like NP reframed as proving/certifying classes, leading to far better intuitions and to interactive proofs and IP = PSPACE;

- the Curry–Howard isomorphism, which shows that proof calculi like natural deduction and type systems for computation models like the lambda calculus are actually the same thing. Yet this makes both stronger, rather than weakening them for being the same thing.
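The certificate view of NP can be made concrete with a small sketch (the choice of graph 3-coloring as the example NP problem, and all names here, are mine, not from the discussion above): finding a valid 3-coloring may require exponential search, but checking a proposed coloring, the certificate, is a single linear pass over the edges.

```python
# The "certifying" reframing of NP: the verifier doesn't search,
# it only checks a proposed certificate in polynomial time.

def verify_3_coloring(edges, coloring):
    """Check a certificate: does `coloring` (vertex -> color in {0, 1, 2})
    properly color every edge? Runs in O(|edges|) time."""
    return all(
        coloring[u] in (0, 1, 2)
        and coloring[v] in (0, 1, 2)
        and coloring[u] != coloring[v]
        for u, v in edges
    )

# A 4-cycle is 2-colorable, hence 3-colorable:
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(verify_3_coloring(cycle, {0: 0, 1: 1, 2: 0, 3: 1}))  # True
print(verify_3_coloring(cycle, {0: 0, 1: 0, 2: 1, 3: 2}))  # False: edge (0, 1) is monochromatic
```

The asymmetry between finding and checking is the whole reframing: NP is not "problems solved by magic non-determinism" but "problems whose solutions carry short, efficiently checkable proofs".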

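A minimal sketch of the Curry–Howard reading, using Python type hints purely as illustration (the formal correspondence lives in typed lambda calculi, not Python): the type of function composition, read as a logical formula, is the transitivity of implication.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    # Read the type under Curry-Howard: from proofs of A -> B and B -> C,
    # we construct a proof of A -> C (hypothetical syllogism).
    return lambda a: g(f(a))

# The program is the proof; running it is just a bonus:
to_str: Callable[[int], str] = str
length: Callable[[str], int] = len
print(compose(to_str, length)(12345))  # 5
```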
All in all, this pattern of rediscovering the same thing from a different perspective, of reframing the known, has been particularly productive in the history of science. Compressions, new formulations, and new framings have delivered to us more than just the original insight.

Why?

Because we’re not logically omniscient — we don’t instantaneously know all the consequences of what we discover. Similarly, we’re not Bayesian-omniscient — we don’t update on all the evidence available in the data. In both cases we are limited by what we can compute, by what we can figure out. Reframings, then, are like shortcuts through the vastness of logical consequences, highlighting particularly interesting aspects of the evidence we originally unearthed.

And I believe that one big reason for the misguided and simplistic models of science that so many share lies in the non-obviousness of these reframings’ power. We don’t remember them very well; we tend instead to ascribe the clarity and the power to the original thinker, even when we never used or touched their original formulation. We take for granted the intuitions and instincts that have been patiently built and baked into the concepts we use.

As Olivier Darrigol writes in Worlds of Flow, his history of hydrodynamics:

> There is, however, a puzzling contrast between the conciseness and ease of the modern treatment of these topics, and the long, difficult struggles of nineteenth-century physicists with them. For example, a modern reader of Poisson's old memoir on waves finds a bewildering accumulation of complex calculations where he would expect some rather elementary analysis. The reason for this difference is not any weakness of early nineteenth century mathematicians, but our overestimation of the physico-mathematical tools that were available in their times. It would seem, for instance, that all that Poisson needed to solve his particular wave problem was Fourier analysis, which Joseph Fourier had introduced a few years earlier. In reality, Poisson only knew a raw, algebraic version of Fourier analysis, whereas modern physicists have unconsciously assimilated a physically 'dressed' Fourier analysis, replete with metaphors and intuitions borrowed from the concrete wave phenomena of optics, acoustics, and hydrodynamics. In our mind, a Fourier component is no longer a mere coefficient in an algebraic development, it is a periodic wave that may interfere with other waves in a manner we can easily imagine. The transition from a dry mathematical analysis to a genuinely physico-mathematical analysis occurred gradually in the nineteenth century, through reversible analogies between different domains of physics. It concerned not only Fourier analysis, but also the theory of ordinary differential equations, potential theory, perturbative methods, Cauchy's method of residues, etc. The modern recourse to such mathematical techniques involves a great deal of implicit knowledge that only becomes apparent in comparisons with older usage.