Framing Infrastructure
Framing is the first step of epistemic infrastructure: setting up the problem that needs to be solved. What counts as a good problem setup?
First we need a problem that is part of problem-space, not solution-space.
Problem-space, not solution-space
This distinction is made explicit by most good books on product development: great product designers look at the underlying need in problem-space, rather than at the specific details of existing alternatives in solution-space. This is because solution-space is already a projection into a kind of solution, a subset of models, which makes search simpler but more limited.
In general epistemic infrastructure, this means expressing the problem based on what we need, rather than the shape we expect the answer to take. Notably we don't want to introduce biases towards a certain formulation or paradigm in the problem itself.
Note that historically, scientists and engineers have regularly done exactly that. It's exceedingly rare for a historical actor not to assume from the start the shape of the solution, especially once established sciences and paradigms were in place (post-Newton, for example).
Laplacean physicists capture the dangers of such solution-space thinking perfectly: worshippers of Newton, they accepted into their physics only quantitative formalisms based on the master's forces acting at a distance.
Poisson illustrates both the power of the Laplacean program and its limitations, for within that program, research outlook was shaped not only by the general goals of quantification and mathematical formalization but also by a very specific set of substantive assumptions and its precisely tuned mathematical toolbox. Such tools were suited only to very specific questions, while others were lost to view, and not necessarily because they were deemed uninteresting but simply because there was no clear way to deal with them using existing procedures. Furthermore, the emphasis on mathematical formalization and precise measurement brought with it the disparagement or exclusion of broad-based qualitative experimental research.
- Exploratory Experiments, Friedrich Steinle
Which means they missed Fourier's work on heat, Fresnel's work on waves, and Ampère's work on electromagnetism, all alternative French formalisms. And because they despised qualitative research, the one who truly solved electromagnetism was Faraday, not them.
How to avoid this pitfall? As explained again and again in the product development literature: by carefully extracting the need out of the issues and complaints, and keeping it foremost in mind. We should always be thinking of the problem, and approaches should merely be attempts at tackling the problem, with the acknowledgment that if an approach is unsatisfactory, it will simply be replaced.
In other words, avoid at all costs wedding any approach to the problem itself!
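One way to picture this separation (a toy sketch of my own, with an invented example, not something from the product development literature): keep the problem as a stable specification of the need, and treat approaches as swappable attempts checked against it.

```python
# Toy sketch (invented example): the problem is a stable specification
# of the need; approaches are swappable attempts, never part of it.

class Problem:
    """What we need, stated in problem-space terms."""
    def satisfied_by(self, result) -> bool:
        raise NotImplementedError

class KeepFoodCold(Problem):
    """The need: keep food at or below 4 degrees Celsius."""
    def satisfied_by(self, temp_celsius) -> bool:
        return temp_celsius <= 4

# Approaches are mere attempts; each can be replaced without touching
# the problem itself.
def icebox() -> int:
    return 6  # resulting temperature: fails the need

def refrigerator() -> int:
    return 3  # resulting temperature: meets the need

problem = KeepFoodCold()
# If one approach proves unsatisfactory, swap it out; the problem stays fixed.
approach = icebox if problem.satisfied_by(icebox()) else refrigerator
```

The point of the structure is that `KeepFoodCold` mentions no cooling technology at all: the need lives in problem-space, and every approach is judged against it from the outside.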
Epistemic Infrastructure Needs
Let's thus turn to the concrete process of extracting from a situation an underlying problem, located in problem-space.
The initial question to ask is: what do we want to use a solution for? What is the concrete, practical application that we're searching for?
I'm stressing application and practicality because without them, we are lost from the start. Any focus on understanding as an end in itself is doomed to helpless flailing, not because understanding is not a worthwhile pursuit, but because it is completely ungrounded. Our only way of checking if we "understand" is to internally ask ourselves if we do, which is a mediocre and adversarial signal that leads us astray again and again.
So instead, we should focus on applications that can be more concretely grounded. Thankfully, literally anything relevant and useful ever addressed by human ingenuity can be grounded in such a way.
As an initial typology, I'm proposing the following three categories:
Predicting
Given a concrete object of study, we aim to predict some property or behavior of it.
Examples:
Gas in a closed system, predict its equilibrium state
Computational problem, predict its complexity class
Historical regularity (such as the convergent emergence of the same kind of spear), predict under which circumstances it emerges.
Intervening
Given a concrete object of study that can be intervened on, we aim to find an intervention that leads to a desired result.
Examples:
Snowy mountain range, minimize risk of avalanches
Intractable computational problem, turn it into a tractable variation
Economy, minimize unemployment rate
Engineering
Given underlying building blocks, we aim to find a design that combines these building blocks into a system that reliably produces the desired result.
Examples:
Parsing a programming language
Detecting new subatomic particles
Curing cancer
These categories of problems capture a natural progression in our needs:
Figuring out relevant details about the object of study, so we can react accordingly
Figuring out how to intervene on the object of study, so that we can get the results we want.
So in addition to prediction, we have a search over interventions.
Figuring out how to create new objects of study that reliably produce the results we want
So in addition to prediction and interventions, we have a search over designs.
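To make this progression concrete, here is a minimal toy sketch (an invented example, not an established formalism): each category wraps the previous one in an extra layer of search.

```python
# Toy illustration: each category adds a layer of search on top of
# the previous one. The "object of study" is just a list of numbers.

def predict(state):
    """Predicting: estimate a property of the object of study
    (here, trivially, the mean of a list of measurements)."""
    return sum(state) / len(state)

def intervene(state, interventions, target):
    """Intervening: search over candidate interventions for the one
    whose predicted outcome is closest to the desired result."""
    return min(interventions, key=lambda act: abs(predict(act(state)) - target))

def engineer(building_blocks, assemble, target):
    """Engineering: search over designs (combinations of building blocks)
    for the assembled system whose prediction best matches the need."""
    designs = [(a, b) for a in building_blocks for b in building_blocks]
    return min(designs, key=lambda d: abs(predict(assemble(d)) - target))
```

For instance, given measurements `[1, 2, 3]` and a target mean of 3.0, `intervene` would select "add 1 to each measurement" over "double each measurement", since the former's predicted outcome lands exactly on the target.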
In each case we don't just "look" for something; we have a concrete application in mind. And without the details of this application, it's impossible to move further in framing.
For the next step is to capture the resolution criteria of the new problem.
What makes for a good solution?
Despite the natural human tendency to debate the intrinsic merits of kinds of solutions (action at a distance vs. fields, phenomenological vs. mechanistic, ...), this is obviously a type error: there is no such thing as a kind of solution with no limitations. As such, what matters are the tradeoffs between different kinds of solutions.
But how are we supposed to know which tradeoff to strike? Well, that's where the underlying need comes back again: it drives which tradeoffs are acceptable and which are not.
In aerospace engineering, for example, the precision required, along with the need for speed and approximation, makes classical mechanics far more relevant than general relativity, even though the latter is strictly more accurate. And for similar reasons, the frameworks of classical mechanics used there are more often the Lagrangian and Hamiltonian ones than the Newtonian one, because they ease calculations.
The following guidelines are a good place to start in establishing solution criteria:
Relevance
Which aspects of the object of study are the relevant ones?
This is typically one aspect of framing that can change a lot based on reconceptualization of the object of study. See for example Shannon's reconceptualization of information channels, as discussed here (although John uses "framing" in a broader sense than I do, integrating elements of my grounding and solving into it).
Accuracy
How much precision do we need?
Cost
How costly is it to get the solution?
Includes the cost of computing a solution (for a prediction), but also the more concrete costs of intervening and building (social cost, resource cost...)
Range
What is the spectrum of objects of study/situations in which our solution is supposed to work?
This is one aspect where mechanistic approaches tend to beat phenomenological ones, as crystallised in the Lucas critique.
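As a toy way of making these tradeoffs explicit (my own sketch; all weights and numbers below are invented for illustration), one can score candidate kinds of solutions against need-driven weights on the four criteria:

```python
# Toy sketch: need-driven weighting of the four criteria.
# All weights and scores are invented for illustration only.

CRITERIA = ("relevance", "accuracy", "cost", "range")

def score(candidate, weights):
    """Weighted sum over the criteria; cost counts against a candidate."""
    return sum(weights[c] * candidate[c] * (-1 if c == "cost" else 1)
               for c in CRITERIA)

# A need where cheap, fast approximation matters more than ultimate
# accuracy, echoing the aerospace example above.
weights = {"relevance": 1.0, "accuracy": 0.3, "cost": 1.0, "range": 0.5}

classical_mechanics = {"relevance": 0.9, "accuracy": 0.7, "cost": 0.2, "range": 0.6}
general_relativity = {"relevance": 0.9, "accuracy": 1.0, "cost": 0.9, "range": 0.9}

candidates = {"classical": classical_mechanics, "relativity": general_relativity}
best = max(candidates, key=lambda name: score(candidates[name], weights))
```

With these invented numbers, classical mechanics comes out ahead: the heavy computational cost of relativistic calculations outweighs their accuracy and range advantages for this particular need. A different need, with accuracy weighted heavily, would flip the outcome, which is the whole point: the need, not the intrinsic merits, picks the winner.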
Perpetual reframing
Let's say we have found a satisfactory framing: we have a concrete need, instantiated in problem space, and we built concrete solvability criteria from it.
What next?
Well, at one level, the next steps are the next ones in the epistemic infrastructure decomposition: grounding and solving.
But what is bound to happen, especially as we actually work on the problem, is that the framing itself shifts.
Indeed, problem-space is not some objective underlying platonic reality; it is just as constructed by us as solution-space. And as such, it is bound to evolve as we change our understanding of what the problem was. We might add things to the need, or remove them, because we have actually confronted it. The need itself might change, even if it was a good characterization at the beginning. We might discover new ways to frame the problem itself, and integrate these into our solvability criteria, particularly the relevance one.
All in all, we don't have to get framing perfectly right. Heck, we definitely won't get it right the first time. But by investing time and energy into framing our need as much as possible in problem-space, we can reap the benefits enjoyed the world over by product designers: quickly iterating towards tangible (epistemic) success.