To act effectively, we need to make sense of our situation [Note 1: Or, as Dave Snowden puts it, “We make sense of the world in order to act in it.” He describes sense-making as “a knowledge production activity” (Snowden et al. 2021). We think this activity is more helpfully described as the collective production and use of predictive models but share his insistence on the primacy of action.] – which means we need models of the world we’re acting in. Models that explain why things happen and predict what happens next [Note 2: More technically, we use models that predict the causes of what we perceive, which we validate against future perceptions, which we in turn predict based on causal models constructed from the predicted causes. See Clark (2013) for an introduction to this, by now paradigmatic, “predictive processing” account of cognition.], giving us a sense of coherence and enabling the situational awareness [Note 3: The term is borrowed from Wardley (2018).] needed for developing good strategies.
Sensemaking and Strategy
Strategy is a set of choices about the use of a system’s resources to maximise its chances of fulfilling its purpose in a given environment, be that survival, serving a need, or gaining a position of power [Note 4: Understood as the capacity of a system to influence, shape, or even determine the behaviour of other systems in order to further its own goals.].
In a good strategy, the choices make a real difference for resource use, move us closer to the purpose, and are based on sufficient situational awareness. Under conditions of uncertainty, a good strategy first helps build a position of strength from which one can then create options for action [Note 5: See Rumelt (2011) on these points.].
We have been socialised within Modernity to look at the world and its contents as if it were a complicated system, a machine [Note 6: See the Cynefin framework for the distinction between complicated and complex systems and its consequences for the suitability of different epistemologies and methodologies (Snowden et al. 2021).]. Therefore, we tend to think about strategy in terms of linear causality and predictable outcomes. However, the world is inherently complex, and so strategising always means reacting to uncertainty and changing environments. Even if we’re not aware of it, strategy is de facto always an iterative learning process, going explicitly or implicitly through the steps perceive – make sense – decide – act.
Figure 2: Strategy cycle (based on John Boyd’s OODA loop) [Note 7: Boyd (1996)]
In our experience, we often fail to recognise the cyclical nature of strategising – we don’t learn. We’re so focused on deciding and acting that we neglect how we sense and gather information from others, our experiments, and the world writ large, and how we make sense of this information.
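To make the cyclical nature concrete, here is a small, self-contained toy sketch in Python – our illustration, not Boyd’s or anyone else’s formalism, with all names and numbers invented for the example. The “world” is a drifting quantity we can only observe noisily; making sense is keeping a running estimate; deciding and acting move us on the basis of that estimate; and the next perception feeds the outcome back into the model.

```python
import random

# Toy illustration of the perceive -> make sense -> decide -> act loop.
# All functions and parameters are invented for this sketch.

def perceive(true_value, noise=1.0):
    """Noisy observation of the environment."""
    return true_value + random.gauss(0, noise)

def make_sense(estimate, observation, learning_rate=0.3):
    """Update our simple model (a single running estimate) in light of what we perceived."""
    return estimate + learning_rate * (observation - estimate)

def decide(position, estimate):
    """Choose an action: move part of the way towards where we think the target is."""
    return 0.5 * (estimate - position)

def act(position, action):
    """Carry out the action, changing our situation."""
    return position + action

# The strategy cycle, repeated: note that the environment keeps changing underneath us.
true_value, position, estimate = 10.0, 0.0, 0.0
for step in range(20):
    true_value += 0.2                      # the environment drifts
    observation = perceive(true_value)     # perceive
    estimate = make_sense(estimate, observation)  # make sense
    action = decide(position, estimate)    # decide
    position = act(position, action)       # act
    print(f"step {step:2d}  estimate={estimate:5.2f}  position={position:5.2f}")
```

Skipping any one of the four steps breaks the loop: without updating the estimate, the decisions are based on a stale model of a world that has already moved on.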
Sensemaking is central: it translates perception into appropriate decisions and thus enables effective action, while also scaffolding action and framing perception. It does so by generating context awareness and enabling a situational assessment that identifies and weighs the salient features of the context. As Richard Rumelt puts it:
The diagnosis for the situation should replace the overwhelming complexity of reality with a simpler story, a story that calls attention to its crucial aspects. This simplified model of reality allows one to make sense of the situation and engage in further problem solving. [Note 8: Rumelt (2011)]
We perceive the outcomes of our actions as well as the process of decision making itself. Making sense of these perceptions is what enables us to learn as we move forward.
The Anatomy of Sensemaking
Sensemaking doesn’t have to be a conscious act. Most of the time, we find our way around our environment without actively thinking about it. In such a case, we use an implicit model of our environment.
An implicit model is a non-conceptual, embodied representation of our environment – we navigate it safely because our body, our senses, our social norms, our tools and artefacts guide us through it.
When our implicit models and their predictions fail, we are surprised and question our prior understanding [Note 9: This phrasing is borrowed from Klein et al. (2007).]. We wonder: “What’s going on here?” To answer this question, we engage in conscious sensemaking – we build and test explicit models.
An explicit model is a purposeful description of our environment, often in the form of a story about causes and effects (a causal model), sometimes expressed in mathematical terms or as a simulation.
Such a breakdown of an implicit model is the exception, not the rule. Most of our implicit models, heuristics that have evolved biologically and culturally and that we often share with others, are quite resilient and deal well with unexpected situations – except when the environment changes in a way that fundamentally breaks them.
Explicit models, on the other hand, can be constructed and tested specifically in response to such changes. But they are often either brittle or unproductive [Note 10: Or, more technically, overfitted, i.e. they “correspond too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably”, or underfitted, i.e. they “cannot adequately capture the underlying structure of the data [and] tend to have poor predictive performance” (Wikipedia, “Overfitting”).]; the linear causal stories they tend to tell, at least in the West, are ill-equipped to capture the world’s interconnectedness and complexity; and, most importantly, they can easily be distorted by ideology [Note 11: Understood, following Sally Haslanger, as a “network of social meanings, tools, scripts, schemas, heuristics, principles, and the like” (Haslanger 2017, 155), which is distorted in specific ways so that it hides those aspects of the world whose perception would question or threaten systems of power and the social order they impose. This conception goes back to Althusser and the Marxist understanding of ideology as false consciousness.].
As a result, our explicit models fail more often than the implicit ones they are meant to replace. They frequently don’t create coherence or, worse, only give an appearance of coherence where there really is none, turning into conspiracy narratives and delusions.
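The brittle/unproductive (overfitted/underfitted) distinction from the note above can be made concrete with a toy curve-fitting sketch – our illustration, assuming Python with numpy, with the data and polynomial degrees invented for the example. An overfitted explicit model reproduces the observations it was built on almost perfectly but predicts new ones badly; an underfitted one misses the underlying structure altogether.

```python
import numpy as np

# Toy illustration of overfitting vs. underfitting; data and degrees are made up.
rng = np.random.default_rng(42)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)  # noisy observations
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)                                       # what actually happens next

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)        # build an explicit model of a given complexity
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: training error {train_err:.3f}, prediction error {test_err:.3f}")

# Typically: degree 1 underfits (poor everywhere), degree 9 overfits
# (near-zero training error, worse predictions), degree 3 generalises best.
```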
This is exactly what seems to be happening at scale right now: our implicit models are breaking down – and the explicit ones we use to replace them are all failing in their own ways.
A Crisis of Sensemaking
Making sense means establishing coherence: Our models help us understand how the details of our situation are connected and how they fit into the broader context. This is precisely what is proving so difficult right now: Understanding how things, from pandemics and the climate crisis to Big Tech and the far right, are connected.
At the same time, some decidedly weird connections are emerging. There are strange new alliances: COVID-19 protests unite far-right extremists, hippies, and working-class citizens. The BSW, Germany’s newly established most left-wing party, is just as pro-Russia and anti-immigration as the AfD, its most right-wing one. There is accelerating climate change and a global rollback of climate action. Tech billionaires align with the far right, and the Republican Party becomes the party of the working class. Donald Trump is a convicted felon and president of one of the world’s superpowers.
We don’t see what we expected, and we didn’t expect what we see. In other words, the expectations generated by our existing implicit models – our biology, our habits and norms – fail.
To make matters worse, traditional sensemaking institutions are losing legitimacy and effectiveness. Historically, the political system, mainstream media, and science have provided shared implicit and explicit models for understanding societal and global changes. But public trust in them has eroded significantly over time [Note 12: This has happened to different degrees for different institutions: Trust in parliaments and political parties is at the bottom of the range in many Western countries, with governments ranking not much higher (Duffy 2023, Our World in Data 2022). Mainstream media also rank very low (Kleis Nielsen & Fletcher 2024). Trust in science has declined significantly since the COVID pandemic (Kennedy & Tyson 2023); lower trust is correlated with right-leaning and conservative political views (Cologna et al. 2025) as well as with conspiracy beliefs and populist attitudes (Reif et al. 2024).], leaving a sensemaking vacuum that a multitude of emerging explicit models is filling.
This creates what we could call the Great Decoherence: an era in which once-reliable narratives no longer seem to “fit” together, leaving a fragmented and, at times, senseless reality in their wake. [Note 13: This conceptualisation was originally inspired by the – as always – more complex, but also more ad hoc, framing of the sensemaking crisis by Venkatesh Rao (Rao 2020).]
The Great Decoherence is enabled and amplified by a few factors:
- Deliberate misinformation strategies, particularly in online spaces, muddy the waters, creating echo chambers where memes proliferate and facts lose their meaning.
- Our digital environments themselves – designed to maximise engagement through algorithms – amplify extreme or polarising content precisely because it is the most engaging, reinforcing and deepening existing biases and divisions.
- Confusion surrounds political alliances and ideologies as society enters a hegemonic crisis, during which political constellations shift unpredictably, leaving it without a clear direction. [Note 14: Italian Marxist theorist Antonio Gramsci introduced this concept to describe periods of ideological instability when dominant systems of power lose credibility and no coherent new system has yet replaced them. See our analysis of the recent hegemonic crisis for more on this.]
As traditional sensemaking fails, people turn to over-coherence: overly simplified explicit models that aim to explain the complex changes around us in everyday terms, suggesting a level of explanatory coherence our understanding of complex systems can never realistically attain.
Conspiracy narratives, for example, offer an extremely coherent but often very naïve representation of reality: They (most of the time falsely) attribute complex societal dynamics to individual malevolent actors and secret agendas and have a simple, if weird, causal structure. [Note 15: Of course there are real conspiracies, and more often than not they involve powerful agents whose power and strategies we should expose and dismantle. This is why, in the past, uncovering (real and imagined) conspiracies was often driven by progressive and emancipatory ideals and interests. This, like so many other things, has been co-opted by the Right. (Again, see our analysis of the recent hegemonic crisis for more on this.)] Though often deeply flawed, these narratives respond to a psychological need for order in chaotic times.
The structural problem of these narratives is that they offer explanations on the wrong level of abstraction: complex systems (e.g. societies) can’t be understood by looking at individual components (e.g. alleged conspirators).
But even modelling more and more components to understand why the system behaves as it does won’t help us. As Paul Cilliers once stated, “No matter how we construct the model, it will be flawed, and what is more, we do not know in which way it is flawed.” [Note 16: Cilliers (2001), 137] This is why we, as he says, “cannot know complex things completely”. [Note 17: Cilliers (2002), 77. This is also why, as David Snowden never tires of pointing out, Systems Thinking, which relies heavily on modelling, is deeply flawed.] The question, then, is: How can we understand complex systems in a more adequate way?
Improving Our Capacity for Sensemaking
Our reply is: We go up in abstraction!
More specifically, we stop trying to build an understanding of what’s happening piece by piece from the bottom up. Instead, we do what complexity researcher Yaneer Bar-Yam recommends:
When considering interventions that affect the large-scale properties of [a] system, rather than accumulating details about the system, we should start with the largest-scale pattern of behaviour and add additional information only as needed. [Note 18: Bar-Yam (2017), 1]
So we should begin identifying large-scale patterns in system behaviour. What could help us with that?
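As a toy illustration of what “starting with the largest-scale pattern” can mean in practice – our sketch, not Bar-Yam’s code, with the grid data, block size, and threshold invented for the example – the following Python snippet summarises a fine-grained field by coarse block averages first and only drills down into the few blocks whose aggregate behaviour stands out.

```python
import numpy as np

# Toy coarse-graining sketch: look at the large-scale pattern first,
# add fine-grained detail only where that pattern demands it.
rng = np.random.default_rng(7)
field = rng.normal(0, 1, (64, 64))     # fine-grained system state (e.g. local measurements)
field[40:48, 8:16] += 3.0              # one region behaves differently at a larger scale

block = 8
coarse = field.reshape(64 // block, block, 64 // block, block).mean(axis=(1, 3))
print("coarse 8x8 summary (block means):")
print(np.round(coarse, 1))

# Only where the large-scale pattern is anomalous do we inspect individual cells.
threshold = 1.0
for i, j in zip(*np.where(np.abs(coarse) > threshold)):
    detail = field[i * block:(i + 1) * block, j * block:(j + 1) * block]
    print(f"block ({i},{j}) stands out; inspecting its {block}x{block} cells, max={detail.max():.2f}")
```

The 64 × 64 grid collapses into an 8 × 8 summary, and typically only a single block needs closer inspection: most of the detail never has to be modelled at all.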
There are quite a few sensemaking frameworks out there that offer such help. They are what Venkatesh Rao calls “grey lore” – “generative, internally consistent, learnable system[s] of thought that take some skill and time to master” but that, in the end, amount to little more than “usable bullshit”. [Note 19: Rao (2022)] They are large and unwieldy, and because of their size and rigidity they afford little variation in, and thus little evolution of, concepts. The result is that sensemaking processes become less adaptable and less resilient.
If we want to maximise not only the scope and detail of our sensemaking but also its cognitive efficiency and evolvability, we should instead make heavy use of shorthand abstractions: concepts that enable us to build high-level, abstract models of complex systems by compressing the lower-level models they stand in for. [Note 20: Flynn (2007)]
We explore this approach in detail in our article on scale-free abstractions.
References
- Bar-Yam (2017): “Why Complexity is Different”
- Boyd (1996): “The Essence of Winning and Losing”
- Cilliers (2001): “Boundaries, Hierarchies and Networks in Complex Systems”
- — (2002): “Why We Cannot Know Complex Things Completely”
- Clark (2013): “Whatever next? Predictive brains, situated agents, and the future of cognitive science”
- Cologna et al. (2025): “Trust in scientists and their role in society across 68 countries”
- Duffy (2023): “Trust in trouble? UK and international confidence in institutions”
- Flynn (2007): What Is Intelligence?
- Haslanger (2017): “Culture and Critique”
- Kennedy & Tyson (2023): “Americans’ Trust in Scientists, Positive Views of Science Continue to Decline”
- Klein et al. (2007): “A Data-Frame Theory of Sensemaking”
- Kleis Nielsen & Fletcher (2024): “Public perspectives on trust in news”
- Our World in Data (2022): “Trust in institutions, United States, 2022”
- Rao (2020): “Weirding Diary: 11”
- — (2022): “Dark, Gray, and Light Lore”
- Reif et al. (2024): “The Public Trust in Science Scale: A Multilevel and Multidimensional Approach”
- Rumelt (2011): Good Strategy/Bad Strategy
- Snowden et al. (2021): Cynefin: Weaving Sense-Making into the Fabric of Our World
- Wardley (2018): Wardleymaps
- Wikipedia: “Overfitting”