Macro beats micro

· 5 min read
Bruno Felix
Digital plumber, organizational archaeologist and occasional pixel pusher

Science, engineering and, to some extent, management are permeated by the ideas of Cartesian reductionism. Treating phenomena as machines that can be decomposed into their constituent parts laid the foundation of our modern world: once each part is analyzed and understood, assembling the constituents back together should yield a complete understanding of the whole. This is enticing and has arguably been successful, but the approach struggles when faced with the emergent properties and behaviors of systems.

The assumption that knowing how each component of a system works (the micro level) is sufficient to fully understand how the system works as a whole (the macro level) is still the default approach in many fields. The paper "Quantifying causal emergence shows that macro can beat micro"1 provides an interesting model for reasoning about the optimal level of granularity at which to observe a system. It uses the notion of causal emergence: the "zoom level" at which the causal relationships between the various parts of a system are strongest - choosing the right "zoom level" yields the most causal information. This is interesting because it lends weight to the idea that focusing only on the study of micro states (which, by the way, may be more sensitive to noise and subject to degeneracy) does not give the best results in all cases.
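
To make the "zoom level" idea concrete, here is a minimal sketch of the paper's effective information (EI) measure: the mutual information between a uniform intervention on the system's current state and the resulting next state, which works out to the average KL divergence between each row of the transition matrix and the mean row. The transition matrices below are toy examples of my own, not taken from the paper; the coarse-graining maps micro states {0, 1, 2} to macro state A and state 3 to B.

```python
import numpy as np

def effective_information(tpm):
    """Effective information (in bits) of a transition probability matrix.

    EI is the mutual information between a maximum-entropy (uniform)
    intervention on the current state and the resulting next state,
    i.e. the average KL divergence between each row and the mean row.
    """
    tpm = np.asarray(tpm, dtype=float)
    effect = tpm.mean(axis=0)  # next-state distribution under uniform interventions
    with np.errstate(divide="ignore", invalid="ignore"):
        kl_terms = np.where(tpm > 0, tpm * np.log2(tpm / effect), 0.0)
    return kl_terms.sum(axis=1).mean()

# Toy micro system: states 0-2 transition noisily among themselves,
# state 3 is absorbing.
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Macro coarse-graining: {0, 1, 2} -> A, {3} -> B. Both macro
# transitions are deterministic.
macro = np.array([
    [1.0, 0.0],  # A -> A
    [0.0, 1.0],  # B -> B
])

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit
```

The deterministic macro description carries more causal information per intervention than the noisy micro description, which is exactly the sense in which the right zoom level "beats" the micro view.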

If we narrow the scope to the world of technology, distributed systems and the organizations around them, the observations in the paper make a lot of sense.

After having been through a few incidents, I've gained an intuition that knowing how the individual technical pieces of a system work is not sufficient to understand how the system works at a macro scale and how it comes to exhibit certain behaviors. The most interesting incidents happen when components and people interact in weird and wonderful ways that take teams completely by surprise.

A system is a whole which cannot be divided into independent parts - Russell Ackoff

The excellent "How Complex Systems Fail"2 offers a brief but very insightful framing of the nature of failure, its evaluation and the attribution of proximate causes. The core assertion is that a system comprises the people and the technical artifacts deployed to achieve a certain objective, and that safety is an emergent property of the system, arising from the interactions of the social and technical elements at play. Failure can seldom be attributed to any single component - clearly at odds with a reductionist view of systems (and why exercises like the 5 Whys are limited).

Dr. Russell Ackoff has a wonderful description of the limitations of reductionism and makes a very compelling argument for systems-first thinking34.

If you have ever built a non-trivial system in an organization you probably know this already: the technical elements may be well defined, but they operate in a messy human organization that is itself part of a larger, even messier market and society. This results in all sorts of interesting, unforeseen and seemingly unreasonable "asks" of the system (AKA stressors, in Residuality Theory speak).

Technical systems are hyperliminal5, and their components may exhibit hyperliminal coupling6: the system's designer may be unable even to realize the coupling exists if each component is analyzed separately. Key properties of a system are indeed properties of the whole, and they are lost if the system is taken apart.
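
Footnote 6 below makes this concrete in graph terms: two components that each interact with the same stressor are effectively coupled. A minimal sketch of that inference, with hypothetical component and stressor names of my own:

```python
from itertools import combinations

# Hypothetical stressor-to-component impact map; every name here is
# illustrative, not from any real system.
stressor_impacts = {
    "traffic-spike": {"checkout-service", "payment-gateway"},
    "cert-expiry": {"payment-gateway", "notification-service"},
}

# Two components hit by the same stressor are treated as coupled, even
# though no direct dependency between them is visible when each
# component is analyzed in isolation.
hidden_coupling = {
    tuple(sorted(pair))
    for impacted in stressor_impacts.values()
    for pair in combinations(impacted, 2)
}

print(hidden_coupling)
# {('checkout-service', 'payment-gateway'),
#  ('notification-service', 'payment-gateway')}
```

Until a stressor is realized, nothing in the component-level view hints at these edges, which is why analyzing parts in isolation misses them.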

The street finds its own uses for things - William Gibson

As we keep building more ambitious technical systems, sometimes in a blissful vacuum7, this serves as a reminder of the need to consider the environment in which technical systems operate. Failing to do so will result in fragility and negative outcomes.


Footnotes

  1. Quantifying causal emergence shows that macro can beat micro

  2. How complex systems fail

  3. There are other, longer videos of his that are quite interesting, for example here or here.

  4. Scientific positivism was perhaps the most maximalist variation of Cartesian reductionism. Two world wars, quantum theory and an increasing pace of technological change discredited positivism - after all, even the atom cannot be fully measured in all its properties, and quantum mechanics defies normal cause-and-effect expectations; can we really understand the world by understanding the behavior of its smallest components? Donald Schön has some very interesting material on society's need for stability and the predicament we find ourselves in.

  5. "Hyperliminality describes an ordered system inside a disordered system. The architect is forced to constantly move between these two worlds, with ordered software and disordered enterprise contexts which require entirely different tools and epistemologies to understand." - source

  6. "If two nodes in a network each have a relationship with a third node, then those two nodes are very likely to have a relationship. Therefore, if a stressor in the wider hyperliminal system interacts with two software components, then those two components can be considered coupled. Since architects are unaware of the stressor, this coupling is invisible to the system's designer until the stressor is realized" - source

  7. One could argue that the deployment of LLMs across many organizations fits this category. The business case and social benefits are still a bit shaky at the moment, but a few things are already clear: a) the technology was built on a foundation of unlicensed scraping of artistic and copyrighted work; b) hallucinations are a logical consequence of the architecture and there is currently no solution for them; c) it can easily be exploited by malicious actors for criminal or disinformation purposes.