One reader questioned my claim that we ought to think and act faster than the enemy, arguing that we sometimes ought to think slower, not faster. He was referring to the two systems that drive human thinking as described by Daniel Kahneman in his best-selling book ‘Thinking, Fast and Slow’. I will argue not only that Kahneman’s analogy of the two systems is incorrect, but also that it is possible to think faster where one might assume we can only, or ought only to, think slower. This post is a follow-up to my recent article ‘How to think and act faster than the enemy’, in which I reinterpret the OODA loop[1] through the lens of some crucial new neuroscientific discoveries. But reader beware: take a seat before you start, because this will be a dense post with some scientific and philosophical considerations, something that perhaps requires you to think more slowly.
First, a short recap of the book ‘Thinking, Fast and Slow’. In short: System 1 is fast, automatic, and intuitive, handling the decisions we make instinctively. This is the mode that allows us to quickly recognize faces or avoid danger without conscious thought. In contrast, System 2 is slow, deliberate, and analytical, used for more complex and thoughtful decisions, like solving math problems or comparing financial options. Kahneman illustrates how these systems interact and often conflict, leading to cognitive biases. He highlights that while System 1 can be efficient, it is also prone to errors and biases, such as overconfidence, loss aversion, and the ‘anchoring effect’[2]. System 2, although more reliable, is energy-intensive and can lead to mental fatigue if overused. Kahneman’s goal is to make readers aware of the errors and pitfalls in human reasoning by showing how biases shape our decisions, often without our awareness, and to advocate greater awareness of our own cognitive processes and biases.
Kahneman has clarified in interviews and discussions surrounding the book that the System 1 and System 2 framework is a simplified model, or metaphor, rather than a literal depiction of how the brain works. He emphasizes that these systems do not correspond to distinct structures in the brain but are conceptual tools meant to help people understand different types of cognitive processes.[3][4]
The neuroscientists whose work I used for my article, Jeff Hawkins, Lisa Feldman Barrett, Andy Clark and Anil Seth, argue that there is just one modus operandi for the brain, not two distinct modes.[5] However, what my reader was probably getting at is that it is sometimes useful to stop and think twice. In the armed forces we are accustomed to saying, “sit down on a soft and comfortable rock and think about it”, in other words, take your time[6] (of course, soft and comfortable rocks are never to be found, so it is also a joke). But if our brain does not have distinct modes of thought, be it faster or slower, why do we experience these thought processes as different? Why is it still common sense to sit down and think first?
How your brain picks the correct model
As I explained in my article, our brains create models of the world based upon reference frames. Not just one model of each object, but thousands of models for each object. But how does the brain know how to pick one model and not another? How does it choose? Hawkins and his team suspect that the brain ‘votes’: when it compares different models, it votes on which reference frames are closest to what is being observed. Exactly how this works is still part of ongoing research.[7] Reason suggests that the voting resembles what is called corroboration. Corroboration (in science) consists of all the evidence that confirms your theory. Every model that is refuted by the observation is left out of the vote, and every associated sensory input that has a location in a reference frame, a model, has a say in the vote. The voting mechanism also explains why we have a singular, non-distorted perception of the world. Your eyes are continuously moving very rapidly in different directions. If each observation were transferred and translated directly in the brain, you would have a very distorted view of the world. What we experience instead is a stable picture of the world: our brain does not literally observe everything but makes a model of it.[8] The brain also continuously makes predictions based upon this model. When you are sitting at your desk, you probably know that there is a door at your back, and this is what your model expects to be there when you turn your head and look.
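To make the voting idea concrete, here is a minimal sketch in Python. To be clear, this is my own illustration and not Hawkins’s actual algorithm: the object names, features and scoring rule are all invented, and cortical voting is certainly far richer than set intersection.

```python
from collections import Counter

def vote(models, observation):
    """Toy 'voting' among candidate models: each model scores how many
    of its features the observation corroborates; a model contradicted
    by the observation drops out of the vote entirely."""
    ballots = Counter()
    for model in models:
        if model["excludes"] & observation:
            continue  # refuted models abstain from the vote
        score = len(model["features"] & observation)
        if score:
            ballots[model["name"]] += score
    # The model with the most corroborating evidence wins the vote.
    return ballots.most_common(1)[0][0] if ballots else None

# Hypothetical models of household objects (all names invented):
models = [
    {"name": "coffee mug", "features": {"handle", "ceramic", "cylinder"},
     "excludes": {"fur"}},
    {"name": "bowl", "features": {"ceramic", "round", "open-top"},
     "excludes": {"handle"}},
]

print(vote(models, observation={"handle", "ceramic"}))  # -> coffee mug
```

Models refuted by the observation abstain; among the rest, the one with the most corroborating features wins, which mirrors the corroboration idea above.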
Model-based prediction also explains how you can recognize something with one sensory organ and make predictions in other sensory modalities. There are memes that show a still from a movie scene featuring a famous actor with a distinctive voice. When you read the subtitles in the meme, you automatically (internally) hear the text in that voice, because your brain predicts what it will sound like.
Another example is the fact that you can navigate your home even when it is dark, since you likely have a model of your home’s interior, complete with accurate distances and locations of the various objects in it. Some time ago my wife moved the towel in the kitchen; it now hangs somewhat to the left. For about twenty attempts I hopelessly kept reaching for the place where it used to hang. My brain unconsciously still figured the towel was hanging somewhere else, although I was perfectly capable of observing its actual place.
Although at first it might seem very inefficient to have a thousand different models that vote on which one best fits the observed reference frame, it actually has a lot of advantages. First of all, it would be very stressful and tiresome if you consciously and continuously had to update one single model of an object whenever you encounter information that corroborates or refutes parts of it. Instead, your brain holds not one model of the object but a thousand. This makes the overall model of the world, which itself consists of thousands of sub-models, very flexible. Second, voting on which model is most applicable is a safer strategy. If you had only one model to choose from and that model were somewhat off, you might not recognize it as applicable and, as a consequence, you would make erroneous predictions. Bear in mind that models and their reference frames consist of actual neurons and cells, and that these are not easily changed or rewired.[9] Updating models is likely about adding new reference frames (experiences) rather than actually changing existing ones.[10]
When, in contrast, you have many (similar yet still different) models to choose from, you have a better chance of incorporating new information and choosing the most applicable model. This becomes important from an evolutionary viewpoint when you have to deal with predators. What if you saw only a small portion of a predator, one that did not fit the single model present in your brain? It is safer to compare your observation of this possible predator with a thousand different models of predators than with just one.
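A toy simulation can quantify this safety margin. Everything here is invented for illustration (the trait pool, the numbers of models); the point is only that a library of many remembered variants recognizes far more partial glimpses than a single canonical model does.

```python
import random

random.seed(1)
TRAITS = [f"trait{i}" for i in range(20)]  # ways a predator can appear

def sighting():
    """One individual predator, showing 5 of the 20 possible traits."""
    return set(random.sample(TRAITS, 5))

one_model = [sighting()]                         # a single remembered encounter
many_models = [sighting() for _ in range(1000)]  # a thousand remembered encounters

def recognized(models, obs, threshold=3):
    """Recognized if any stored model shares >= threshold traits."""
    return any(len(m & obs) >= threshold for m in models)

for label, models in (("one model", one_model), ("1000 models", many_models)):
    hits = sum(recognized(models, sighting()) for _ in range(2000))
    print(f"{label:12s} recognizes {100 * hits / 2000:.0f}% of new sightings")
```

With a single stored encounter, only a small fraction of new partial sightings overlap enough to be recognized; with a thousand stored variants, virtually all of them do.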
Evolution has also ensured that ‘voting’ can happen very fast. When you arrive at work, you don’t go through a list of all your colleagues’ faces before recognizing them. And when you encounter someone who doesn’t belong there, like your mother-in-law, you feel a sudden jolt that something is off. The context in which your brain makes predictions matters.
Another advantage of being capable of instantly voting between a thousand models is that the brain will likely see an event from a thousand different (possible) viewpoints. A single model clearly limits the number of viewpoints. Let’s look at a simple example. Children likely have fewer models of the world than adults, since they have less experience (and thus fewer references) to draw upon and, more to the point, fewer neurons making up reference frames and models. When a four-year-old child observes a 35-year-old woman walking with a small child, they will likely assess that the woman is the mother, since the mother-child relationship is one of their core models of the world. An adult will likely reach this conclusion intuitively at first as well, but then introduce other models known to them: the woman could, for instance, also be a caretaker or a schoolteacher. The adult will look for indications that this woman is not the mother of this child, perhaps by looking at their physiology or behavior, or at the context and environment in which the observation takes place.[11] In other words, the adult has different theories (models) to rely on and thereby knows what to look for to either verify or falsify each theory available to him or her. The brain therefore needs a thousand models of each and every object or concept.
“Intuition is nothing more and nothing less than recognition” – Daniel Kahneman
The voting mechanism explains why we sometimes experience ourselves thinking more slowly: what is actually happening is that our brain has difficulty voting on which model is applicable. The biases Kahneman talks about are instances where we pick the wrong model, as the example above about the woman and child illustrates. So why are our brains sometimes better and faster, or worse and slower, at picking the correct model? There are three possible factors that make voting more difficult and thus seemingly slower.
Familiarity
The first factor has to do with familiarity. The fewer models you have available, the more difficult it becomes to select a corresponding one. The distribution of models that are up for a vote can be visualized as a bell curve. The larger the number of models (n), the more accurate your prediction (the centre of the bell curve) will likely be. This is why experts are better predictors and why they have a better ‘intuition’ than non-experts.
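As a back-of-the-envelope illustration of that bell curve, treat each model as a noisy estimate of some true value and the vote as their consensus. The Gaussian noise and the use of a simple mean here are my assumptions, not Hawkins’s formulation:

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 10.0  # the actual state of the world being predicted

def consensus(n_models):
    """Each model gives a noisy estimate; the 'vote' is their mean."""
    return statistics.mean(random.gauss(TRUE_VALUE, 2.0) for _ in range(n_models))

for n in (3, 30, 300):
    errors = [abs(consensus(n) - TRUE_VALUE) for _ in range(1000)]
    print(f"n = {n:3d} models -> mean prediction error {statistics.mean(errors):.3f}")
```

The error shrinks roughly with the square root of n: the expert, holding vastly more models, sits on a much narrower bell curve than the novice.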
Complexity
The second factor has to do with complexity, meaning that something is composed of many interconnected parts. To illustrate, try to imagine a bench. Picture it in your mind. You likely imagined several different types of benches and had to pick one of them to serve as your imagined bench. These are just superficial models that only depict the major outline of a bench. Note that all the sub-elements of a bench also consist of models of their own; take the fabric or the construction methods. If I asked you to imagine a bench that is 2,000 years old, many of these sub-elements would likely change, because you have a model of what technologies were available 2,000 years ago and how people would have constructed such a bench back then. This second task requires you to think harder, and likely slower, because your brain has to retrieve models of ‘how things were 2,000 years ago’. You have no personal experience and likely little familiarity with benches that are over 2,000 years old. Therefore, you need to construct the model, which takes time. This is also the reason why statistics always require careful consideration: you cannot learn the answers to statistical problems the way you can learn multiplication tables.
Delineation
The third factor has to do with the delineation of the reference frames themselves. Physical objects in three-dimensional space are very discrete, meaning that you can pinpoint an exact location, like a coordinate. But what about less discrete models or reference frames, like ideas or concepts that seem to have a dualistic nature, such as good or bad? How do you pinpoint these models? Where do they begin or end? The sorites paradox, thousands of years old, shows why it is so difficult to delineate such models (‘sorites’ derives from the Greek word soros, meaning ‘heap’). The argument goes as follows, with a formal sketch after the list[12]:
1. A single grain of sand is not a heap;
2. Adding a grain to a single grain does not make it a heap;
3. You can’t make a non-heap into a heap by adding a single grain;
4. Therefore, there are no such things as heaps.
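To make the logical structure explicit, the argument can be written as an induction over the number of grains, with H(n) standing for ‘n grains form a heap’ (my formalization, not Dennett’s notation):

```latex
% Sorites rendered as mathematical induction over the grain count n
\begin{align*}
  &\neg H(1)
    && \text{(base case: one grain is not a heap)}\\
  &\forall n \,\bigl(\neg H(n) \rightarrow \neg H(n+1)\bigr)
    && \text{(step: adding one grain never makes a heap)}\\
  &\therefore\ \forall n \,\neg H(n)
    && \text{(conclusion: no number of grains is a heap)}
\end{align*}
```

Both premises look individually undeniable, yet together they entail the conclusion by induction.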
The conclusion is that heaps do not exist. This is a paradox, since in our minds heaps certainly seem to exist. Do you want to know how to solve it? Read my earlier post: How to solve the sorites paradox.
“Etwas ist nur in seiner Grenze und durch seine Grenze das, was es ist” (Translation: “Something is only within its limit, and by virtue of its limit, that which it is.”) Hegel (1770-1831)[13]
Now back to why it is sometimes more difficult to ‘vote’, or simply to come up with an answer. If I asked you to decide how many grains of sand are needed to make a heap, you would likely need some time to come up with an answer. In other words, the models themselves are too fuzzy to grasp and instantly recollect. Your brain therefore needs more time to process them and is thus slower.
So what?
If we want to think faster without succumbing to the biases explained by Kahneman, we need to take into account the three factors explained above that make it more difficult to pick a model: familiarity, complexity and delineation. You can try to tackle the delineation problem (partially) by using clear and distinct language or military doctrines. As explained in the three posts on ‘sharpening our military command’, we need to use short and clear orders. However, some concepts will likely remain fuzzy, difficult to delineate and therefore less easily accessible when making decisions. Complexity is linked with familiarity, because the more familiar you are with certain problems, the more easily your brain can retrieve models that correspond to reality or the problem at hand. Yet the only true remedy for thinking faster is being familiar with the problems you might encounter. This means: training, training and training.
You don’t become a good chess player merely by learning the rules very thoroughly. It certainly helps to analyse other players’ games, but at some point you need to play yourself in order to become an expert. The expert chess player ‘sees solutions’ on the board, whereas you and I (likely) need to think hard and slow to find them. The same holds for rifle squads that need to train to fight in urban areas or trenches. We all seem to understand these simple examples, but the same is true for military leaders who need to command large formations. Not just the commander but also the staff needs to train and familiarize themselves with as many different problem sets as possible.
This insight explains why training is so immensely important and how our brains process and use these experiences. It also explains why training should be dynamic and adversarial, meaning that you always have to solve novel problems, because expanding your repertoire of models is just as important as being very good at solving one particular set of problems. It explains, too, why it is useful to study military history (better to learn from the mistakes of others) and why tactical decision games are good instruments for training your commanders or staff. What we call ‘experience’ is nothing more than continuously updating and expanding your brain’s repertoire of models.
To conclude: thinking fast or slow is not some default option within our brains, like a system we can actively choose to use or not. Rather, you can influence how fast you are able to think. Thinking faster is something you can train and, most importantly, something you should aspire to.
[1] OODA: Observe-Orient-Decide-Act. A decision-making model created by John Boyd.
[2] The anchoring effect is a cognitive bias in which people rely too heavily on the first piece of information they encounter (the ‘anchor’) when making decisions.
[3] https://www.researchgate.net/publication/269738601_Review_of_Thinking_Fast_and_Slow_by_Daniel_Kahneman
[4] https://fs.blog/daniel-kahneman-the-two-systems/
[5] See bibliography for these books.
[6] Dutch: ‘Zoek een zachte steen op’ (‘Find yourself a soft rock’).
[7] Jeff Hawkins. A Thousand Brains: A New Theory of Intelligence, p. 100.
[8] Jeff Hawkins. A Thousand Brains: A New Theory of Intelligence, p. 108.
[9] Jeff Hawkins. A Thousand Brains: A New Theory of Intelligence.
[10] This is only a hypothesis and not part of the research by Jeff Hawkins. Yet I still think this is a good explanation for why it is so difficult to change existing models. The only thing we can do is overwhelm them, in numbers or intensity, with experiences (reference frames) that counter the initial model.
[11] These are the indicators mentioned in the article ‘How to think and act faster than the enemy’.
[12] Daniel Dennett. Intuition Pumps and Other Tools for Thinking, p. 395.
[13] https://hegel.de/werke_frei/hw108016.htm
Kahneman quote: https://bigthink.com/articles/kahnemans-mind-clarifying-biases/
Bibliography:
Andy Clark. The Experience Machine: How our Minds Predict and Shape Reality (New York: Pantheon Books, 2023)
Daniel C. Dennett. Intuition Pumps and Other Tools for Thinking (London: Penguin Books, 2013)
Lisa Feldman Barrett. How Emotions Are Made: The Secret Life of the Brain (London: Pan Books, 2018)
Jeff Hawkins. A Thousand Brains: A New Theory of Intelligence (New York: Basic Books, 2021)
Daniel Kahneman. Ons feilbare denken: Thinking, Fast and Slow (Amsterdam: Uitgeverij Business Contact, 2011)
Anil Seth. Being You: A New Science of Consciousness (London: Faber & Faber Ltd, 2021)
Images:
Image 1: Created with AI, Microsoft Designer
Image 2: Daniel Kahneman, from: https://scotsimcentre.blogspot.com/2014/03/book-of-month-thinking-fast-and-slow-by.html