“Our brains are the measure of all things”
How our metaphysical understanding is biologically limited by our brains.
We humans have biological limitations that influence how we acquire, process and create knowledge. The physicist David Deutsch aptly summarized why: “Most of what happens is too fast or too slow, too big or too small, or too remote, or hidden by opaque barriers, or operates on principles too different from anything that influenced our evolution.”[1] It is tempting to think that if only we had different glasses, we could see the world for what it really is; our eyes, for example, cannot see into the infrared part of the light spectrum. But Deutsch’s point goes well beyond faulty senses such as our eyes: our life spans are simply too short. Perhaps even the entire history of mankind is too short to notice certain changes, let alone adapt to them on an evolutionary timescale. We have to start with the premise that we, and specifically our brains, evolved in a very parochial context. It is common knowledge that our senses are limited and that they influence the information we acquire before it enters our brain. What is often overlooked is the way our brain processes and creates knowledge, and how this knowledge in turn influences our sensory input.
Until recently there haven’t been many theories that explain how knowledge is processed and created in the brain, and honestly, there is still much we don’t know. Recently, some very promising and strikingly similar theories have been proposed.[2] One of them is the brilliant theory proposed by Jeff Hawkins in his book ‘A Thousand Brains: A New Theory of Intelligence’. His main focus lies in understanding the brain in order to understand intelligence and eventually create intelligent machines.[3]
This post gives a short summary of the primary ideas Hawkins describes in his book. However, instead of using an understanding of the brain to create intelligent machines or artificial intelligence, I seek to understand what this theory of intelligence tells us about how we view and interpret the world. I will argue that this theory has a profound impact on metaphysics. If our brains are physically limited to observing or imagining only a limited range of things, how then can we decide what metaphysics is about? Is metaphysics then not equivalent to whatever our brains are capable of observing, imagining or conceiving? Read this post with this question in the back of your mind.
How our brains create models (of the world).
Let’s start with how our senses pick up information. When we see light, feel a touch or hear a sound, none of these perceptions actually enters the brain. The only thing our nerves send to the brain are spikes, yet we do not perceive spikes; we have mental experiences of light, touch and sound. This means that everything we perceive must be fabricated in the brain: the light, touch and sound we experience exist only in our brain’s model of the world. Of course our sensory organs convert properties such as light into specific nerve spikes, which are then converted into our perception of light. Yet our sensory organs can only sense a subset of the real world. Our eyes, for example, are only sensitive to a subset of all the possible frequencies light can have.[4] Hawkins states that our situation is similar to the brain-in-a-vat hypothesis, in which the brain sits in a vat and all its inputs are controlled by a computer simulating a world. The difference with reality is that our brains are placed in a skull and not a vat; the inputs we receive in the form of nerve spikes could, for all we know, just as easily have come from a computer. In other words, our brains do not perceive the world directly.[5] This argument is not novel: philosophers were skeptical about the reliability of our perceptions long ago. It has been one of the main arguments by rationalists for why empiricism was not to be trusted. It is also why skeptics such as Descartes claimed they only knew one thing for certain: cogito ergo sum, I think, therefore I am. I think it is pointless to go as far as Descartes. Our senses may be faulty, but they are the only possible way we can learn something about the world we inhabit and the people we share our lives with.
The novel idea that Hawkins presents in his book is that we should not only doubt our senses, but also the manner in which our brain processes and creates information. First of all, Hawkins explains that every part of the neocortex generates movement and that thinking is therefore a form of movement. It used to be believed that sensory input entered the neocortex in a hierarchical manner via the ‘sensory regions’, passed through other regions and then descended to the ‘motor regions’. This description is misleading, and Hawkins provides evidence that neurons everywhere in the neocortex perform a sensory-motor task.[6] This is logical from an evolutionary point of view, since we survive because we are able to move around in order to find food, reproduce, or escape from becoming food ourselves. Everything we (used to) do involves movement. Only recently have we begun to use, more frequently, abstract concepts that do not necessarily involve movement. Hawkins also argues that almost every organism on this planet that wants to inquire about its environment needs to move in order to acquire new information or manipulate that environment. That also means that most of the information we (used to) perceive is linked to a location or direction. You need to know where you can find food or shelter, for example, but also how far away a predator or prey is. This explanation also makes sense when you look at the role of movement in our language when describing abstract concepts. It is no coincidence that we find a problem ‘hard to grasp’ or ‘intangible’. We talk about a ‘career path’ because we envision ourselves moving in some direction. It is possibly also the reason why we often prefer to visualize an explanation. In the armed forces, for example, small 3D models (e.g. a maquette) are still often used to simulate a battle or tactical maneuver. Or take the emergency procedure video shown on commercial aircraft, which depicts people doing what you should do in case of an emergency. Where is your life jacket stored and where are the exits? Your brain resonates really well with these models of people and objects moving around.
The neocortex is the evolutionarily newest part of the brain and surrounds your ‘old brain’. The old brain is primarily responsible for instinctive behaviours such as movement, emotions and reproduction. Cognitively more complex behaviours such as speech and rational thought are primarily located in the neocortex. What Hawkins and his team propose is that although the neocortex might be evolutionarily newer than the old brain, it is likely based upon the same neural mechanisms. A crucial scientific discovery concerning the old brain was the existence of so-called ‘place’ and ‘grid’ cells, neurons that are used to navigate and facilitate movement. More recent experiments suggest the existence of similar but not identical cells in the neocortex, likely on the order of 150,000 copies. Instead of just tracking the location of your body, as the old brain does, the neocortex likely has thousands of these circuits operating simultaneously and is therefore able to track thousands of locations at once. Hawkins explains the use of these cells by the analogy of a person navigating with a map. The short version goes like this. Imagine that you are somewhere in a town, but you don’t know where. You see a fountain and start recollecting which places in this particular town have a similar fountain. You then remember where you have seen this fountain and identify the town and grid cell, let’s say grid D3 (columns indicated by letters and rows by numbers). If you are not even sure what town you are in, you might start moving and see, or predict, what you will see when moving east to grid D4. If in town X you predicted a school where there is none, you can eliminate the possibility that you are in town X and go on to investigate the likelihood of being in town Y. If you have ever been lost in real life, this is probably literally what you would do, at least it is what I have done quite often: you start moving until you recognize where you are. This is also what you do when you have to reach into a black box with your hand and feel the object inside. Touching the object with just one finger won’t tell you much, but moving your finger around or using multiple fingers will tell you what you are feeling. The relative position of each finger and its movement need a reference frame to make sense of the data, since the positions of the fingers and their movements are relative to the object. In the map example it might seem as if we compare one map at a time. In the neocortex, however, the neurons are able to search through thousands of maps simultaneously. This is why you rarely experience going through a list of possible options in your head. Your brain just picks the correct one when you see it, although it can of course sometimes make mistakes. When you go to work each day, you don’t have to compare the faces of your colleagues against a list of possible faces and corresponding names; you instantly recognize them. However, if you were to encounter someone in your workplace who doesn’t belong there, your mother-in-law for example, you would double-check whether what you are seeing is correct and feel a jolt as if something is wrong. This is because it does not fit your reference frame.
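To make the map analogy concrete, here is a minimal sketch in Python of the elimination process described above. The town names, landmarks and grid cells are invented for illustration; the point is only that predicting what you should see at a location, and discarding every map that gets the prediction wrong, quickly narrows down where you are.

```python
# Toy maps: each town maps a grid cell to the landmark found there.
# All names and layouts below are invented for illustration.
TOWNS = {
    "town_x": {"D3": "fountain", "D4": "school", "D5": "bakery"},
    "town_y": {"D3": "fountain", "D4": "park", "D5": "bakery"},
    "town_z": {"B1": "church", "B2": "fountain"},
}

def consistent_towns(observations):
    """Keep only the towns whose map agrees with every (cell, landmark)
    pair observed so far; every disagreement eliminates a hypothesis."""
    return [
        town for town, grid in TOWNS.items()
        if all(grid.get(cell) == landmark for cell, landmark in observations)
    ]

# Seeing a fountain at D3 still leaves two candidate towns...
print(consistent_towns([("D3", "fountain")]))                    # ['town_x', 'town_y']
# ...but moving east and seeing a park at D4 rules out town_x.
print(consistent_towns([("D3", "fountain"), ("D4", "park")]))    # ['town_y']
```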
Therefore, Hawkins concludes that all knowledge in the brain is stored in reference frames relative to locations, be they actual locations, or concepts that have an abstract location.
Reference frames are used to model everything we know. The brain does this by associating sensory input with locations in reference frames. First of all, a reference frame allows the brain to learn the structure of something. This is important because everything in the real world is composed of a set of features and surfaces positioned relative to each other. Hawkins gives the example of a face: a face is only a face because a nose, eyes and a mouth are arranged in relative positions. Second, once our brain has learnt an object by means of a reference frame, we can manipulate it in our brain, imagining for instance what it would look like from another point of view or angle. Our brain does not compare exact pictures of faces with other pictures in order to verify whether it is again looking at a face. It has learnt a model of what a face consists of (e.g. nose, eyes and mouth in certain positions relative to each other), and it does so by using a reference frame. Faces are a good example because recognizing them is likely evolutionarily very important to us. That is possibly why we so often recognize faces everywhere, from clouds to pizzas. Third, we need a reference frame to plan and create movements. Hawkins explains it like this: “Say my finger is touching the front of my phone and I want to press the power button at the top. If my brain knows the current location of my finger and the location of the power button, then it can calculate the movement needed to get my finger from its current location to the desired new one. A reference frame relative to the phone is needed to make this calculation.”[7] Reference frames in this sense are already being used in many fields such as robotics; why would our brain use a completely different approach?[8] Reference frames are not only relative but for a large part also subjective. When two different people see a fire truck, both of them will likely perceive these visual inputs differently. The redness of the fire truck is largely a fabrication of the brain: it is a property of the brain’s model of surfaces and not only a property of light. When two people perceive the same input differently, it means their models are different and therefore subjective.[9]
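A small sketch of that third point, the movement calculation in the phone example: once both the fingertip and the power button are expressed as locations in the phone’s own reference frame, the required movement is simply the difference between the two. The coordinates below are invented purely for illustration.

```python
import numpy as np

# Locations expressed in the phone's reference frame (x, y, z in centimetres);
# the numbers are assumed for the sake of the example.
finger_on_front = np.array([3.0, 7.0, 0.0])    # fingertip on the screen
power_button = np.array([3.0, 15.0, 0.5])      # button on the top edge

# The movement needed is the displacement from the fingertip to the button,
# which only makes sense because both points share the same reference frame.
movement = power_button - finger_on_front
print(movement)  # [0.  8.  0.5]
```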
We can visualize these reference frames as points, as in points of reference, on a grid, with lines that connect these points, as in figure 1. The lines not only connect the points but also indicate their distances relative to each other.
The points of reference indicated in figure 1 represent the signals that our brain receives. Our brain immediately recognizes that the distances between the four points seem to be equal and that the whole resembles a square. Since we live in a three-dimensional world, your brain will likely deduce that this square is not entirely flat but possibly a cube, as in figure 2. This is why our brains, or our ‘sensory organs’ as we often mistakenly say in everyday speech, only need a few points of reference, relative to a reference frame, to create a model.
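As a toy illustration of how little information such a model needs, the sketch below takes four made-up points of reference and looks only at their pairwise distances: four equal short distances and two equal long ones are already enough to suggest a square.

```python
from itertools import combinations
import math

# Four hypothetical points of reference in a shared reference frame.
points = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

# All pairwise distances between the points, sorted from short to long.
distances = sorted(math.dist(a, b) for a, b in combinations(points, 2))
sides, diagonals = distances[:4], distances[4:]

print(sides)      # [1.0, 1.0, 1.0, 1.0]  -> four equal sides
print(diagonals)  # two equal diagonals, so the points suggest a square
```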

Reference frames are used to model everything we know and are not limited to physical objects. We can also have a reference frame for abstract concepts such as democracy, freedom or righteousness. The main difference is that the dimensions are different and not limited to location or movement. Timelines and geography, for example, are ways to think differently about something as abstract as history. Moreover, I will argue that none of our models of the world, abstract concepts included, can exist without reference frames.
Yet how does the brain create reference frames? Hawkins and his team have found evidence that neurons are clustered into what are called cortical columns. I will not delve into the exact workings of cortical columns, but what is important to know is that every column in the neocortex has cells that create reference frames. Simply put, they have cells that tell you what is located where, which is the function of a reference frame.[10] Every column holds multiple models of complete objects and can predict what should be observed or felt when seeing or touching an object at each and every location on that object. It is similar to the earlier example in which you look at a map of a town and want to predict what you will see if you start walking in a particular direction.
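A toy sketch of that ‘what is located where’ idea: a single column can be pictured as a table that, per known object, pairs locations with the features expected there, which is what lets it predict what a sensor should encounter next. The objects, locations and features below are invented for illustration, not taken from Hawkins.

```python
# One column's models: for each known object, which feature is expected
# at which location on the object (all entries invented for illustration).
COLUMN_MODELS = {
    "coffee_mug": {"rim": "smooth edge", "side": "curved surface", "handle": "loop"},
    "stapler": {"top": "hinged lever", "side": "flat surface", "front": "metal slot"},
}

def predict_feature(obj: str, location: str) -> str:
    """What this column expects to sense at a given location on the object."""
    return COLUMN_MODELS.get(obj, {}).get(location, "unknown")

print(predict_feature("coffee_mug", "handle"))  # -> 'loop'
print(predict_feature("stapler", "rim"))        # -> 'unknown'
```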
One would intuitively think that there is only one model for every object in the world. The Thousand Brains theory, however, flips this around and says that there are a thousand models for every object. There is no Platonic ideal object of which only one exists, in the mind or anywhere else for that matter. Instead there are a thousand models, even for a single simple object. Take for example a bench. Something is a bench when more than one person can sit on it. It can have armrests or a backrest. It can be made of wood, stone, leather or cotton. If you try to imagine a bench in your mind, how many images pop up? Try it. You likely imagined several different types of benches and had to pick one of them to serve as your imagined bench. These are just superficial models that only depict the rough outline of a bench. Note that all the sub-elements of a bench also consist of models of their own. Take the fabric, or the construction methods, for that matter. If I were to ask you to imagine a bench that is already 2,000 years old, many of these sub-elements would likely change, because you have a model of what technologies were available 2,000 years ago and how people would have constructed such a bench back then. What this simple example also shows is that models are relative to each other. Moreover, the possible benches you can imagine depend on your previous experiences with benches; in other words, on what knowledge about benches is stored in your personal reference frames. This makes the model of a bench not only relative but also subjective.
Choosing the correct model.
But how does the brain know to pick one model and not another? How does it choose? Hawkins and his team propose that the brain ‘votes’. When it compares different models, it votes on which reference frames are closest to what is being observed. How this works exactly is still part of ongoing research. Reason suggests that the voting resembles something like what we call corroboration in science, the accumulation of evidence that confirms a theory. Every model refuted by an observation is left out of the vote, and every sensory input associated with a location in a reference frame, that is, with a model, gets a say in the vote. The voting mechanism also explains why we have a singular, non-distorted perception of the world. Your eyes are continuously moving very rapidly in different directions. If each observation were transferred and translated directly in the brain, you would have a very distorted view of the world. What we experience instead is a stable picture of the world, because our brain does not literally observe everything but builds a model of it. The brain also continuously makes predictions based upon this model. When you are sitting at your desk, you probably know that there is a door behind you, and that is what your model expects to be there when you turn your head and look in that direction. This would also explain how you can recognize something with one sensory organ and make predictions in other sensory modalities. There are these memes where you see a picture of a famous actor with a distinctive voice, and when you read the caption, your head automatically reads the text in that voice because your brain predicts what it will sound like. Another example is the fact that you can navigate your home even when it is dark. You likely have a model of your home’s interior, complete with fairly accurate distances and locations of the various objects in it. Some time ago my wife moved the kitchen towel some 30 centimeters to the left. For about twenty times in a row I hopelessly kept reaching for the towel in the place where it used to hang. My brain unconsciously still assumed the towel was hanging in its old spot, even though I was perfectly capable of observing where it actually hung.
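Returning to the voting idea at the start of this paragraph, here is a self-contained toy sketch of one way such a vote could work: each column keeps only the object models that are consistent with the feature it senses at its location, and the surviving models then vote on what the object is. This is an illustrative guess at the mechanism, not Hawkins’ actual algorithm, and all objects and features are invented.

```python
from collections import Counter

# Per object, which feature is expected at which location
# (all entries invented for illustration).
MODELS = {
    "coffee_mug": {"rim": "smooth edge", "side": "curved surface", "handle": "loop"},
    "stapler": {"top": "hinged lever", "side": "flat surface", "front": "metal slot"},
    "soup_bowl": {"rim": "smooth edge", "side": "curved surface"},
}

# What three different columns sense, each at its own location on the object.
column_inputs = [("handle", "loop"), ("rim", "smooth edge"), ("side", "curved surface")]

votes = Counter()
for location, feature in column_inputs:
    # Every model that is not refuted by this column's observation gets a vote.
    for obj, model in MODELS.items():
        if model.get(location) == feature:
            votes[obj] += 1

print(votes)                 # Counter({'coffee_mug': 3, 'soup_bowl': 2})
print(votes.most_common(1))  # [('coffee_mug', 3)] -> the columns settle on the mug
```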
Although at first it might seem very inefficient to have a thousand different models and pick, by voting, the one that best fits the observations, this arrangement actually has a lot of advantages. First of all, it would be very stressful and tiresome if you consciously and continuously had to update one single model of an object whenever you encounter information that enhances or refutes parts of your earlier model. Instead, your brain does not have one single model of this object but a thousand. This makes the overall model of the world, which itself consists of thousands of sub-models, very flexible. Second, voting on which model is most applicable is a safer strategy. If you had only one model to choose from and it was somewhat off, you would have nowhere to go but into error, and you would definitely need to update your model. Remember that a model consists of actual neurons and that these are not easily changed or rewired. When, in contrast, you have many (similar yet still different) models to choose from, you have a better chance of incorporating new information and choosing the most applicable one. This becomes important when, for example, you have to deal with predators. What if you saw just a small portion of the predator, the tail for example, and it did not fit the only model present in your brain? It is safer to compare the information you acquired about this predator with a thousand different models of predators than with just one.
Another way of looking at this theory is that it provides a loop for creating new models. What Jeff Hawkins does not elaborate on, but what I think is likely the case, is that models themselves influence how we acquire new information, new points of reference. Or, as the philosopher Karl Popper put it in relation to science, ‘theory always comes before observation’. This is especially the case when our models predict or complete a picture (or any other sensory input) that is incomplete. Your model of the world (theory) likely decides for a large part what to look for and thus what you eventually observe. If you observe an event and your brain is capable of immediately voting between a thousand models, it will likely see the event from a thousand different possible viewpoints. A single model, or only a few, clearly limits the number of different viewpoints. Let’s look at a simple example. Children likely have fewer models of the world than adults, since they have less experience (and thus fewer references) to draw upon, and, more to the point, fewer neurons that make up reference frames and models. When a four-year-old child observes a 35-year-old woman walking with a small child, the child will likely assess that the woman is the mother, since the mother-child relationship is one of its core models of the world. Grown-ups will likely also reach this conclusion intuitively at first, but then introduce other models known to them. The woman could, for example, also be a caretaker or a schoolteacher. The adult will look for indications that this woman is not the mother of this child, perhaps by looking at their physiology or behavior. Or perhaps the adult will look at the context and environment in which this observation takes place and then conclude that this is more likely the teacher and not the mother. In other words, the adult has different theories (models) to rely on and thereby knows what to look for to either verify or falsify each theory available to him or her. To conclude, the brain needs a thousand models of each and every object or concept. Hence the title of this amazing book, ‘A Thousand Brains: A New Theory of Intelligence’.
What can we know?
What the Thousand Brains theory shows is that our models are based upon reference frames. Reference frames are always subjective, since they depend on every reference someone has ever observed and remembered at a certain place relative to other references. They can never be identical for two individuals, but since they rely on observations about the world, there is likely a large overlap. Still, your reference frame is not mine. There is no such thing as the correct reference frame, and two individuals will arrange their observations or deductions differently. You can test this with someone close to you by doing a small experiment. Take (again) a bench as an example. Both of you should independently try to imagine a bench and draw it on a piece of paper. After you have done this, show your version of the bench to the other person and explain why you chose exactly this type of bench and on what memories you relied when imagining it. Perhaps you share some memories, which would make an overlap even more likely. Nonetheless, the two of you will most likely have drawn different benches. The main reason is that your models and the reference frames they rely on are different, but not so different as to prevent effective communication about ‘benches’. Even simple objects that exist independently of human beings only exist in our minds as subjective models.
Now imagine something more difficult, like the concept of fairness. This will be much more difficult to visualize, and thus imagine, than a material object like a bench. If you both had to explain independently what counts as fair, you would likely arrive at different conclusions. The word independently is important here, because if you started discussing it, you might reach a consensus on what constitutes fairness. This effect, the consensus, is actually wonderful. What possibly happens is that being persuaded by others influences the voting between different models in your brain, or is responsible for the creation of new points of reference. Your reference frame can (luckily) borrow from the experiences of others. Nonetheless, even with consensus our models will always be subjective. The number of people who agree on something does not make it objective. Reference frames are also always relative, since they consist of something (what) anchored to some location (where), which means this something is relative to some place or something else. When we talk about concepts such as fairness, we easily invoke the argument that whether something is fair is relative to the context or situation. Is it fair to share your income (money) with someone who has less income than you do? That depends on many factors, for example how dire the situation of the person who needs help is, or how large your income is relative to theirs. Even our knowledge about physical objects can sometimes be considered relative. Someone living near the equator might, for example, infer that a tree without any leaves on its branches is dead, because he or she might never have encountered trees that shed their leaves in autumn. Their reference frames, and thus their models of trees, are relative to their environment. Earlier I quoted David Deutsch on the very parochial nature of our knowledge in a much broader sense, and from that perspective all of our knowledge seems relative.
Besides being subjective and relative, our models are often anthropocentric. The term anthropocentric can be explained in different ways. It can be interpreted as regarding the human being as the central fact, or perhaps even the aim and end, of the universe. The way I interpret and use the term is the following: anthropocentric means that we view and interpret everything by means of our human brains, and everything that comes with them, such as experiences and values. A very nice example of this is attributed to the ancient philosopher Xenophanes: “if horses could draw, they would draw their gods as horses”. His point being: so do humans. We human beings impose our human models of the world, with all the limitations that come with them, on the world. The earlier example of a bench is only a bench to human beings. My dog will not identify the piece of wood as a bench, perhaps only as some elevated part of the floor. That many human beings count something as a bench does not make this model objective knowledge. It is a very anthropocentric model, which also differs for each individual human being, although we easily agree on what is a bench and what is not. The Thousand Brains theory even suggests that our brain models everything in the world out there relative to our physical body, in order to facilitate movements and make predictions about the world around us. This makes sense from an evolutionary point of view, and it likely applies to many models. However, we do know that we are able to construct models that are not necessarily relative to ourselves and our bodies, such as the models of mathematics or language. And yes, mathematics and language are also anthropocentric and do not consist of objective facts.
But if everything is subjective, relative and anthropocentric, how can we know what is real and discover the nature of the world out there? When can something be called objective if everything we observe and process is subjective and relative? Where does this leave science?
Bibliography:
- David Deutsch. The Beginning of Infinity: Explanations that Transform the World (London: Penguin Books, 2012)
- Jeff Hawkins. A Thousand Brains: A New Theory of Intelligence (New York: Basic Books, 2021)
[1] Deutsch, The Beginning of Infinity, p. 37. Deutsch makes this remark while explaining that we can hardly ever make observations unaided, but it is also very applicable here, albeit in a different context.
[2] Jeff Hawkins (A Thousand Brains), Anil Seth (Being You) and Andy Clark (The Experience Machine).
[3] As Jeff Hawkins himself explains, there aren’t many theories in neuroscience. There is a lot of measuring going on, but little theory building and few holistic explanations of how and why the brain works the way it does.
[4] Hawkins, A Thousand Brains, p. 174.
[5] Hawkins, A Thousand Brains, p. 175.
[6] Hawkins, A Thousand Brains, p. 19.
[7] Quoted verbatim from Hawkins, A Thousand Brains, p. 50.
[8] Hawkins, A Thousand Brains, p. 50.
[9] Hawkins, A Thousand Brains, p. 139.
[10] Hawkins, A Thousand Brains, p. 66.