
The Power of Many-Model Thinking

Haggai Elkayam Shalem, PhD
September 29, 2021

Have you heard of the “Tetris effect”?


Younger readers may not be aware, but a few decades ago almost everyone had played the video game “Tetris,” and many experienced a strange after-effect: they started seeing patterns and shapes in the real world, wondering whether this object could fit into that space.

Why does this happen? Because we are used to thinking with models.


Models are shortcuts. They are ways to organize the vast amount of information that the world throws at us every second. Our mind can’t possibly deal with all of this information at once, so fortunately for us, it has strategies for summing it up. These strategies require making assumptions: what do we expect to see in our environment, what usually happens when we take a certain action, and so on.


These strategies work well, but they are also a source of many mistakes (many of them are known as cognitive biases: mental shortcuts that serve us well most of the time but occasionally cause real problems). Magicians capitalize on these strategies to misdirect and surprise us, using what we expect to happen to hide what actually happens. Companies rely on them to make sure we see the value in their products while glossing over their shortcomings.


When we play Tetris, we engage with a specific model of the world - a model of combining, rotating, and fitting different shapes into one another. After we play Tetris, the game is gone, but the model remains: we are primed to look at everything in a specific way, focusing on properties of size, shape, and orientation over other properties (such as color or purpose). We have internalized a new model and, accordingly, have changed what type of information our mind notices and what gets ignored.


But while the Tetris effect sounds like an oddity, the truth is that we are always projecting our internal models onto the external world. I am a political psychologist, which means that I see power and status dynamics in every conversation. My linguist friend can’t help but analyze turn-taking in everyday conversations. And my software engineer friends tend to think about everything in terms of algorithms. We do not see things as they are - it’s impossible. We see things only through the lenses of our internal models.


It’s easy to get lost in the perspective of our One Big Model; the one our life revolves around. But it’s also misleading. It causes us to miss important details and arrive at the wrong conclusions when trying to solve a problem or come up with a creative idea. It’s our own private version of tunnel vision. Luckily, there is a way out: Many-Model Thinking.


Many-Model Thinking is an approach that encourages us to expand our horizons with new ways of looking at the world. It requires us to study things outside our comfort zone and to familiarize ourselves with new models. With these new models, we expand our perception: we make it possible for ourselves to notice new things around us and in the problems we are trying to solve.


The philosopher Isaiah Berlin wrote a famous essay called “The Hedgehog and the Fox,” based on a fragment attributed to the Greek poet Archilochus: “The fox knows many things, but the hedgehog knows one big thing.” Extending this idea to writers and thinkers, Berlin divides them into two categories: hedgehogs and foxes.


The hedgehogs have One Big Theory through which they perceive everything. These are people like Karl Marx, whose theories are used to explain anything and everything in terms of class struggle. These people are the ones you want to invite to give a lecture or take part in a panel discussion because they are capable of telling very compelling stories and explaining everything through their Big Idea.


On the other side, we have the foxes. These are people who employ many different perspectives to understand the world. These are the people about whom US President Truman famously (...allegedly) said, “Give me a one-handed economist! All my economists say ‘on the one hand… but on the other hand…’”. These are people who resist the idea of One True Answer to every single question. They carry a basket of small ideas rather than one big idea.


In his 2015 book “Superforecasting,” political scientist Philip Tetlock argued that while hedgehogs offer a more compelling vision of the world (unsurprisingly, since they have spent their entire lives polishing that vision), foxes actually fare much better at making predictions. They avoid being overly certain that one specific theory will prove correct and instead combine many different data sources and theories. As a result, they are better able to predict future events.


Why does this work? 


The best explanation comes from the famous story about “the wisdom of crowds.” In 1906, the scientist Francis Galton visited a country fair where a large ox was on display, and visitors were invited to guess its weight. With nearly 800 entries, the result was astounding: the average guess was 1,197 pounds, just one pound off the ox’s actual weight of 1,198 pounds.


How could that be?


The reason is that every participant had a different model of the world. One, an accountant, was familiar with the costs and profits of butchers and estimated the weight based on the profit a butcher expects to make per ox. Another, a butcher, had a practiced eye for livestock. A third guessed based purely on the ox’s size, and a fourth guessed based on nothing at all.


Every individual guess was a bit off: some overestimated the ox’s weight, others underestimated it. But the magic lies in these mistakes, because they were independent. Each participant’s error pointed in a mostly random direction, so the mistakes largely canceled each other out, and the average guess landed almost exactly on the true weight.


This story exemplifies the reasoning and power behind most common statistical analyses, but it also demonstrates the power of Many-Model Thinking. The aphorism attributed to statistician George Box, “All models are wrong, but some are useful,” is key here. All models are wrong in some sense. But as long as their errors are independent, with each model erring in a different direction, combining many models helps us approach the truth. 
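
To make this intuition concrete, here is a minimal Python sketch of the Galton story (the crowd size and the 100-pound spread of individual errors are illustrative assumptions, not Galton’s actual data): every guess is the true weight plus independent random noise, and averaging the guesses makes most of that noise cancel out.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_WEIGHT = 1198   # the ox's actual weight, in pounds
N_GUESSERS = 800     # roughly the size of Galton's crowd (assumed)

# Each guesser applies a different "model", so each guess is the true weight
# plus an independent error: some overshoot, some undershoot.
guesses = [TRUE_WEIGHT + random.gauss(0, 100) for _ in range(N_GUESSERS)]

crowd_average = sum(guesses) / len(guesses)
avg_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"True weight:                     {TRUE_WEIGHT} lbs")
print(f"Crowd average:                   {crowd_average:.0f} lbs")
print(f"Average error of a single guess: {avg_individual_error:.0f} lbs")
print(f"Error of the crowd average:      {abs(crowd_average - TRUE_WEIGHT):.0f} lbs")
```

If the errors really are independent, the error of the average shrinks roughly with the square root of the number of guesses, which is why a crowd of hundreds of imperfect “models” can land within a pound or two of the truth.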
