You’re sitting in front of a client. You’ve spent the last several weeks researching existing models they can use to gauge their business’s success. Now you’re presenting the three indicators that previous studies have shown to be the best predictors of future success. For example’s sake, let’s say they are: ratio of returning customers, customer satisfaction as indicated by customer surveys, and length of conversations with customer support.
Now, you have two options. The first is to tell the client, “These are the three things you should look at and focus on.” The second is to tell them that success follows the equation 2.1R + 0.7S + 1.3L, where R is the ratio of returning customers, S is satisfaction, and L is conversation length.
Here the problem appears. Most clients would prefer the second option precisely because it is not intuitive: it uses decimal coefficients and is dressed up as a mathematical formula, so it looks “more serious.” Surely these numbers must have something behind them. Presenting this formula will impress the client, make your work appear more significant and complicated, and shroud it in mystique.
However, unless you’ve conducted some original research relevant to this specific client in their specific business, it is more likely that the first option is actually the better choice, from a professional point of view.
This is a very old finding: we know that the best way to build a model is to test and validate it against existing data. But when such validation is not possible - a tight schedule, no budget, or some other limitation - we also know that relying on expert intuition to quantify the importance of indicators produces worse results than simply treating all indicators as equally important.
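The equal-weighting approach is simple enough to sketch in a few lines. The following is a hypothetical illustration (the indicator names and client data are made up for this example, not taken from any real engagement): standardize each indicator so they share a common scale, then score each client as a plain sum, with no intuited coefficients.

```python
from statistics import mean, stdev

def zscores(values):
    """Standardize raw indicator values to mean 0, standard deviation 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def equal_weight_score(indicators):
    """Score each case as the plain sum of its standardized indicators.

    indicators: dict mapping indicator name -> list of raw values,
    one value per case. Assumes every indicator points in the same
    direction (higher = better); flip signs first where it doesn't.
    """
    standardized = [zscores(vals) for vals in indicators.values()]
    n_cases = len(next(iter(indicators.values())))
    return [sum(col[i] for col in standardized) for i in range(n_cases)]

# Hypothetical data: the three indicators from the example, for four cases.
data = {
    "returning_ratio":  [0.42, 0.55, 0.31, 0.60],
    "satisfaction":     [7.1, 8.3, 6.0, 9.0],
    "conversation_len": [12.0, 9.5, 15.0, 8.0],
}

scores = equal_weight_score(data)
best = scores.index(max(scores))
```

Because each standardized column sums to zero, the scores are centered around zero; a case's score simply reflects how far above or below average it sits across all indicators at once, with no weights to justify or defend.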
Yet, when faced with a client’s demands, it is easy to fall into this trap: to intuit a model that appears more reliable and “serious” simply because it looks more complicated.
This practice is known as mathiness: using math to obfuscate, mislead, or lend our work an aura of science. It is a severe honey trap for many people working as consultants, because it emphasizes appearing smart over being precise. It focuses on dazzling clients instead of increasing their understanding. It makes clients dependent on consultants instead of equipping them with tools.
The problem doesn’t stop there. There is a similar issue, which I will call “modeliness,” that is widespread in my field of behavioral thinking and strategic consultancy. Modeliness is when every consultant feels the need to develop their own model, in the form of a new graph, table, map, flowchart, or other design element that provides the New True Way™ to understand reality.
Models are useful tools. We should all remember the phrase (usually attributed to the statistician George Box) that “all models are wrong, but some are useful.” But we also need to remember that the models we present are tools that can help improve people’s understanding of the world we live in; they are not the world itself.
The author George Saunders, when discussing models which can help writers better understand stories, references a Buddhist idea about teaching. He shares that teaching is like “pointing at the moon.” The important thing is the moon, and our finger is only trying to direct our audience towards it.
In this same way, our models are tools meant to guide our clients to look in the right direction; they are not the destination. So we should stop trying to make our clients fall in love with our unique, pretty, over-complicated, “mathy,” and “sciency” models. Instead, we should focus on pointing them in the right direction. That is how we can take them over the moon.