Jim Manzi, in his book Uncontrolled, makes a very good point about analytical models and their shortcomings, in particular the need for experimentation (i.e. controlled interventions) to figure out what really happens in the real world (emphasis mine):
Cost changes often could be predicted reliably through engineering studies. But when it came to predicting how people would respond to interventions, I discovered that I could almost always use historical data, surveys, and other information to build competing analyses that would “prove” that almost any realistically proposed business program would succeed or fail, just by making tiny adjustments to analytical assumptions. And the more sophisticated the analysis, the more unavoidable this kind of subterranean model-tuning became. Even after executing some business program, debates about how much it really changed profit often would continue, because so many other things changed at the same time. Only controlled experiments could cut through the complexity and create a reliable foundation for predicting consumer response to proposed interventions.
I’ve been involved in my fair share of modelling, and I can’t help but agree. It’s all too easy to tweak assumptions to favour the preferred outcome of a business case, telling people what they want to hear.
Update (12 October 2013): I found a very insightful essay by Joshua Epstein on why we model at all. It’s worth the read, and makes a good case for modelling alongside experimentation.