If you have a prediction and it turns out to be wrong, then that’s bad.
But if you have a forecast and it turns out to be wrong, that’s not necessarily bad, and may in fact be good.
Let’s say that you’re the captain of a ship and you see an iceberg one mile out. Based on your direction and speed, you forecast that within a couple of minutes you’ll hit the iceberg. So you slow down and change course, averting impact.
The forecast, in this sense, was wrong, but served the very purpose it was meant to serve: to aid in decision making. Now, imagine if you were rewarded on getting your forecasts right. Changing course after revealing your forecast of imminent impact would have been a bad move.
To satisfy your objective of forecast accuracy while still not sinking your ship, you could also decide to stay on course, but slow down just enough that impact still occurs, only with minimal damage. Doing this, you would have met both the “avoid sinking” objective and your “forecasting” objective.
But that doesn’t really make sense, does it? And yet, isn’t that what we often do in sales forecasting?
There’s this wonderful article I want to share on building prediction models using ensembles. “Ensembles” in this case simply means the combination of two or more prediction models.
I’d personally had great success bringing several (relatively) poorly performing models together into one ensemble model, with prediction accuracy far greater than any of the models individually.
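To make the idea concrete, here’s a minimal sketch of the simplest kind of ensemble: averaging the predictions of several weak models. The three “models” below are hypothetical stand-ins, each a deterministic predictor with its own bias; the point is that errors in opposite directions partially cancel when you combine them.

```python
def model_a(x):
    # tends to overshoot the true value
    return x * 1.2

def model_b(x):
    # tends to undershoot the true value
    return x * 0.85

def model_c(x):
    # adds a constant offset
    return x + 3

def ensemble(x):
    # average the individual predictions
    preds = [model_a(x), model_b(x), model_c(x)]
    return sum(preds) / len(preds)

def mean_abs_error(predict, xs, truth):
    return sum(abs(predict(x) - truth(x)) for x in xs) / len(xs)

truth = lambda x: x  # the "real" relationship we're trying to predict
xs = range(1, 101)

for name, m in [("a", model_a), ("b", model_b),
                ("c", model_c), ("ensemble", ensemble)]:
    print(name, round(mean_abs_error(m, xs, truth), 2))
```

Running this, the ensemble’s mean absolute error comes out well below that of any individual model, which is exactly the effect described above. Real ensembles (bagging, boosting, stacking) are more sophisticated, but the intuition is the same.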
Definitely something to check out if you’re into this sort of thing.
The following passage is taken from the beautiful book Master of the Three Ways by Hung Ying-ming (which libraries might classify as “Eastern Philosophy”):
The net is set for the fish,
But catches the swan in its mesh.
The praying mantis covets its prey,
While the sparrow approaches from the rear.
Within one contrivance hides another;
Beyond one change, another is born.
Are wisdom and skill enough to put your hopes on?
Just a little reminder for my future self on the uncertainty of life (r-squared never is 100%).
Update: For the uninitiated, my comment on “r-squared” above was just a little statistical quip. R-squared is a number between 0 and 1 that represents the proportion of variability in the data that can be explained by a linear model. Whatever falls outside of that, i.e. 1 minus r-squared, is unexplained variation: uncertainty.
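For the curious, here’s a minimal sketch of how r-squared falls out of a simple least-squares fit. The data points are made up for illustration; r-squared is one minus the ratio of residual variation to total variation.

```python
# toy data for illustration
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]

# ordinary least-squares fit of y = slope * x + intercept
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# r-squared: 1 - (residual sum of squares / total sum of squares)
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot

print(round(r_squared, 3))  # 0.6 for this toy data
```

An r-squared of 0.6 here means the fitted line explains 60% of the variability in y; the remaining 40% is the uncertainty the poem is nudging me about.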
Caught this magnificent optical illusion on kottke.org today. I’d say it’s definitely worth a minute or two of your time.
Was in my “data” frame of mind when I watched this, and couldn’t help thinking that this is exactly how data works: control the content, control the angle (i.e. perception), and you can make a square block look like a cylinder.
They took our data, ran it through their software, and they got the answers that eluded us for so long.
I was told they were a big consulting company, which meant they probably had great, prohibitively expensive software that could do the job. That’s why.
But I don’t buy that argument.
Great software needn’t be expensive.
I’ve lived and breathed great open-source, free technologies growing up. Linux; Apache; PHP; MySQL; WordPress; Python; R.
Are any of these free technologies inferior to their paid counterparts? In development (including data science) work, I don’t think so.
So why were they “successful”? Why could they come up with an answer we couldn’t?
My guess: they were a consulting company with less vested interest.
They came up with an answer. But would it have been better than the one we would have come up with if we were in their shoes? I don’t know.
As a consultant I’d have been much more liberal with my analyses. No matter how badly I mess up, the worst that would happen would be that my company would lose a contract. And chances are good I could push the blame to the data that was provided, or having been provided the wrong context, or information that was withheld.
When you’re part of the company, you have far more vested interest. Not just in your job, but in your network, both social and professional. Consequences extend far beyond what they would if you were an external consultant working on “just another project”. I’d be far more meticulous in ensuring everything was covered and the analyses properly done.