Forecasts are not predictions

If you have a prediction and it turns out to be wrong, then that’s bad.

But if you have a forecast and it turns out to be wrong, that’s not necessarily bad, and may in fact be good.

Let’s say that you’re the captain of a ship and you see an iceberg one mile out. Based on your direction and speed, you forecast that within a couple of minutes you’ll hit the iceberg. So you slow down and change course, averting impact.

The forecast, in this sense, was wrong, but it served the very purpose it was meant to serve: to aid decision making. Now, imagine you were rewarded for getting your forecasts right. Changing course after announcing your forecast of imminent impact would have been a bad move.

To satisfy your forecast-accuracy objective while still not sinking your ship, you could stay on course but slow down enough that the impact still happened, just with minimal damage. That way, you'd have met both the "avoid sinking" objective and the "accurate forecasting" objective.

But that doesn’t really make sense, does it? And yet, isn’t that what we often do in sales forecasting?


Improving Forecasting Through Ensembles

There’s this wonderful article I want to share on building prediction models using ensembles. “Ensembles” here simply means combining two or more prediction models into one.

I’ve personally had great success combining several (relatively) poorly performing models into one ensemble, with prediction accuracy far better than any of the individual models on their own.
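To make that concrete, here’s a minimal sketch of the idea in Python. This is my own toy illustration, not the article’s code: the synthetic dataset and the three model choices are assumptions for the example. Three mediocre regressors are trained on the same data, and the ensemble is nothing fancier than the average of their predictions.

```python
# A minimal sketch of a simple averaging ensemble (an illustration,
# not the article's method). Data and models are stand-ins.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic data standing in for, say, historical sales figures.
X, y = make_regression(n_samples=500, n_features=10, noise=25.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three individually mediocre models.
models = [
    LinearRegression(),
    DecisionTreeRegressor(max_depth=3, random_state=0),
    KNeighborsRegressor(n_neighbors=5),
]

predictions = []
for model in models:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    predictions.append(pred)
    print(f"{type(model).__name__}: MAE = {mean_absolute_error(y_test, pred):.2f}")

# The ensemble: just the average of the individual predictions.
ensemble_pred = np.mean(predictions, axis=0)
print(f"Ensemble: MAE = {mean_absolute_error(y_test, ensemble_pred):.2f}")
```

The intuition for why this tends to work: models of different types make different kinds of errors, so when you average them, the errors partially cancel while the signal reinforces.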

Definitely something to check out if you’re into this sort of thing.