I’m halfway through reading one of the best books I’ve read in a long while: Misbehaving, by Richard H. Thaler.
One of the ideas that most stuck with me was that of “supposedly irrelevant factors”: things that, in theory, should not affect or influence the thinking of a rational person but do.
Thaler has also written about this in an article for the New York Times. The example that Thaler shared in the article is that of the grading of the notoriously difficult midterm exam that he gives, which he uses to separate his really good students from the rest.
As is usual practice in academia, the maximum mark you could get on that exam was a hundred. But this posed a problem: because of the exam’s difficulty, his students were averaging only 72 out of a possible hundred.
Though it didn’t affect the students’ overall grades, since their relative scores mattered more than their absolute ones (see: grading on a curve), they didn’t like getting such low marks, and many complained.
Thaler worried that the complaints might eventually cost him his job. So he made a change: instead of having the exam be out of a hundred marks, he made it out of 137.
This change had two things going for it: it made percentage scores harder to calculate on the spot, and it let him give students higher raw marks, closer to what they would have got on the usual, less challenging exams.
Students were now averaging 96 instead of 72, with some delightfully scoring above a hundred. Despite the slightly lower percentage score (about 70% now instead of 72%!), his students were happier.
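The arithmetic behind that little paradox is easy to verify yourself; a quick sketch:

```python
# Comparing the two grading schemes Thaler describes:
# the re-scaled exam raises raw scores while the percentage actually drops.
old_avg, old_max = 72, 100   # average on the original 100-point exam
new_avg, new_max = 96, 137   # average on the 137-point exam

old_pct = 100 * old_avg / old_max  # 72.0%
new_pct = 100 * new_avg / new_max  # roughly 70.1%

print(f"Out of 100: {old_avg} points = {old_pct:.1f}%")
print(f"Out of 137: {new_avg} points = {new_pct:.1f}%")
```

A rational grader would say nothing changed (or that things got slightly worse), yet the bigger raw number is what registered with the students.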
This wasn’t supposed to happen. But it did.
To him, test scores were a supposedly irrelevant factor. To his students, they were anything but.
The concept of supposedly irrelevant factors really appeals to me because it’s something we tend to forget, especially in a work setting (where everyone’s somehow less human!), and yet something that may have large consequences.
Imagine this conversation between you and your boss:
Boss: Run this report for them every day by noon.
You: Why? They shouldn’t need to see the numbers at that high frequency – no actions can be taken at this late stage that will change anything anyway.
Boss: Just do it.
How would you feel? Would it make you a little more likely to be unhappy? To consider quitting and doing work that feels more worthwhile?
The knowledge of why could be seen as a supposedly irrelevant factor.
Knowing why you’re doing it wouldn’t really change anything – doing what your boss tells you is part of your job. Maybe she knows something you don’t.
But still, it just doesn’t feel right, and the thought that there might be a purpose no one is sharing with you doesn’t make you any less unhappy.
But what if your boss said this instead after you’d asked “why?”:
Boss: Maybe it won’t change anything. And I know it seems pointless from an actionable point of view. But what I know is that it helps calm their nerves; it helps calm their boss’ nerves. It’s not easy being in their shoes – they’re currently under immense pressure, and I’m hoping to support them in whatever way we can.
How would you feel now? If it were me, I’d actually feel even more empowered than before, and that I was making a positive difference in people’s lives.
The simple knowledge of why changes things quite a bit, though it really shouldn’t.
We’re not quite the uber-rationals we think we are.
Some interesting “supposedly irrelevant factors” examples that I’ve come across:
- Choice architecture, and the default option – we go for default options more often than would be expected, even if the default’s the worst option available
- Decoy pricing – Classic experiment done by The Economist where the introduction of a third, obviously poor subscription option made the most expensive option much more appealing
- The Endowment Effect – Owning an item makes it seem much more valuable than it did before we owned it