Deciphering Fake

This was supposed to be a post on radical transparency.

But an article bashing radical transparency left me feeling so outraged with its lies and misleading statements that I spent the last four hours of my life writing this warning to all of us media-consumers out there: don’t trust everything you see, even if it says “research”, links to academic papers, and cites its sources!


The first time I heard about radical transparency was from G.

And though I hadn’t heard the term before then, it was something I felt I could really relate to; something I already did.

Not because I thought that it brought the best outcomes, but because my mind was just wired that way.

I’ll tell you why in the post on radical transparency I do eventually write (maybe next week?), but here’s a hint: it’s got to do with having an awful brain for lies.


For today, let’s talk about the article that enraged me.

I found it while reading up on radical transparency for the post I had intended to write: Radical transparency sounds great until you consider the research.

I looked forward to reading it based on its title alone, as it was perhaps a warning I needed to heed: maybe I ought to be a little less transparent in my dealings with people?

The word “research” also appealed very much to the scientist in me, giving the article more weight than it would otherwise have had.

Almost immediately though, within the first paragraph, a red flag was raised.

Here’s what it said:

Radical transparency is an old management approach with new branding. Previously called micromanagement or snooping, this approach supposedly creates higher performance and trust by letting everyone know what’s on the table.

You see, I’m an amateur rhetorician (well, not really, but I am currently reading Jay Heinrichs’ book Thank You for Arguing) and smelt a rat: I knew radical transparency wasn’t synonymous with “micromanagement” or “snooping”, or even remotely analogous.

My rhetoric-sense tingled. Something was up, but I didn’t quite know what. So I did a quick search on logical fallacies and identified what was wrong: the author was guilty of a false comparison!

Snooping, micromanagement, and radical transparency were qualitatively very different things, and there was no “new branding” apparent to me whatsoever.

  • Snooping to me implies trying to find out information others deem to be private and do not expect to share;
  • Micromanagement to me implies a person in authority dictating to a worker how to do a job without giving the worker much or any degree of autonomy;
  • Radical transparency to me implies making what may sometimes be deemed private open to everyone, but making sure everyone knows it is no longer private.

I could live with micromanagement, to a certain extent. I could live with radical transparency (I think). But I would probably not be able to take snooping very well.

You can’t really club them together.

Was the author trying to mislead his readers by saying they were the same except for rebranding?

Whatever the case, I continued, albeit with caution.


Then I came across this paragraph, which appeared filled with juicy insights:

But research about human judgement suggests that relying on such data is a mistake. People are terrible at assessing trustworthiness and most skills. Assessments are driven not by real actions, but by appearance and personal situation. On top of these potential inaccuracies, labeling someone as untrustworthy or poor in certain skills has a corrosive effect on collaboration and morale, perhaps one of the reasons why Bridgewater has in the past had very low retention rates that costed the company tens of millions of dollars a year.

The links in the quote above are from the original article. I clicked on every single one of them to learn more.

(And boy did I learn. I learned that if you take an author’s word at face value, despite the authoritative-looking links, you’d be hoodwinked quicker than you can say “radical transparency”.)

Here’s my commentary on each of the links in the paragraph shared above:

  • “terrible at assessing trustworthiness”
    • This link brings you to a paper about assessing trustworthiness from facial cues. The experiment involved asking strangers to play a game to see if people would invest more money in faces that appeared more trustworthy. If radical transparency involved asking you to rate your colleagues on trustworthiness, an hour after meeting them, based on how their faces looked, then yes, this would be relevant.
  • “most skills”
    • This link brings you to a paper about the JDS, or Job Diagnostic Survey, a tool that basically assesses the fit between workers and their jobs. The paper concludes that the tool works, though it warns that it is easily faked. But citing it to support the premise that “people are terrible at assessing most skills” is ridiculous, because the paper doesn’t actually say that.
  • “appearance” and “personal situation”
    • These two links are paywalled, but based on the abstracts these are related to people assessing people in TV commercials (for the first link) and strangers (for the second). Like the experiment in the “assessing trustworthiness” link above, this is about assessments of people whom you know very little about. Radical transparency isn’t about assessing strangers one-off. Again, I don’t see the relevance.
  • “has a corrosive effect on collaboration and morale”
    • Paywalled. The first sentence of the abstract? “Four studies examined the relation between trust and loneliness.” I’m curious to know what the article is about, but given I don’t know enough I’m not going to judge on this one.
  • “very low retention rates”
    • This link brings you to an interview with an author who wrote about Bridgewater’s radical transparency. The author actually praised its implementation at Bridgewater and was extremely supportive of it. Though it was mentioned that there was a 25% turnover rate, there was no mention of it costing “the company tens of millions of dollars a year”. Also, assuming that it does cost the company tens of millions of dollars a year, could the benefits outweigh the costs? If being radically transparent brings in more than the “tens of millions of dollars a year” that it hypothetically costs, it’d still be worth it.

I’d always been extremely curious as to the effect of knowing my peers’ salaries, and them knowing mine.

I’d even considered moving to a company that did just that, for this very reason: I personally thought it was a great idea.

So when I came across the following that the author wrote, it came as quite a surprise:

Publishing individual salaries has negative consequences. While companies should never prevent people from sharing their compensation (and in many states it’s illegal to do so), publishing these numbers for all to see psychologically harms people who are not at the top of the pay scale. Research shows that this directly reduces productivity by over 50% and increases absenteeism among lower paid employees by 13.5%, even when their pay is based exclusively on output.

The first link talks about income disparity and its negative effect on happiness, a common finding in psychological research.

That the author worded it in this way (i.e. “top of the pay scale”) seems deliberately misleading. There’s a lot of dependence on the “reference group” – e.g. a junior employee, despite earning far less than the CEO, would generally not be too concerned. Also, full individual salary disclosure isn’t necessary for radical transparency; compressed payscales and other forms of salary disclosure could be used instead.

The second link was the one that I was more interested in: could salary disclosure really lower productivity and increase absenteeism, even when pay was based on output?

The author said yes.

I read the paper and found otherwise.

What the study found was that it was perceived fairness that had the greatest negative effects, not the disclosure of salary information per se. Where there was wage disparity and output was not easily observable (i.e. there was no way to tell which worker “deserved” the most), those who were paid less than their peers were the most negatively affected, as they would have perceived it as unfair.

And in a world of radical transparency, I’d think that “output” information would also be something that would be freely shared, reducing any perceived unfairness.


I don’t know what led the author to write what he wrote. I was very close to taking it all at face value, and had I not been a little perplexed and curious about some of the cited claims, I’d never have uncovered the deceit.

To be clear, I’d just like to add that there is a chance that there was no malice involved, just sloppy research and misinformed conclusions.

But whatever the case, it made me realise how much we take good, honest writing for granted.

We shouldn’t.

And for me, not any more.

Getting the most bang for your charitable buck

I just received a mailer from Effective Altruism, through which I make a monthly donation to charity. The mailer asked me to rate, from 1 to 10 (1 being least likely, 10 most likely), how likely I would be to recommend Effective Altruism to a friend. I gave it a 10.

And since we’re all friends here on edonn.com… I recommend Effective Altruism if you’re looking to make your charitable dollar do as much as it can.


Effective Altruism is an organisation that is, in their own words, about answering one simple question: how can we use our resources to help others the most?

I first learned about them through a book called Doing Good Better. I loved it; it absolutely changed the way I thought about giving – especially the part on the careers we ought to pick for maximum societal impact. Should we pick the higher-paying career where we have little opportunity to positively impact society (e.g. investment banker), or the lower-paying career where we can make a positive, direct impact on society (e.g. social worker)? The book argues that it is through the former that we can do more good, if we direct the funds we earn to charitable causes.

Effective Altruism’s basic premise is this: all charitable interventions should be scientifically tested to determine how effective they are, and money should flow only to those that are most effective.

The more good an intervention does for a given amount of money, the more effective it is deemed to be.


How much “good” an intervention does is determined by the number of QALYs and WALYs it generates. These are very interesting concepts that I’d not heard of before coming across Effective Altruism.

A QALY stands for “quality-adjusted life year”, defined as (from Wikipedia):

[A QALY] is a generic measure of disease burden, including both the quality and the quantity of life lived. It is used in economic evaluation to assess the value for money of medical interventions. One QALY equates to one year in perfect health.

A WALY, on the other hand, stands for “well-being adjusted life year” (from the US National Institutes of Health website):

[A WALY] is a measure that combines life extension and health improvement in a single score, reflecting preferences around different types of health gain.

In essence, the amount of good relates to how much life, and how much life improvement, an intervention brings. The benefit of using QALYs and WALYs is that they are fungible, and are therefore able to act as very versatile measures of charitable intervention. A little like good old money.

For example, if you want to take up a new job, it’s extremely convenient to start thinking about the benefits in terms of money, even when some of the benefits are non-monetary. If you get more vacation time, how much more is an extra day of vacation worth to you? If the working hours are less, and you are planning to spend this extra time with your kids, how much more is this worth to you? And so on.

It helps us make apples-to-apples comparisons between two very disparate things, like deworming vs. microfinance.
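
To make that apples-to-apples idea concrete, here’s a minimal sketch in Python. The intervention names, costs, and QALY figures below are entirely made up for illustration; they are not real effectiveness estimates.

```python
# A minimal sketch of an apples-to-apples comparison of charitable
# interventions using QALYs per dollar. All figures are made up for
# illustration; they are not real effectiveness estimates.

interventions = {
    # name: (programme cost in $, estimated QALYs gained)
    "deworming": (10_000, 50),
    "microfinance": (10_000, 5),
}

for name, (cost, qalys) in interventions.items():
    print(f"{name}: {qalys / cost:.4f} QALYs per dollar")

# Because both interventions are expressed in the same fungible unit,
# we can rank them directly, a little like comparing prices in money.
best = max(interventions, key=lambda n: interventions[n][1] / interventions[n][0])
print(f"Most effective per dollar: {best}")
```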


Effective Altruism thus looks at the quality of all interventions, and aims to focus funds toward interventions that are the most effective. And though it may not be perfect, I find that it gives me peace of mind.

It allowed me to finally get past paralysis by analysis, making me comfortable with giving more money than before.

I still do give to random strangers on the street because it feels good; but for regular and systematic giving, the kind that I think will do far more good, this will be my avenue of choice.


And to those who ask: Is this “too scientific”? Shouldn’t giving be from the heart?

My answer is: No to the first question; and yes to the second.

The science and experimentation behind Effective Altruism helps to ensure accountability – charities that are deemed ineffective tend to be ineffective for very good reasons, and every dollar given to an ineffective charity is one less dollar given to a more effective one. Why should less effective charities, even those with the best of intentions, take money away from those that can do more good?

To be honest, I did have some concerns about how newer interventions or charities would be handled by them – many charities and interventions start out less effective than the most effective ones and need to be given a chance to grow and show their worth, and may eventually become as effective as the most effective ones, or even more so. However, Effective Altruism does take care of some of that by having a dedicated allocation of their fund that looks at just these “promising charities”, which introduces a little bit of randomness into their portfolio of current strong performers.

On giving from the heart, to be honest I never really found a “logical” reason for giving, nor have I looked for one. Giving to me has always just been something we should do to be thankful we have what we have, that we are who we are.

Supposedly Irrelevant Factors

I’m halfway through reading one of the best books I’ve read in a long while: Misbehaving, by Richard H. Thaler.

One of the things that most stuck with me was that of “supposedly irrelevant factors”, which refers to something that, in theory, should not affect or influence the thinking of a rational person but does.

Thaler has also written about this in an article for the New York Times. The example that Thaler shared in the article is that of the grading of the notoriously difficult midterm exam that he gives, which he uses to separate his really good students from the rest.

As per the usual practice in academia, the maximum mark you could get on that exam was a hundred. But this posed a problem. Because of the difficulty of the exam, his students were averaging only 72 out of a possible hundred.

Though it didn’t affect the overall grades the students got, since their relative scores were more important than their absolutes (see: bell curve), they didn’t quite like getting such low marks and many complained.

Thaler got worried that the complaints might eventually lead to the loss of his job. So he made a change: instead of having the exam be out of a hundred marks, he made it out of 137.

This change had a couple of things going for it: firstly, it made it more difficult to calculate a percentage score; at the same time, it allowed him to give students higher marks, closer to what they would have got on the usual, less challenging exams.

Students were now averaging 96 instead of 72, with some delightfully achieving scores above a hundred. Despite the lower percentage score they were getting (70% now instead of 72%!), his students were happier.

This wasn’t supposed to happen. But it did.

To him, test scores were a supposedly irrelevant factor. To his students, they were anything but.
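
For the curious, the arithmetic behind the trick checks out. A quick sketch, using the averages from the anecdote:

```python
# Same relative performance, different framing.
old_avg, old_max = 72, 100   # original exam
new_avg, new_max = 96, 137   # re-scaled exam

print(f"Old exam: {old_avg}/{old_max} = {old_avg / old_max:.1%}")  # 72.0%
print(f"New exam: {new_avg}/{new_max} = {new_avg / new_max:.1%}")  # 70.1%

# The percentage actually fell, yet students were happier: the raw
# mark (96 vs 72) was the supposedly irrelevant factor doing the work.
```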


The concept of supposedly irrelevant factors really appeals to me because it’s something we tend to forget, especially in a work setting (because everyone’s somehow less human there!), and yet something that may have large consequences.

Imagine this conversation between you and your boss:

Boss: Run this report for them every day by noon.

You: Why? They shouldn’t need to see the numbers at that high frequency – no actions can be taken at this late stage that will change anything anyway.

Boss: Just do it.

How would you feel? Would it make you a little more likely to be unhappy? To consider quitting and doing work that feels more worthwhile?

The knowledge of why could be seen as a supposedly irrelevant factor.

Knowing why you’re doing it wouldn’t really change anything – doing what your boss tells you is part of your job. Maybe she knows something you don’t.

But still, it just doesn’t feel right and the knowledge that there might be a purpose that isn’t shared doesn’t make you any less unhappy.

But what if your boss said this instead after you’d asked “why?”:

Boss: Maybe it won’t change anything. And I know it seems pointless from an actionable point of view. But what I know is that it helps calm their nerves; it helps calm their boss’ nerves. It’s not easy being in their shoes – they’re currently under immense pressure, and I’m hoping to support them in whatever way we can.

How would you feel now? If it were me, I’d actually feel even more empowered than before, as though I were making a positive difference in people’s lives.

The simple knowledge of why changes things quite a bit, though it really shouldn’t.

We’re not quite the uber-rationals we think we are.


Some interesting “supposedly irrelevant factors” examples that I’ve come across:

  • Choice architecture, and the default option – we go for default options more often than would be expected, even if the default’s the worst option available
  • Decoy pricing – the classic experiment based on The Economist’s subscription options, where the introduction of a third, obviously inferior option made the most expensive option much more appealing
  • The Endowment Effect – Owning an item makes it seem much more valuable than it was before ownership came about

How to convince the inconvincible

So how does one go about convincing the inconvincible (actually a proper word, as per Webster)? Contrary to popular belief, there’s no need to resort to heavy artillery – just an interesting new thinking tool I learned from the book Decisive by Chip and Dan Heath (a great book, by the way).

The tool is this question: “What data might convince us of that?”

As in, “What if our least favourite option were actually the best one? What data might convince us of that?”

It’s actually a great way to convince the inconvincible.

Instead of two or more parties with differing agendas going head to head and each sticking to their guns – say, in a company making a decision that would benefit one party and/or penalise the other – both are asked: “What has got to be true in order for the other side to be right?”

In this way, both are forced to set tangible “targets” (a KPI or a number of some sort) instead of relying on a vague sense of being right. Both will also have no choice but to put themselves in the other person’s shoes in order to think of these targets (“what has to be true in order for them to be right?”).

“If it takes a 12-month revenue losing streak before you are convinced there’s something wrong with the organisational structure (just three months away), then fine, I’m happy to wait till then to get your complete buy-in, because I know we’re going to hit that streak and I don’t want this argument to drag on any longer than it needs to.”

Whatever anyone feels about the decision, if that 12-month losing streak is hit, a decision will be made.

There’s just no more arguing once both parties agree on what has to be true (the KPIs; the right “targets”), because the data is the data: if it overwhelmingly shows that one party is right (as agreed beforehand), then objectively that party is right.
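
To make the idea of a pre-agreed trigger concrete, here’s a minimal sketch in Python; the losing-streak function and the monthly revenue figures are hypothetical, following the twelve-month example above:

```python
# A hypothetical pre-agreed decision trigger: both parties agree up
# front that a 12-month revenue losing streak settles the argument.

def losing_streak(revenues):
    """Length of the current run of month-on-month revenue declines."""
    streak = 0
    for prev, curr in zip(revenues, revenues[1:]):
        streak = streak + 1 if curr < prev else 0
    return streak

AGREED_THRESHOLD = 12  # months, fixed before the argument starts

monthly_revenue = [100, 98, 97, 95, 93, 90, 88, 85, 83, 80, 78, 75, 74]

if losing_streak(monthly_revenue) >= AGREED_THRESHOLD:
    print("Trigger hit: by prior agreement, the reorganisation goes ahead.")
else:
    print("Trigger not hit: no decision forced yet.")
```

The code itself is beside the point, of course; what matters is that the threshold is fixed before anyone sees the data, so nobody can move the goalposts afterwards.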

How to make better decisions using Opportunity Cost

The cynic knows the price of everything and the value of nothing.
— Oscar Wilde

Opportunity cost can help you make better decisions because it helps put your decisions in context. Costs and benefits are framed in terms of what is most important to you at the time of the decision.

Every time we make a decision involving mutually exclusive alternatives, we will always be subject to this thing called “opportunity cost”.

Opportunity cost is the cost you pay for choosing one alternative over the others. But this cost isn’t “cost” in the regular sense of the word. It is the benefits of the next-best alternative that you have given up.

I hope you’re still with me here. But even if you’re not, don’t worry. I’ll give more examples below.

The concept of opportunity cost illustrated in under 60 words

You are given a choice between two pieces of fruit: an apple and an orange. You can choose only one. By choosing one, you give up the other. If you choose the apple, your opportunity cost would be the enjoyment of the orange. And if you choose the orange, your opportunity cost would be the enjoyment of the apple.

The concept’s that simple. You give up the enjoyment of the orange when you choose the apple, so you “pay” for the apple by giving up the opportunity to enjoy the orange. So far so good? Good.

Let’s shake things up a bit.

Opportunity cost applies to indirect costs too

The scenario. Say a man, we’ll call him Man A, comes up to you and demands from you a glass of orange juice. If you don’t give in to his demands within the next five minutes, he’ll spill permanent ink all over the shirt you’re wearing, which just so happens to be your favourite. Unfortunately for you, you don’t have any orange juice or oranges on hand.

Suddenly another man, Man B, whom you had once given a banana, comes up to you and serendipitously offers to return the fruity favour. Not knowing what type of fruit you like, he offers you a choice of two fruits, an apple and an orange, from which you can take one. You grab the orange, thank him, and quickly make some orange juice for Man A, saving your favourite shirt from certain doom.

Let’s say that in normal times you would pay $1 for an apple and only $0.80 for an orange. Without Man A, the guy who threatened to spill ink on you, you’d have most definitely gone for the apple because it’d have been a better value. But because you knew of what would happen if you didn’t get the orange juice to Man A in time, you opted for the orange.

Opportunity cost is context-sensitive. You gave up the opportunity for an additional $0.20 in value (the difference between the apple and the orange) for the opportunity to save your shirt. Very smart.

Money isn’t everything: Applying opportunity costs to decisions not involving money

Thinking about opportunity costs also helps us to think about value beyond price (as illustrated by the example above). And sometimes, when price isn’t a factor at all, this can be especially important.

The scenario. Imagine facing the decision of painting a wall in your room green or blue. The paint for both colours costs the same, and both look equally good. So you consider flipping a coin and letting chance determine the colour.

But because you’ve learned about opportunity cost, you ask yourself, If I paint my wall blue, what do I give up? And if I paint my wall green, what do I give up?

After giving it a little think, you realise that by painting your wall blue, you’d probably not be able to hang your favourite poster because the colours wouldn’t match. Some of the furniture you had previously picked out would also have to be given up because it didn’t match the blue colour scheme either.

A green wall, on the other hand, would suit the poster and the furniture you picked out just fine. With your knowledge of what you had to give up if you chose the blue paint, you decide to go for the green.

Remember, if cost was the only consideration, you might not have gone for green. If colour preference was the only other consideration, you still might not have gone for green either. It was only after you considered everything in context, figuring out what had to be sacrificed (the poster and the furniture you had picked out), that you could make an informed decision.

Score one for opportunity cost.

Opportunity cost and time

Opportunity cost can also be applicable to time. If you’re stuck doing activity A, chances are you won’t be able to do activity B at the same time.

The scenario. Suppose you’ve just learned how to do your taxes. You estimate that it would take up about two hours of your time if you did it yourself.

Your cousin, who happens to love doing taxes, offers to do it for you for $50, with a free tub of ice-cream (she’s dropping by the supermarket and there’s a two-for-one special).

If you rejected your cousin’s offer, you’d save yourself $50. But it’d cost you two hours you could have spent doing whatever you wanted, the actual work of doing your taxes, and a tub of ice-cream.

If you took your cousin’s offer, apart from getting your taxes done for you, you’d get a free tub of ice-cream. Of course, you’d have to pay her $50 – that’s the cost.

Depending on how much you valued your time and how much you valued money, I’d say it’s a tough call. If you felt that an hour of your time was worth only a dollar (making two hours of your time worth two dollars), it would probably make sense to do your taxes yourself, assuming you felt neutral about the task (i.e. you didn’t hate doing it).

If you felt that an hour of your time was worth $50 on the other hand, letting your cousin do your taxes would probably make pretty good sense, since you’d essentially be getting back $100 worth of time for an expense of $50.

But we’re cold, rational beings, and “feelings” of how much our time is worth just don’t cut it. So how do we find out how much our time is really worth? Again, here comes opportunity cost to the rescue.

Using opportunity cost to find the monetary value of your time

The scenario. Suppose you earn on average $25 per hour doing freelance work. Let’s say you’ve got more than enough jobs to go around, and that any free time you have could be put toward that work. If you didn’t have to do your taxes, you’d be working on your freelance gigs, earning $25 per hour.

The estimated monetary value of an hour of your time would then be $25, which is the amount you’d earn if you had put that hour to work.

So, carrying on from the previous example: if you had turned down your cousin’s offer, you’d save yourself $50 but give up $50 in lost paid work (that’s $25 × 2 hours) and a tub of ice-cream. Do the math and that’s a negative return.

But if you took up your cousin’s offer, assuming you used those two freed-up hours to work, you’d break even and get a free tub of ice-cream.

Everything else being equal, there’s no reason why you shouldn’t be taking up your cousin’s offer.
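
Here’s the whole calculation in one place, as a quick sketch: the rate, hours and fee come from the scenario above, while the dollar value placed on the ice-cream is my own assumption.

```python
# The tax scenario in numbers. The rate, hours, and fee come from the
# example above; the dollar value of the ice-cream is my own assumption.

hourly_rate = 25   # $ earned per freelance hour
hours = 2          # hours your taxes would take you
fee = 50           # $ your cousin charges
ice_cream = 5      # $ value you place on the free tub (assumed)

# Do it yourself: you pay no fee, but those two hours earn you nothing.
diy = 0
# Accept the offer: pay the fee, earn during the freed-up hours,
# and get the ice-cream on top.
accept = -fee + hourly_rate * hours + ice_cream   # -50 + 50 + 5 = +5

print(f"Do it yourself: net ${diy}")       # net $0
print(f"Accept the offer: net ${accept}")  # net $5
print("Take the offer!" if accept > diy else "Do it yourself.")
```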

Think about all your decisions using opportunity costs. And before making a decision, ask yourself these questions:

  • If I choose alternative A instead of alternative B, what am I giving up?
    • Now that I know what I’m giving up, what are the consequences of giving that up?
  • Are there any hidden benefits or costs I’m not seeing? Anything in terms of:
    • Time;
    • Energy;
    • Money; or perhaps
    • Intangibles?

With practice, thinking in terms of opportunity costs – the benefits forgone or sacrificed – will come naturally to you. And you’ll start looking not just at the price of things, but at the value of everything.