This was supposed to be a post on radical transparency.
But an article bashing radical transparency left me so outraged with its lies and misleading statements that I spent the last four hours of my life writing this warning to all of us media consumers out there: don’t trust everything you see, even if it says “research”, links to academic papers, and cites its sources!
The first time I heard about radical transparency was from G.
And though I hadn’t heard the term before then, it was something I felt I could really relate to; something I already did.
Not because I thought that it brought the best outcomes, but because my mind was just wired that way.
I’ll tell you why in the post on radical transparency I eventually do write (maybe next week?), but here’s a hint: it’s got to do with having an awful brain for lies.
For today, let’s talk about the article that enraged me.
I found it while reading up on radical transparency for the post I had intended to write: Radical transparency sounds great until you consider the research.
I looked forward to reading it based on its title alone, as it was perhaps a warning I needed to heed: maybe I ought to be a little less transparent in my dealings with people?
The word “research” also appealed very much to the scientist in me, giving it more weight than it would have had without.
Almost immediately though, within the first paragraph, a red flag was raised.
Here’s what it said:
Radical transparency is an old management approach with new branding. Previously called micromanagement or snooping, this approach supposedly creates higher performance and trust by letting everyone know what’s on the table.
You see, I’m an amateur rhetorician (well, not really, but I am currently reading Jay Heinrichs’ book Thank You for Arguing) and smelt a rat: I knew radical transparency wasn’t synonymous with “micromanagement” or “snooping”, or even remotely analogous to them.
My rhetoric-sense tingled. Something was up, but I didn’t quite know what. So I did a quick search on logical fallacies and identified what was wrong: the author was guilty of a false comparison!
Snooping, micromanagement, and radical transparency are qualitatively very different things, and no “new branding” was apparent to me whatsoever.
- Snooping to me implies trying to find out information others deem private and do not expect to share;
- Micromanagement to me implies a person in authority dictating to a worker how to do a job without giving the worker much or any degree of autonomy;
- Radical transparency to me implies making what may sometimes be deemed private open to everyone, but making sure everyone knows it is no longer private.
I could live with micromanagement, to a certain extent. I could live with radical transparency (I think). But I would probably not be able to take snooping very well.
You can’t really lump them together.
Was the author trying to mislead his readers by saying they were the same except for rebranding?
Whatever the case, I continued, albeit with caution.
Then I came across this paragraph, which appeared filled with juicy insights:
But research about human judgement suggests that relying on such data is a mistake. People are terrible at assessing trustworthiness and most skills. Assessments are driven not by real actions, but by appearance and personal situation. On top of these potential inaccuracies, labeling someone as untrustworthy or poor in certain skills has a corrosive effect on collaboration and morale, perhaps one of the reasons why Bridgewater has in the past had very low retention rates that costed the company tens of millions of dollars a year.
The links in the quote above appear in the original article. I clicked on every single one of them to learn more.
(And boy, did I learn. I learned that if you take an author’s word at face value, authoritative-looking links and all, you’d be hoodwinked quicker than you can say “radical transparency”.)
Here’s my commentary on each of the links in the paragraph shared above:
- “terrible at assessing trustworthiness”
- This link brings you to a paper about assessing trustworthiness from facial cues. The experiment asked strangers to play a game to see whether people would invest more money in faces that appeared more trustworthy. If radical transparency involved asking you to rate your colleagues on trustworthiness based on how their faces looked, an hour after meeting them, then yes, this would be relevant.
- “most skills”
- This link brings you to a paper about the JDS, or Job Diagnostic Survey, a tool that basically assesses the fit between workers and their jobs. The paper concludes that the tool works, though it warns that it is easily faked. But to claim it supports the premise that “people are terrible at assessing most skills” is ridiculous, because the paper simply doesn’t say that.
- “appearance” and “personal situation”
- These two links are paywalled, but based on the abstracts these are related to people assessing people in TV commercials (for the first link) and strangers (for the second). Like the experiment in the “assessing trustworthiness” link above, this is about assessments of people whom you know very little about. Radical transparency isn’t about assessing strangers one-off. Again, I don’t see the relevance.
- “has a corrosive effect on collaboration and morale”
- Paywalled. The first sentence of the abstract? “Four studies examined the relation between trust and loneliness.” I’m curious to know what the article is about, but given that I don’t know enough, I’m not going to judge this one.
- “very low retention rates”
- This link brings you to an interview with an author who wrote about Bridgewater’s radical transparency. The author actually praised its implementation at Bridgewater and was extremely supportive of it. Though a 25% turnover rate was mentioned, there was no mention of it costing “the company tens of millions of dollars a year”. And even assuming it does cost the company tens of millions of dollars a year, could the benefits outweigh the costs? If being radically transparent brings in more than the “tens of millions of dollars a year” it hypothetically costs, it’d still be worth it.
I’d always been extremely curious about the effect of knowing my peers’ salaries, and of them knowing mine.
I’d even considered moving to a company that did exactly that, because I personally thought it was a great idea.
So when I came across the following that the author wrote, it came as quite a surprise:
Publishing individual salaries has negative consequences. While companies should never prevent people from sharing their compensation (and in many states it’s illegal to do so), publishing these numbers for all to see psychologically harms people who are not at the top of the pay scale. Research shows that this directly reduces productivity by over 50% and increases absenteeism among lower paid employees by 13.5%, even when their pay is based exclusively on output.
The first link talks about income disparity and its negative effect on happiness, a common finding in psychological research.
That the author worded it this way (i.e. “top of the pay scale”) seems deliberately misleading. Much depends on the “reference group”: a junior employee, despite earning far less than the CEO, would generally not be too concerned. Also, full individual salary disclosure isn’t necessary for radical transparency; compressed pay scales and other forms of salary disclosure could be used instead.
The second link was the one that I was more interested in: could salary disclosure really lower productivity and increase absenteeism, even when pay was based on output?
The author said yes.
I read the paper and found otherwise.
What the study found was that perceived unfairness had the greatest negative effects, not the disclosure of salary information per se. Where there was wage disparity and output was not easily observable (i.e. there was no way to tell which worker “deserved” the most), those who were paid less than their peers were the most negatively affected, as they would have perceived their pay as unfair.
And in a world of radical transparency, I’d think that “output” information would also be something that would be freely shared, reducing any perceived unfairness.
I don’t know what led the author to write what he wrote. I was very close to taking it at face value, and had I not been a little perplexed and curious about some of the cited claims, I’d never have uncovered the deceits.
To be clear, there is a chance that no malice was involved, just sloppy research and misinformed conclusions.
But whatever the case, it made me realise how much we take good, honest writing for granted.
And for me, not any more.