Not ready, but I’m doing it anyway

50 metres from the fork in the road I had to decide, “where do I want to go from here?”

To the left was a detour of about 7 kilometres; on the right, a more direct route that would have taken me home in about 3.

It was an exceptionally warm day; I hadn’t run this far in ages; and I wasn’t feeling great.

My mind told me, “running literature recommends the direct route on the right. You haven’t built up sufficient mileage in recent weeks; you aren’t sufficiently hydrated; and the weather’s only going to get warmer.”

In short, I wasn’t ready for the detour on the left.

But I took it anyway.


When I applied for my first job, I was asked if I knew pivot tables. “Yes,” I said. “Great,” the recruiter replied. That night I spent an hour researching what pivot tables were and another hour practicing them. I aced the interview test on pivot tables and eventually got the job.

I wasn’t ready. Did it anyway. And then I was.


When I applied for my second job, the job description I was given said, “MBA preferred,” among a list of other nice-to-haves that I did not have. I didn’t feel ready. Not by a long shot. I applied anyway.

I passed the first round of interviews; and then the second. And then a third. At each round of interviews I learned more about the job. The more I learned about the job, the more I learned what was required. The more I learned what was required, the more I knew where to focus my attention. With each round I became a stronger candidate. I got the job.

At the end of all the interviews, I had a much clearer idea of what success would look like in this role. In the weeks leading up to my starting the role, and in the months following, I continued studying and researching how best to carry it out. It’s been six years now, and sometimes I still don’t feel “ready”, but I think I’ve done decently well.

I wasn’t ready. Did it anyway. And then I was.


In one of my first major projects I was tasked with developing a forecasting tool for the Sales team. It would be used by the whole sales force, from frontline to senior management. There was a laundry list of requirements and it needed to be done by a date weeks earlier than I would have liked.

“Sure,” I said, a million doubts singing at the back of my mind. A few weeks and many overnight programming escapades later, I released the tool. It’s been in use for almost five years now, and remains one of the most successful projects I’ve ever done.

I wasn’t ready. Did it anyway. And then I was.


And similar stories emerged when I took on a management role; when I introduced machine learning to the sales team; and when I emceed at the company’s year-end event and the Sales Kick-off a few months later. Or even when I ran a marathon last December – when I signed up I definitely wasn’t “ready”.

In all the most important and exciting challenges I’ve put myself into, I’ve never been ready.


Still, it hasn’t been easy ignoring my mind, which cowers instinctively from such challenges with a “but you’re not ready!”

  1. On the pivot table question I panicked after I said yes – I almost said no.
  2. On the application to my second job I was filled with self-doubt and constantly second-guessed myself – I almost gave up pursuing this opportunity.
  3. On the forecasting project I pushed back hard on timelines and only agreed after making very sure of the scope – I almost gave in to the temptation to say this was something IT should be doing.

I’m glad I took the detour on the left. Because now if I had to do it again, I’d be ready.

Less insight, more value

One of the things that I get asked a lot at work is to create a report, run an analysis, or get some data so we can get visibility on XYZ, normally as a result of a question asked by a HiPPO (highest paid person in the office) because they were “curious”.

To the people throwing these requests at us, more data is always better. If we knew more about our customers/competitors/employees, even if just incrementally, wouldn’t it be better than knowing less?

Well, yes and no.

If there is zero cost to obtaining the data; if there is zero cost to refreshing the report; if there is zero cost to running that analysis, then yes, for sure let’s do it.

But the problem is that there is often a cost involved.

A cost to get the data in the first place; a cost to run an analysis; and a cost to generate and maintain a report.

And if it’s a regular report, that cost just goes on and on unless fully automated, which may also incur a large initial development cost to get it automated in the first place.

Then there is also the opportunity cost. Creating this report means not creating that report. Running this report regularly means less time for focusing on optimization and future development work.

Yes, having that insight and visibility would be nice. But not having it could possibly be nicer by freeing up more resources for higher value work, and we shouldn’t forget that.

Making algorithms more human

Applying uncertainty

I once wrote about one of the dangers of machine learning algorithms (e.g. the thing that powers the rules behind which many decisions are made in the real world): the closed feedback loop.

An algorithm that falls into one of these closed feedback loops starts to lose its ability to learn from more data, since the future that it “predicts” is based on an outcome of the past, which it deeply influences. In other words, it becomes self-fulfilling. If the only people whom I talk to are people like myself because I think good outcomes come only from talking to people like myself, I’m never going to learn that talking to people unlike myself may also bring good outcomes.

One possible way out? Random mutation, which is a key part of what we know works in the natural world: evolution.


Mutations are essential to evolution. Every genetic feature in every organism was, initially, the result of a mutation. The new genetic variant (allele) spreads via reproduction, and differential reproduction is a defining aspect of evolution. It is easy to understand how a mutation that allows an organism to feed, grow or reproduce more effectively could cause the mutant allele to become more abundant over time. Soon the population may be quite ecologically and/or physiologically different from the original population that lacked the adaptation.

– Emphasis mine, read full post on Nature.com here: Mutations are the Raw Materials of Evolution

So just how does one apply random mutation to algorithms? I came across an article via Slashdot today that seems to suggest a possible (and quite clever) solution to the problem: introducing uncertainty into the algorithms. Where previously it would have been a very straightforward if A (input) then B (output) scenario, we now have an if A then most likely B but maybe C or D.
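As a rough illustration (my own sketch, not from the article), the difference might look something like this, with B, C, and D standing in for whatever decisions the algorithm makes, and the weights entirely made up:

```python
import random

def decide_deterministic(a):
    # The old way: if A (input) then B (output), every single time.
    return "B" if a else "not B"

def decide_with_uncertainty(a):
    # The new way: if A, then most likely B, but maybe C or D.
    # The occasional C or D is the algorithmic equivalent of a mutation.
    if a:
        return random.choices(["B", "C", "D"], weights=[0.9, 0.05, 0.05])[0]
    return "not B"
```

The exact weights don’t matter; the point is simply that the mapping from input to output is no longer fixed.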

This seems to align with how nature and evolution work (i.e. through random mutations), and, having recently read Ray Dalio’s Principles, it reminds me very much of principle 1.4, and in particular 1.4 (b):

1.4 Look to nature to learn how reality works.

b. To be “good,” something must operate consistently with the laws of reality and contribute to the evolution of the whole

– Recommended: see all the Principles in summary

How might this work in the real world?

Imagine a scenario where somebody writes an algorithm for credit worthiness for bank loans. When the algorithm’s built, some of the attributes that the algorithm thinks are important indicators of credit worthiness may include things like age, income, bank deposit amount, gender, and party affiliation (in Singapore we might have something like the ruling vs. opposition party).

Without uncertainty built in, what would happen is that people of a certain characteristic would tend to always have an easier time obtaining home loans. So let’s say older, high income earners with large bank deposits who are male and prefer the ruling political party are considered more credit-worthy.

Because this is the only group that is getting most (or all) of the loans, we will get more and more data on this group, and the algorithm will be able to predict more and more accurately within this group (i.e. the older, higher income earners, etc.).

Younger, lower income candidates who have smaller bank deposits, and are female and prefer the opposition party (i.e. the opposites of all the traits the algorithm thinks makes a person credit-worthy) would never get loans. And without them getting loans, we would never have more data on them, and would never be able to know if their credit worthiness was as poor as originally thought.

What is more, as times change and circumstances evolve, many of these rules become outdated and simply wrong. What may have started out as decent profiling would soon grow outdated and strongly biased, hurting both loan candidates and the bank.

What the introduction of uncertainty in the algorithm would do is to, every once in a while, “go rogue” and take a chance. For example, for every 10th candidate the algorithm might take a chance on a profile that’s completely random.

If that random profile happens to be a young person with low income, but who eventually turns out to be credit-worthy, the algorithm now knows that the credit-worthiness of the young and low income may be better than originally thought, and the probabilities could be adjusted based on these facts.

What this may also do is increase the number of younger, lower income earners who eventually make it past the algorithm and into the hands of real people, giving the algorithm even more information to refine its probabilities.
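To make that concrete, here’s a minimal sketch of how such an “every 10th candidate” rule could be wired into a loan decision. Everything in it is hypothetical: the attribute names, the scoring rule and the 10% exploration rate are stand-ins for whatever the real model and policy would be.

```python
import random

EXPLORE_RATE = 0.1  # roughly "every 10th candidate"

def looks_creditworthy(candidate):
    # Stand-in for the learned rule: older, higher-income,
    # larger-deposit profiles score higher (the biased baseline).
    score = (candidate["age"] / 100
             + candidate["income"] / 200_000
             + candidate["deposit"] / 100_000)
    return score > 1.0

def decide(candidate):
    # Every once in a while, "go rogue": approve a candidate the
    # baseline rule would normally reject, so we keep collecting
    # data on groups the rule never otherwise sees.
    if random.random() < EXPLORE_RATE:
        return True
    # The rest of the time, trust the current rule.
    return looks_creditworthy(candidate)

# Repayment outcomes from the "rogue" approvals are then fed back into
# the next round of training, letting the model correct itself if, say,
# younger, lower-income applicants turn out better than it assumed.
```

In spirit this is the classic explore/exploit trade-off, applied to lending: mostly exploit what the model already believes, but spend a small fraction of decisions exploring what it might be wrong about.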

This seems to me a pretty important step forward for algorithm design and implementation, and one that, funnily enough, makes algorithms more human.

Tackling the Missing Middle of Adoption

As he watched the presentation we were giving him on the machine learning project we were working on, I couldn’t help but notice his furrowed brow.


I knew him to be a natural sceptic, one who loved asking tough questions that dug deep into the heart of the matter. Though these questions occasionally bordered on what I felt was an annoying stubbornness, especially when I was on the receiving end of them, they were oftentimes great at uncovering issues one may not have thought of; or, at the very least, making sure that important issues were discussed out in the open and transparent to all who mattered.


Our machine learning project had to do with estimating how likely a customer was to convert. I won’t delve into too much detail given the need for confidentiality, but at a high level, the model we built gave us a very good estimate of how likely a customer was to pay a deposit, the next stage of the Sales pipeline.

In other words, we had a great predictive model – one that helped us to predict what would happen.

“But,” he said, “how does that help us know what to do?”

We, the project group, looked at each other. I’m not sure if the others knew how important this question was, but I did. It was the very question I had been asking early on, but one that I decided we could only answer later.


Given the quality and quantity of our activity data (i.e. the logging of activities by our salespeople, and/or the collection of activities carried out by our customers and partners etc.), and the Sales processes we had historically in place, there simply hadn’t been enough standardisation and control, for a long enough time, to use that data in our models – something I was working on as the head of Sales Operations to fix (ah, the beauty of holding both Sales Ops and Analytics hats!)


“In effect,” he continued, “what you’re doing is forecasting what’s going to happen, but not what we should do to get better outcomes. In a way predicting the past, and not influencing the future.”

Spot on, dear sir.


The model we had was a predictive model, not yet a prescriptive one. A prescriptive model was what we were working toward: what can we tell the Sales team to do in order to improve their conversion efficacy? Do we contact our customers, or do we not? (Though possibly counter-intuitive, it might actually be better to leave customers alone in order to improve conversion rates!) Do we make 1 call, or 3, or 5?

We needed more data, and we were not quite there yet. The model we had would be great for forecasting, sure, but in terms of prescribing an activity or activities, not quite yet.


So what’s all this got to do with tackling the missing middle of adoption? Well, you see, when we started with machine learning I knew it was going to be a tough sell. Machine learning isn’t standard in the industry I am in (i.e. Higher Education), unlike in technology or finance. There’s huge untapped potential, but it’s potential we can’t get at if we don’t start.

Together with several forward-thinking senior leaders in the organisation (including most importantly my boss!) we made the decision to go ahead with machine learning on a small scale, to “get our feet wet”, and iterate ourselves to success as we learned through doing.

You don’t go from zero to a hundred without first encountering 20, 50, and 70. This exploration phase (“exploration” because we knew it wasn’t going to be perfect and was not quite the “end goal”) was a necessity. Sometimes, it might even seem a little like giving up on the promise of progress – to continue the analogy, slowing down.

And as per the image of this post, you’ll have noticed that in order to get to our destination, sometimes the best move is “backwards”, getting to “the middle” before we get on a highway from where we accelerate to our desired destination.

To have avoided this “middle” would have made achieving the “end” very much harder – notice the curved, narrow roads in the image? – reminds me of how it’s sometimes much easier to go around a mountain than to tunnel through it!

In the missing middle of adoption, we always tend to forget that in order to achieve our innovation goals, we sometimes need to take up an option that’s not quite perfect, and may at first glance seem like a detour. We just need to make sure we don’t fall into that other trap: complacently thinking that our detour is the final destination! (But more on that for another day.)

The Attention Asset

There’s a post on Seth Godin’s blog today called Do we value attention properly?

In it, he argues that we need to be careful not to discount the attention we get from our audience, i.e. anyone who pauses to listen to us, because attention is valuable.

He makes a good point: attention if leveraged properly can lead to more business and customers (for a for-profit) or more volunteers and donors (for a non-profit).

Spamming our audience burns trust, and sometimes we inadvertently do it. In order to “ensure the executives respond”, I’m sometimes compelled to send “reminder e-mails”. But what I find is that if I send too many of them, eventually those reminders go the route of spam: ignored.

Better to be silent and shout only when absolutely necessary, so when you do shout people know you’ve something important to say.

We might actually know more than we think we do

As I listened to the speaker of the webinar, a man who had tons of Sales Operations experience, something gnawed at me – something about what he was saying felt incongruent, felt wrong, but I just couldn’t put my finger on it.

I took notes, and then started connecting the dots. And before long I realised what was wrong: the assumptions he was using, and the analytics advice he was espousing, were questionable at best, and were most likely incorrect.

Despite his deep Sales Operations experience, and despite his air of authority, he was no analytics expert. 

It was the first time it became really clear to me that I was closer to being an analytics expert than many other people were. And though I’ve felt like a newcomer/newbie for the longest time, it is a fact that I’ve been working in the data/analytics field for more than a decade now – it’s time I started thinking that way, and acting it as well.

(Just a casual observation, but I find that we Asians are most susceptible to imposter syndrome, or at least a lack of belief in our abilities and influence. Or it might be a cultural thing – we know we know better, but out of humility or reverence we hold back our opinions. Problem is, when we hold back our own light everyone stays in the dark, and nobody benefits.)

Come on people, let’s shine!

The Perfect Car

Give me a
Merc; a Porsche; a Bugatti;
A Fiat; a Bentley; no, give me a Ferrari.
Give me the speed; the space; the luxury!

I could just imagine myself sitting in one of those perfect cars. Hands on wheel, jazz playing softly in the background, driving down a lonely country road in the orange glow of the setting sun. I don’t really know where I am, but it’s beautiful. As I turn to give you a smile I realise you’re not there(?!) Instantly I am sad. This is no longer a car but a prison. Get me out to where you are.

Who needs speed; space; luxury?
What’s it mean if it’s just me?
Screw the perfect car.
I’ll take family.