A little plug for my Brother (Labeller)

Here’s a little video from Brother that you should definitely check out for two reasons: one, it’s funny in a dry humour kind of way; and two, if you care for it you can participate in their promo.


FYI, I’d actually gotten my wife (then girlfriend) a Brother labeller a couple of years ago (I think). And though we weren’t as trigger happy as the lady in the video up there, we haven’t been too shabby either.

Here’s our Brother labeller that I’d bought:

Image of our Brother labeller
Bought for her birthday or Valentine’s. How practical, right?

And here are a couple of things we’ve labelled. Notice the plug for the iPad? I once plugged in my iPod with this, and felt a strange numbing sensation as I turned the power on. Realised it was the current travelling through my fingers 😦

Labelled to make sure we don’t plug iPods with this!

And maple syrup (yum!) that came with me from Canada after my business trip there. Brought back two tins. Totally worth it.

Image of maple syrup
Travelled with me all the way from Canada after my business trip.



Tackling Impossible Projects

There was one time one of the files used in building a report was corrupted. In most cases this would be an easy fix: e-mail the relevant IT person in charge of this file and get him or her to send the corrected file over. But there was a small problem: we needed this report to be sent out within the next few hours, but this person lived half a globe and multiple time zones away.

So I had to work on a local fix.

The very first thing I had to do was this: to decide that I would work on the fix. I cannot tell you how important this step was because I was this close (holding up thumb and forefinger a centimeter apart) to not trying to fix it at all.

When I first realised the data wasn’t correct, what I started thinking about immediately was what exactly the data was (Was it essential to the report? Was it time-sensitive? What information did it convey?) and how I might salvage the situation. I dug through memories of past events, trying to figure out if this had happened before and, if so, what was done then. I figured that this exact situation was new, and that the best I could do was figure out if similar situations had occurred (yes) and see if approaches to those situations could be applied to this one as well (no).

All this time, the thought that I wouldn’t/couldn’t be faulted for failing to provide the numbers on time kept presenting itself to me. It was extremely tempting to just say it couldn’t be done and call it a day (because frankly the fix was, intuitively, “too difficult”). But if there’s one thing I hate it’s giving up before I’d actually had a good go at it.

Which brings me to the very important second thing I did: to convince myself that if I was going to go through with this, I’d sure as hell believe that it was possible to do. Since I was going to go through with trying to fix this damn thing, it wasn’t going to help continuing to think it was impossible, right? (Yup, it’s my version of the four-minute mile.)

So with these two things out of the way I pushed ahead.

In the end, within a couple of hours after planning my route of attack and plowing through a programming fog of war that descended early on (where we’re always just one step away from declaring the exercise more trouble than it was worth), the fix was complete. Virtual celebratory drinks were passed all around, and Asia had another good reporting day. On time. On target. World peace.

A lesson on tackling impossible projects

What I found was that the fix was surprisingly easier than I’d expected. (Granted, everything’s easy in hindsight.) And the hardest part was really taking that first step, telling myself that (in Seth Godin’s words) I was going to ship today and not tomorrow.

And you know what really stoked my fire on this fix? That I managed to use high school algebra to sort out several equations in my queries (and I thought it had no real world value, silly me).

So the next time you start thinking a project’s impossible: stop, take a deep breath, and think hard about its impossibility. Is it really impossible, or merely impossible to do easily? Don’t take the easy way out, because one day there may not be one, and you’d be left unprepared.


I just found out about Coursera last week. Yes, I know, I’m late to the party! If you’re late like me, here’s what Coursera is about (taken from their About page):

We are a social entrepreneurship company that partners with the top universities in the world to offer courses online for anyone to take, for free.

Yes, I know. I drooled like you. The possibilities are endless!

I signed up for a number of courses myself, mostly on business and analytics, though I’m thinking of dabbling in some “outside the box” stuff. These are exciting times.

The Truth About the Poverty Line

I learned something new about the poverty line of “$1.25 per day” today. I’d thought it was an absolute number. That as you moved from one country to another, $1.25 would buy you more or less stuff, depending on how much the goods and services of a particular country were going for.

Poorer countries tend to have cheaper things, so in really poor countries $1.25 will go a long way. But I was wrong. It doesn’t work that way.

From the book The Life You Can Save by Peter Singer:

In response to the “$1.25 a day” figure [cited by the World Bank], the thought may cross your mind that in many developing countries, it is possible to live much more cheaply than in the industrialised nations. Perhaps you have even done it yourself, backpacking around the world, living on less than you would have believed possible. So you may imagine this level of poverty is less extreme than it would be if you had to live on that amount of money in the United States, or any industrialised nation. If such thoughts did occur to you, you should banish them now, because the World Bank has already made the adjustment in purchasing power: Its figures refer to the number of people existing on a daily total consumption of goods and services–whether earned or home-grown–comparable to the amount of goods and services that can be bought in the United States for $1.25.

I was shocked when I first read this. Whenever I read about the poverty line I’d think to myself sure, $1.25 isn’t much, but it’d probably buy me much more in a third-world country. To me, it didn’t make sense to live on so little each day. It wasn’t part of my world view at all. It was as if you told me that there were two moons in the sky–I wouldn’t, couldn’t, believe it.

It’s made me think. And I hope it makes you think, too.

A quote on salary negotiation

There is a great passage on salary negotiation in the book Purple Squirrel by Michael B. Junge. It reminds us that in salary negotiation it’s useful to think multiple steps ahead of your next move, knowing that winning the salary negotiation battle is not winning the career war.

Traditional negotiation works in the context of one-time events and transactions. You can negotiate the price on a car, cell phone plan, or garage sale item, pay or get on a payment plan, and be done. You never have to interact with the other person again if you don’t want to, and your contact with the company or service provider is limited to brief interactions of your choosing.

Employment, however, is not a one-time transaction. It’s an ongoing series of interactions and interpersonal relationships. Salary negotiation is simply one interaction among many. After it’s done you have to live and work with the people on the other side of the equation for the foreseeable future. If one person walks away feeling like they’ve lost or been forced to compromise it sets up a disempowering context for the rest of the relationship. When it comes to employment, the paradigm of someone winning and the other person losing doesn’t serve either party in the long run.

Statistics do not always tell the whole story

I’m not sure if you’ve read or heard about the recent unfavourable review of the Tesla Model S by New York Times reporter John Broder, but if you haven’t, you should.

Image of Tesla Model S
The Tesla Model S, similar to the one Broder took on his trip.

Not because of the review itself, which was newsworthy in possibly putting a large dent in the credibility of the Model S as an “everyday car”, but because of the very interesting back-and-forth between Broder and Elon Musk (Tesla Motors’ CEO), and the use of statistics/numbers to prove a point. And how those numbers fail to show the whole picture.

After Broder’s review was published, Musk provided a scathing rebuttal on his blog, going so far as to say that Broder “worked very hard to force our car to stop running”. Filled with logs and statistics, he put forth a convincing argument. Broder then responded on his own blog, explaining just as convincingly how some of the facts Musk brought up missed some context.

Two things in particular stood out for me.

First, as an analyst, was how statistics and “facts” were used by Musk to refute Broder’s claims.

I am–was–a strong believer in the saying numbers don’t lie. And when Musk dug up the car’s logs, posting evidence that Broder deliberately set out to jeopardise the car’s performance in his review, I couldn’t help thinking there was no way out for Broder.

For example, Musk had remarked on his blog that Broder “drove in circles for over half a mile in a tiny, 100-space parking lot. When the Model S valiantly refused to die, he eventually plugged it in.” Damning evidence if ever there was one.

I thought that Musk made plenty of good points, but I also couldn’t help thinking that there was a possibility of the statistics not telling the whole story. Musk was, after all, selecting the logs/numbers he thought would best back up his claim, and several commenters had brought up the point that he neglected to mention anything about the battery’s dramatic power loss when parked overnight.

Second, as a son of parents not particularly keen to experiment with technology they’re not confident of, what Broder wrote in his response to Musk’s rebuttal made perfect sense. His actions, though deemed “stupid” by some commenters (see the parenthesised paragraph below), were probably what any person in an unfamiliar situation might do: rely on the experts (in this case the Tesla representative on the phone with him).

(Broder set out for his destination 61 miles away “even though the car’s range estimator read 32 miles – because, again, I was told that moderate-speed driving would ‘restore’ the battery power lost overnight”.)

On the “driving in circles” comment, Broder made himself quite clear on this point: “I drove around the Milford service plaza in the dark looking for the Supercharger, which is not prominently marked. I was not trying to drain the battery. (It was already on reserve power.) As soon as I found the Supercharger, I plugged the car in.” This claim was backed up by a number of commenters who themselves owned electric cars, saying that it wasn’t uncommon to circle around looking for chargers.

When statistics doesn’t tell the whole story

This reminded me of how businesses sometimes use statistics to measure employee performance, and how it might sometimes fail. Don’t get me wrong. I believe in that practice: statistics helps add an objective viewpoint to an otherwise very subjective activity. The problem is when statistics is taken at face value, with the context behind the numbers ignored, even when it’d have changed the story completely.

When we see employees “circling a parking lot”, seemingly looking to jeopardise the company’s performance (e.g. not hitting budget, or being unproductive), we miss the bigger picture: that the superchargers aren’t prominent enough. So employees are unproductive. Is it really their fault, is the support lacking, or is it something else?

If these “circling” employees are fired for their poor performance, the employees hired to replace these “poor performers” are still going to be “circling” the parking lot, because the root cause was never found. And that’s not an entirely smart thing to do.

Happy Chinese New Year!

Posted this on my Google+ profile (follow me there, please) a few days back and suddenly thought better about it–my blog ought to have it, too: Happy Chinese New Year everybody. May the new year bring lots of love, happiness, and prosperity to you and your loved ones. (And for those of a more generous disposition, your enemies, too.)

On another note, I just got back from a trip from Malaysia with the missus. We’d left for Malaysia (where the missus is from) on the first day of the Chinese New Year, after having the traditional reunion dinner with my own family on the Eve (in Singapore).

Had a blast visiting relatives (in my case, relatives-in-law). And there were many. Many. And now we’re back, thankfully with one more day of public holiday, lounging around, preparing ourselves for what we call our “professional mode” — readying the “office face”, if you will.


Potential vs actual performance

Nice article on how potential beats actual performance during an evaluation (based on a study on the great potential vs actual performance question; the actual journal article is paywalled, though).

In one study, the authors took out a Facebook ad to promote the fan page of a comedian.  They created different versions of the ad.  Some versions focused on actual performance (“Critics say he has become the next big thing.”)  Other ads focused on potential performance (“Critics say he could become the next big thing.”)  People were more likely to click on ads that focused on potential performance than on actual performance.  They were also more likely to become Facebook fans of the comedian when the ad focused on potential performance than on actual performance.

A variety of laboratory studies demonstrated a similar effect with judgments about job candidates, athletes, and artwork.

Why does this happen?  The researchers suggest that statements about potential performance create more feelings about uncertainty than statements about actual performance.  This uncertainty leads people to think more about the options, and that gets them more involved with the option.

Could it be, though, that we all intuitively believe that people “peak” at certain periods of their lives, and that actual performance is evidence of this “peak”? I don’t just want to hire someone who achieved a lot at his or her previous job. I want a person who will achieve much at the job I’m hiring for.

Anyway, it’s worth a read and a think.

Predictive analytics in layman’s terms

I’m going to be talking a little about predictive analytics today, to give you a rough idea of what it is (and isn’t).

You might have read in the news before about things like computers and algorithms churning out predictions on what might happen next, in industries as diverse as the financial markets to soccer betting.

You might have read how accurate (or inaccurate, as the case may be) they were, and how “analytics” (or more accurately, predictive analytics) is changing the nature of information. Analytics used to simply describe what happened (see “descriptive statistics”), but now it’s almost just as often used to predict what’s going to happen (“predictive analytics”).

Perhaps you had in your mind a vision of data scientists in lab coats peering into a computer screen like a crystal ball, with the crystal ball telling them what is or is not going to happen in the future. If you did, then get a cloth and prepare to wipe that image out of your head, because other than the computer screen nothing else is true.

Predictive analytics isn’t rocket science. Not always, anyway.

The ingredients of predictive analytics

The ingredients that go into predictive analytics are quite straightforward. In most cases, what you’ll have is some historical data and a predictive model.

Historical data

The historical data is often split into two: one for building the predictive model, and the other for testing it. For example, suppose I have 100 days of sales history (my “historical data”). For the sake of simplicity, let’s assume my sales history contains just two pieces of information: number of visits to my website, and the number of units of widgets sold. I would split this 100 days of sales history randomly into two separate groups of sales history, one with, say, 70 records for building the model, and the remaining 30 for testing it.
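The split described above can be sketched in a few lines of code. Here’s a minimal sketch in Python, with made-up numbers (the 100 days of visits-and-sales records here are purely illustrative):

```python
import random

# 100 days of made-up sales history: (visitors, units_sold) pairs.
history = [(v, round(v * 0.0103)) for v in random.sample(range(500, 5000), 100)]

random.shuffle(history)   # randomise the order first, so the split is random
train = history[:70]      # 70 records for building the model
test = history[70:]       # the remaining 30 for testing it

print(len(train), len(test))
```

The only thing that matters here is the idea: shuffle once, then carve the history into two non-overlapping piles.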

Using the 70 records, I build a model that says that for every 1000 visitors, I’d sell approximately 10.3 units of my product. This is my “predictive model”. So if I had 2000 visitors, I’d sell approximately 20.6 (10.3 x 2) units; and if I had 3000 visitors, 10.3 x 3 or 30.9 units; and so on.

In order to test my model, I’d run it on the 30 days of sales history I had put aside. So for each day of sales history, I’d use my formula of 10.3 units sold per 1000 visitors and compare the result against the actual sales I had.

If I found that the model’s predicted results and what actually happened were very different, I’d know that the model needed tweaking and wasn’t suitable for real-world use (it’s a “bad fit”). On the other hand, if I found that the predicted and actual results were close, then I’d be happy to assume the model was correct and test it on current, ongoing data to see how it worked out.
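To make the build-then-test step concrete, here’s a minimal sketch in Python. The training and test records, and the units-per-visitor ratio that falls out of them, are all made up for illustration:

```python
# Illustrative training and test records: (visitors, units_sold) pairs.
train = [(1000, 10), (2000, 21), (3000, 31)]
test = [(1500, 15), (2500, 26)]

# "Build" the model: the average units sold per visitor across the training data.
rate = sum(units for _, units in train) / sum(visits for visits, _ in train)

def predict(visits):
    """Predict units sold for a given number of visitors."""
    return rate * visits

# "Test" the model: compare its predictions against the held-out records.
for visits, actual in test:
    print(f"{visits} visitors: predicted {predict(visits):.1f}, actual {actual}")
```

If the predicted and actual columns track each other closely, the model looks usable; if they diverge wildly, it’s a “bad fit” and needs tweaking.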

You may be wondering why we don’t just use the data we built the model on to test the model. It is because we want to make sure that the model we built isn’t too specific to the data used to build it (i.e. that the predictive model doesn’t predict with great accuracy the data it was built on, but nothing else). Testing the predictive model against the data that helped to build it would be inherently biased.

Think of it like the baby with a face only a mother could love, with the predictive model the baby and the dataset the mother. Just like you wouldn’t ask the baby’s mom to judge a baby contest her child was participating in, you wouldn’t want to test a predictive model against the data it was built from.

Predictive Model

Now that we’ve settled one half of the predictive analytics equation (i.e. the data portion), let’s get to the predictive model. You may be wondering what a predictive model is exactly. Or you may have guessed it already based on what was written above. Whatever the case, a predictive model is simply a set of rules, formulas, or algorithms: given input [A], what will the output be?

This predictive model is something like a map. It aims to predict what will happen (the output) given a value (the input).

Let’s run with the map analogy for a bit. Let’s say that I have in my hands the perfect map (i.e. it models the real world perfectly). Using this map, I can predict that starting from where I am right now, if I walked straight for 100 metres, turned 90 degrees to my right, and walked straight for another 50 metres (the input), I should arrive at the mall (the output). And if I tested the map and actually followed its directions, I’d find the “prediction” to be right and I’d be at the mall.

But if I had in my hands an inferior map (i.e. a lousy representation of the real world), and I “asked” it what would happen if I followed the exact same directions as above (100 metres straight, turn 90 degrees right, 50 metres straight), it wouldn’t say the mall. And because it doesn’t say the mall, which so happens to be where I want to go, I “ask” the map what directions I need to take to get to the mall. The inferior map would provide some directions, but because it’s so different from the real world, even if I followed these directions to the exact millimetre, I wouldn’t get there.

So the perfect predictive model will predict things to happen in the real world exactly as they will happen, given a set of inputs.

In a nutshell, that’s just what predictive analytics is: an input, a predictive model, and an output (the prediction). Though what I’ve written here is grossly simplified, it helps to have a concept in your head when you hear people talking about algorithms or computers predicting such-and-such.