Expensive Software and Consultants

They took our data, ran it through their software, and got the answers that had eluded us for so long.

I was told they were a big consulting company, which meant they probably had great, prohibitively expensive software that could do the job. That’s why.

But I don’t buy that argument.

Great software needn’t be expensive.

Growing up, I lived and breathed great free, open-source technologies: Linux, Apache, PHP, MySQL, WordPress, Python, R.

Are any of these free technologies inferior to their paid counterparts? In development (including data science) work, I don’t think so.

So why were they “successful”? Why could they come up with an answer we couldn’t?

My guess: they were a consulting company with less vested interest.

They came up with an answer. But would it have been better than the one we would have come up with if we were in their shoes? I don’t know.

As a consultant, I’d have been much more liberal with my analyses. No matter how badly I messed up, the worst that could happen would be my company losing a contract. And chances are good I could shift the blame to the data that was provided, to having been given the wrong context, or to information that was withheld.

When you’re part of the company, you have far more vested interest: not just in your job, but in your network, both social and professional. Consequences extend far beyond what they would be if you were an external consultant working on “just another project”. I’d be far more meticulous in ensuring everything was covered and the analyses properly done.

 

Business Implications of Analysis

“And,” she said, “we found that the more rooms a hotel has, the higher the positive rating.”

I was at NUS (National University of Singapore) in my Master’s class — listening to my peers present their analysis on the relationship between hotel class (e.g. budget, mid-scale and luxury) and the ratings of several key attributes (e.g. location, value, service) based on online reviews.

By now, after sitting through ten presentations on the same topic over the last couple of hours, it was clear that there was a link between hotel class and attribute ratings: higher-class hotels tended to get better reviews.

But something was missing in most of these presentations (mine included, unfortunately): there wasn’t a business problem to be solved. It was simply analysis for analysis’ sake. Through it all I couldn’t help but think, “so what?”

So what if I knew that a budget customer’s standard of “service quality” was different from that of the patron of a luxury-class hotel? So what if I knew that economy-class hotels didn’t differ from mid-scale hotels but did differ from upper-scale hotels? So what if I knew that hotels with more rooms tended to have more positive reviews?

(And on this last point, it was a rather common “finding”: hotels with more rooms tended to have higher ratings, presented as though building hotels with more rooms would earn you higher ratings. The problem, of course, is that larger hotels with more rooms tend to be of the higher-end variety, while budget and independent hotels tend to have fewer rooms. Would the business implication then be that even a budget hotel with more rooms will improve its ratings? Probably not.)
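Here’s a minimal sketch of the confounding at play (all numbers invented, purely for illustration): if ratings are driven by hotel class alone, and room count also rises with class, then rooms and ratings will correlate strongly overall even though adding rooms changes nothing within any single class.

    # Toy simulation: ratings depend only on hotel class, but room count
    # also rises with class, so rooms and ratings correlate spuriously.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    hotel_class = rng.integers(0, 3, n)                        # 0=budget, 1=mid-scale, 2=luxury
    rooms = 50 + 150 * hotel_class + rng.normal(0, 20, n)      # bigger hotels tend to be higher class
    rating = 3.0 + 0.7 * hotel_class + rng.normal(0, 0.3, n)   # rating driven by class alone

    print("overall corr(rooms, rating):", np.corrcoef(rooms, rating)[0, 1])
    for c in range(3):
        m = hotel_class == c
        print(f"within class {c}:", np.corrcoef(rooms[m], rating[m])[0, 1])
    # The overall correlation is strongly positive; the within-class
    # correlations hover around zero. More rooms, per se, buy you nothing.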

In the end, the 15 or so presentations we went through just felt like a whole lot of fluff. Sure, the analytical conclusions were technically correct and statistically sound. But so what?

It reminded me that you can be great at analysis, but without an understanding of the business, without a mindset of constantly asking “so what does this mean for the business?”, all your analytical prowess will be for naught.

On Hiring for the Long Term

This was something I read in a book called The Art of Scalability, something I believe I’d always intuitively known but had never seen spelt out explicitly: that having additional hands (or brains) does not necessarily equate to a proportional increase in output. It is often less, especially at the start.

The problem is a relatively new one. In the old industrial economy, where work was simple or specialised, somebody could come in and make widgets at almost the same productivity level as someone who had been there far longer.

If one widget-maker can make 100 widgets in a day, two should be able to make 200, or maybe 150 if one of them is new.

But in the knowledge economy, where work involves far greater scope and interdependencies, and steeper learning curves, this model doesn’t hold up very well.

If one analyst can create a spreadsheet model within a day, can two create it within half a day? Or three-quarters of a day? Probably not. And if the second analyst is new, it’d probably take two days. Throw in a third analyst and you’d probably get that model done in a week.

There is often a learning curve on the part of new joiners; and though we take note of the learning of process and technical skills, we often forget there’s also cultural and general adaptation, which can take far longer.

And if the new hire has plenty of prior experience, there’s also the time that must be spent unlearning old behaviours that are incompatible with the new environment.

There’s also somebody who has to give the training, often a senior team member or manager, whose productivity will likely decrease during this period as the new joiner’s increases. This trade is often disproportionate: the trainer’s drop in productivity is far greater than the trainee’s gain.
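As a back-of-the-envelope sketch of that arithmetic (every coefficient below is an assumption I’ve picked for illustration, not a measurement): suppose a new joiner starts at 30% of full productivity and a mentoring veteran loses half of theirs.

    # Toy model of team output: new joiners ramp up slowly, and each one
    # ties up a veteran as trainer, who loses more output than the
    # trainee contributes at first. All coefficients are assumptions.

    def team_output(veterans, new_joiners, ramp_up=0.3, trainer_drag=0.5):
        """Daily output, where 1.0 = one fully productive analyst."""
        trainers = min(new_joiners, veterans)     # each new joiner needs a mentor
        return (veterans - trainers * trainer_drag) + new_joiners * ramp_up

    print(team_output(1, 0))   # 1.0 -- one analyst, one model a day
    print(team_output(1, 1))   # 0.8 -- a new joiner *reduces* output at first
    print(team_output(2, 1))   # 1.8 -- well short of the naive 3.0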

If the new joiner leaves just as he or she gets up to speed, which could be a year into the role, then there’s simply no justification for bringing him or her into the team in the first place.

 

You Make It Look So Easy

I’ll start with a quote I read today from the book Getting Ahead (Garfinkle, 2011) about a problem faced by people good at their craft. It made me smile because this was the first time I’d seen it brought up anywhere; I’d always thought it was one of those things you just sucked up and lived with:

Former local San Francisco TV host Ross McGowan was negotiating a contract with his boss. He was surprised when his boss made a fairly low offer, especially considering how high his programs’ ratings were. McGowan asked why the offer was so low and his boss said, “You make it look so easy.”

Not to brag, but I think I do lots of great work (as do many people I know); oftentimes, though, I make it look too easy, even when it’s not.

If you work with me, you’ll see the output of my design, programming, and execution. You see the 20 minutes on display but miss the 600,000 that went on behind the scenes preparing for just this very moment (and moments like these).

You don’t see the hours of PowerPoint deck preparation and storyline rehearsals I do for each and every presentation.

You don’t see the countless trips to the library I make getting books to hone my craft.

You don’t see the endless hours of coding I do just for practice, like learning the nuances that differentiate a while loop from a for loop so I can use the right one in my next project (the sort of thing I sketch below).

You don’t see the articles I read on sales-team remuneration design and its metrics, so that I’m aware of potential flaws in the company’s compensation schemes and can proactively work around them, or advise on them, when the time comes.
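To give a flavour of the while-versus-for nuance I mentioned above, here’s a toy comparison (contrived numbers, purely illustrative): a for loop suits bounded iteration over a known sequence, while a while loop suits iteration whose stopping point depends on a condition you discover as you go.

    numbers = [4, 8, 15, 16, 23, 42]

    # for: bounded iteration over a known sequence
    total = 0
    for n in numbers:
        total += n

    # while: iterate until a condition is met, which may end early
    partial, i = 0, 0
    while i < len(numbers) and partial < 30:   # stop once a threshold is crossed
        partial += numbers[i]
        i += 1

    print(total, partial)   # 108 43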

Easy? If giving up many aspects of life that you care for and wish you had more time for is easy, then, well, yes.

Freely Sharing Information

I’m three quarters of the way through a book called Team of Teams by General Stanley McChrystal, a book on leadership, organisational structure, and a way of thinking so insightful that I can’t wait to finish reading just so I can start from the beginning again. Other than the Nassim Taleb books, I don’t think there’s been another book that has had as much of an impact on my thinking.

There are tons of interesting insights in the book, many of which I’m sure will crop up in some form or another on this blog in the near future. But there’s one that really stood out and gave me plenty of pause, reminding me of a way of thinking I’d parked, feeling my organisation wasn’t ready for it: that we should seriously consider freely sharing information, across hierarchies and across teams. But maybe I can change that.

An excerpt from the book, setting the scene for just this need:

The problem is that the logic of “need to know” depends on the assumption that somebody—some manager or algorithm or bureaucracy—actually knows who does and does not need to know which material. In order to say definitively that a SEAL ground force does not need awareness of a particular intelligence source, or that an intel analyst does not need to know precisely what happened on any given mission, the commander must be able to say with confidence that those pieces of knowledge have no bearing on what those teams are attempting to do, nor on the situations the analyst may encounter. Our experience showed us this was never the case. More than once in Iraq we were close to mounting capture/kill operations only to learn at the last hour that the targets were working undercover for another coalition entity. The organizational structures we had developed in the name of secrecy and efficiency actively prevented us from talking to each other and assembling a full picture.
