The Loss of Sales Conversion “Efficiency”

Let me admit right off the bat that today’s post contains less original thought of mine and more me reminding my future self of a fact I always intuitively knew but never saw documented anywhere: that in a sales funnel, an increase in absolute numbers at an earlier stage quite naturally leads to lower sales conversion rates in the following stage(s).

Picture of Sales Funnel showing sales conversion from one stage to the next
Simple sales funnel – we start with the first stage, which is generally prospecting or lead generation, and end when the prospect or lead eventually makes a purchase. When the absolute numbers in a single stage increase, sales conversion to the next stage tends to decrease.

From the book The Perfect Salesforce by Derek Gatehouse (a great book on building and managing a sales team, one that I thoroughly enjoyed, by the way – read my full review on goodreads.com):

A bigger machine will have more parts to fix and more leaks to patch. You cannot fight the natural byproduct of growing larger…

And if your closing ratio happens to drop from 25 percent to 15 percent over a five-year period, you should be okay with it: because it is an inevitable part of being bigger and, more relevant, given the choice, you would rather close 15 percent of five thousand prospects visited than 25 percent of one thousand prospects!

It is also interesting to note that Gatehouse doesn’t believe in “fixing” the lower sales conversion rates, saying that the top sales-centric companies focus on the sales instead of “the ones that get away”. This may sound slightly controversial, but not so much if you understand that Gatehouse very much believes in playing to your strengths and not shoring up your weaknesses.
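The arithmetic behind Gatehouse’s point is worth making explicit. Here is a quick sketch in Python, using the numbers from the quote (the function and variable names are mine, purely for illustration):

```python
# Conversion rate alone can mislead; absolute sales are what pay the bills.
# Numbers are taken from Gatehouse's example in the quote above.
def closed_deals(prospects, close_rate):
    """Absolute number of deals closed at a given conversion rate."""
    return prospects * close_rate

small_funnel = closed_deals(1_000, 0.25)  # 250 deals at a 25% close rate
big_funnel = closed_deals(5_000, 0.15)    # 750 deals at a 15% close rate

# The "worse" close rate yields three times as many deals.
print(small_funnel, big_funnel)
```

The ratio fell by ten percentage points, but the business closed three times as many deals, which is Gatehouse’s whole argument in two lines of arithmetic.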

The problem with fighting fires

The problem with fighting fires, day after day, is that there’s no time for anything else.

Sometimes, you just have to step back and observe.

To think; to plan; to conquer.

That’s not going to happen while you’re fighting fires.

Sometimes, you have to lose the battle to win the war.

Expensive Software and Consultants

They took our data, ran it through their software, and got the answers that had eluded us for so long.

I was told they were a big consulting company, which meant they probably had great, prohibitively expensive software that could do the job. That’s why.

But I don’t buy that argument.

Great software needn’t be expensive.

I’ve lived and breathed great open-source, free technologies growing up. Linux; Apache; PHP; MySQL; WordPress; Python; R.

Are any of these free technologies inferior to their paid counterparts? In development (including data science) work, I don’t think so.

So why were they “successful”? Why could they come up with an answer we couldn’t?

My guess: they were a consulting company with less vested interest.

They came up with an answer. But would it have been better than the one we would have come up with if we were in their shoes? I don’t know.

As a consultant I’d have been much more liberal with my analyses. No matter how badly I messed up, the worst that could happen would be that my company lost a contract. And chances are good I could push the blame onto the data that was provided, the wrong context I was given, or information that was withheld.

When you’re part of the company, you have far more vested interest. Not just in your job, but in your network, both social and professional. Consequences extend far beyond what they would be if you were an external consultant working on “just another project”. I’d be far more meticulous in ensuring everything was covered and the analyses properly done.

 

Business Implications of Analysis

“And,” she said, “we found that the more rooms a hotel has, the higher the positive rating.”

I was at NUS (National University of Singapore) in my Master’s class — listening to my peers present their analysis on the relationship between hotel class (e.g. budget, mid-scale and luxury) and the ratings of several key attributes (e.g. location, value, service) based on online reviews.

By now, having been through ten presentations on the same topic in the last couple of hours, it was clear that there was a link between hotel class and attribute ratings: higher class hotels tended to get better reviews.

But something was missing in most of these presentations (mine included, unfortunately): there wasn’t a business problem to be solved. It was simply analysis for analysis’ sake. Through it all I couldn’t help but think, “so what?”

So what if I knew that a budget customer’s standard of “service quality” was different from that of the patron of a luxury class hotel? So what if I knew that economy-class hotels didn’t differ from mid-scale hotels but differed from upper-scale hotels? So what if I knew that hotels with more rooms tended to have more positive reviews?

(And this last point was a rather common “finding”: hotels with more rooms tended to have higher ratings, presented as though, if you wanted higher ratings, you might want to build hotels with more rooms. The problem, of course, is that larger hotels with more rooms tend to be of the higher-end variety; budget and independent hotels tend to have fewer rooms. Would the business implication then be that even budget hotels with more rooms will improve their ratings? Probably not.)
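To make the confounding concrete, here is a toy simulation in Python (my own illustrative numbers, not the class’s hotel data), where hotel class drives both room count and rating, so rooms and ratings correlate even though rooms have no causal effect on ratings at all:

```python
import random

random.seed(0)

# Toy data: class (1 = budget, 3 = luxury) determines typical room count
# AND the rating; rooms themselves contribute nothing to the rating.
hotels = []
for hotel_class in (1, 2, 3):
    for _ in range(200):
        rooms = 50 * hotel_class + random.randint(-20, 20)
        rating = 3.0 + 0.5 * hotel_class + random.uniform(-0.5, 0.5)
        hotels.append((hotel_class, rooms, rating))

def mean_rating(rows):
    return sum(r for _, _, r in rows) / len(rows)

# Naive cut: big hotels "earn" higher ratings than small ones.
big = [h for h in hotels if h[1] >= 100]
small = [h for h in hotels if h[1] < 100]
print(mean_rating(big) - mean_rating(small))  # clearly positive

# But hold class fixed (luxury only) and the rooms effect vanishes.
big_lux = [h for h in hotels if h[0] == 3 and h[1] >= 150]
small_lux = [h for h in hotels if h[0] == 3 and h[1] < 150]
print(mean_rating(big_lux) - mean_rating(small_lux))  # near zero
```

Once class is held fixed, the apparent rooms effect disappears, which is exactly why “build more rooms” was never a valid business implication of that finding.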

In the end, the 15 or so presentations we went through just felt like a whole lot of fluff. Sure, the analytical conclusions were technically correct and statistically sound. But so what?

It reminded me that you can be great at analysis, but without an understanding of the business, without a mindset of constantly questioning “so what does this mean — what are the implications for the business?”, all your analytical prowess would be for naught.

On Hiring for the Long Term

This was something I read in a book called The Art of Scalability, something I believe I’d always intuitively known but had never seen spelt out explicitly: that having additional hands (or brains) does not necessarily equate to a proportional increase in output – it is often less, especially at the start.

The problem is relatively new. In the old industrial economy where work was relatively simple or specialised, it was possible to have somebody come in and make widgets at almost the same productivity level as someone who had been there for a far longer time.

If one widget-maker can make 100 widgets in a day, two should be able to make 200, or maybe 150 if one of them is new.

But in the knowledge economy where work involves far greater scope and interdependencies, with steeper learning curves, this model doesn’t necessarily replicate very well.

If one analyst can create a spreadsheet model within a day, can two create the model within half a day? Or three quarters of a day? Probably not. And if the second analyst is new, it’d actually probably take two days. Throw in a third analyst and you’d probably get that model done in a week.

There is often a learning curve on the part of new joiners; and though we often take note of the learning of process and technical skills, we often forget there’s also cultural and general adaptation, which can take far longer.

And if the new hire has had plenty of prior experience, there’s also the time spent unlearning old behaviours if they are incompatible with the new environment.

There’s also somebody who’s got to give the training, often a senior team member or manager, whose productivity will likely decrease during this period as the new joiner’s increases; and this trade is often disproportionate, with the drop in the trainer’s productivity being far worse than the gain in the trainee’s.
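A back-of-the-envelope model makes this trade visible. The numbers below are purely illustrative assumptions of mine (a linear ramp-up over twelve weeks and a fixed training drag on the senior), not anything from the book:

```python
# Toy model of a two-person team: a senior plus a new hire. Output is in
# units where one fully productive person produces 1.0 per week.
def team_output(week, ramp_weeks=12, trainer_drag=0.5):
    joiner = min(1.0, week / ramp_weeks)           # linear ramp-up, capped at 1.0
    trainer = 1.0 - trainer_drag * (1.0 - joiner)  # drag fades as the joiner ramps
    return trainer + joiner

print(team_output(0))   # 0.5  -- worse than the senior working alone
print(team_output(6))   # 1.25 -- barely past one person's output
print(team_output(12))  # 2.0  -- full two-person output at last
```

In week zero the team produces less than the senior would alone, and only well into the ramp does the hire pay for the training drag, which is why a joiner who leaves at the one-year mark may never have justified the hire.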

If the new joiner leaves just as he or she gets up to speed, which could be a year into the role, then there’s simply no justification for bringing him or her into the team in the first place.

 
