If it’s not a ‘Hell, yes!’, it’s a ‘No.’

The title of this post, “if it’s not a ‘Hell, yes!’, it’s a ‘No.'” comes from a Tim Ferriss book I’m currently reading called Tools of Titans, and is one of Ferriss’ favourite rules of thumb. Here’s a little more context (Ferriss is quoting Derek Sivers here):

Because most of us say yes to too much stuff, and then, we let these little, mediocre things fill our lives… The problem is, when that occasional ‘Oh my God, hell yeah!’ thing comes along, you don’t have enough time to give it the attention that you should, because you’ve said yes to too much other little, half-ass stuff[.]

It reminded me of how uneasy I was when I was tasked with a slew of little projects that I knew were nice to have and that closed a few “open loops” (if only for the sake of closing them). I wasn’t too keen because I knew these were not the game-changing things I wanted to work on, things I anticipated were on the horizon for myself and the team.

I concurred that, in principle, these were things that needed to be done eventually, but that they would have to be pushed to the back of the queue the moment something more momentous opened up.

We agreed to put these tasks on the back burner, with one or two trickling through during periods of slack and/or while we gained more clarity on any “Hell, yes!” projects that might be coming up (the act of scoping and gathering requirements may turn what seems like a “Hell, yes!” project into a solid “No.”).

I’d never actually thought too much about it, but this has been one of the key plays of my career thus far. Admittedly, it’s difficult to say “no” to customers (internal and external) early on, when you’re still finding a career niche, building up work experience and interpersonal clout (in fact, saying “yes” to just about everything is likely the better strategy when starting your career).

But once past that, saying “yes” to each and every opportunity and task is a recipe for mediocrity. If I’d continued doing that since starting work a decade ago, I’d probably still be copying and pasting data from spreadsheets, generating business reports by hand because somebody else told me so (albeit in an excellent manner, no doubt).

Instead, I’m working on developing my data science career, leading a great sales operations team, and thinking in my spare time about how I could bring my company’s analytical capabilities to the next level. Things I’d very much rather be doing, because I’ve said “no.”

How I Said No

  • Sure, I understand the report is essential, but does it have to be done that way? (Can we change the process or data sources a little so we can automate this?)
  • Can a self-service option be considered?
  • Can it be done by somebody else in the team?
  • What if we could generate a report that had 80% of the information but that could be churned out in 20% of the usual time?

The questions above are actual examples of those I’ve asked over the years. They were my way of saying “no” to projects that would have sucked up my time or my team’s; had I not said it, the many “Hell, yes!” projects (highly impactful, hundreds-of-people-thanking-us projects) would never have come into existence.


Business Experimentation

Imagine for a moment that you want to implement a new sales initiative that you think will transform your business. The problem is, you’re not too sure if it’d work.

You decide, prudently, that maybe a pilot test would be good: let’s roll out the initiative to just a small subset of the company, the pilot group, and see how it performs.

If it performs well, great, we roll it out to the rest of the company. If it performs badly, no drama – we simply stop the initiative at the pilot stage and don’t roll it out to the rest of the company. The cost of the pilot would be negligible compared to the full implementation.

After consulting with your team, you decide that your pilot group will be based on geography. You pick a region you know well, with relatively homogeneous customers who are extremely receptive to your idea.

You bring your idea to your boss, who likes it and agrees to be the project sponsor. However, he tells you in no uncertain terms that for the initiative to go beyond a pilot, you need to show conclusively that it has a positive sales impact. You have no doubt that it does, and you readily agree: “of course!”

Knowing that measurement is a little outside your area of expertise, you consult your resident data scientist on the best way to “show conclusively” that your idea works. He advises you that the best way to do that would be to run an A/B test.

“Split the customers in your pilot group, the region you’ve picked, randomly into two,” your data scientist says. “Let one group be the ‘control’ group, on which you do nothing, and the other be the ‘test’ group, on which you roll out the initiative. If your test group performs statistically better than the control group — I’ll advise you later on how to do that — you know you’ve got a winning initiative on your hands.”
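(For the curious, here’s roughly what that split-and-compare could look like in Python. The customer list, the simulated sales figures and the choice of a Welch’s t-test are purely illustrative assumptions on my part, not our fictional data scientist’s prescription.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical list of customers in the pilot region
customers = [f"customer_{i}" for i in range(1_000)]

# Randomly split the pilot region into a control half and a test half
shuffled = rng.permutation(customers)
control_group, test_group = shuffled[:500], shuffled[500:]

# Run the initiative on the test group only, then collect sales per customer.
# The figures below are simulated purely for illustration.
control_sales = rng.normal(loc=100, scale=20, size=500)
test_sales = rng.normal(loc=108, scale=20, size=500)

# Welch's two-sample t-test: is the lift in mean sales statistically significant?
t_stat, p_value = stats.ttest_ind(test_sales, control_sales, equal_var=False)
print(f"lift = {test_sales.mean() - control_sales.mean():.1f}, p = {p_value:.4f}")
```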

You think about it, but have your doubts. “But,” you say, “wouldn’t that mean that I would only impact a portion of the pilot group? I can’t afford to potentially lose out on any sales – can’t I roll it out to the whole region and have some other group, outside the pilot, be the control?”

Your data scientist thinks about it for a moment, but doesn’t look convinced.

“You can, but it wouldn’t strictly be A/B testing if you were to do that. Your pilot group was based on geography. Customers in other geographies won’t have exactly the same characteristics as customers in your pilot geography. If they were to perform differently, it could be down to a host of other factors: environmental differences, cultural differences, or perhaps even sales budget differences.”

You’re caught in two minds. On the one hand, you want this to be scientific and prove beyond a doubt the efficacy of the initiative.

On the other hand, an initiative that brings in an additional $2 million in revenue looks better than one that brings in an additional $1.5 million because a control group you can’t touch holds back part of the upside.

Why would you want to lose $500,000 when you know your idea works?

What do you do?

A Culture of Experimentation

Without a culture of experimentation, it’s extremely difficult for me to recommend that you actually stick by the principles of proper experimentation and go for the rigorous A/B route. There’s a real agency problem here.

You, as the originator of the idea, have a stake in trying to make sure the idea works. Even though it’d have just been a pilot, having it fail means you’d have wasted time and resources. Your credibility might take a hit. In a way, you don’t want to rigorously test your idea if you don’t have to. You just want to show it works.

Even if it means an ineffective idea is stopped before more funds are channeled to an ultimately worthless cause, there really is no benefit in it for you. Good for the company; bad for you.

In the end, I think it takes a very confident leader to go through with the proper A/B testing route, especially in a culture not used to proper experimentation. It’s simply not easy to walk away from potential revenue gains through holding out a control group, or scrapping a project because of poor results in the pilot phase.

But it is the leader who rigorously tests his or her ideas, who boldly assumes and cautiously validates, who will earn the respect of those around. In the long run, it is this leader who will not be busy fighting fires, attempting to save doomed-to-fail initiatives.

Without these low-value initiatives on this leader’s plate, there will be more resources that can be channeled to more promising ventures. It is this leader who will catch the Black Swans, projects with massive impacts.

I leave you with a passage from a Harvard Business Review article I really enjoyed, The Discipline of Business Experimentation, which gives a great example of a business actually following through and scrapping an initiative after the poor results of a business experiment:

When Kohl’s was considering adding a new product category, furniture, many executives were tremendously enthusiastic, anticipating significant additional revenue. A test at 70 stores over six months, however, showed a net decrease in revenue. Products that now had less floor space (to make room for the furniture) experienced a drop in sales, and Kohl’s was actually losing customers overall. Those negative results were a huge disappointment for those who had advocated for the initiative, but the program was nevertheless scrapped. The Kohl’s example highlights the fact that experiments are often needed to perform objective assessments of initiatives backed by people with organizational clout.

Can you imagine if they decided not to do a proper test?

What if they thought, “let’s not waste time; if we don’t get on the furniture bandwagon now our competitors are going to eat us alive!” and jumped in with both feet, skipping the “testing” phase?

Or what if the person who proposed the idea felt threatened that, should the initiative fail, it would make him or her look bad, and decided to cherry-pick examples of stores for which it worked well? (An only too real and too frequent possibility when companies don’t conduct proper experiments.)

It would, I have little doubt, have led to very poor results.

And now imagine if this happened with every single initiative the company came up with, large or small. No tests, just straight from dream to reality.

Disastrous.

But unfortunately, in so many companies, that’s just the case.

Developing a Culture

Seth Godin wrote a wonderful post on how we sometimes need an external push (through laws, policies, cultural guardrails) to do what’s best for us. It can be basically summed up by the following statements (from the post):

  • We know that wearing a bicycle helmet can save us from years in the hospital, but some people feel awkward being the only one in a group to do so. A helmet law, then, takes away that problem and we come out ahead.
  • Guard rails always seem like an unwanted intrusion on personal freedom. Until we get used to them. Then we wonder how we lived without them.

I was just thinking about how true this is for so many other aspects of our lives. The friends we choose, because of the context they set, determine many of the decisions we make, and consequently many of the paths in life we take.

When setting up a company, a department, a team – how important it would be then to make sure that the cultural norms we encourage and enforce are the ones we want.

Whether it’s a culture of success (however you define it); freedom of experimentation; openness of communication; risk taking; or hard work, it is our job as servant leaders to ensure that it’s the least awkward thing to do.


Long vs. Short-term: Doing what needs to be done

There is a huge difference between working with a team that you know will be with you for only a single project and working with a team that you know will be with you for many more.

When you’re working with a team that will be with you for a long time, you do what needs to be done to achieve a favourable outcome for this project, but you understand that the sum total of all the projects to come matters just as much, if not more: setting the right precedent and maintaining goodwill all round (as far as is reasonable!) needs just as much attention.

But with a team you work with for only a single project, you do what needs to be done to achieve the best outcome for this project without too much regard for how that might affect future interactions with the team. Thinking long-term when you shouldn’t could potentially hurt the outcome of this project.

If you’re only going to be working with them this one time, setting a bad precedent or upsetting one person or another doesn’t matter too much.

(This post is more a reminder to me than anything else: last year I worked on several one-off projects, during which I was always in this “long-term” mode of thinking. I tried pleasing everybody and making sure I didn’t set poor precedents – “fairly” distributing workload, for example, to people who I knew couldn’t perform, ultimately hurting the results of these projects.)

Ship Already

I’ve written about shipping before: the act of delivering a product, an article, a report, a piece of art. You can have the best ideas in the world, but if you don’t ship, they’re worth as much as a ton of gold at the bottom of a rubbish heap.

“We don’t know if the data’s 100% right – are you sure we should publish it? What if they question us? What if we have to change something later? Shouldn’t we validate some more till we’re completely sure?”

Yes, you should – if you had all the time in the world.

But we don’t.

We have done our homework; we know the assumptions; we know there are issues with the data but these are not show-stoppers. For our purposes, 90% is good enough. If we waited till we were 100% before shipping, nothing would be shipped.

Ship already.

The Loss of Sales Conversion “Efficiency”

Let me admit right off the bat that today’s post contains less original thought of mine and more of me reminding my future self of a fact I always intuitively knew but never saw documented anywhere: that in a sales funnel, an increase in volume at an earlier stage quite naturally lends itself to lower sales conversion rates in the following stage(s).

[Figure: a simple sales funnel, showing conversion from one stage to the next. The funnel starts with the first stage, generally prospecting or lead generation, and ends when the prospect or lead eventually makes a purchase. When the absolute numbers in a single stage increase, sales conversion to the next stage tends to decrease.]

From the book The Perfect Salesforce by Derek Gatehouse (a great book on building and managing a sales team, which I thoroughly enjoyed, by the way – read my full review on goodreads.com):

A bigger machine will have more parts to fix and more leaks to patch. You cannot fight the natural byproduct of growing larger…

And if your closing ratio happens to drop from 25 percent to 15 percent over a five-year period, you should be okay with it: because it is an inevitable part of being bigger and, more relevant, given the choice, you would rather close 15 percent of five thousand prospects visited than 25 percent of one thousand prospects!
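To put Gatehouse’s own numbers through the arithmetic (a quick back-of-the-envelope sketch, nothing more):

```python
# Back-of-the-envelope: a lower close rate on a bigger funnel still wins
small_funnel_sales = 1_000 * 0.25  # 250 closed deals at a 25% close ratio
big_funnel_sales = 5_000 * 0.15    # 750 closed deals at a 15% close ratio
print(small_funnel_sales, big_funnel_sales)  # 250.0 750.0
```

Three times the closed deals, even with the “worse” conversion rate.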

It is also interesting to note that Gatehouse doesn’t believe in “fixing” the lower sales conversion rates, saying that the top sales-centric companies focus on the sales instead of “the ones that get away”. This may sound slightly controversial, but not so much if you understand that Gatehouse very much believes in playing to your strengths and not shoring up your weaknesses.

Expensive Software and Consultants

They took our data, ran it through their software, and got the answers that had eluded us for so long.

I was told they were a big consulting company, which meant they probably had great, prohibitively expensive software that could do the job. That’s why.

But I don’t buy that argument.

Great software needn’t be expensive.

I’ve lived and breathed great open-source, free technologies growing up. Linux; Apache; PHP; MySQL; WordPress; Python; R.

Are any of these free technologies inferior to their paid counterparts? In development (including data science) work, I don’t think so.

So why were they “successful”? Why could they come up with an answer we couldn’t?

My guess: they were a consulting company with less vested interest.

They came up with an answer. But would it have been better than the one we would have come up with if we were in their shoes? I don’t know.

As a consultant, I’d have been much more liberal with my analyses. No matter how badly I messed up, the worst that could happen would be my company losing a contract. And chances are good I could push the blame onto the data that was provided, the wrong context being given, or information being withheld.

When you’re part of the company, you have far more vested interest. Not just in your job, but in your network, both social and professional. Consequences extend far beyond what they would be if you were an external consultant working on “just another project”. I’d be far more meticulous in ensuring everything was covered and the analyses properly done.


Business Implications of Analysis

“And,” she said, “we found that the more rooms a hotel has, the higher the positive rating.”

I was at NUS (National University of Singapore) in my Master’s class — listening to my peers present their analysis on the relationship between hotel class (e.g. budget, mid-scale and luxury) and the ratings of several key attributes (e.g. location, value, service) based on online reviews.

By now, having been through ten presentations on the same topic in the last couple of hours, it was clear that there was a link between hotel class and attribute ratings: higher class hotels tended to get better reviews.

But something was missing in most of these presentations (mine included, unfortunately): there wasn’t a business problem to be solved. It was simply analysis for analysis’ sake. Through it all I couldn’t help but think, “so what?”

So what if I knew that a budget customer’s standard of “service quality” was different from that of the patron of a luxury class hotel? So what if I knew that economy-class hotels didn’t differ from mid-scale hotels but differed with upper-scale hotels? So what if I knew that hotels with more rooms tended to have more positive reviews?

(And on this last point, it was a rather common “finding”: hotels with more rooms tended to have higher ratings, presented as though, if you wanted higher ratings, you might want to build hotels with more rooms. The problem, of course, is that larger hotels with more rooms tend to be of the higher-end variety, while budget and independent hotels tend to have fewer rooms. Would the business implication then be that even a budget hotel with more rooms will improve its ratings? Probably not.)
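(If you want to see how a lurking variable like hotel class can manufacture that rooms-versus-ratings correlation, here’s a toy simulation in Python. The numbers are entirely made up, with ratings driven only by class and room counts also tracking class.)

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy data: rating depends only on hotel class; room count also depends on class
rows = []
for hotel_class, base_rating, base_rooms in [("budget", 3.5, 80),
                                             ("mid-scale", 4.0, 200),
                                             ("luxury", 4.5, 400)]:
    for _ in range(300):
        rows.append({"class": hotel_class,
                     "rooms": base_rooms + rng.normal(0, 30),
                     "rating": base_rating + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Pooled across all classes, rooms and ratings look strongly correlated...
print("pooled:", round(df["rooms"].corr(df["rating"]), 2))

# ...but within each class the relationship all but disappears
for name, group in df.groupby("class"):
    print(name, round(group["rooms"].corr(group["rating"]), 2))
```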

In the end, the 15 or so presentations we went through just felt like a whole lot of fluff. Sure, the analytical conclusions were technically correct and statistically sound. But so what?

It reminded me that you can be great at analysis, but without an understanding of the business, without a mindset of constantly questioning “so what does this mean — what are the implications for the business?”, all your analytical prowess would be for naught.

On Hiring for the Long Term

This was something I read in a book called The Art of Scalability, something I believe I’d always intuitively known but never had spelt out explicitly: that having additional hands (or brains) does not necessarily equate to a proportional increase in output – it is often less, especially at the start.

The problem is relatively new. In the old industrial economy where work was relatively simple or specialised, it was possible to have somebody come in and make widgets at almost the same productivity level as someone who had been there for a far longer time.

If one widget-maker can make 100 widgets in a day, two should be able to make 200, or maybe 150 if one of them is new.

But in the knowledge economy where work involves far greater scope and interdependencies, with steeper learning curves, this model doesn’t necessarily replicate very well.

If one analyst can create a spreadsheet model within a day, can two create the model within half a day? Or three quarters of a day? Probably not. And if the second analyst is new, it’d actually probably take two days. Throw in a third analyst and you’d probably get that model done in a week.

There is often a learning curve on the part of new joiners; and though we often take note of the learning of process and technical skills, we often forget there’s also cultural and general adaptation, which can take far longer.

And if the new hire has plenty of prior experience, there’s also the time spent unlearning old behaviours if they are incompatible with current ones.

There’s also somebody who has to give the training, often a senior team member or manager, whose productivity will likely decrease during this period as the new joiner’s increases; and this trade-off is often disproportionate, with the drop in the trainer’s productivity being far worse than the gain in the trainee’s.

If the new joiner leaves just as he or she gets up to speed, which could be a year into the role, then there’s simply no justification for bringing him or her into the team in the first place.


Freely Sharing Information

I’m three quarters of the way through a book called Team of Teams by General Stanley McChrystal, a book on leadership, organisational structure, and a way of thinking that’s so insightful I can’t wait to finish it just so I can start from the beginning again. Other than the Nassim Taleb books, I don’t think there’s been another book that’s had as much of an impact on my thinking.

There are tons of interesting insights in the book, many of which I’m sure will crop up in some form or another on this blog in the near future. But one really stood out and gave me plenty of pause, because it reminded me of a way of thinking I’d parked, feeling my organisation wasn’t ready for it: that we should seriously consider freely sharing information, across hierarchies and across teams. But maybe I can change that.

An excerpt from the book, setting the scene for just this need:

The problem is that the logic of “need to know” depends on the assumption that somebody—some manager or algorithm or bureaucracy—actually knows who does and does not need to know which material. In order to say definitively that a SEAL ground force does not need awareness of a particular intelligence source, or that an intel analyst does not need to know precisely what happened on any given mission, the commander must be able to say with confidence that those pieces of knowledge have no bearing on what those teams are attempting to do, nor on the situations the analyst may encounter. Our experience showed us this was never the case. More than once in Iraq we were close to mounting capture/kill operations only to learn at the last hour that the targets were working undercover for another coalition entity. The organizational structures we had developed in the name of secrecy and efficiency actively prevented us from talking to each other and assembling a full picture.