Category Archives: Measurement

Why Do Marketers Latch Onto Whatever’s New & “Hot”?

The numbers are disturbing. A 2011 Nielsen survey of advertising credibility ranks mobile device advertising 14th in a field of 15 and advertising on social media dead last. Yet these two venues are at the top of many marketing preference lists. What’s in the makeup of marketers or the state of their profession that makes them cling to whatever’s new and “hot?”

BTW, personal and online recommendations from consumers grabbed the top two spots. How does that add to the story?

“Averaging” Customers – Necessary? Misleading? Or Both?

What’s your knee-jerk reaction to hearing someone say, “our average customer?” Mine is to wince. Describing customers according to statistical averages – age, income, net worth, shopping trips per week, number of employees, annual revenues, number of vendors used for each purchase category and on and on – tells us something about customers. Seeing a statistical distribution across each of these parameters tells us lots more. But still not very much. We can’t understand customers without understanding their behaviors. And you can’t average behaviors. What’s the average of leaving a restaurant angry and believing you overpaid? Might as well average a tomato and a pork chop.

Nonetheless, we let the impracticality of assessing and responding to individual behavior dictate use of statistical averages to describe customers. Then we enhance the data by imputing behaviors to people we don’t know. Is this the best we can do?

How should we go about understanding our customers?

Can We Measure the Outcomes of Improving Customer-Facing Process?

Please, no comments like “You can’t manage what you can’t measure.” That’s bunk. Always has been. Always will be. And to support my harsh stance on this ridiculous statement, I’ll cite none other than Albert Einstein, who kept a sign on his Princeton office wall saying:

“Not everything that matters can be measured. But not everything that can be measured matters.”

In many cases, trying to measure growth in share of customer stemming from improved customer experience triggered by introduction of Outside-In process quickly becomes a fool’s errand. We can get halfway there by measuring improvement in customer experience (although doing so requires a very high level of research expertise, beyond simple NPS scores). But even these measures are subject to influence from contextual changes. And freeing changes in share of wallet from contextual changes defies research. Hell, we can rarely measure the thickness of the wallet, so how do we calculate the share?

So what are the alternatives to direct measurement? Or does anyone want to argue with Einstein? :-)

Based on our experience, the most effective approach in most situations is establishing intuitive “cause & effect” relationships, where certain actions, well performed, will enhance customer experience in ways that should broaden relationships – or directly trigger additional business from customers, as should be the case for new products/services. While research can’t statistically measure the effects in most cases, it can validate the connections using Kano studies (not VOC, C-Sat or especially not NPS).

Not precise enough for you? Then you don’t belong measuring anything to do with people, customers included.

So what should O-I implementers do instead?

Whose Customers Complain the Most? The Better Business Bureau dishes on the worst offenders.

The North American BBB has released its 2009 compilation of which industries drew the most complaints and how well they resolved them. Of course, “resolved” is a relative term for BBB complaints. It can mean “customer gave up,” but the majority of resolutions indicate at least partial satisfaction of customer claims. While you have to discount some high “resolution” numbers, you can reasonably use them to compare how well industries address complaints.

Here’s the list – from “least worst” (10) to “worst.” The number sequence represents: # of complaints / % “resolved”

 10. Retail furniture: 12,313 / 76%

Having worked with clients in this sector, my educated guess is that going out of business without returning deposits triggered lots of these.

9.  Auto repair:  12,410 / 65%

Hey, these blokes finish lower than used car dealers. Again, based on experience with a client providing technical back-up: shops not knowing what they’re doing and ginning up problems Click & Clack have never heard of are the primary culprits.

 8.  Wireline telcos:  13,166 / 96%

They answer to state regulators, so they can’t afford to just stick it to customers outside of what they’re allowed to do.

 7.  Used auto dealers:  13,235 / 69%

Most (but not all) new car dealerships have cleaned up their act here, so likely a high % of “used only” sellers.

6.  Collection agencies:  15,628 / 85%

There is no more predatory and less ethical industry out there, but state regulators are finally clamping down.

 5.  Internet merchants:  21,154 / 69%

I cannot believe the ads some consumers take seriously. Want to inherit a fortune from a dying Nigerian? Hong Kong’s really getting in on the act now, too. Most legitimate web merchants are actually very responsible, but far from all are legit. A little trick: if you don’t see an “unsubscribe” link at the bottom of the ad, get the hell out of there.

 4.  New car dealers:  26,019 / 83%

Thank you, Toyota. A number of dealer networks have really cleaned up their act. But obviously lots haven’t.

 3.  Banks:  29,824 / 95%

BBB acknowledges the “resolution” rate is inflated, probably because the FDIC satisfied lots of claims. If left to the banks’ own devices, and especially to credit card departments, the 95% would probably drop lots.

 2.  Cable & Satellite Television:  32,158 / 98%

Comcast for one is trying to straighten up a little, but basically we’re talking about two packs of liars caught in a life or death competitive struggle. A whole lotta these folks would qualify as politicians. But, they’re regulated, at least in part, which forces lots of make-goods.

1.  Cellular providers:  36,086 / 95%

Verizon Wireless is a pretty straight shooter, but watching AT&T defend its “Swiss cheese” coverage and lack of bandwidth to handle iPhones tells us just how low the industry can go. Sprint used to be a service nightmare. Now they’re just a bad dream. Viral attacks have hit some so hard that all of them tend to roll over and make good rather than risk bad PR. Yay, angry customers!

 Two lessons here: 1.) If a company screws you, consider going to the BBB. No half-way sane (and legitimate) company wants to be on the black list; 2.) All those BBB window stickers you see at used car dealers and repair shops come from auto-supply distributors.

Forget About Lead Cost…Please!

What’s wrong with using “lead cost” as a B2B, lead-generation campaign measure? Oh, let me count the ways: 1) activity after a “lead” is generated overwhelms “lead cost” in contributing to ROI; 2) “lead cost” doesn’t affect campaign ROI sufficiently to serve as a KPI (key performance indicator); 3) fixation with “lead cost” enables laissez-faire attitudes towards what really matters; 4) “leads,” as marketing and sales commonly use the term, are really inquiries – unqualified inquiries – and shouldn’t be called “leads” prior to successful qualification. But the whole world calls inquiries “leads,” so I’ll join in and stop using all the quotation marks.

So why can’t B2B companies get past judging campaign ROI by lead cost? Three basic reasons: 1) they don’t know any better; 2) they know better but lack the will and/or discipline to collect the data necessary to measure ROI; 3) they collect the data but can’t find a calculation for converting raw outcomes data into ROI information. I’ve already beat my brains out unsuccessfully trying to address the first two issues through articles like this, so it’s on to reason #3.

“The formula” for lead-gen ROI calculation

Here’s our cherished formula at HYM:

Per-sale profit = (Selling price – Variable cost) – (Marketing cost ÷ # of contacts) ÷ (Response rate × Conversion rate) – (Qualification cost + Selling cost + Fulfillment cost) ÷ Conversion rate

Don’t panic! I can explain. Let’s take one operand (computational element) at a time.

Selling price minus variable cost (cost of goods sold):  We arrive at this number by taking the average revenue created by all the leads converted to sales and then subtracting the average variable cost across all sales. “Variable cost” describes all expenses that change in concert with changes in the volume of products produced or services delivered. If you’re unfamiliar with the concept, please talk to your CFO.

One caveat–don’t treat selling costs as a variable cost. “The” formula factors in sales expense.

Marketing cost divided by number of contacts:  Nothing more than the good old CPC (cost per contact). For web programs, you can use the number of click-throughs on your site as contacts.

Response rate multiplied by sales conversion rate:  Just what it says.

Qualification cost:  Average cost to qualify one inquiry. If you’re generating sales leads but not qualifying them before forwarding to sales, you should be shot.

Selling cost:  Track the percentage of sales calls made on campaign leads over the sales follow-up period, then take that percentage of all relevant sales costs.

Fulfillment cost:  Hopefully you’ve moved on to using PDFs instead of glossy product information brochures, almost zeroing this out. 

That’s it for the terms. Now let’s discuss the whys and wherefores of the calculation.

The rationale behind the math

The formula says that average gross profit for a closed sale should be greater than or equal to the marketing, qualification, selling and fulfillment costs allocated to that sale, with “equal to” representing break-even. To extrapolate from the value of a single transaction to total campaign profitability, you simply multiply this average per-sale profit (or loss) by the number of closed sales. Which begs the question: why are we calculating the return on one transaction rather than the whole campaign?

Two good reasons: 1) the math is so much simpler using a single, average transaction; but more importantly, 2) looking at a single transaction creates a formula ready-made for running “what-if” projections and pre-campaign modeling.

CPC (the numerator) divided by response rate multiplied by conversion rate (the divisor):  All we’re doing here is allocating all marketing cost to only closed sales. The concept is simple, in words at least. Dividing the CPC by the response rate allocates all marketing costs to just inquirers. Then, dividing that number by the conversion rate shifts all marketing cost down further to only closed sales.

Qualification, selling and fulfillment costs divided by conversion rate.  Because these costs apply only to inquirers, we just need to divide by the conversion rate to allocate them to closed sales only.
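The allocation steps above are easy to sketch in code. Here’s a minimal Python version – all the numbers and names are purely hypothetical, not from any actual campaign – that just encodes what the formula does: gross margin per sale, marketing cost pushed down to closed sales via the response and conversion rates, and follow-up costs divided by the conversion rate alone.

```python
def per_sale_profit(price, variable_cost, marketing_cost, contacts,
                    response_rate, conversion_rate,
                    qual_cost, selling_cost, fulfillment_cost):
    """Net profit (or loss) on one average closed sale."""
    gross_margin = price - variable_cost     # selling price minus COGS
    cpc = marketing_cost / contacts          # good old cost per contact
    # Allocate all marketing cost down to closed sales only
    marketing_per_sale = cpc / (response_rate * conversion_rate)
    # Qualification, selling and fulfillment apply only to inquirers,
    # so divide by the conversion rate alone
    followup_per_sale = (qual_cost + selling_cost + fulfillment_cost) / conversion_rate
    return gross_margin - marketing_per_sale - followup_per_sale

# Hypothetical what-if: $20,000 spend reaching 10,000 contacts,
# 2% response, 25% conversion
profit = per_sale_profit(price=5_000, variable_cost=3_000,
                         marketing_cost=20_000, contacts=10_000,
                         response_rate=0.02, conversion_rate=0.25,
                         qual_cost=40, selling_cost=600, fulfillment_cost=10)
# Gross margin of $2,000 minus $400 allocated marketing and $2,600
# allocated follow-up: this campaign loses $1,000 on every closed sale.
```

Multiply the per-sale result by the number of closed sales (contacts × response rate × conversion rate) to extrapolate to total campaign profit, and vary the inputs to run the pre-campaign “what-if” modeling described above.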


Is calculating ROI worth all this work?
Yah sure, it is. Let me give you several examples.

  • Many clients are running ongoing lead-gen programs when we start engagements. They usually don’t know whether they’re making or losing money. Applying “the formula” tells the true story–and provides a critical tool for identifying what’s broken and what can be optimized.
  • On a more granular level, we’ve worked with numerous clients that think they’re saving money by skipping qualification. Invariably, their campaign ROI is in the toilet, with selling costs disproportionate to revenue. Using “the formula” (and lead-gen experience) to project the revenue increase and the sales cost reduction proper qualification would provide usually disabuses clients of that errant thought. Not only do the financial returns from qualification overwhelm qualification expense–but qualification often makes the difference between substantial sales returns and no sales whatsoever.
  • I was running a print, lead-gen program for a division of Pitney Bowes, which wanted to kill the more expensive placements. Until, that is, “the formula” showed them the most expensive per-lead source was the most profitable. In fact, profitability of lead sources was almost inversely proportional to lead cost.
  • “The formula” got me fired by American Express by showing that AXP’s ill-fated Financial Services Direct initiative was going to do a face plant–which I duly reported, to the chagrin of AXP execs. But “the formula” didn’t lie. AXP lost its shirt, and a whole bunch of AXP execs had to “walk the plank” from the top floor for their foolish optimism. Boy, did I look good. After the fact.
  • On another AXP engagement, we applied “the formula” for up-front modeling and learned that to generate positive numbers, we’d have to 1) change a planned two-step mail program to a one-step; and 2) keep our CPC brutally low. “The formula” was right–with one exception. We hadn’t dared plug in a response rate 8X the industry average. We forgave “the formula.”

And I could go on and on.

Yup, “the formula” really is worth the work. You betcha it is.