Thursday, October 6, 2011

Marketing Measurement: Back to the Basics

{Demand Gen Brief}        October, 2011


Home field embarrassment
The score was 47 to 3, and the home team was beyond embarrassed. The offense was listless and the defense couldn't stop a bowl of molasses. While the team walked dejectedly to the locker room, the on-field TV announcer stopped the coach. "What went wrong today, coach?"
"Well, we got stomped because we needed to increase our passing efficiency by 2.7% and improve our run defense by 0.7 yards per carry."
What?
Ok, I've never heard such a ridiculous answer, either. What I have heard consistently after a thorough defeat goes something like this: "We're going to go back to the basics - fundamental blocking and tackling." I've run into this frequently in the world of demand generation, and it may be time for you to ask yourself the same question about your organization's approach. Is it time to get back to the basics?
Bad Company
Let me give you a couple of real-world examples and see if this sounds like your demand gen practices. An organization recently built a closed-loop tracking system that enabled them to track user clicks from source to destination using query strings. The new capability could track a user who clicked from a Google AdWords ad or an outbound email campaign all the way to the download of a white paper PDF file. At download, a blind submit form registered the download and assigned a coded campaign status to the user's contact record and updated the associated campaign in the CRM system. Enamored with the new ability, this organization buried links within the white paper PDF file to a form... to download the very same PDF file.
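For readers curious about the mechanics, here is a minimal sketch of the kind of query-string tracking described above; the parameter names (utm_source, cid) and the surrounding workflow are illustrative assumptions, not the organization's actual implementation.

    # Minimal sketch of query-string campaign tracking (Python). The
    # parameter names below are illustrative assumptions only.
    from urllib.parse import urlparse, parse_qs

    def extract_campaign_code(url):
        """Pull campaign attribution parameters from an inbound click URL."""
        params = parse_qs(urlparse(url).query)
        return {
            "source": params.get("utm_source", ["unknown"])[0],  # e.g. adwords, email
            "campaign": params.get("cid", ["none"])[0],          # coded campaign ID
        }

    # A blind submit form on the download page would pass these values to the
    # CRM, stamping the contact record and updating the associated campaign.
    touch = extract_campaign_code(
        "https://example.com/whitepaper?utm_source=adwords&cid=Q4-WP-017")
    print(touch)  # {'source': 'adwords', 'campaign': 'Q4-WP-017'}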
In another example, an organization got in the habit of sending three separate invitations to online events in a series from an automated event management system. Increasingly, they found registrations dropping off with each successive invitation. I did some digging into their email stats and discovered they were not only failing to significantly increase incremental registrations but also receiving twice as many unsubscribes as registrations, and they were damaging their sender reputation due to complaints. The client, aware of my studies and success in improving click-through performance, wondered if I could apply my wisdom to the problem. In response, my first question was "What additional value did you offer in the successive invitations?" The question was met with silence. After a moment, the client admitted the three invitations contained identical content and nearly identical subject lines, only adding the word "Reminder." I am still awaiting an answer to my next question: "If they didn't respond to your first offer, why did you think they would respond favorably to the very same offer twice more?" The fact is, their prospects did respond... by unsubscribing.
Pump it up – until it pops
The point of these illustrations is not simply to call out myopic marketing practices, but to underscore the focus necessary to really create demand. While technology provides us with wonderful tools to execute our tradecraft and measure results in ways never before possible, we must always remember that demand generation is a craft. In my seemingly unending repertoire of sports analogies (my apologies to non-sporting readers), I have often related the golf club metaphor. While I can buy the latest golf clubs that will, indeed, add yardage to my driving distance, that technology only helps me reach the water hazard or bunker I could never before reach. Tiger Woods could still beat me with his grandfather's clubs and an umbrella for a putter. The analogy in the demand gen world is that even the best marketing automation technology can only help us if we are already executing our craft very well and our systems and processes are structured to support well-conceived business objectives.
Technology only allows your organization to increase the amplitude - or turn up the volume - on your already existing business processes. In the case of one client, it allowed them to make 80,000 mistakes in a matter of seconds when an over-zealous marketer released an email without adhering to a QA process I had strongly recommended. Needless to say, that event led to immediate reconsideration of my process recommendations.
Lost in Space
Back to our original premise – the basics. As a CMO, has automation caused you to shift focus from absolute measurement to relative measurement? Do your metrics illustrate a percentage year-over-year revenue increase, market share increase or ROI on marketing activities? It is common to focus on the relative and lose sight of absolute measurement of your craft. While relative measurement is useful in determining your position relative to your last measurement, it can actually provide a false indicator of real progress towards your business objectives. How can that be? Let's examine a non-sports analogy this time.
Say you are sailing from Mobile, Alabama to Cancun, Mexico, which is a nearly due south course along 88-degree west longitude. You have installed the latest GPS device and programmed in your destination coordinates of 86.50 by 21.10. As you progress towards your objective, the GPS dutifully reports your course, location, speed, distance to and time to destination. Based on the numbers, everything is fine until you bump into the west coast of Florida and discover you had entered the longitude as east rather than west, setting a course for somewhere in the Bay of Bengal off the coast of Calcutta (you cartographers can have some fun with that one). Relative numbers are only helpful in the context of a clearly-defined overall objective. Let's look at one more real-world example.
After installing a best-in-class marketing automation system and getting their feet wet with a few campaigns, a client decided to push the "more" button. The organization quickly became used to the high velocity with which their new system could distribute campaign messages. Within mere months, the objective subtly shifted from distributing quality messaging to distributing a quantity of messages. A metric was created to measure emails sent per month, even though that metric had absolutely nothing to do with strategic objectives and little to do with revenue generation. Unaware of the impact 1.3 million emails per month had on their sender score, the organization continued this volume for months. I ran some tests and discovered their sender score was slowly declining, and complaints and unsubscribes had reduced inbox throughput to less than 80 percent. Roughly 360,000 emails per month were never seen by the recipients and, of those who did receive the emails, unsubscribe rates nearly equaled click-through rates. So much for email volume as a viable marketing metric - at least without a relevant context.
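The arithmetic behind those numbers is worth making explicit; assuming an inbox placement rate of roughly 72 percent, consistent with both figures cited above:

    # Back-of-envelope check of the deliverability figures cited above,
    # assuming an inbox placement rate of about 72 percent.
    sent_per_month = 1_300_000
    inbox_rate = 0.723
    undelivered = sent_per_month * (1 - inbox_rate)
    print(f"{undelivered:,.0f} emails per month never reach an inbox")  # ~360,000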
So what, then?
We've looked at a number of things not to measure, so what about metrics we should measure? Open rates? Yes. Click-through rates? Certainly. How about A-B testing of subject lines? Definitely. These will help you understand single-message performance retrospectively, and that is a good thing. However, exactly how much is that A-B subject line test going to really help you in designing a different product email campaign to a different audience? To accomplish that, you need something a bit more sophisticated, like a sender/relevance score and an offer/relevance score. A sender/relevance score is an indicator of how likely a recipient is to open your email message, and an offer/relevance score is an indicator of how likely your recipient is to click through to your offer.
Each of these scoring systems presents a methodology to review recipient tendencies with a forward-looking focus, rather than a retrospective performance focus. The primary difference between performance measurement and indicative measurement is the application of cumulative performance data analysis based on specific, contextual hypotheses. Let's look at a specific example of an organization overly concerned with performance focus without a forward-looking context. This organization was (rightly) concerned with email open rates, conducted numerous A-B tests, and applied best practices they had read about in online articles much like this one. They determined that "best practice" subject lines contained fewer than 35 characters, and ran tests on a subset of distribution groups to determine the better performer of two subject line candidates. The problem was that the testing was retrospective only and lacked a contextual hypothesis, so when I asked, "So how will this testing shape your next subject line?" I was met with silence. Someone finally mumbled something about testing and best practices of 35-character subject lines, to which I asked whether or not "My product stinks" contained fewer than 35 characters. While admittedly absurd, the example merely exaggerates the effect of blind adherence to "best practices" without a context.
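To make such testing forward-looking, tie each A-B test to a stated hypothesis (for instance, "question-form subject lines lift opens for this audience") and check whether the difference is statistically meaningful rather than simply crowning a winner. A minimal sketch, with invented counts:

    # Two-proportion z-test for an A-B subject line test (Python).
    # The open counts below are invented for illustration only.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
        p_a, p_b = opens_a / sent_a, opens_b / sent_b
        p_pool = (opens_a + opens_b) / (sent_a + sent_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
        return p_a, p_b, z, p_value

    p_a, p_b, z, p = two_proportion_z(opens_a=210, sent_a=1000, opens_b=265, sent_b=1000)
    print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")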
Text without context
What would context look like for your organization and product or service? While it will be different for every organization, the framework will be very similar, no matter the product or service. The sender/relevance score reflects a structured coordination between the individual and corporate sender address and the relevance of the subject line to the recipient. This score is dictated by two framing questions applicable to any organization or product:
  • Does the sender have a positive relationship with the recipient?
  • What relevant recipient question does my email answer?
It is important to note that for any given point in time, only the second answer is variable; your sender(s) either have a relationship with the recipient or they do not. By assigning a score to your email batch, you can evaluate the likelihood your emails will be opened. For example, if each of the two components can carry a maximum of three points and the two scores are multiplied together, a perfect score would be a nine. There are a number of ways you can rationalize this score to an absolute probability, but the key is to initially understand the reasons your emails are being opened or not.
In short, if the sender has a good personal relationship with the recipient, he or she is more likely to open the email regardless of the subject line. If the recipient has no relationship with the individual, but is familiar with the sending organization, there is a possibility he or she will open the email, but is much more likely if the subject line is relevant. In the case where it is unlikely the recipient knows either the individual or the sending organization, there is little likelihood the recipient will open the email unless the subject line is relevant, compelling and a bit provocative.
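As a concrete sketch of the logic above (the 1-to-3 point scale and the multiplicative combination are assumptions for illustration; the offer/relevance score described next follows the same pattern):

    # Illustrative sender/relevance scoring (Python): two components, each
    # scored 1-3 and multiplied together for a 1-9 scale. The scale and the
    # combination rule are assumptions for illustration only.
    def sender_relevance_score(relationship, subject_relevance):
        """relationship: 1 = unknown sender, 2 = known organization only,
        3 = personal relationship. subject_relevance: 1 = generic,
        2 = relevant, 3 = relevant, compelling, and a bit provocative."""
        if not all(c in (1, 2, 3) for c in (relationship, subject_relevance)):
            raise ValueError("each component is scored 1, 2, or 3")
        return relationship * subject_relevance

    # A personal relationship carries the open even with a generic subject
    # line (3 * 1 = 3); an unknown sender needs a compelling subject line
    # just to reach the same score (1 * 3 = 3). A perfect score is nine.
    print(sender_relevance_score(relationship=2, subject_relevance=3))  # 6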
The offer/relevance score works in a similar fashion, but is used to determine a recipient's propensity to click through to a call-to-action offer within an email. This measurement requires a careful balance because the relevance needs to be considered bi-directionally: the offer must be relevant both to the recipient and to the sending organization's goals. This is very important, because if click-through rate (CTR) is not considered in a relevant context, the CTR results can give a false read on your program's effectiveness. Let's look at a specific example.
A client, wanting to show high response rates for an email campaign promoting an online webinar, chose an external vendor to create the content and to present and host the webinar. A topic highly relevant to the target audience was chosen. The email succeeded in filling all available seats within a couple of days, creating a waiting list of an additional 50% of the webinar's capacity. Open rates were high at over 25%, and the CTR for the email was outstanding at nearly 40%. Mission accomplished, right? Wrong.
It turns out that, while the webinar topic was very relevant to the target audience, it was completely irrelevant to the sending organization's goals. The objective of creating sales opportunities was completely lost on the target audience, whose primary objective was accumulating free CPE credits. At a very large expense in both time and money, this organization netted zero sales leads as a result of the relevance mismatch on the sender's side. When considering the cost per lead, this campaign was the worst possible outcome, despite achieving outstanding open and click-through rates.
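Put in cost-per-lead terms (the cost figure below is hypothetical; the lead count of zero is from the campaign above):

    # Cost per lead collapses when leads are zero, no matter how strong the
    # open and click-through rates look. The cost figure is hypothetical.
    def cost_per_lead(total_cost, leads):
        return float("inf") if leads == 0 else total_cost / leads

    print(cost_per_lead(total_cost=25_000.0, leads=0))  # inf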
Are we thrilled at the prospect of campaign execution scale and quantitative feedback our Marketing Automation and CRM systems can provide us? Yes! Do we need to get back to the basics? Absolutely! Is there a gap between pure marketing craft and marketing automation skills? It’s wider than you think! We will take a look at how to bridge the gap between marketing craft and mechanical execution in the next edition of {Demand Gen Brief}.
© 2011, Stephen D. Turley. All rights reserved.
