Monday, August 25, 2014

Three Ways A-B Testing Will Improve Your Marketing. (Part 3) Into the Vortex.


Automating your Demand Generation functions is often referred to as a journey. However, your journey need not be aimless and without a destination. In fact, if you don’t have a destination in mind, your journey will take much longer than necessary and may never end. As the saying goes, “If you don’t know where you’re going, how will you know when you get there?”

While your ultimate destination may be something like “a world-class Demand Center,” you will need to establish some milestones along the way to measure your progress. Testing will help you to establish your progress toward (and beyond) those milestones. When it relates specifically to A-B testing, there are some great milestones against which you should always be measuring your progress! Those milestones may be found in your Demand Waterfall (or Demand Funnel).

Principle #1: Programs should be measured against movement
Marketers often fall into the trap of trying to generate activity. Activity is fine, but if that activity does not result in movement, it has both cost you money and gained you nothing. Let me explain.

Let’s say you create an outbound marketing program, sending 50,000 emails to your target audience. You have written incredibly compelling content with a CTA pointing to a wildly popular whitepaper. Your Subject Line drew a 50% open rate and your content prompted 50% of the opens to download your fabulous whitepaper, resulting in 12,500 downloads at a 25% conversion rate. My guess is you’d be jumping up and down at your success after posting these metrics. Not so fast! You’ve generated lots of activity, but have you generated any movement?
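For reference, the activity math in the example above works out like this. A quick sketch; the variable names are mine, and the numbers are the ones from the example:

```python
# Activity metrics from the whitepaper example (illustrative names).
emails_sent = 50_000
open_rate = 0.50        # 50% opened the email
download_rate = 0.50    # 50% of opens downloaded the whitepaper

opens = int(emails_sent * open_rate)          # 25,000 opens
downloads = int(opens * download_rate)        # 12,500 downloads
overall_conversion = downloads / emails_sent  # 0.25, the 25% conversion rate

print(opens, downloads, overall_conversion)   # 25000 12500 0.25
```

Impressive-looking numbers, but note that nothing in this calculation says anything about movement.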

Unless you got paid for all of those whitepaper downloads, you’ve actually spent time and money to send out a free piece of information to your prospects, customers and, likely, your competitors. Here’s the real question: what did the recipients do as a result of reading the whitepaper? What was the goal of your program? Was it awareness, engagement or conversion? Did the readers react according to your goals?

Principle #2: Movement should be measured in terms of Waterfall Stages
You have a Demand Waterfall for a reason, which is to determine your prospects’ stage in their buyers’ journeys. Assuming your Waterfall accurately reflects that buyer’s journey, each and every program you deploy should have the specific purpose of moving the prospect from one stage to the next in the funnel.

In our previous example, let’s assume our whitepaper program was an engagement program, with the purpose of moving Inquiries to the AQL (Automation Qualified Lead) stage. Based on this goal, we now have a movement objective against which we can measure the success of our program. We can determine the success or failure of the program in terms of how many Inquiries convert to AQLs as a result of the program.

Principle #3: Movement should indicate a significant interaction with your brand
Is downloading the whitepaper enough to indicate a significant interaction? Just because a reader downloaded it does not guarantee he or she even read it. How many downloads do you have just sitting on your hard drive waiting to be read at some future date? We can differentiate between insignificant and significant by creating a definition for those actions.

An insignificant interaction is a download with no continuation of the interaction. Sometimes I download something simply because I want the information and somebody has offered it for free. I have absolutely no intention of purchasing anything, but the information seems interesting or useful.

A significant interaction is a download with a continuing action. For example, a well-designed program might have a secondary CTA within the downloaded asset, prompting readers to engage more deeply with your brand. As an example, after reading more about a specific A-B Testing solution, another CTA could prompt the reader to try a free testing tool or interact with a free testing framework generator. This continuation action would indicate the reader’s real interest in your solutions.

How do we measure movement?
Waterfall movement is a matter of understanding changes to a Contact record in your MAP. This change has a number of components you need to understand in the context of many other potential changes simultaneously affecting that same Contact record. So how do you isolate the movement you need to measure? The first priority is to make sure you are capturing the required data behind the metrics you want to measure!

1.     You should systematically link your Waterfall program to your response measurement. There are a variety of ways to do this; the goal is to attribute Waterfall movement to a specific response – almost always the most recent significant interaction. This is different from Campaign attribution, in which most organizations want to attribute closed revenue across all campaigns that touched the Contact during the Lead lifecycle.
2.     You need to record your program responses so they are actionable by your Waterfall program. This means you will not only need to understand which program, but when the Contact responded.
3.     In most cases, you will also need to record how the Contact arrived at the program CTA in order to convert. This is a great basis for testing which routes to the CTA are most effective.
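The three data points above can be sketched as a simple response record plus an attribution rule. This is an illustrative assumption of what such a record might look like, not any particular MAP’s schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative response record; field names are assumptions, not a real MAP schema.
@dataclass
class ProgramResponse:
    program_id: str         # which program the Contact responded to
    responded_at: datetime  # when the Contact responded
    route: str              # how the Contact reached the CTA (email, banner, sem, ...)
    significant: bool       # e.g. a download *plus* a continuation action

def attribute_movement(responses):
    """Attribute a Waterfall stage change to the most recent significant interaction."""
    significant = [r for r in responses if r.significant]
    return max(significant, key=lambda r: r.responded_at) if significant else None

responses = [
    ProgramResponse("whitepaper-aug", datetime(2014, 8, 4), "email", True),
    ProgramResponse("banner-q3", datetime(2014, 8, 10), "banner", False),
    ProgramResponse("whitepaper-aug", datetime(2014, 8, 12), "sem", True),
]
winner = attribute_movement(responses)
print(winner.route)  # sem: the most recent *significant* response wins
```

Note how this differs from Campaign attribution: only one response, the most recent significant one, gets credit for the stage change.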

Change the conversation.

Optimization should be conducted with the program movement goal in mind. If we go back to our original whitepaper marketing program, we would likely make significant changes to the way we determine success if our goal is Inquiry-to-AQL conversion. In that case, 12,500 whitepaper downloads that don’t result in AQL conversion might look like a tactical success, but it is actually a strategic failure. How would we change the conversation?

First, we would change our target audience to include only Contacts in the Inquiry stage. If the goal is to convert Inquiries to AQLs, what is the purpose of sending to anyone other than Inquiries? Second, we would ensure our infrastructure was pre-built to capture the data points necessary to measure Inquiry-to-AQL conversion. There is no sense setting a goal we cannot measure. Third, we would set up our A-B testing points to measure the things we can both control and change, such as traffic sources. Did outbound emails work better than banner ads, or did paid SEM work better than purchased media?
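That traffic-source comparison boils down to a per-route conversion tally. Here is a minimal sketch with made-up response data (the routes and outcomes are illustrative, not a real report):

```python
# Hypothetical (route, converted_to_aql) pairs, one per responding Contact.
responses = [
    ("email", True), ("email", False), ("email", True), ("email", False),
    ("banner", False), ("banner", False), ("banner", True),
]

def conversion_by_route(responses):
    """Inquiry-to-AQL conversion rate per traffic source."""
    totals, wins = {}, {}
    for route, converted in responses:
        totals[route] = totals.get(route, 0) + 1
        wins[route] = wins.get(route, 0) + (1 if converted else 0)
    return {route: wins[route] / totals[route] for route in totals}

rates = conversion_by_route(responses)
print(rates)  # email converts at 50%, banner at ~33%
```

With data captured this way, the A-B question “which route to the CTA works best?” becomes a simple comparison of conversion rates rather than a guess.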

Notes:

Testing must be performed with an overall objective in mind.

You should be testing movement, not activity.

Your infrastructure must be pre-built to capture the data necessary to your metrics.

Next week, we will take a look at how your MAP might be doing a whole lot of processing that accomplishes nothing. With SaaS and cloud-based solutions come a plethora of cloud-based add-ons. They do all sorts of nifty stuff, but do you really need or want them? Maybe, but there is a right and wrong way to look at cloud connectors and plug-ins. We will look at them in next week’s edition: Head in the Clouds? Why Less may be More.

Monday, August 18, 2014

Three Ways A-B Testing Will Improve Your Marketing. (Part 2) Avoid Misdirected Testing.


Last week we decided that any testing framework needed to have a specific destination in mind – your objective.  While it is reasonably straightforward to match your testing framework to your overall Marketing objectives, it is easy to get misdirected and lose track of your destination.

If your car has a GPS navigation system and if you’re anything like me (I always have a better route), you have heard that voice in the GPS say, “Turn around at the nearest U-turn!” Repeatedly. Mindlessly. Until you just turn it off. (Or, in my case, the GPS finally gives up, decides I’m right, and recalibrates the new course.) Your A-B testing framework should be like that voice in the GPS, incessantly reminding you that you have left the prescribed course. How? Here are some guidelines to help you understand when your testing is off-course.

Guideline #1: A-B test results are not extensible and repeatable
Subject Line testing often falls into this category. If your testing is not extensible and repeatable, the test is only applicable to that particular email. To repeat a lesson we learned in {Demand Gen Brief} last week, you should test categories of subject lines rather than individual subject lines. We would apply the scientific method to create a series of hypotheses, test those assumptions, and ultimately create rules by which all future subject lines are written, such as:

1.     Subject lines should be less than 35 characters.
2.     Subject Lines should include our company name.
3.     Subject Lines should contain the recipient’s first name.


Guideline #2: A-B tests are not actionable
What’s the point of performing a test if you can’t act on the results? An example of such a test would be to perform an A-B test on CTA button colors only to find that purple wins by a landslide. Problem? Purple is your main competitor’s color and your brand standards prohibit its use in any way. There are, however, some very interesting takes on this guideline. For example, “best practices” are to not use some key words in your subject line, because they can trigger email client SPAM filters. Or not. What if 50% of your emails got sent to SPAM filters because you used the word free in the subject line, but the remaining 50% showed a 1200% open rate increase over the next best subject line? Would you break “best practices” and go with free? Of course you would, unless…

Guideline #3: A-B tests are specific to only one part of the equation
Remember, each test in your framework is designed to test a specific metric, and each metric measures only a part of the journey from Prospect to Closed/Won. In the previous example, using the word free in the subject line increased your open rates by 1200%. Great, but what happened next? Did those opens turn into clicks? Did those clicks turn into MQLs, SQLs and, ultimately closed business? Your testing framework should be looking at the entire lead lifecycle to determine how each action contributes to the progress of the entire demand funnel. Let’s look at another example.

                    Opens               Clicks              Convert to MQL      Convert to SQL      Closed/Won
With "free"         1200 x .5 = 600     600 x .01 = 6       6 x .5 = 3          3 x .5 = 1.5        1.5 x .5 = 0.75
Without "free"      100                 100 x .25 = 25      25 x .5 = 12.5      12.5 x .5 = 6.25    6.25 x .5 = 3.125
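The same funnel math as a short calculation. The helper name is mine; the rates come straight from the table, with the “free” variant losing half its delivery to SPAM filters:

```python
# Full-funnel outcome for each subject line variant. Stage conversion rates
# after the click (MQL, SQL, Closed/Won) are identical at 50% each.
def closed_won(opens, click_rate, mql=0.5, sql=0.5, won=0.5):
    return opens * click_rate * mql * sql * won

with_free = closed_won(opens=1200 * 0.5, click_rate=0.01)  # half filtered to SPAM
without_free = closed_won(opens=100, click_rate=0.25)

print(with_free, without_free)   # 0.75 vs 3.125 closed deals
print(without_free / with_free)  # ≈ 4.2x the closed business without "free"
```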

In the end, that great open rate using the word free only resulted in opens, not clicks. Using the exact same conversion rates from MQL through close, we find more than four times the closed business by not using the word free in the subject line.

Change the conversation.

Again, A-B testing must be performed within a holistic framework that considers the entire demand funnel and what each tested step contributes to the whole. Unless you get paid for people opening your emails, the 1200% increase in open rates does not serve your organization’s overall objectives.

Notes:

Testing must be performed within an overall framework with a specific objective in mind.

Testing must consider both the specific action being tested and its overall contribution towards the overall objective.

Again, “best practices” for company A are not necessarily best practices for company B.

Next week, we’ll look at how to tie your testing framework to demand funnel progression, and why it is critical to build your framework that way: Three Ways A-B Testing Will Improve Your Marketing. (Part 3) Into the Vortex.

Monday, August 11, 2014

Three Ways A-B Testing Will Improve Your Marketing. (Part 1)

The old Chinese proverb says, “A trip of a thousand miles begins with a single step.” While I’m still not sure how the ancient Chinese knew anything about English units of measure, the saying does imply something very important: a destination. A trip of a thousand miles invariably leads somewhere. And that somewhere is likely missing from your A-B testing.

If your testing framework is only that, a testing framework, you are missing out on one of the key benefits of testing. You should be testing with a destination in mind: optimization. Your goal is to both optimize current campaign performance and do it in such a way you can apply that optimization to future campaigns. Let’s look at an example of the types of A-B testing I’ve seen.

Test #1: A-B email Subject Line testing
You may have performed this type of testing, so you know how it goes. This is an open rate test, and is critical, since nobody can respond to your CTA if they don’t open the email first. You take otherwise identical emails and test one subject line against the other, such as:
  1. ACME sells really keen widgets
  2. ACME widgets solve all the world’s problems
You split off 10% of your campaign segment and send Subject Line 1 to half that audience and Subject Line 2 to the other half. Whichever Subject Line wins the test gets applied to the other 90% of the segment, assuming the open rates will follow the same pattern as the test.
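The split described above can be sketched in a few lines. The function name and the 10% holdout are illustrative; most MAPs do this for you, but the mechanics are the same:

```python
import random

# Hold out 10% of the segment, send Subject Line 1 to half of the holdout
# and Subject Line 2 to the other half; the winner goes to the remainder.
def split_test(segment, test_fraction=0.10, seed=42):
    contacts = list(segment)
    random.Random(seed).shuffle(contacts)     # randomize to avoid ordering bias
    n_test = int(len(contacts) * test_fraction)
    test_a = contacts[: n_test // 2]          # gets Subject Line 1
    test_b = contacts[n_test // 2 : n_test]   # gets Subject Line 2
    remainder = contacts[n_test:]             # gets the winning Subject Line
    return test_a, test_b, remainder

a, b, rest = split_test(range(50_000))
print(len(a), len(b), len(rest))  # 2500 2500 45000
```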

Test #2: Email CTA link testing
Once the recipient opens your email, the next critical step is to obtain a response to your CTA. A number of tests have been employed here, such as:
  1. Change the color of the link button – red vs. blue
  2. Move the button to different spots on the page – right column vs. inline
  3. Use different graphic elements as a button – arrow vs. rectangle
The test is run in exactly the same manner as the Subject Line test, with the winner of the pilot group getting sent to the remainder of the segment.

So, what’s wrong with these tests?
Nothing is wrong with the tests themselves. What’s wrong is they are not performed within a framework aimed at solving the real problem. Let’s start by asking this question: How much does your organization get paid when someone opens an email? How about when they click through from an email? (Unless you are a PPC organization whose business model is built on creating click-throughs.) For the vast majority of B2B organizations, the answer is $0. We get paid when someone engages with our sales team and ultimately buys our products or services. So what should our optimization framework ultimately measure?

Tactically, we should think of open rate optimization in terms of “best principles” (there’s that term again) we can apply against all of our tactics to improve our funnel conversion rates and velocity. In our Subject Line test, will we actually use that identical Subject Line in another program to the same segment next month? I hope not. Therefore, we need to think of our optimization framework as a series of repeatable principles that we can employ in all subject lines. We would apply scientific method to create a series of hypotheses to test these assumptions to ultimately create a rule by which all future subject lines are created, such as:
  • Subject lines should be less than 35 characters.
  • Subject Lines should include our company name.
  • Subject Lines should contain the recipient’s first name.
Important Note: these are examples of best principles and should not be applied uniformly to your emails as a “best practice.” Again, best practice for Company A could be worst practice for Company B!

Change the conversation.

To successfully optimize, your testing should not stop when the tactical campaign is over. Your testing should follow all the way through the demand funnel to Closed/Won (or lost, but we’ll assume the best here). As we’ve mentioned in previous editions of {Demand Gen Brief}, all of your programs should be specifically designed to create forward funnel movement, and should be built around a specific process. Optimizing only a part of the process will not provide the end-to-end improvement you want.

Notes:

Testing must be performed with an overall objective in mind.

Tactical testing is not generally applicable over all of your campaign tactics.

Build your optimization framework around principles that can be applied to multiple tactics.

Next week, we’ll look at misdirected testing and how you can avoid falling into that trap: Three Ways A-B Testing Will Improve Your Marketing. (Part 2) How to Avoid Misdirected Testing.
 

Monday, August 4, 2014

Sales Thinks Your Leads Stink. And yes, you can fix that! This is how. (Part 3)


You know the smell. It’s late and you’re driving on a deserted highway out in the country. The smell hits you like a ton of pungent, acrid bricks. Dead skunk. Your only thought is how to not get any of that on you as you carefully navigate the darkness seeking to avoid the putrid remains.

Now picture this. Sales thinks your “marketing qualified” leads (imagine the sales leader making sarcastic air quotes as she complains to you) stink like a dead skunk. That smell you try desperately to avoid on the road in the dark of night. Then the unthinkable: after describing the nauseatingly disgusting state of your (air quotes) leads, she asks for more. Who intentionally runs over that dead skunk? Is your Sales department stupid? Nope. Just uneducated. And it’s your job to educate them!

Remember the Brass Tacks questions?
Last week, we ran the math. (If you missed last week’s blog, click here. You’ll want to know the math behind the brass tacks!) Math is indisputable, but it doesn’t answer the question of “how?” This left us with three Brass Tacks questions:

1.     How many Contacts do we need at top of funnel to meet the required number of MQLs?
2.     What can we do to affect our conversion rates at each stage?
3.     What can we do to affect the time-in-stage to increase velocity?

We actually need to ask question 1 twice, once before we begin our exercise (to establish a benchmark) and again after we finish (to establish our improvement). In last week’s example, we needed 10,000 Contacts at the top of our funnel to reach the target number of Marketing Qualified Leads (MQLs). This established our benchmark segment size. (Again, click here to review how to calculate your waterfall.)
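Working backward up the waterfall looks like this in code. The stage conversion rates below are made up for illustration; the point is that dividing a hypothetical 1,000-MQL target by each stage’s conversion rate yields the 10,000-Contact benchmark from last week’s example:

```python
# Illustrative stage-to-stage conversion rates, top of funnel downward.
stage_rates = {"Contact->Inquiry": 0.40, "Inquiry->AQL": 0.50, "AQL->MQL": 0.50}

def contacts_needed(target_mqls, rates):
    """Work backward up the funnel: divide the target by each conversion rate."""
    needed = target_mqls
    for rate in rates.values():
        needed /= rate
    return int(round(needed))

print(contacts_needed(1000, stage_rates))  # 10000 Contacts at top of funnel
```

Run the same calculation again after your improvement exercise and the difference in required segment size is your measurable gain.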

The answer to brass tacks question 2 is a lot more flexible, and requires your unique marketing skill set to execute. However, there are some guiding “Best Principles” that will help you get there quickly, regardless of your market or product.

Principle #1: Execute every program with a specific funnel objective in mind.
This is different from a shotgun approach that targets everyone at every stage in the buying process. Unless you are selling commodities and your only market advantage is price, it is unlikely your B2B buyer is going to find you out of nowhere and send in a million-dollar order based on a single online interaction with your company. Nobody seems to disagree with this fact, but I find it amazing how many people execute marketing programs in this manner.

Principle #2: Even if you’re right, you’re not right.
This means you need to test everything, even if it seems to be working. Improving conversion rates is a matter of playing a continuous game of “King of the Hill.” Your current champion is only the benchmark by which you will compare the next challenger. And as soon as a challenger dethrones the current champion, the game begins anew.

Principle #3: There’s not one answer for everything.
What works for one market or vertical may not work for another, so don’t assume someone else’s “best practice” will automatically work for you. For example, at a time when “best practice” was to send all emails as HTML, I tested this practice. It turned out not to be “best” at all for my application. In fact, text-only emails outperformed their HTML counterparts (as measured by click-throughs) by over 35%. Again, test everything, including “best practices.”

Affecting velocity (brass tacks question 3) is a matter of reducing time in stage for those stages up to a conversion stage, such as MQL. Similar to question 2, there are principles, rather than practices you should follow to decrease your time in stage. Remembering that we need to be executing every program with specific stage movement in mind, we can add two more principles to the mix:

Principle # 4: Know your buyer.
In order to accelerate your funnel, you must completely understand your buyer: who, what, when, how and why she buys. If you have not profiled your buyer, you have no hope of accelerating your funnel because you have no idea which parameters to change and, likely have no data to support those changes. As an example, if your product or service is highly reliant upon FY-driven buying cycles, does your MAP database contain FY start month? If not, how are you going to know when to begin delivering marketing messaging? Does your buyer rely heavily on input from a technical user to make purchasing decisions? Have you created the right content to help that buyer with the technical conversation, and is that communication a part of your funnel acceleration strategy?

Principle #5: Make sure you’re measuring the right thing.
Your MAP will require customization to automate and measure time in stage for your specific demand waterfall because your stage promotion and demotion rules are unique to your organization. Having a correctly defined waterfall program is the first step in measuring progression.  Once built, you need to measure your demand waterfall on a regular cadence – at least monthly, perhaps even weekly. The two key metrics you need to capture are:
1.     Total number of Contacts in each stage. From this metric, you will be able to calculate your stage conversion rates and cumulative conversion rates.
2.     Average time in stage for each stage.
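Those two captured metrics are enough to derive your stage and cumulative conversion rates. A minimal sketch, with illustrative snapshot numbers and stage names:

```python
# A monthly snapshot of Contact counts per Waterfall stage (illustrative).
snapshot = {"Inquiry": 10_000, "AQL": 4_000, "MQL": 2_000, "SQL": 1_000}

def stage_conversion_rates(counts):
    """Stage-to-stage conversion: each stage's count over the prior stage's."""
    stages = list(counts)
    return {f"{a}->{b}": counts[b] / counts[a] for a, b in zip(stages, stages[1:])}

def cumulative_rate(counts):
    """End-to-end conversion from the top stage to the bottom stage."""
    stages = list(counts)
    return counts[stages[-1]] / counts[stages[0]]

print(stage_conversion_rates(snapshot))  # {'Inquiry->AQL': 0.4, ...}
print(cumulative_rate(snapshot))         # 0.1
```

Logging one of these snapshots per measurement cycle gives you the trend line you need.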
You will want to keep a running log of these measurements to spot trends and measure improvement over time. In general, you want each successive snapshot to show higher conversion rates and shorter time in stage at every step of the funnel.
Change the conversation.

Once you have a complete view of your demand waterfall and understand the factors that affect conversion rates and velocity, you can change the conversation. Following these principles, you will know beforehand the who, what, when, how and why of your buyer’s journey and know what it will take to reach a specific goal. And you’ll have both the math and the data to back up your argument.

Notes:

You must agree with sales on the definition of a “sales-ready” lead.

There are only three components of Lead volume Marketing can control.

Learn to calculate each of these components and let the math do your talking for you.

This week we talked a lot about testing. Test your conversion rates. Play King of the Hill. Even test “best practices.” That’s a lot of testing. So how should you go about testing? Next week, we’ll begin a new series named Three ways A-B testing will improve your Marketing results.