Monday, August 11, 2014

Three Ways A-B Testing Will Improve Your Marketing (Part 1)

The old Chinese proverb says, “A trip of a thousand miles begins with a single step.” While I’m still not sure how the ancient Chinese knew anything about English units of measure, the saying does imply something very important: a destination. A trip of a thousand miles invariably leads somewhere. And that somewhere is likely missing from your A-B testing.

If your testing framework is only that, a testing framework, you are missing out on one of the key benefits of testing. You should be testing with a destination in mind: optimization. Your goal is both to optimize current campaign performance and to do it in such a way that you can apply that optimization to future campaigns. Let’s look at a couple of examples of the types of A-B testing I’ve seen.

Test #1: A-B Email Subject Line Testing
You may have performed this type of testing, so you know how it goes. This is an open rate test, and is critical, since nobody can respond to your CTA if they don’t open the email first. You take otherwise identical emails and test one subject line against the other, such as:
  1. ACME sells really keen widgets
  2. ACME widgets solve all the world’s problems
You split off 10% of your campaign segment and send Subject Line 1 to half that audience and Subject Line 2 to the other half. Whichever Subject Line wins the test gets applied to the other 90% of the segment, assuming the open rates will follow the same pattern as the test.
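For the mechanics-minded, here is a minimal sketch of that split in Python. The recipient list, the 10% pilot fraction, and the winner rule are illustrative assumptions, not a prescription for any particular email platform.

```python
import random

def split_pilot(recipients, pilot_fraction=0.10, seed=42):
    """Carve a pilot group off the segment and split it between two subject lines."""
    shuffled = recipients[:]
    random.Random(seed).shuffle(shuffled)            # randomize so the pilot is representative
    pilot_size = int(len(shuffled) * pilot_fraction)
    pilot, remainder = shuffled[:pilot_size], shuffled[pilot_size:]
    half = pilot_size // 2
    return pilot[:half], pilot[half:], remainder     # group A, group B, the other ~90%

def pick_winner(opens_a, sends_a, opens_b, sends_b):
    """Compare open rates from the pilot; the winner goes to the remainder."""
    rate_a = opens_a / sends_a
    rate_b = opens_b / sends_b
    return ("Subject Line 1", rate_a) if rate_a >= rate_b else ("Subject Line 2", rate_b)

# Illustrative usage with made-up numbers
segment = [f"user{i}@example.com" for i in range(10_000)]
group_a, group_b, remainder = split_pilot(segment)
winner, rate = pick_winner(opens_a=120, sends_a=len(group_a),
                           opens_b=155, sends_b=len(group_b))
print(f"{winner} wins at {rate:.1%}; send it to the remaining {len(remainder)} recipients")
```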

Test #2: Email CTA Link Testing
Once the recipient opens your email, the next critical step is to obtain a response to your CTA. A number of tests have been employed here, such as:
  1. Change the color of the link button – red vs. blue
  2. Move the button to different spots on the page – right column vs. inline
  3. Use different graphic elements as a button – arrow vs. rectangle
The test is run in exactly the same manner as the Subject Line test, with the winner of the pilot group getting sent to the remainder of the segment.
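The same split-and-measure mechanics apply here; only the variable under test and the metric (click-through rather than open rate) change. Below is a hypothetical sketch of a single-variable CTA test; the variant values and pilot numbers are illustrative only.

```python
# One variable at a time: here, button color (red vs. blue); the placement and
# graphic-element tests would follow the same pattern. Values are illustrative.
cta_variants = {"A": {"button_color": "red"}, "B": {"button_color": "blue"}}

def click_through_rate(clicks, delivered):
    """Click-through rate for one pilot variant."""
    return clicks / delivered if delivered else 0.0

# Made-up pilot results as an email platform might report them
pilot_results = {"A": {"clicks": 42, "delivered": 500},
                 "B": {"clicks": 61, "delivered": 500}}

winner = max(pilot_results, key=lambda v: click_through_rate(**pilot_results[v]))
print(f"Variant {winner} ({cta_variants[winner]}) wins on CTR; "
      f"send it to the remaining 90% of the segment")
```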

So, what’s wrong with these tests?
Nothing is wrong with the tests themselves. What’s wrong is that they are not performed within a framework aimed at solving the real problem. Let’s start by asking this question: How much does your organization get paid when someone opens an email? How about when they click through from an email? (Unless you are a PPC organization whose business model is built on creating click-throughs.) For the vast majority of B2B organizations, the answer is $0. We get paid when someone engages with our sales team and ultimately buys our products or services. So what should our optimization framework ultimately measure?

Tactically, we should think of open-rate optimization in terms of “best principles” (there’s that term again) that we can apply across all of our tactics to improve our funnel conversion rates and velocity. In our Subject Line test, will we actually use that identical Subject Line in another program to the same segment next month? I hope not. Therefore, we need to think of our optimization framework as a series of repeatable principles that we can employ in all subject lines. We would apply the scientific method: form a series of hypotheses, test them, and ultimately arrive at rules by which all future subject lines are created (a rough sketch of how such rules might be encoded follows the note below), such as:
  • Subject Lines should be less than 35 characters.
  • Subject Lines should include our company name.
  • Subject Lines should contain the recipient’s first name.
Important Note: these are examples of best principles and should not be applied uniformly to your emails as a “best practice.” Again, best practice for Company A could be worst practice for Company B!
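To make “repeatable principles” concrete, here is a rough sketch of how rules like the ones above might be encoded as checks and attached to each send, so open rates can later be compared across campaigns by principle rather than by one-off subject line. The rule names, the 35-character limit, and the ACME company name are assumptions for illustration only.

```python
# Hypothetical subject-line "principle" checks; each send is tagged with the
# principles it satisfies so results can be compared across future campaigns.
COMPANY_NAME = "ACME"   # illustrative; substitute your own

def under_35_chars(subject, recipient):
    return len(subject) < 35

def includes_company_name(subject, recipient):
    return COMPANY_NAME.lower() in subject.lower()

def includes_first_name(subject, recipient):
    first = recipient.get("first_name", "")
    return bool(first) and first.lower() in subject.lower()

PRINCIPLES = {
    "under_35_chars": under_35_chars,
    "includes_company_name": includes_company_name,
    "includes_first_name": includes_first_name,
}

def tag_send(subject, recipient):
    """Record which hypothesized principles a given subject line satisfies."""
    return {name: check(subject, recipient) for name, check in PRINCIPLES.items()}

# Illustrative usage
recipient = {"first_name": "Pat", "email": "pat@example.com"}
print(tag_send("Pat, ACME widgets cut costs", recipient))
# -> {'under_35_chars': True, 'includes_company_name': True, 'includes_first_name': True}
```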

Change the conversation.

To successfully optimize, your testing should not stop when the tactical campaign is over. Your testing should follow all the way through the demand funnel to Closed/Won (or lost, but we’ll assume the best here). As we’ve mentioned in previous editions of {Demand Gen Brief}, all of your programs should be specifically designed to create forward funnel movement, and should be built around a specific process. Optimizing only a part of the process will not provide the end-to-end improvement you want.
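What might “following the test through the funnel” look like in practice? The minimal sketch below carries the variant label on each lead through hypothetical funnel stages and compares variants on Closed/Won rather than on opens or clicks. The stage names and numbers are made up for illustration.

```python
# Hypothetical funnel stages, in order; adjust to your own demand funnel.
STAGES = ["Sent", "Opened", "Clicked", "MQL", "SQL", "Closed/Won"]

# Each lead keeps the variant it saw plus the furthest stage it reached (made-up data).
leads = [
    {"variant": "A", "stage": "Closed/Won"},
    {"variant": "A", "stage": "Clicked"},
    {"variant": "A", "stage": "Opened"},
    {"variant": "B", "stage": "Opened"},
    {"variant": "B", "stage": "Opened"},
    {"variant": "B", "stage": "MQL"},
]

def closed_won_rate(leads, variant):
    """Share of a variant's leads that reached Closed/Won -- the metric that pays the bills."""
    mine = [l for l in leads if l["variant"] == variant]
    won = sum(1 for l in mine if l["stage"] == "Closed/Won")
    return won / len(mine) if mine else 0.0

for variant in ("A", "B"):
    print(f"Variant {variant}: {closed_won_rate(leads, variant):.0%} Closed/Won")
# A variant can "win" on opens or clicks and still lose where it matters.
```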

Notes:
  • Testing must be performed with an overall objective in mind.
  • Tactical testing is not generally applicable across all of your campaign tactics.
  • Build your optimization framework around principles that can be applied to multiple tactics.

Next week, we’ll look at misdirected testing and how you can avoid falling into that trap: Three Ways A-B Testing Will Improve Your Marketing (Part 2): How to Avoid Misdirected Testing.
 
