How Strong A/B Tests Can Read Customers’ Minds
How does A/B testing work?
What are the 4 A/B testing steps?
How can I create a strong test?
Good news: Despite the whole “test” part of the name, A/B testing can actually be fun.
It gives you the chance to peek into your customers’ minds and see what they like and don’t like.
You could even see if they like the men in your ads bearded or not bearded. How?
Your car is blue, your favorite shirt is blue, your dog’s name is Blue. But is that the right color for your call-to-action button?
This may seem like a tiny detail...but small things can make a huge difference in how successful your marketing is. That’s why A/B testing is so important.
A/B testing pits different versions of one element (like your website layout, mobile ad, design, email subject line, or copy) against each other to see which gives you the best results.
Basically, your marketing goes mano-a-mano against itself. For example, you could create 2 (or more) versions of the same landing page with different layouts, randomly show the different versions to visitors, and see which one performs better.
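If you're wondering what "randomly show the different versions to visitors" looks like behind the scenes, here's a minimal sketch in Python. The version names and the 50/50 split are hypothetical, and real testing tools handle this part for you:

```python
import hashlib
import random

# Hypothetical two-version test: the original page vs. a new layout.
VERSIONS = ["control", "new_layout"]

def assign_random(visitor_id: str) -> str:
    """Pick a version at random, roughly 50/50, on each visit."""
    return random.choice(VERSIONS)

def assign_sticky(visitor_id: str) -> str:
    """Hash the visitor ID so the same person always sees the same version."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return VERSIONS[int(digest, 16) % len(VERSIONS)]

print(assign_sticky("visitor-123"))  # always the same answer for this visitor
```

Most testing tools use something like the "sticky" approach so repeat visitors don't bounce between layouts mid-test.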
What things can you test? Almost anything that might affect performance. You could switch out one word in your call to action, move a button from the right or the left side, or change the background color of your design.
You could also test whether beard length affects click-through rate, like the clothing company Betabrand did.
Here’s Betabrand’s original ad. A fine ad. A good ad. But the company wanted to see if changing the model’s beard could get more people to click on it.
They created 5 more versions of the ad, changing just the beard style. Then they ran those new versions and the original ad at the same time, to the same audience.
They compared the results and #6 won by a huge, um, beard. It had a 79% higher click-through rate than the other ads’ combined average.
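In case the math behind a claim like "79% higher" isn't obvious: lift is just the winner's click-through rate compared against the baseline. The numbers below are made up for illustration, not Betabrand's actual data:

```python
# Hypothetical figures, not Betabrand's real results.
winner_ctr = 0.0215       # winning ad: clicks / impressions
others_avg_ctr = 0.0120   # combined average CTR of the other ads

lift = (winner_ctr - others_avg_ctr) / others_avg_ctr
print(f"Lift: {lift:.0%}")  # Lift: 79%
```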
What did Betabrand learn? If their ads feature men, they should be unconventionally beardy to boost click-through rates.
LISTEN UP
Great, let’s slap a beard on a call-to-action button. But wait...it takes time to create a well-designed test that will generate reliable results.
You need to home in on your objective, come up with a hypothesis, create your “variants,” and then test and calculate the results.
First we’ll look at homing in on your objective, or the outcome you’re trying to improve – like getting more signups from your email marketing.
Betabrand’s objective, for example, was to improve the click-through rate (aka outcome) of their ad.
The beards helped them reach this objective; everything else in the ad stayed the same during testing.
Next you should come up with a hypothesis, or theory behind what will help you reach your objective.
Look at your original ad or layout or button or whatever you’re planning on testing. You probably already have a nagging feeling that something in there could change for the better. Follow that hunch and turn it into a hypothesis.
For example, Betabrand hypothesized that beard length would affect click-through rates. But they could have focused on how the order of the copy would affect click-through rates. Or they could have looked at what color shirt to use.
After your hypothesis comes creating your variants, which is just an awkward word for different versions of one thing.
Your original version would be called the “control variant.” Take the one thing in your control variant that inspired your hypothesis, and come up with different ways to tweak or change it. Then turn those tweaks into new variants/versions to test.
To get the best, non-biased A/B testing results, make sure you only change one thing from your original version.
Why? Let’s say Betabrand tried out different beards, copy lines, and logos in one A/B test. How would they know which of these factors triggered the better click-through rate?
By only changing the facial hair (and sticking to their hypothesis), they could confidently make the right improvement in future ads.
If they really wanted to test two or more things, they would run two or more separate tests, changing only one element per test.
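One way to keep yourself honest about “one change per test” is to write the test plan down explicitly. Here's a hypothetical sketch (the field names are made up, not any tool's format), with a quick check that only the beard differs between variants:

```python
# Hypothetical test plan: only the beard style changes between variants.
test_plan = {
    "objective": "click-through rate",
    "hypothesis": "Beard style affects clicks",
    "variants": {
        "control":   {"beard": "original", "copy": "same", "shirt": "same"},
        "variant_1": {"beard": "short",    "copy": "same", "shirt": "same"},
        "variant_2": {"beard": "full",     "copy": "same", "shirt": "same"},
    },
}

# Sanity check: every field except the beard should be identical across variants.
for field in ("copy", "shirt"):
    values = {v[field] for v in test_plan["variants"].values()}
    assert len(values) == 1, f"More than one thing changed: {field}"
```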
Finally, you’ll test and calculate your results to find out which variant helps you reach your objective best.
How you run your test plays a huge part in how meaningful your results are. Unreliable data and mere coincidences are no one’s A/B testing friend. (There’s a sketch of the math right after the checklist below.)
You should:
HAVE A HUGE TESTER GROUP
The bigger your sample, the more reliable your results.
RUN VARIANTS SIMULTANEOUSLY
Test them at the same time as each other and the control variant.
RANDOMIZE YOUR VERSIONS
Don’t decide who sees which version. Show people different ones randomly.
KEEP EVERYTHING ELSE THE SAME
The time the variants run, where people are seeing them, etc.
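“Calculate your results” usually boils down to checking that the difference you measured is bigger than random chance. Below is a minimal sketch of one common approach, a two-proportion z-test, using made-up click and impression counts (this isn’t necessarily how Betabrand crunched their numbers):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts: clicks and impressions for the control and one variant.
control_clicks, control_views = 120, 10_000
variant_clicks, variant_views = 168, 10_000

p1 = control_clicks / control_views   # control CTR
p2 = variant_clicks / variant_views   # variant CTR

# Pooled proportion and standard error for a two-proportion z-test.
pooled = (control_clicks + variant_clicks) / (control_views + variant_views)
se = sqrt(pooled * (1 - pooled) * (1 / control_views + 1 / variant_views))

z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"control {p1:.2%}, variant {p2:.2%}, z = {z:.2f}, p = {p_value:.4f}")
```

A common convention is to call the result significant when p is below 0.05, and the bigger your tester group, the easier it is to clear that bar for a real difference.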
DO THIS NOW
Now that you’ve seen the steps to smart A/B testing, start creating an objective, hypothesis, and variants for your website.
If you’re participating in the course, go to the next section to access your self-assessment.
KEY TAKEAWAYS
A/B testing is about the best outcome, not personal likes and dislikes.
Change only one thing in your variants and keep everything else consistent.
The 4 steps of A/B testing: objective, hypothesis, variants, and calculating the results.