A lot of AB testing and conversion rate optimization discussion is focused on adding or removing elements from the site. For example:
- Does free shipping messaging help or hurt? (Our case study on this)
- Should we add quick links on the homepage? (Our case study on this)
- Does an upsell after adding to cart help or hurt? (Our case study on this)
- Does a sticky add to cart element help or hurt? (Our case study on this)
When running and writing about those tests, what is often given secondary importance is the visual aesthetic of the new layout or designs being tested.
Even the way ecommerce teams, our clients, talk about these tests is mostly about the existence of these elements or not: “So should we keep the carousel or not?” They don’t place as much value on the detail of the design of that element once it’s added.
What I mean is that:
- The font, placement, size, and colors of your free shipping messaging could affect whether it increases conversion rate or not.
- The photography, layout, and design details of how you propose the upsell could affect whether it increases conversion rate or not.
- Same for sticky add to cart.
These details are tedious to test. If you design something you think is good and the test shows no difference, what do you do? How do you know whether the whole hypothesis has been proven false (e.g. maybe upsells don’t increase AOV for this site), or whether it’s just a design detail that, if fixed, would change the outcome?
You don’t know. That’s the hard part.
And practically speaking, most AB testing programs have a queue of other tests the team (and executives) are itching to launch. So you don’t have the luxury of trying 5 different design concepts for one hypothesis (Aside: We don’t use hypotheses, we ask questions instead).
But that doesn’t mean you shouldn’t recognize and keep in mind that the details of designs can affect the outcome of tests, sometimes heavily.
A number of the ecommerce AB tests we’ve run recently have reminded me of this. Here I’ll profile one particularly telling example.
At the bottom of each category page (or “product listing page”) for an electronics ecommerce site, we wanted to test putting links to the other categories. There was one extremely dominant category on this site and we wanted to expose customers to complementary products in other categories.
We had two design concepts drawn up to test whether these category links would increase sales of other products or affect conversion rate in any way. Functionally, they were identical, but one used product photos and the other used lifestyle photos. Here are some mockups using a camping site as an example (not our client).
Adding a “Discover More” section using product only photos:
Adding the same section using lifestyle photos:
Here is the key result: the product-only photos increased proceed-to-checkouts by 13.5% with 97% statistical significance and increased transactions by 23% with 89% statistical significance, while the lifestyle photos showed no statistically significant difference in any key metric. Specifically, proceed-to-checkouts and transactions never got above 50% statistical significance in the lifestyle photo version, in other words, not even close to significance.
(The test was run for 3 weeks, with over 550 proceed-to-checkouts and over 20,000 sessions per variation.)
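If you want to sanity-check significance figures like these yourself, a standard approach for conversion-rate comparisons is a two-proportion z-test. Here is a minimal sketch; the counts below are hypothetical, chosen only to be in the same ballpark as the test described above (roughly 550 checkouts per 20,000 sessions, with the variation up about 13.5%):

```python
import math

def two_proportion_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    Returns the confidence level (1 - p-value) as a percentage,
    matching the way AB testing tools usually report 'significance'.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variations convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return (1 - p_value) * 100

# Hypothetical counts: control at 550/20,000, variation up ~13.5%
print(round(two_proportion_significance(550, 20000, 624, 20000), 1))
```

With counts of this size, a lift in the low teens lands in the mid-to-high 90s of statistical significance, which is why a three-week run at this traffic level was enough to call the result.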
What’s interesting is that both our team and the client’s design team preferred the lifestyle photos! We felt the existing pages were too full of “product on grey background” images and the lifestyle photos added interest and color to the page. We thought it helped the overall brand look and feel.
But if we had run only the lifestyle photos, we would have concluded this test made no difference and moved on. Instead we saw a sizable increase in visitors making it to the checkout page and a non-negligible increase in transactions (albeit with only 89% statistical significance) in the product-only variation. So we can see that this idea will likely help conversion rate and sales on the site and is worth either implementing or at the very least exploring further.
(Note: A 13% increase in customers making it to checkout is substantial, and it’s very common to see a statistically significant increase in checkout pageviews before you see the same increase in actual transactions. So a 97% stat sig increase in proceed-to-checkouts along with an 89% stat sig increase in transactions means you are on to something.)
The difference was just a change in photos. This is something many teams would not test. We almost didn’t test it. But this difference mattered here. Our hypothesis for why is that the product photo version was visually consistent with the rest of the page. Although the lifestyle photo version looked more visually interesting, perhaps people were more likely to dismiss that section as ads (banner blindness) or simply not give it the same attention as the products above because it didn’t look like the products above.
Either way, the lessons are clear:
- When you’re testing “Does this thing help or hurt?” don’t forget that the details in how you design “this thing” may affect the outcome of the test.
- If you have the luxury of being able to test multiple variants of a concept, do it.
- If you can only test one variant, design multiple variants first and discuss as a group the pros and cons of each design, so you can hopefully settle on the version that gives you the best chance.
- When discussing the designs, don’t forget about visual consistency on a page. It’s not just about what you think looks best or gets the most clicks for that element, but also about how it fits with the rest of the page. Be careful about throwing in a design that is strikingly different from everything else. That doesn’t mean you should never do it, sometimes you want things to stand out, but think carefully about whether a visual inconsistency is intentional and will have your desired effect or not.