Should you AB test large site redesigns, and if so how?

Posted By Devesh Khanal | 2 Comments

An issue that’s come up with more than one of our clients recently: a large site redesign or feature release is “in the pipeline,” and people in the company disagree about whether it should be tested.

The argument for not testing large site redesigns is usually some flavor of the following:

These changes have been things we’ve wanted to do for a long time, we know we’re going to implement them, so why test them?

Sometimes I’ve heard it veiled in phrases like “This change is too important” or “This will be too difficult to test.” But in the end, the real reason is that some people in the company just don’t want to test it.

Don’t buy the “this is too hard to test” argument. As I explain below, almost any site change can be AB tested with even the most basic AB testing tool.

You shouldn’t just agree blindly to this idea of not testing large redesigns, though, because if your site is bringing in enough revenue, the ROI of continuous AB testing can be significant.

Why large or inevitable site changes should still be AB tested

I’ll list my arguments for testing large site changes as responses to the objections above, the ones we hear most often in our work.

My hope is, if you’re the person in the organization arguing for testing, you can use these arguments to help with your battle.

“This change is going to happen regardless of the test outcome”

I have a couple of responses to this; one is nicer than the other.

The nicer response: That’s okay if the change will be implemented regardless, but don’t we want to know what the effect on revenue will be?

For example (and this is common): Say an 8-figure ecommerce company is finally updating their entire site design. Their site was made 6 years ago, parts were added piecemeal over time, sales have grown tremendously, but the site hasn’t caught up. It doesn’t have modern design, the checkout process is definitely not optimal, and it’s not mobile friendly. The company knows it needs to update the site. So they hire an expensive agency to totally redesign it.

Anyone who has AB tested elements of an ecommerce site knows that seemingly simple changes to single pages can swing orders by 10%. A 10% change in orders is worth millions for an 8-figure revenue company.

Even if you’re going to implement the large change regardless of the outcome, don’t you want to know if it will reduce revenues by over a million dollars?

If it does hurt sales, you can delay the launch a bit, hypothesize what parts of the new design could be causing the decrease, retest just those elements, and isolate the problem.
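To put the stakes in numbers, here’s a back-of-the-envelope calculation. The figures are hypothetical, chosen only to illustrate the scale of the risk for an 8-figure business:

```javascript
// Back-of-the-envelope: what a swing in orders is worth per year.
// All figures below are hypothetical, just to illustrate the scale.
function annualRevenueImpact(annualRevenue, orderLift) {
  // orderLift is a fraction: -0.10 means orders dropped 10%
  return annualRevenue * orderLift;
}

// A $15M (8-figure) business whose redesign quietly cuts orders by 10%:
console.log(annualRevenueImpact(15_000_000, -0.10)); // -1500000
```

A seven-figure annual loss is exactly the kind of thing you want to catch in a test window, not discover in next quarter’s numbers.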

The not as nice response: Rolling out a large change without testing it first is irresponsible. See arguments above for why.

Warning: Be careful if the web design agency is arguing that you should just roll out the changes and not test them. It’s not in their interest to find out whether their new design performs better. It’s in their interest to tell you the design is fantastic and have you love it. Unfortunately, customers and their wallets are the true (and ruthless) judges of whether the site is “better.”

“The change is too big to test. AB testing is for front-end changes, and this changes a lot more.”

You don’t have to code the test in Optimizely or whatever testing platform you use. In fact, that’s not recommended for large tests. Instead, code and deploy the new design on your own servers under slightly modified URLs (site.com/home, site.com/page-1, etc.) and have the AB testing program simply split traffic between the two versions via redirects. Most platforms can do this.

This method can handle large changes as well.
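The core of a redirect (split-URL) test is just deterministic bucketing plus a URL map; your testing platform handles the cookie and the redirect for you. Here’s a minimal sketch of that logic — the URLs and function names are illustrative, not any platform’s real API:

```javascript
// Illustrative sketch of split-URL test bucketing (not a real platform API).
// The old and new designs live at different URLs on your own servers;
// each visitor is deterministically assigned to one of them.

const VARIANT_URLS = {
  control: "https://site.com/home",     // existing design
  variant: "https://site.com/v2/home",  // redesign, deployed separately (hypothetical path)
};

// Deterministic bucket from a visitor id, so the same visitor always
// lands in the same bucket — the guarantee a platform cookie provides.
function bucketFor(visitorId) {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % 2 === 0 ? "control" : "variant";
}

function destinationFor(visitorId) {
  return VARIANT_URLS[bucketFor(visitorId)];
}
```

In practice the platform stores the assignment in a cookie and issues the redirect before the page renders; the point is that nothing about this requires the new design to live inside the testing tool.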

“It will be confusing if customers see two different experiences at the same time”

First, every AB testing platform I’ve heard of cookies users, so they’ll always see the same variation unless they clear cookies or switch to a different browser or device.

Second, to protect against confusion if they use separate devices (checking something at work vs. at home, etc.), on the new variation, you can always install a soft popup or bottom of page slider that says “Hey, we’re testing a new site design and would love your feedback. Let us know how you like it by…”.

Lastly, it’s just not that big of a deal. Modern companies test their sites all the time. Sites get updated; they change. Frankly, the idea that a fraction of customers seeing an inconsistent site experience for a limited time will “hurt your brand” is old-fashioned marketing thinking.

This thinking relies more on “gut instinct” and opinions of people in conference rooms rather than data from the only opinions that matter: your customers.

Heard any other objections to testing large redesigns or features? Ask away in the comments.

If your business brings in 7 to 8 figures of annual revenue online and you’re interested in a conversion audit of your site to begin increasing its conversion rate, email us or fill out the form on our homepage.

  • vladmalik

    Great post. While I also do A/B testing, I would add that the site need not necessarily be A/B tested in the traditional sense. For example, if the traffic is not high enough, it may be better to roll the change out on full traffic and compare against historical data. A sequential comparison might yield a stronger signal than splitting low traffic into an A/B test. Did that recently and the results were very clear. User testing could catch issues too.

    @linowski just wrote a post on a similar topic i.e. “fears of the experiment”. Probably can’t include a link here, but it’s the latest on the goodui blog.

    Cheers.

    • Yes, for sure. If traffic isn’t high enough, all AB testing (not just of large redesigns) becomes more difficult. In that case, serial or sequential testing can indeed be a good option.
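To put the low-traffic point in numbers, the standard two-proportion sample-size approximation shows why a split test can be out of reach. This is a rough sketch using textbook constants (95% confidence, 80% power); dedicated calculators handle the edge cases better:

```javascript
// Rough per-arm sample size for a two-proportion A/B test
// (normal approximation, two-sided 5% significance, 80% power).
function sampleSizePerArm(baselineRate, relativeLift) {
  const zAlpha = 1.96; // z for two-sided alpha = 0.05
  const zBeta = 0.84;  // z for 80% power
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const delta = Math.abs(p2 - p1);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / delta ** 2);
}

// Detecting a 10% relative lift on a 2% conversion rate takes
// roughly 80k visitors per arm — weeks or months for a small site.
console.log(sampleSizePerArm(0.02, 0.10));
```

At those volumes, a before/after comparison against historical data, as the commenter suggests, may be the only practical route.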