Ecommerce Goals for Optimizely or VWO: What goals should you use in AB tests?

Posted By Devesh Khanal | 2 Comments

I see a lot of clients use just these goals on ecommerce AB tests in Optimizely or VWO (Visual Website Optimizer):

  • Engagement (included by default by Optimizely)
  • Revenue
  • Checkouts

That’s a good start (except engagement: have you ever made an actionable decision based on the engagement goal?).

But you could get a lot more information by adding upstream goals. Here are the most common ones you should add:

  • Add to cart button clicks
  • Proceed to checkout page
  • Pageview goals for your entire funnel
    • All PDP pageviews
    • Cart Pageviews
    • Checkout pageviews (this can be multiple pageview goals if your checkout flow is split across separate pages like /checkout/shipping, /checkout/payment, etc.)
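Pageview and click goals are set up in the Optimizely UI, but goals like "add to cart button clicks" can also be fired as custom events from your own code. A minimal sketch using the Optimizely Classic custom event API (the event name `add_to_cart_click` is an example, not from this post; on a real page the Optimizely snippet provides `window.optimizely`, so the stub below exists only to make this sketch runnable outside a browser):

```javascript
// Stub for illustration only; in production this is created by
// Optimizely's own snippet in the page <head>.
var window = { optimizely: [] };

function trackAddToCart() {
  // Classic custom-event API: push ['trackEvent', <name>]. The custom
  // event goal in the Optimizely UI must use the same event name.
  window.optimizely = window.optimizely || [];
  window.optimizely.push(['trackEvent', 'add_to_cart_click']);
}

// On a real page you would attach this to the button, e.g.:
// document.querySelector('.add-to-cart').addEventListener('click', trackAddToCart);
trackAddToCart();
console.log(JSON.stringify(window.optimizely[0]));
// → ["trackEvent","add_to_cart_click"]
```

VWO has an equivalent custom conversion API (`window.VWO.push(['track.goalConversion', goalId])`); either way the point is the same: each funnel step gets its own goal.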

Here are some common scenarios showing why the funnel goals are important:

Scenario 1: You run a test and checkouts are up 10% with 92% significance

You are not tracking upstream goals: You are only measuring checkouts, not the upstream goals. You decide the test has been running for a while and is probably significant (this is dangerous, but that’s another topic). You stop the test, declare the variation a winner, and implement it.

You are tracking upstream goals: You are measuring the funnel goals and notice that pageviews of the checkout page, pageviews of the cart page, and clicks on the proceed to checkout and add to cart buttons are all down. So why are checkouts up? Something doesn’t look right, so you run the test for another 2 weeks. After 2 weeks the checkout goal regresses toward the other goals and is also down a few percent versus the original. Phew, good thing you didn’t call the test early when checkouts were looking up.
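The sanity check in this scenario boils down to a simple funnel comparison: if the end goal is up while every upstream step is down, treat the result as suspect. A sketch with made-up visitor counts (the goal names and numbers are illustrative only):

```javascript
// Hypothetical conversion counts per funnel step for each variation.
const control   = { addToCart: 1200, checkoutPage: 800, checkouts: 200 };
const variation = { addToCart: 1100, checkoutPage: 730, checkouts: 220 };

// Percent lift of the variation over control for one goal.
function lift(goal) {
  return ((variation[goal] - control[goal]) / control[goal]) * 100;
}

const upstreamGoals = ['addToCart', 'checkoutPage'];
const upstreamDown = upstreamGoals.every((g) => lift(g) < 0);
const checkoutsUp = lift('checkouts') > 0;

if (checkoutsUp && upstreamDown) {
  // The end goal contradicts every step that feeds it: keep the test running.
  console.log('Checkouts up but upstream goals down: keep the test running.');
}
```

With these numbers, add to cart is down about 8%, checkout pageviews are down about 9%, yet checkouts are "up" 10%, which is exactly the contradiction that should stop you from calling the test early.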

Scenario 2: You implement a variation that everyone “just knows” is going to win.

You are not tracking upstream goals: Checkouts tank and are down 30% with 99% significance after 1 week. Uh oh. Why?! You sit around the room and debate a bunch of reasons why but in reality no one really knows.

You are tracking upstream goals: You notice in your upstream goals that add to cart and proceed to checkout clicks are up 20% as expected, but everyone drops off at the checkout page. Coincidentally, your variation made some changes to that page as well. You look further: watching session recordings of the checkout page, you notice some unintended behavior on mobile caused by your variation. You didn’t realize your variation pushed some important information far down the page. Culprit found! Now you can re-run the test with that mobile issue fixed.

There are more scenarios than this, but these are two common situations (one with a positive checkout result and another with a negative) where tracking upstream goals can make a huge difference (possibly costing or making the company millions).

In our experience when a variation is a clear winner, all or most goals will show an upward trend. That is: more add to cart clicks, more views of the checkout page, more successful checkouts, more revenue.

Not all tests will show that. If a test runs for a fixed period of time, reaches 99% significance, and collects hundreds or thousands of checkouts per variation over multiple purchase cycles (calendar weeks), it’s hard to argue with the result, and that does happen.

But most AB tests (as anyone who has done a lot of testing will attest to) are not so textbook clean. Like anything in life, the majority of scenarios are in the grey area. When that happens, tracking more goals than just the end goal (checkouts and revenue) will help interpretation a lot.
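For grey-area results, it helps to know what the "significance" number the tools report actually is. A minimal sketch of a two-proportion z-test, which is roughly what testing tools compute for a conversion goal (the checkout counts below are made up for illustration):

```javascript
// Normal CDF via the Abramowitz & Stegun 7.1.26 erf approximation.
function normCdf(z) {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741 +
               t * (-1.453152027 + t * 1.061405429))));
  const erf = 1 - poly * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

// Two-sided confidence that the two conversion rates differ.
function significance(xCtrl, nCtrl, xVar, nVar) {
  const p1 = xCtrl / nCtrl;
  const p2 = xVar / nVar;
  const pooled = (xCtrl + xVar) / (nCtrl + nVar);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nCtrl + 1 / nVar));
  const z = (p2 - p1) / se;
  return 1 - 2 * (1 - normCdf(Math.abs(z)));
}

// 200/10,000 checkouts on control vs 240/10,000 on the variation:
// a 10% relative lift that still lands short of 95% confidence.
const conf = significance(200, 10000, 240, 10000);
console.log((conf * 100).toFixed(1) + '%'); // ≈ 94.6%
```

A double-digit lift on thousands of visitors can still sit in the low-90s, which is exactly the grey zone where the upstream goals earn their keep.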

Want help setting up goals for your ecommerce site in Optimizely? Contact us.

  • Paul Davies

    Thanks for the tips Devesh! Always struggle with the question “How long is long enough?” when it comes to testing. Or even, how much data is enough data! So it was nice to read a bit more about patience when it comes to data collection, thanks for the advice 🙂

    • For sure. Calling tests too early is one of the biggest dangers in AB testing. Perhaps second to “not testing”. 🙂