We recently ran an AB test for an apparel ecommerce client where we moved the “Size Guide” link up the PDP, closer to the size dropdown.
This test is interesting because CRO best practices suggest that small changes typically yield small results, and on the surface, moving the location of a size guide link is a small change. After all, it’s auxiliary info that only a small fraction of apparel ecommerce shoppers even click on. Why would its position make a statistically significant difference in conversion rate?
So, if moving such a small link does make a significant difference in conversion rate, that would tell us that the size guide link may have relative importance for certain apparel ecommerce sites. For example, if a site normally has most sizes available, it may not make a difference. But if a site has many popular products that frequently have limited size availability, or they run out of certain sizes often, it may make a big difference.
In general, this means that assessing whether a feature is important, and whether adjusting it qualifies as a big change or a small change, may not be obvious, and definitely is not universal. A single small link can be a big deal if it’s critical to the buying decision for a large number of users. This is store dependent and can even be product dependent.
Note: You can learn more about our CRO service for ecommerce brands here.
Details of this Size Guide AB Test for Apparel Ecommerce
In the original (how the live site was, pre-testing), the size guide link was really low on the page, in the description section. In our Variation, we moved the size guide up above the size dropdown:
Again, this is seemingly a “small” change. What percentage of users even click on the size guide in the first place? We ran Hotjar heatmaps on these product pages: on desktop, out of 348 users, only 2 clicked the size guide link (0.57%), and on mobile, out of 1,356 users tracked, only 3 (0.22%) tapped the size guide.
Results and Implications: 21% increase in conversions, 22% increase in AOV
To our surprise, variation B increased conversion rate by 21% with 96% statistical significance after 10 days. We should note that this is a lower-traffic site, so both the original and variation had fewer than 200 conversions each (over 20,000 sessions per variation), so these results should be taken with a grain of salt despite statistical significance being > 95% (see our article on stopping an AB test). The variation also showed a 22% increase in average order value (AOV). As a result, revenue per session was up a whopping 46%.
So the results are noteworthy despite the low traffic.
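As a sanity check on results like these, a standard two-proportion z-test can be run on the raw counts. The numbers below are hypothetical, but in the same ballpark as this test (roughly 20,000 sessions and under 200 conversions per variation, with a ~21% relative lift):

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided, pooled two-proportion z-test. Returns (z, significance)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Phi(|z|) via erf, then two-sided p-value; significance = 1 - p
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, 1 - p_value

# Hypothetical counts: ~20,000 sessions per variation, under 200
# conversions each, roughly a 21% relative lift in conversion rate.
z, sig = two_proportion_z(190, 20_000, 230, 20_000)
print(f"z = {z:.2f}, significance = {sig:.0%}")  # z = 1.96, significance = 95%
```

Note how a lift this large on this many sessions only barely clears the conventional 95% bar, which is exactly why low-conversion-count results deserve the grain of salt mentioned above.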
Obviously the size guide makes more contextual sense near the size dropdown — that’s hard to argue. But in CRO, there is a principle of lowering distraction by reducing or moving links that don’t get a lot of clicks. So one could argue that size guide links are not used often and thus may distract from the add-to-cart experience and can be moved lower on the page.
This test suggests that may not be true for many apparel ecommerce sites.
Finally, we urge the reader to not interpret these results as: “So we should test the placement of every little link on our site!” That will lead to inefficient testing. Just because moving the size guide link caused an increase in conversions, AOV, and as a result, revenue per visitor, that doesn’t mean moving any small link on a PDP will do that. It means that there is evidence that the size guide link could be important to make a buying decision.
So the real questions ecommerce UX teams should be asking are: what information is important for our customers to make a buying decision (for our particular products)? And is that information in a contextually appropriate position? Is it easy to find? Could it be improved?
If you’re interested, you can learn more about our CRO service for ecommerce brands here.
- Ecommerce pricing presentation test case study
- Sticky add to cart ab test case study
- Our massive mobile checkout best practices study
We recently ran more than one pricing AB test on an ecommerce apparel client to gauge whether or not you should list the savings percentage next to the price of products. Although this might seem to be a “small” test, as per our usability vs. desirability framework, increasing the perceived value of a product is squarely in the desirability category, which, we argue in that article, typically has a greater chance of affecting conversion rate than usability tweaks (button colors, etc.). In addition, previous tests for this client showed that customers were heavily swayed by price and discounts.
We tested this on the listing page and the product detail page.
The original (“A”) for these tests looked similar to JCPenney.com:
For our client (who is also in apparel), there was also a “was” price and a “now” price — above, JCPenney happens to refer to the “now” price as “after coupon”. Showing the current price (“now”) next to an anchor price (“was”) is a very common presentation of price in apparel ecommerce. Large department stores like Macys.com and Kohls.com, for example, also present their prices this way:
But what we tested for this client was adding a simple percentage savings number in addition to the was and now prices. This is how Nordstrom.com presents its price, for example:
Note the “40% off”. That’s what we were interested in. Could that simple “40% off” make a difference?
The hypothesis for why it would help conversion rates is that it could more easily highlight the savings — the thinking being that customers aren’t likely to do math in their heads, and when you have an entire listing page of products with two prices each, it’s just too much mental math to internalize. Customers may fail to realize, at a glance, which products have the steepest discounts.
The opposite argument is that adding more numbers about price could contribute to clutter and confusion, which could even hurt conversion rates by making the page look messy (this is a real phenomenon) or by creating more distractions from the CTAs (calls to action).
Pricing AB Test #1: Adding Savings Percentage
We tested presenting this in 2 different colors, so it was an A/B/C test. In both variations the savings percentage was presented as “You Save #%”. In both variations, we added this savings percentage on the product detail page (PDP) and the product listing page (PLP, or “category page”).
The conversion rates to completed order for all three variations were within 0.3% of each other — amazingly close to the same. Here is a snapshot from the Optimizely results dashboard for this client (btw, if you’re curious about our experience with different platforms, here are our thoughts on Optimizely vs. VWO vs. Adobe Target):
You can see the amount of data we collected was significant (yes, this site gets a lot of traffic) — 280,000 visitors per variation for 3 variations, collected over 2 weeks. And yet the conversion rates were nearly identical.
Why did this result in “no difference”? Does this mean that ecommerce shoppers simply ignore percentage savings next to was and now prices?
We actually thought so, until we did a follow up test months later.
Pricing Display AB Test #2: Different Savings Percentages
A key aspect of the previous test, which showed no difference, was that the savings percentage was the same for all products on the site. This site has about 1000 different SKUs at one time, and all of them (except special items) had a 30% difference between the was and now prices.
The fact that adding this percentage did not change conversion rate tells us that listing a product’s savings as a percentage, in addition to the two dollar amounts, doesn’t by itself seem to do much for conversion rate. (The usual disclaimer applies: this result is for this site in this instance; there are always exceptions in CRO.)
But what we tested next was placing the savings percentage back on PLPs and PDPs during a period when the store had different pricing for different products.
In this test we did not have multiple colors, simply an A and a B variation, with and without the percent off.
This test showed a 2.57% lift in conversion rate from PLPs and PDPs with 99% statistical significance. Revenue per visitor also increased by 2.54%, with 95% statistical significance. This was across 700,000 visitors and 18,000 conversions.
The lift was higher on mobile than on desktop. On mobile the lift was 3.61% with 99% significance; on desktop it was only 2.22% with, notably, just 89% statistical significance, which by the industry convention of requiring 95% statistical significance to declare a test “significant” would be declared “no difference”.
Nonetheless, even if the lift was for mobile only, it shows a stark difference from the first test, which was very much “no difference”.
What does this tell us?
Price Presentation Is a Function of the Price of Other Products
These results tell us that price presentation — a huge needle mover in ecommerce — is not just about the prices of individual items. It’s a collective phenomenon: a function of the prices of all of your products. Customers view the pricing of one product as part of a collective, where relative differences matter a lot in buying psychology.
The savings percentage was clearly overlooked when it was the same number for all products. But when it changed product to product, it drew the attention of customers and perhaps drew them to certain products with steeper discounts and increased conversion rate. The fact that revenue per visitor also increased means that this was done without simply attracting customers to lower AOV products. The percentage discount mattered, not necessarily the final price.
Overall this suggests the following for ecommerce brands:
- If you have was and now prices and different savings percentages per product, definitely consider testing showing the percentage off
- In general, test price presentation carefully; it can make notable differences in conversion rate, and stopping after one failed test may leave revenue on the table
If you’d like to read more articles related to this or more ecommerce AB test case studies, here are some suggestions:
- Our usability vs. desirability framework for CRO and ab testing
- Our research study of mobile checkout best practices across the 40 largest U.S. ecommerce sites
- Case Study: Adding a sticky add to cart button on PDPs
- Case Study: Adding Free Shipping messaging in different locations
- Case Study: Adding a Growth Rock coined ‘Link Bar’ to improve mobile navigation
Finally, if you’d like to learn more about our ecommerce CRO agency, you can do so here.
Optimizely vs Adobe Target vs VWO (Visual Website Optimizer) is a decision we’ve helped many clients work through. We’ve run hundreds of AB tests for ecommerce clients using all three platforms (we estimate we run 200+ AB tests annually), so in this article we’ll compare the pros and cons of each, share our hands-on industry experience, and discuss where we’ve seen practical differences and where we haven’t.
We hope this article will provide a nice contrast to the surface-level comparisons from software comparison sites and from the marketing materials of Optimizely, VWO, and Adobe themselves.
In addition to differences between the platforms, we also share strategy nuggets in this article, such as:
- Why your AB tests should measure more than just one or two conversion goals
- When to use click goals versus pageview goals
- How we set up preview links so we can easily do QA (quality assurance) on multi-page tests
Who we are: We’re an ecommerce conversion rate optimization (CRO) agency. We design and run AB tests in multiple platforms every day. You can learn more about how we can help your ecommerce brand increase conversion rates, or join our email list to get new articles like this one emailed to you.
Finally, we got lots of help on this article from Brillmark, an AB test development agency we partner with and highly recommend.
How We Came Up With This Review
This list is from our experience actually developing and running conversion optimization programs for ecommerce clients using all three a/b testing tools.
This article was not influenced in any way by any of the software platforms we discuss, and we have no affiliate relationship with any of the platforms.
We simply run ecommerce ab tests all day, every day, via our agency. Below, we first discuss the main functional differences in using the software, then we discuss what we know about pricing for each platform at the end and give some recommendations of situations where you may want to go with each of the platforms.
Setting up Goals in Optimizely vs Adobe Target vs VWO
We’ve emphasized this in its own article before and mention this to every client we work with: setting up proper goals is essential to every AB test.
To recap briefly here:
- It’s not enough to measure just one goal in an AB test (for example, revenue) — you could miss the detail of why that goal increased, decreased, or stayed the same.
- Don’t measure only the immediate next step and declare winners and losers on just that goal — for example, measuring the success of an AB test on the ecommerce product detail page with “Add to cart clicks” as your only goal. We’ve seen add to cart clicks differ from completed checkouts more times than we can count.
- Be careful about tests where one goal increases (even with statistical significance) but related goals don’t agree with that result. Try to have an explanation for that, or flag the test as one to retest later (e.g. transactions increase, but views on checkout, cart, and add to cart clicks are flat, and the test was on the category or listing page. Why would that be?)
So the solution to these problems is twofold:
- Have a Standard Goal Set that lets you measure key steps in the entire conversion funnel. For example in ecommerce, have a standard set of goals for transactions, revenue, views of key pages in the checkout flow, cart page views, and add to cart clicks.
- Define Unique Goals for Specific AB Tests: For tests that add, remove, or edit certain elements, you can add specific goals that measure usage and engagement of those elements. For example, if you add something in the image carousel of a product page, you can add a unique click goal inside that carousel (in the alt images for example) to see if your treatment affected how many people use the carousel. This is in addition to your standard goal set, not in replacement of it.
So, in order to do this, it should be easy to set up and re-use goals within your a/b testing solution.
Two Most Common Types of Conversion Goals: Pageview vs. Click Goals
Finally, there are largely two types of goals that we find most useful in AB testing:
- Pageview Goals
- Click Goals
You typically use pageview goals for the most critical metrics: (a) checkouts or final conversions (via the receipt, success, or thank-you page) and (b) the preceding pages (the checkout flow).
Click goals are most useful for measuring user engagement with certain elements, and as substitutes when you can’t use pageview goals (for example, a form submission that doesn’t redirect to a thank-you page; you can also use a form submission event goal for this). In terms of the “engagement” use case, we find this most useful when you are testing the effect of a certain element on conversion rate.
Using a common ecommerce example, say you are testing the impact of a particular widget or feature on a product page.
For example, using Boots.com’s mobile cart, say you are testing the impact of a “save for later” link:
What goals should you include in this test?
First, you want the usual ecommerce checkout flow goals: These are a series of pageview goals through the checkout flow, like shipping, checkout, conversions (completed orders or transactions) and revenue.
Those goals will teach you if including this link negatively impacts checkouts and thus immediate conversion rate.
But they don’t tell you the complete customer experience story: you also want to know if people actually use that link. So you’re going to want one click goal for the link itself and perhaps pageview goals for any page you get to after you click that link (company dependent).
Thus both pageview goals and click goals are useful.
(Note: This example is ecommerce, because that’s what we work on exclusively now, but it applies to other website types as well. For example when we did SaaS AB testing, we’d similarly want to measure clicks on pricing option boxes and other elements we added to marketing pages.)
Which Platform Is It Easiest to Create Conversion Goals In?
In general, we find Optimizely and VWO much easier than Adobe Target for setting up conversion goals. We found some pros and cons when comparing Optimizely vs VWO that may help you pinpoint the platform right for you.
In Optimizely and VWO you can create goals at the platform level and have them be re-usable across multiple experiments.
In Optimizely, you can go to Implementation > Events and manage your list of events and create new ones.
In particular, look how easy it is to create a click goal (We’ll contrast this with doing it in Adobe Target later):
The equivalent in VWO is their frequently used goals section in the setup area of every AB test:
Unlike in Optimizely, there isn’t a central “AB test goal management” section inside the AB testing product of VWO. You can just see your most frequently used goals in a list, as per the above screenshot, when creating goals for any test you’re building. Perhaps this is slightly more cumbersome than Optimizely’s central events area, but we find it more or less equivalent.
[Aside: VWO does have a “Goals” section in their CRO product; more on that below. None of our clients have their CRO product, just AB testing, so we don’t have direct experience with it, but it seems this is more for tracking certain conversion metrics over time site-wide rather than a database of goals to use in AB tests.]
Also, just like in Optimizely, adding click goals is just as easy as pageview goals:
In contrast, in Adobe Target, you can create click goals, but there are some restrictions. Namely, you can only do this in tests you create using the visual editor method; it doesn’t work for experiments created using the form method.
To actually add the goal, you have to go into the Target visual editor and click on the element you want. When you do that, Target lets you select which CSS selector to target for the click (e.g. a class or ID). The restriction is that this selector can’t be updated during the test. So in certain situations, such as when an element’s ID or class changes dynamically for every new session, the static selector captured by Adobe’s visual editor will miss clicks, and you can’t modify it manually.
Pro Tip: Create a Base Experiment Template
Creating a base experiment template with your standard goal set pre-configured gives a useful standardization to your AB tests. Our strategists find it convenient to see goals in the same order, with the same set in every experiment. It makes results interpretation easier. Meanwhile, our developers have a set template for writing their test code. This also allows developers to switch off, where one can finish a test that another started, help each other out, etc.
Preview Links in VWO vs Optimizely vs Adobe Target
Preview links are important to do QA (quality assurance) on AB test variations.
Quickly: unlike traditional development, where you develop on a staging or development server, QA there, then push to production, in AB test development you develop inside the AB test platform and QA the variations directly on the production server using preview links. Preview links are unique links that let you see a page as the variation; you need the link to see it, and no real users on the live production site will ever see it.
All 3 platforms (and basically every AB testing platform) have preview links, but there are some subtle differences in how convenient they are to use.
First: how easy is it to see whether goals are firing and measuring properly?
In Optimizely, there’s a preview widget that lets you see details about the experiments running, a live feed of events Optimizely is picking up, and more.
It’s pretty convenient for QA-ing a split test. For example you can click on elements and see in the events feed if Optimizely is picking that up as a click goal. So you can see, before you launch, if all your goals are measuring properly. For many tools, it requires checking logs and consoles, but this feature lets any non-technical employee do the same.
The other useful thing we use the preview snippet for is to see pages in certain audience and experiment conditions.
For example, note this tab above is called “Override”. What that means is you can enter audiences and experiments that you want to turn on and off for the purposes of viewing the page. This is useful for us since we’re always running multiple tests on a client’s site at one time. So if a test is running on the PDP, for example, and we’re QA-ing a new test in the pipeline also on the PDP, we can preview it with the current test turned off. This is useful and speeds up the AB test development process (otherwise you’d have to wait until the first test concluded to QA and finish developing the new test).
VWO has a similar preview widget, but its features are more limited. It just lets you choose which variation of the experiment you want to preview, and shows you some log data to make sure things loaded properly:
It doesn’t show you click and event firings as you click around the site and load different pages.
Preview Widgets Help You Easily Perform QA on Multi-page Tests
One other convenient thing about the preview widget for both Optimizely and VWO is that it lets you browse around the entire site and “stay” in the variation.
This is really useful for multi-page experiments, which are common for most companies doing a/b testing.
Say you have an AB test running on all PDPs. If you get one preview link for one PDP, and you open it, you can see how the variation will look on that one page, but what about on other pages? Maybe other PDPs have longer product names, more photos, or different elements that could “break” the variation. It’s useful to test the variation code on multiple PDPs to check for this.
Normally, with just one preview link, if you click off of that page, you don’t remain in the variation; you just enter the normal live site.
But with the preview widget above in Optimizely and VWO, you can stay in the variation regardless of where you go on the site. So you can check as many PDPs in this example as you like and see how the variation looks on all of them. This helps speed up QA a lot.
We actually use a QA cookie method (a single shareable link that sets a cookie keeping you in the variation) for all split testing platforms, including Optimizely and VWO, because although the preview widgets in Optimizely and VWO are useful, they require logging into the Optimizely or VWO account and clicking preview. So when a developer finishes coding a test and passes it to QA, it’s actually really convenient for the QA team, the designers, and the clients, who aren’t always logged into the right account, to just get a single link that keeps them in the variation.
Results Page User Interface
In our view, the results page UI is one of the biggest differences between Optimizely vs VWO.
In Optimizely there is a single long page of results and graphs for all goals, which is easy to scroll through:
…and there’s a nice summary of the results themselves at the top that you can horizontally scroll through:
In our opinion VWO’s results page UI is a little clunkier.
We use the “Goals” report almost exclusively, more on the difference between this and the “Variations” report later.
You have a list of goals on the right and you have to click into them one by one to see the results.
When you click on each, you get a bar graph with numerical results underneath.
You can click the Date Range Graph to see the cumulative conversions. It doesn’t have an option to see the conversion rate by date like the Optimizely graphs do.
This is largely serviceable and fine. When deciding between Optimizely vs VWO, we recommend VWO to most companies getting started with AB testing because these minor inconveniences are well worth the cost savings vs. Optimizely in our opinion for most companies starting with AB testing. But if you’re a power user, this is something to keep in mind.
Adobe Target has, in our opinion, the clunkiest results page.
First, it opens with only one goal’s results displayed:
Note above it has only one goal under “Shown Metrics”. So you have to manually select the other goals to include in your report.
Then, the second annoyance is that once you do select all your goals to be displayed, there is a clunky accordion UI where you have to manually “open” and “close” one goal after another to see the results of all of them. By default, all but your first goal show just the high level results without percentage increase or statistical significance, which isn’t that useful.
Here it is with all accordions closed:
Here it is with one goal’s results open:
Again, these aren’t major annoyances.
As many have written before (most notably, Evan Miller’s “How Not to Run an AB Test”) you shouldn’t be checking AB test results all the time anyways.
Nonetheless if you start to really integrate testing into your company’s CRO processes, you will start to use these features often. So in this article we wanted to be transparent with all pros and cons that we’ve encountered, even if they are small annoyances, and simply tell you later what we think is a big deal and what is not.
Statistical Model Used in Reporting Results
We’ll be brief here as a lot has been written about statistical models for interpreting AB test results, and we’ve found that ecommerce teams start falling asleep when we talk about AB test statistics too much.
In terms of platforms, both VWO and Adobe Target use more “standard” calculations of statistical significance (stat sig = 1 – p-value). VWO uses a z-score based calculation for its “Chance to Beat” number. Adobe Target uses a traditional confidence level in its results reporting.
Optimizely, however, is unique in that it uses a statistical model it developed with professors at Stanford, which it calls its “stats engine”. This is a somewhat recent feature that rolled out around the same time as Optimizely X (which is what they call their testing tool now). The math behind this is complex, but the end result for you, the user, is that Optimizely claims that with its statistics calculation, you can “peek” at the result whenever you want, and the moment it hits your significance threshold (say 95%), you can stop the test.
That’s actually really convenient, but in our experience we have seen multiple instances where Optimizely reported 90%+ statistical significance, only for it to drop back down after collecting more data.
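This “peeking” effect is easy to reproduce. The sketch below simulates an A/A test (no real difference between variations) and checks a conventional fixed-horizon z-test significance once a day. Depending on the random draw, the reported significance can drift up toward a high threshold at some peek and fall back later, which is the behavior sequential methods like Optimizely’s are designed to account for. All numbers are made up for illustration:

```python
import random
from math import erf, sqrt

random.seed(7)  # fixed seed so the simulation is reproducible
RATE, DAILY_N, DAYS = 0.02, 2_000, 30  # hypothetical traffic numbers

def significance(conv_a, n_a, conv_b, n_b):
    """Two-sided significance (1 - p-value) from a pooled z-test."""
    p = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return erf(z / sqrt(2))

# Both "variations" convert at the same true rate (an A/A test).
conv_a = conv_b = n = 0
daily_sig = []
for _ in range(DAYS):
    n += DAILY_N
    conv_a += sum(random.random() < RATE for _ in range(DAILY_N))
    conv_b += sum(random.random() < RATE for _ in range(DAILY_N))
    daily_sig.append(significance(conv_a, n, conv_b, n))  # one "peek" per day

print(f"max daily significance: {max(daily_sig):.0%}")
print(f"final significance:     {daily_sig[-1]:.0%}")
```

The gap between the maximum peeked significance and the final value is the trap: stopping at the best-looking peek inflates your false positive rate even when nothing real is happening.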
Integration with Analytics
As you start to do AB tests, a proper integration with your regular analytics tools becomes borderline essential. In our experience, there are two main reasons why:
- Other Metrics – You’ll want to know how the variations in the experiment differed in metrics that you did not think to create goals for beforehand (e.g. time on site, visits to some other page on the site you didn’t think of, etc.)
- Consistency with Analytics – You’ll want to validate the ab test platform’s results with your own analytics platform
(1) Other Metrics
No matter how careful you are in measuring plenty of goals in your AB tests, inevitably, you’ll have tests where everyone will want to know how the variations contrasted in metrics you didn’t measure as goals in the ab testing platform.
Often, ambiguous results (a couple metrics increasing for the better, others not) will lead the team to want to know auxiliary information (e.g. “Well, how did this affect email capture?” or “Did more of them end up visiting [this other page]?”). Or you simply may want to know about other metrics related to user experience that you didn’t think to include as goals when launching the test (“Did the bounce rate decrease?”, “How was time on site affected?”)
Integrating your AB testing software with other metrics will let you answer these questions when they arise because you can create segments in analytics (Google Analytics, for example) for the variations of your AB test and see how they differ in any metric that the analytics platform measures.
(2) Consistency with Analytics
A common issue in AB testing is not seeing the lifts from AB tests manifest themselves in analytics once the winning variant is implemented on the site. This problem is almost always due to other factors affecting conversion rate, not the AB test being “invalid”.
For example, natural fluctuations in site conversion rate can be massive (due to day of the week, different promotions, ad traffic changes, etc.), and an improvement in conversion rate of 10% from the PDP to checkout will easily get buried in those fluctuations. That doesn’t mean it’s not important. For an 8 figure brand, that 10% increase is worth a lot annually. It just means AB testing is the way to measure that, and expecting to see a noticeable bump from implementing the winner when your normal fluctuations are huge is not reasonable.
All three testing solutions have ways to integrate with analytics, but for companies already on Adobe Analytics, obviously with Target, you get the potential perk of both software platforms being part of the same software company. Note, however, in our experience, that advantage is minimal. Why? Because if you can push AB test data from any platform into your analytics software, it essentially doesn’t matter if the AB testing software and analytics software are made by the same company.
Here are links to integrating with Analytics:
- VWO to Google Analytics
- VWO to Adobe Analytics
- Optimizely to Google Analytics
- Optimizely to Adobe Analytics
Perks of VWO’s “CRO Package”
One unique feature of VWO that we think is pretty useful is what they call their “CRO” package. Just to be clear: you can buy just AB testing from them, or buy this CRO package in addition. You can learn all the details on their CRO page, but we want to highlight just a few features we think are worth considering here.
In short, their built in user research tools, which they group under “Analyze”, are pretty useful when offered in a single platform that also does AB testing.
Here’s why: Yes, you can get heatmaps, recordings, forms and surveys in many other tools. But, having them integrated with your AB testing tool lets you get heatmaps or recordings, for example, on a variation by variation basis.
This means you can see differences in user behavior between original and variation of an AB test.
The need for this comes up in our normal work with clients all the time. You set up and run a test, and just like we explained in the analytics integration section, you realize after the results come in that you’re curious about user behavior that can’t be measured by only the goals you thought to create: (“Did they actually use this feature?” “Did they scroll a lot to see more images?” etc.) Other times you may want to see if you missed something at the QA step and want to see recordings to make sure customers did not encounter an error in a variation you designed.
By using a third-party analysis tool like Hotjar or FullStory, it is possible to integrate with the AB testing tool, but it’s more tedious and, in our experience, sometimes doesn’t work well. With VWO, you can easily get heatmaps and recordings of different variations.
Actual AB Test Development
Our developers don’t feel there’s much of a difference in developing in any of the three platforms. In all cases, developers typically develop a variation locally, then upload the code into the platform and test via preview links. All the platforms handle this just fine.
The above sections were the things that matter to us the most. Here we’re going to rapidly go over other features you may want to consider:
- Multivariate testing – Advanced CRO teams may want to run multivariate tests (mvt) which is possible in all three platforms
- Server Side testing – Optimizely has a robust “Full Stack” product offering that lets you run server side AB tests and AB tests on iOS and Android. VWO also lets you run tests on mobile apps. Adobe Target also lets you run server side tests.
- Personalization – All 3 platforms advertise having personalization solutions, although it seems Target and Optimizely have the most robust offerings here, especially at the enterprise level.
Pricing and Which to Choose
It’s hard to discuss pricing definitively because all 3 optimization platforms (like many software platforms) change their pricing often, pricing is a function of your traffic levels, pricing is a function of the features you buy, and pricing is also a function of other licenses you may already have (e.g. Adobe Experience Cloud). So, sadly, it’s not as simple as us listing out prices here.
But in general, we’ve found that VWO (here is their pricing plan page) is the most affordable testing solution by a long shot, followed by Target, then Optimizely.
For more than 2 million visitors per year to test with, expect to pay at least $1000 per month, with plans getting much higher than that quickly.
For clients that are just starting AB testing, we almost always suggest VWO as it’s the most affordable, and we think the majority of your success in a CRO program has to do with the strategy you implement and knowing what to test, not the software you choose.
That said, if you think it’s likely you’ll need things like enterprise-level personalization, or personalization across multiple device types, and Optimizely’s unique features are important to you, it would also be a good option to consider.
Finally, if you already are on Adobe Marketing Cloud and want to make sure you stay integrated, Target may be a good option to consider.
Not sure? Feel free to reach out to us and ask us what you’re considering and we can give our honest opinion.
And if you are an ecommerce brand looking to implement a strong, results driven CRO program with a team that has lots of experience doing this, you can learn more about working with us here.
[Note: Not evaluated in this article are other ab testing solutions such as: Google Optimize, Monetate, AB Tasty, and more. We may write subsequent articles contrasting these platforms.]
Should you make your add to cart section sticky (it stays visible when scrolling)?
In our work with dozens of ecommerce companies over the last 5 years, we’ve noticed that the design and UX of the add to cart section (buy buttons, options, quantity controls) is debated a lot.
- Should you make the add to cart button sticky?
- Should we make the entire section sticky?
- Should we hide some selections (size, color, flavor) or show all?
- Should we show all sizes as buttons or shove them in a dropdown?
- In what order should we present the different options?
This case study is about the first question.
In this article, we present results from a simple AB test where we tested having the entire add to cart area be sticky versus not sticky on desktop devices for an ecommerce site in the supplement space.
The variation with the sticky add to cart button, which stayed fixed on the right side of the page as the user scrolled, showed 7.9% more completed orders with 99% statistical significance. For ecommerce brands doing $10,000,000 or more in revenue from their desktop product page traffic, a 7.9% increase in orders would be worth $790,000 per year in extra revenue.
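The revenue projection above is simple arithmetic, assuming the order lift translates proportionally into revenue (i.e. average order value stays constant):

```python
# Projected annual value of a 7.9% order lift, assuming revenue scales
# with orders (constant AOV) on $10M of desktop product page revenue.
desktop_pdp_revenue = 10_000_000
lift = 0.079
extra_revenue = desktop_pdp_revenue * lift
print(f"${extra_revenue:,.0f} per year")  # $790,000 per year
```

If the sticky variation also changed AOV, the projection would need a revenue-per-visitor comparison instead, so treat this as a back-of-envelope estimate.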
In our experience, sticky add to cart areas on desktop product pages are less common than on mobile, so this result suggests many ecommerce brands, marketing teams, and store owners may benefit from AB testing sticky add to cart buttons or entire add to cart areas being sticky on their product detail pages.
Note: If you’d like to run AB tests like this to help increase your ecommerce site’s conversion rate or improve user experience, you can learn about working with our ecommerce CRO agency here.
Building a Data Driven Culture: AB Tests Can Help Settle Endless Debates
When you read about AB tests online, most articles follow a nice, predictable pattern: (a) we had a great hypothesis, (b) we tested it, (c) it worked and got a conversion lift!
But when you do enough AB testing (we run hundreds of AB tests on ecommerce sites every year), you come to learn that most tests don’t end up so neat and clean.
So instead, we urge you to think of an alternative, but also valuable use case of AB testing:
Run a test to learn about your customers even when you could make arguments for either variation being “better” and there isn’t consensus on which one is “obviously going to win” (a phrase we often hear clients use).
Let’s use this article’s example to learn why this “ab tests for learning” use case is so useful: For this sticky add to cart button example, in the traditional design process, people in the company would debate in a conference room (or by email, or in Slack), about having a sticky vs. non-sticky add to cart area on their product page.
They’d go in circles fighting with each other about which is better, which is more on brand, which competitors do something similar, and on and on.
Then, either the higher ranking person wins or an executive steps in and gives their official decision like Caesar at the Colosseum (even though 99% of the time the executive is not a UX expert).
But in reality, neither side knows which will do better. Both arguments are legitimate. And the one important contingent that does not have a seat at the table is the customer. An AB test lets your customers give their input.
And, finally, whichever variation “wins” is important but not as important as the learnings about how your customer thinks, what they prefer, and what is more persuasive to them, all of which you could learn by simply running a test.
So, with all this on our minds, we ran this test.
The Variations: Fixed vs. Sticky Add to Cart Buttons
The variations were simple. In one (A), the buy box was not sticky, and in the other (B), it was sticky. (Aside: Read why we anonymize clients here.)
In this case, the client’s product page had a sticky buy box (variation B) to begin with, and it hadn’t yet been tested. The reason we decided to test this was because there was a content-focused culture around the brand, so we felt it was important to learn how much users want to be left alone to read content versus having a more in-your-face request to buy following them down the page.
One can make a theoretical argument for both variations:
- Argument for variation A (not sticky): You don’t want the site to act like a pushy salesperson, hovering over your shoulder when you’re just trying to read about the product and its benefits. It will turn people off.
- Argument for variation B (sticky): People can read about the product just fine, and reminding them of the ability to add to cart will increase the percentage of people that do so.
Results: Sticky Add to Cart Button Gets More Orders by 8%
In this test, the sticky add to cart variation showed 7.9% more orders from the product pages with 99% significance. The sticky version also showed an 8.6% increase in add to carts with 99%+ significance. The test ran for 14 days and recorded approximately 2,000 conversions (orders) per variation.
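The reported significance is consistent with a standard two-proportion z-test. The visitor counts below are hypothetical (the case study reports roughly 2,000 orders per variation but not traffic per arm), so this is only a sketch of the math, not the actual test data:

```python
from math import sqrt, erfc

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                  # two-sided p-value
    return z, p_value

# Hypothetical 40,000 visitors per arm; 2,158 vs 2,000 orders is a 7.9% lift.
z, p = two_proportion_ztest(conv_a=2000, n_a=40000, conv_b=2158, n_b=40000)
print(f"lift: {2158/2000 - 1:.1%}, z = {z:.2f}, p = {p:.4f}")
```

Under these assumed numbers, p comes out around 0.012, i.e. roughly 99% significance, matching the order of magnitude the test reported.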
Referencing the arguments for either side above, this test gave the marketing department and us valuable information about the customer (and saved a potentially conversion hurting future change of undoing the sticky add to cart area).
Despite this brand’s heavy focus on content, and despite the customers’ need to read a lot about product uses, benefits, ingredients, and more, having an ever-present add to cart area seemed to be okay with the customer. It did not annoy customers, and in fact seemed to increase the percentage of them that decided to purchase. This is a useful learning, despite this test largely being in the usability category, not desirability.
(Note: This is true of this store, it may not be true of yours. Hence we always suggest at the end of case studies that you should “consider testing” this. We don’t say you should just “do” this.)
This learning can actually be extended beyond the product pages to content pages such as blog posts where we can test being more aggressive with product links and placements to see if similar results can be achieved there.
This is why we love using AB testing for learning.
Update: Sticky Add to Cart Button on Mobile Product Pages Also Increased Conversion Rate
Since first writing this case study, we tested multiple UX treatments for a sticky add to cart button on the mobile PDP of this exact same supplement ecommerce store. The smaller screen real estate on mobile devices, of course, means finding CTAs like the add to cart button can be more difficult, so making the add to cart button sticky could improve user experience.
We tested three variations: (A) No sticky add to cart button (B) Sticky add to cart button that simply scrolls the user to the add to cart area on the product page (C) Sticky add to cart button that causes the add to cart area to slide up from the bottom of the page like a hidden drawer.
We observed a 5.2% increase in orders for variation C where the add to cart area slides up like a drawer, with 98% statistical significance. This test ran for 14 days and had over 3,000 conversion events (orders) per variation (so over 9,000 conversion events total).
Add to cart clicks increased by a whopping 11.8% on variation C (>99% statistical significance) and by 6% on variation B. So actual use of the button was substantially increased by making the add to cart button sticky on the mobile PDP.
Variation B — where clicking on the sticky add to cart button simply scrolls users to the add to cart area on the PDP — on the other hand showed no statistically significant difference in conversion rate from the original. As per an insightful reader comment below, this discrepancy between variation C showing a clear lift and variation B showing no difference could be explained by:
- The slide-up “drawer” of add to cart functionality (choosing a flavor, quantity, etc.) in variation C may have kept users focused on that step, because it feels like you’re on a new “sub-page” of sorts instead of just scrolling to another part of the PDP.
- Also, in variation C, the PDP itself had no space taken up by add to cart functionality (like choosing flavors), so users got to see more persuasive “content” about the product on the PDP.
This suggests that similar conversion gains can be realized on mobile product pages, but the details of how to implement them and the UI/UX that will cause a conversion increase are important. Questions? Let’s discuss in the comments.
If you have questions about whether your store should test sticky add to cart functionality, how to execute it (e.g. Shopify plugins vs. manual coding), or about working with us, you can ask us in the comments, send us an email, or learn about working with us.
Similar CRO Articles
- Mobile users don’t like the hamburger menu, results from testing our Link Bar
- Case study of upsell and cross sell placement AB tests
- Does promoting free shipping increase conversion rate? AB test results.
We know free shipping is a massive needle mover for ecommerce customers. In this short case study we share results from two AB tests we’ve done that help answer:
Where is the best place to put your free shipping and free returns messaging to get the biggest lift in conversions?
Test 1: Free Shipping messaging placement for furniture ecommerce site increases conversion rate 19%
On the original site, free shipping and free returns were already mentioned in the promo bar at the top of the page, which was visible sitewide.
We hypothesized that due to (a) banner blindness and (b) too many competing messages in the promo bar, this message was not getting across.
Where could we place this message that would be least likely to be missed and most likely to influence the buying decision?
We settled on placing it below the add to cart button on the product detail pages (PDP).
We saw a 19% increase in orders with 99.9% statistical significance. The test ran for 2 weeks and recorded over 1,500 conversions.
In many ways, this test is fascinating. In the original, the free shipping and free returns messaging is already mentioned in the promo bar, at the top of the page, sitewide.
How could customers not see this?
This result suggests there is truth to the idea that banner blindness and competing messaging hurt the effectiveness of that message.
If you offer free shipping and free returns, or have other key value propositions (like an active discount code or promotion) you should strongly consider testing where free shipping and returns messaging is placed, and certainly test adding it near your add to cart button on the PDP. Most brands from what we’ve seen either put them in promo bars (not bad) or save them for graphics on the homepage (much worse).
Test 2: Free Shipping copy for a Supplement Company Does Not Affect Conversion Rate
We tested something very similar for a niche supplement company.
In this case, we actually hypothesized it would perform better because there was no mention of free shipping on the site except in fine print. (Definitely not in a sitewide promo bar like the example above).
Just like the above test, we put free shipping copy below the add to cart button:
The only differences were:
- The copy said “Free US Shipping & Returns” instead of “Free Shipping & Free Returns”
- There was a dropdown caret that had more details on the 30-day return policy. The schematic above for B (our variation) shows the caret expanded. Upon page load it was collapsed, i.e. the box with return details was not visible.
After 2 weeks and over 5,000 conversions, we saw no difference in conversion rate between the original and variation. The conversion rates were almost identical!
For this brand we actually tried a few different placements of free shipping copy including in a promo bar and still found it made no difference on conversion rate.
Why could that be?
AB tests tell you what happened; you have to hypothesize as to why.
In this case it could be several reasons:
- This is a specific, niche supplement space where there are only a few providers and most provide free shipping, so it may be expected by the customer.
- This is a much lower price point than the first example (furniture) so perhaps in the first example the thought of a hefty shipping cost and hassle of returning furniture is a huge friction point that the copy helped assuage.
- The supplement brand is very content heavy, so readers may already be sure they want to buy after reading up on the product, and details like shipping cost don’t matter as much.
- Finally, the customers for the supplement brand may simply be less price sensitive due to the niche nature of the space. In fact, we later ran pricing tests that also showed little difference (to be profiled in a later case study).
One lesson we’ve learned over and over is that while there are UX patterns that seem to perform better across multiple ecommerce sites, there are always plenty of exceptions. So what works for one site doesn’t always work for another. The two examples above show that.
So we encourage you to learn, take inspiration, and think critically about the case studies above and how they may apply to your store. Then, we encourage you to run your own tests before simply implementing these UX treatments on your site.
If you’d like to talk to us about improving conversion rates for your ecommerce brand, you can learn more about working with us here.
Our ecommerce conversion optimization team has done an extensive analysis of the mobile checkout experience of the top 40 ecommerce sites in the U.S. (by traffic, according to the Alexa.com Shopping category). In this article, we present the results and analyze the impact of this data on current (2018 and 2019) mobile checkout best practices and mobile ecommerce trends.
This analysis includes:
- Key UX features in mobile checkout for each site (24 features total)
- The percentage of the top 40 sites that employ each feature
- When applicable, AB test data we have for each feature
- Our conversion optimization team’s recommendation for each feature
We’ve divided our analysis into 4 sections of mobile checkout:
Each section has between 4 and 8 features analyzed, with a total of 24 features discussed.
We go beyond just listing features to discussing which should be considered mobile checkout best practices and which can be ignored. For example, you’ll learn insights like:
- What percentage of these sites employ payment systems like Apple Pay
- What percentage have a save your cart feature?
- Does it matter if you add trust badges on shipping and how many sites do?
- Has our team AB tested these features and if so, what have we concluded?
- For 24 unique mobile checkout features
At the end, for reference, we’ve included screenshots of the mobile checkout flows of each of the top 40 sites we analyzed.
You can find details of our methodology, including the full list of the top 40 sites, at the bottom of this page.
Finally, if you’d like to apply to work with us to increase your ecommerce conversion rates via AB testing, you can do so here.
Why Mobile Checkout Best Practices Are So Critical
Mobile checkout is arguably the most important ecommerce conversion optimization trend today.
Why? Because most ecommerce stores now get more mobile traffic than desktop, and that share is only going to grow.
But mobile conversion rates are much lower than desktop. We typically see mobile conversion rates hover around half of desktop.
Brands with the best mobile checkout experiences will have a massive advantage over competitors for years to come. It’s our hope that this study will help your ecommerce site improve its mobile conversion rate through an improved checkout experience.
Mobile Shopping Cart Page Trends
Cart features we analyzed:
- Add to Cart Button Takes User To?
- Number of Upsells in the Cart
- Checkout Button Above the Fold
- Proceed to Checkout Button is Called?
- Displays Secure Checkout or Trust Badges
- Total Savings Highlighted in a Separate Line Item?
- Save Cart for Later Option?
- Keep Shopping Link?
Add to Cart Button Takes User To?
Designers love to talk about “minimizing clicks”. So in ecommerce companies, deciding what happens when the user clicks “add to cart” can be a source of debate:
Should you take them straight to the cart? This minimizes clicks if they will only check out with one item, but increases them if they will add multiple items. For example, Etsy.com, BestBuy.com, and Wiley.com all do this.
Should you use a temporary notification? (Appears, says they added to cart, then disappears) This bothers the user the least but may not be “in your face” enough to encourage checking out. Nike.com, HM.com, and Macys.com all do this. In general “sliders” or “drawers”, even on desktop checkout flows seem to be an ecommerce checkout design trend in the last few years.
Should you use a permanent notification that pops up or slides up? This is very clear, but may create more clicks for the user. Here is Walmart’s:
The majority (over 50%) of our Top 40 sites used a permanent notification. These are very similar to taking the user to cart, because they have an option to proceed to checkout immediately or review cart or continue shopping.
The key difference is that a permanent notification can often be closed (with an “X” button) and the user remains on the product page without waiting for page loads, so the experience is faster than sending them to the cart.
What is Best Practice for Add to Cart Notifications? Our AB Tests Show…
We’ve tested different add to cart notifications before and haven’t seen large, impactful changes in conversion rate (on both mobile and desktop).
For example in one mobile specific test, we tried different styles of a permanent notification with differing amounts of information and size.
We saw clear differences in how many users clicked Proceed to Checkout vs. View Cart buttons (10% – 12%) with statistical significance but saw no net change in orders or revenue.
This suggests that the format of the add to cart notification may not make a huge impact on actual mobile conversion rates. As per our usability vs. desirability optimization framework, changes that affect a user’s desire to check out (desirability) usually have a bigger impact than reducing friction (usability). This is a usability tweak, so it’s unsurprising that it didn’t make a huge impact.
Growth Rock Recommendation: For high traffic or high transaction volume sites, this may be worth testing, but don’t hold your breath for large changes in mobile conversion rates (> 5%).
Number of Upsells in the Shopping Cart
65% of our top 40 ecommerce sites have upsells and cross sells on the cart page. Surprisingly, 14 of the 40 sites we analyzed did not have any upsells or cross-sells in the cart.
On average, those that had shopping cart upsells had 14 products recommended somewhere in the cart.
That’s a lot of products!
Typical mobile shopping cart upsell designs looked like this one, from Sears:
Upsells and cross sells are a huge factor for any ecommerce website because of their power to increase average order value (AOV).
What is the Mobile Shopping Cart Upsell Best Practice? Our AB Tests Show…
In our AB tests, we’ve seen upsells and cross-sells improve AOV significantly, but they are not always guaranteed to be effective.
Thus we strongly recommend every ecommerce site test this for themselves. Test different upsells, test including and not including upsells, and test how and where the upsells are presented.
In multiple tests, upsells have made no difference on AOV and revenue per visitor. As the site management team or owner, you want to know this so you can test alternative products, number of products, position, copy, etc.
On the other hand, sites that don’t have upsells or cross-sells should for sure test them, as they have the potential of increasing your AOV and (most importantly) revenue per visitor significantly.
None of our tests has yet shown a decrease in conversion rate due to the presence of upsells or cross-sells in the cart.
Checkout button is above the fold?
60% (24) of the top 40 sites had their checkout button above the fold on their mobile cart page.
A pretty well accepted ecommerce mobile checkout best practice is to put checkout buttons above the fold. You hear UX experts recommend this all the time.
But again, this is a usability tweak: where the checkout button sits doesn’t affect a user’s desire to check out. So should this be a best practice? What does the AB testing data say?
What is the Checkout Button Position Best Practice? Our AB Tests Show…
Because this is a usability tweak, we have only tested this once.
In that test, we did not see a statistically significant change in checkout rate by adding a proceed to checkout button above the fold, contrary to what the well accepted best practice would tell you. In a second variation where we added detailed savings and order total amounts in addition to the button, we actually saw a trend towards a 2% decrease in conversion rate (albeit with only 84% significance).
Growth Rock Recommendation: You can test this as your results may vary (very few UX trends apply to every site) but we’d suggest focusing efforts on bigger potential wins.
Ask what’s actually holding customers back from checking out? Chances are it’s not that they can’t find the checkout button.
Proceed to Checkout Button is Called?
We love to make jokes about people who think AB testing is about button colors and button text and other tiny details.
The vast majority of the time, details like this make no difference.
However, if you’re curious about the proceed to checkout button copy, above is what we found across the top 40 ecommerce sites. Most of these stores simply called their button “Checkout”.
Growth Rock Recommendation: Again, as per our Usability vs. Desirability framework, we don’t recommend spending too much time on small UX decisions like this.
Displays Secure Checkout or Trust Seals
Credibility icons and social proof are, for sure, an ecommerce design trend these days (e.g. 2017, 2018, and 2019). They are among the most oft-mentioned tactics in the conversion optimization community.
But it’s interesting that less than half (42.5%) of our Top 40 sites had a security message or trust seal on their mobile cart page. Even fewer (only 17.5%) had credit card logos. It’s arguable whether credit card logos are necessary today as consumers now expect all sites to take all major credit cards.
Advertising a “secure checkout” experience, on the other hand, is more controversial. As we indicate in the payment section, our AB tests have not shown a lift in conversion from mentioning security, or from using security badges, trust seals, or icons.
It’s possible that at least some of the 57.5% of sites we analyzed that also don’t mention secure checkout or have trust seals on the cart page have also tested this and found it didn’t make much of a difference.
What are Trust Seal Best Practices on the Shopping Cart? Our AB Tests Show…
Of course social proof is a well-known persuasion tactic and we agree with its usage.
That said, we have not seen credibility icons or social proof quotes make a huge difference in most AB tests, in particular in the checkout flow.
As mentioned below, our few tests of credit card and security trust logos on the payment step have not shown a conversion lift.
Growth Rock Recommendation: This is an easy test to run, and we suggest you try it, as many have reported data suggesting trust seals and security messages improve conversion rate. Just don’t hold your breath for a conversion increase. If you currently have a trust seal or security message on your site, it could be worth testing removing it. In our view, if something is not helping, it’s best to remove it to keep experiences as clean and distraction free as possible.
Total Savings Highlighted in a Separate Line Item?
Savings (or, more specifically, perceived savings) are a huge factor for ecommerce stores. We’ve seen evidence of this for many different ecommerce brands: low price (AOV < $30), luxury apparel (AOV > $300), furniture (AOV > $1000), to name a few. User surveys and customer support interviews in luxury apparel even indicate that customers love feeling like they got a deal or “the best price” (even on a $1000 purse).
But only 35% of the Top 40 sites we analyzed have savings highlighted as a separate line item in their order total in the shopping cart:
So what is the best practice for how a store should highlight savings on the shopping cart page?
Our AB Tests Show…
We’ve observed something interesting in savings highlighting on the cart page: Highlighting savings at the product level seems to be significantly more important than highlighting it at the order total level.
In one store, we saw a 4% increase in revenue per visitor and a 3% increase in checkouts when we highlighted savings on each product, vs. no statistically significant increase in either metric when we only highlighted it at the order total level.
Save Cart for Later Option?
Cart abandonment is such an issue for ecommerce teams that even writing this sentence feels like a cliche.
But on mobile? It’s even worse. Mobile users are notorious for adding to cart and dropping off. So features like this that can capture their email, as long as they don’t hurt checkouts, can be really impactful.
Here is an example from Nordstrom:
We’ve tested a save your cart feature on many ecommerce stores and the results have almost always been positive.
Most sites already have this ability built-in, if a user is logged in. So the easiest way to test this is to include a button or link in the cart page that says “Save Your Cart for Later” followed by “by creating a free account”.
In one of our tests, we saw a whopping 250% increase in account creations by adding this link. Why so big? Because most ecommerce sites have dismal account creation rates outside of people who already buy. (When is the last time you decided to create an account when buying clothing online, for example?)
So adding this incentive (save your cart) and clear CTA on a very high traffic page (cart) increases account creations dramatically.
But does this hurt checkouts?
This can often be a concern, as you’re adding a secondary CTA on the cart page that could distract. Our save your cart tests haven’t shown a drop in checkouts; if anything, both showed slight increases.
Growth Rock Recommendation: All ecommerce sites should test adding save your cart functionality.
Keep Shopping Link?
Keeping distractions to a minimum is a key conversion principle across all website CRO. You want to keep users focused on their primary desired actions.
Thus, our team feels that a “Keep Shopping” or “Continue Shopping” link is not useful on the cart page. Users can already continue shopping in many ways.
- Back button
- Logo to go to the homepage
- The full navigation menu, which is present on almost all cart pages
Instead, we feel it can serve to distract from the primary CTA of proceeding to checkout.
Our AB tests show…
In fact, one of our “save your cart” AB tests involved replacing the “continue shopping” link with “save your cart”, and orders trended positive by 6% vs. the original, albeit with only 83% statistical significance.
We hypothesized that removing the prominent “continue shopping” link in that instance may have been the true cause of the slight potential conversion rate increase rather than the save your cart link in that test.
About half of the top sites we surveyed had a continue shopping link. Notable sites without such a link in the cart include two of the biggest ecommerce sites today: Amazon, and Walmart.
Notably, for larger ecommerce stores like those, search is a major feature, and since normal nav elements remain on the cart, the search bar is, in effect, another “continue shopping” option for the user, rendering the “continue shopping” button less useful.
This redundancy is made starkly clear on REI’s mobile cart page:
Growth Rock recommendation: Test removing your continue shopping link. In fact, consider replacing it with a “save your cart” link from the item above.
- Guest Checkout Option?
- Separate Page to Choose Guest Checkout?
- Continue with Social Media Account Option?
- How Many Pages is the Mobile Checkout Flow?
Guest Checkout Option?
The vast majority (75%) of the sites we investigated have a guest checkout option. At this point, not offering guest checkout is well regarded as a conversion “killer”.
These brands in our list do not have guest checkout and require account creation to checkout:
- Amazon.com
- Zappos.com (owned by Amazon)
- 6pm.com (owned by Amazon)
- Costco.com
- Target.com
- Wayfair.com
What do you notice? The majority are huge brands with household brand names in the U.S.
Half are Amazon or owned by them. Amazon, of course, is built on registrations, which feed its business model, including growing Prime subscribers.
Costco won’t even let you check out of their physical retail stores without being a member, so that goes without saying.
So the only two unexpected brands on this list are Target and Wayfair. They are both big brands, one in brick and mortar retail, and the other online, but other than size and brand recognition, nothing in their brand ethos would suggest it’s an obvious move to require sign in. Thus, it’s an interesting decision by them to demand users create an account.
Notably, other household brand names as big or bigger than Target and Wayfair allow guest checkout, in particular Walmart, Home Depot, and eBay (which historically started out requiring sign-up to bid on items and only later allowed guest checkout).
Growth Rock Recommendation: Unless you have the brand recognition and size of Target and Wayfair, you should probably stick to allowing guest checkout. At the very least, AB test removing it and calculate whether the drop in immediate purchases is made up for by the value of the additional registered accounts.
Separate Page to Choose Guest Checkout?
The majority of top ecommerce sites (56%) still send users to a separate page prior to the start of the checkout flow to choose whether to use guest checkout or create an account.
Once again, traditional UX theory suggests “minimizing clicks” should help conversion rate.
Our AB Tests Show…
We've seen indications of a conversion lift from removing this page, but nothing particularly convincing. The closest was a test run to 880,000 visitors, where removing this page (and sending customers straight to the checkout page) showed an improvement in conversion rate of 1.5% – 2%, which held steady over multiple weeks but ended with only 86% statistical significance.
Growth Rock Recommendation: We suggest you test this yourself if you have the resources to run at least 1 – 2 tests a week. If your AB testing bandwidth is limited, focus on bigger wins. If you run this, pay attention to a potential tradeoff between new account creations and completed orders.
Aside: 84% of these pages present guest checkout second to sign in for returning users. While we haven’t tested this (if removing this page entirely makes only a small difference, optimizing this page doesn’t seem like a good use of time), we find it interesting that almost all brands visually prioritize returning user sign in over guest checkout when it’s widely accepted that guest checkout is necessary because the majority of checkouts are from non-registered customers.
Also, on mobile, 36% of the sites don't even have the guest checkout option above the fold (on an iPhone 8). This seems like an easy UX fix for improving mobile conversion rates.
Continue with Social Media Account Option?
Poor mobile checkout rates are the elephant-in-the-room problem for ecommerce stores. Most stores crossed over from majority-desktop to majority-mobile traffic sometime in the past 3 years, but mobile conversion rates remain abysmal compared to desktop.
One big reason for that: filling in a bunch of forms on your phone still stinks. Consumers just don't want to do it. So we've seen, time and again, AB tests that improve mobile Add to Cart rates but barely move the needle on completed mobile orders.
"Continue with Facebook", "Continue with Google", etc. help users out by filling in details, like your address, already known to that platform, so you no longer have to fill all that in.
We found only 27.5% of the sites we examined have social options for checkout. We’re curious to see if that increases with time.
Growth Rock Recommendation: Test adding continue with social buttons as an alternative to normal checkout.
How Many Pages is the Mobile Checkout Flow?
Mobile users have less patience for page loads, so the conventional wisdom is to minimize the need for page loads, which slow them down and inevitably cause some fraction to bounce at each page load.
The average (and median) number of pages in the checkout flow of the sites we examined was right around 4.
The highest was REI.com, at a whopping 7 pages of checkout (6, plus one for choosing guest checkout).
Each of their pages is small and easy, asking for just one thing at a time.
On the other hand, HomeDepot.com has only 2 pages (assuming you don’t checkout with an appliance that needs insurance coverage, etc.): (1) guest or sign in (2) the entire checkout form on one page.
The counter-argument to reducing checkout pages is that long forms on one page are intimidating and may scare away the user. The Home Depot checkout page looks intimidating as a long screenshot but on your phone you’re only seeing one part at a time.
Growth Rock recommendation: We’ve heard both sides of this debate in web UX in general, not just ecommerce checkout. We suggest you test this for yourself.
Mobile Shipping Page Trends and Best Practices
- Is There Some Form of Address Detection?
- Do They Have Instant Form Field Validation?
- Do the Number Fields Use Number Keypads?
- Is Site Navigation Hidden on Checkout Pages?
- Estimated Delivery Date Shown?
Is There Some Form of Address Detection?
One of the main reasons for low mobile checkout rates is how tedious it is to fill in forms on mobile.
One way to counter that is with address detection.
55% of the sites we examined had some form of address detection, most of which work like this:
Our team has only tested address detection once in the past few years, and we did not see much of a change in conversion rate. Still, it's worth testing and not hard to implement; the Google Maps API, for example, lets you easily add autocompletion to your form.
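For reference, wiring up Google's Places Autocomplete takes only a few lines. This is a hedged sketch: the element id and the field mapping are hypothetical, and it assumes the Maps JavaScript API is already loaded with the `places` library.

```javascript
// Sketch: attach Google Places Autocomplete to a checkout address field.
// Assumes the Maps JavaScript API is loaded with the "places" library.
// The element id "shipping-address" is hypothetical.
const input = document.getElementById('shipping-address');
const autocomplete = new google.maps.places.Autocomplete(input, {
  types: ['address'],
  fields: ['address_components'], // request only the data you need
});
autocomplete.addListener('place_changed', () => {
  const components = autocomplete.getPlace().address_components || [];
  // Map the returned components onto your city/state/zip fields here.
});
```

Limiting `fields` keeps API billing down, since Google charges per data field returned.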
Growth Rock Recommendation: It could be worth testing address detection. Form filling on mobile is a known pain point. In our opinion, this should be a mobile checkout best practice because it simply makes form filling easier on the user. Nonetheless it has yet to catch on as an ecommerce trend (mobile or not) and thus the vast majority of ecommerce sites don’t yet have this feature.
Do They Have Instant Form Field Validation?
Continuing on the theme of making form fills as easy as possible, nothing is more annoying to mobile users than filling out a long form, clicking submit, then figuring out there’s an error and hunting around for it.
Yet, 27.5% of our top 40 sites had this poor UX!
The solution is instant form field validation.
Here is the contrast. On Nordstrom, if I enter "1" in the zip code field and try to move on, it immediately tells me this is not a valid zip code, and I can fix it right there.
Whereas on Lowes.com, one of the culprits, I can enter "1" in the zip code field and be on my way, and I won't learn of the error until I tap "use this address".
Growth Rock Recommendation: We haven’t done AB tests on instant form validation, but we don’t see a reason not to have it. Again, we think this should for sure be a mobile checkout best practice (really on all device types and sizes). Asking users to wait until they submit a form to see errors is just cumbersome. We recommend AB testing this (instead of implementing it outright) so you can see if any unintended consequences of instant validation may hurt your conversion rate. For example, some forms may be too quick to point out errors, causing users who are in the middle of typing an email address or phone number to see a red warning when they simply haven’t finished. If this hurts conversion rate, an AB test will indicate that for you so you can fix the problem.
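As a concrete illustration, the check itself can be tiny; the win comes from running it as the user leaves the field rather than on submit. Here's a minimal sketch (the US-only zip rule and the element id are assumptions for illustration, not a production validation scheme):

```javascript
// Minimal instant-validation check for a US zip code field.
// Real checkouts need per-country rules; this is illustrative only.
function isValidUsZip(value) {
  // Five digits, optionally followed by a ZIP+4 suffix like "-1234"
  return /^\d{5}(-\d{4})?$/.test(value.trim());
}

// In the browser, run it on blur instead of waiting for submit, e.g.:
// document.getElementById('zip').addEventListener('blur', (e) => {
//   e.target.classList.toggle('field-error', !isValidUsZip(e.target.value));
// });
```

The same pattern (a pure validation function wired to blur or input events) extends to email, phone, and card fields.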
Do the Number Fields Use Number Keypads?
Form filling on mobile is a pain because typing is a pain. One way to help is to make sure fields that only require numbers (phone, zip) use the phone’s number keypad with larger, fewer buttons than the regular keypad.
This is also in the category of obvious UX improvements, yet 12.5% of sites violated this rule.
For example, here is J.Crew. When you tap into the zip field, it immediately gives you a number keypad:
In contrast, here is H&M when you click into zip:
The UX is unnecessarily cumbersome. You have to click the number button on the left and then use the tiny number keys at the top of that keyboard.
Growth Rock Recommendation: You don’t need to AB test this, you can just implement this outright. There is no reason why a full keyboard is necessary for a number only field.
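The fix is usually a one-attribute change. A sketch of the markup involved (field names are illustrative):

```html
<!-- Ask mobile browsers for the numeric keypad on number-only fields -->
<input type="text" inputmode="numeric" pattern="[0-9]*" name="zip" autocomplete="postal-code">

<!-- Phone fields can use type="tel" to get the telephone keypad -->
<input type="tel" name="phone" autocomplete="tel">
```

The `autocomplete` hints additionally let the browser offer to fill the values in, which helps the broader form-filling pain discussed above.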
Is Site Navigation Hidden on Checkout Pages?
Distractions are the enemy of conversion rates. Accordingly, it's become commonplace to remove normal site navigation on checkout, on both mobile and desktop.
On mobile, this means removing the hamburger menu and other icons in the navbar and often also unlinking the brand logo.
To let customers go back, there's usually just one small link included. H&M has a great example of this:
However, several (27.5%) of our top 40 sites violated this rule and had full navigation available in checkout. Specifically:
- Amazon – As with many things, Amazon is perhaps a unique case, because the role the site plays in its customers' lives is very different from other sites.
- Nordstrom – They have a hamburger menu present, although its content is drastically reduced during checkout.
Lowes provides a very busy example of this:
Growth Rock Recommendation: We don’t see a need for sites that don’t have navigation to test adding it back in, however for sites on the list above that do, we strongly suggest AB testing a distraction-free alternative like H&M.
Estimated Delivery Date Shown?
The conversion-motivated case for showing a delivery date is two-fold:
- Simply answer a potential question in the customer’s mind. They may need it by a certain date or just be curious.
- Increase desire by making the purchase feel more imminent or real. If they think “I could have this by Monday” they may be more inclined to purchase.
65% of our top 40 sites gave some indication of an estimated delivery or ship date, suggesting this practice has largely caught on.
We’ve tested adding estimated dates in the mobile checkout flow and have not seen any statistically significant lifts in conversion rate.
A caveat on the above result is that for that client, the estimated shipping and delivery date for each product is already on the PDP, therefore showing it again in checkout may not have added any additional motivation to complete the purchase (there was no question about this in the customers’ minds).
Mobile Ecommerce Payment Page Trends and Best Practices
- PayPal, Apple Pay, Amazon Pay Options?
- Does the Site Auto Detect Credit Card Type?
- Are There Trust Symbols on the Payment Page?
- What is the Final Payment Submit Button Called?
- Does Final Payment Submit Button Appear Above the Fold?
- Is There a Final Submission Confirmation or Review Page?
- Newsletter Opt-In or Opt-Out Option?
PayPal, Apple Pay, Amazon Pay Options?
This is a category to keep your eyes on closely. This could be the game-changing trend in mobile ecommerce over the next few years.
The entire mobile checkout experience including laborious form fills can be almost entirely skipped with an instant payment option like PayPal, Apple Pay, or Amazon Pay.
Look at how easy it is to checkout with Apple Pay on Kohls:
No need to fill in address. No need to fill in credit card. One click on the Apple Pay button and with my fingerprint I’ve paid for my Grinch who stole Christmas pajama pants.
Although this is the first year Growth Rock has published this study, in our own practice we’ve noticed a sharp rise in brands with Apple Pay, for example.
Growth Rock Recommendation: We are actively testing these payment options across multiple clients and strongly suggest you do the same. Instant payment options like Apple Pay appear to be a growing mobile ecommerce trend, and in our mind they should be regarded as a mobile checkout best practice (if they aren't already).
Does the Site Auto Detect Credit Card Type?
In the category of unnecessary UX friction we have: asking the user to select Visa, Mastercard, Discover, Amex, etc.
You can detect it as the user types their number, as the abundant discussion that surfaces when you Google "detect credit card type" would suggest.
Here is a nice StackOverflow discussion with a good summary of the ins and outs of this.
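As a hedged sketch of the idea: card networks can be distinguished by the leading digits (IIN prefixes) of the number. The ranges below are simplified for illustration; a production implementation should use a maintained prefix list like those discussed in that thread.

```javascript
// Simplified prefix-based card type detection (illustrative IIN ranges only).
function detectCardType(number) {
  const digits = number.replace(/\D/g, ''); // strip spaces and dashes
  if (/^4/.test(digits)) return 'Visa';
  if (/^(5[1-5]|2[2-7])/.test(digits)) return 'Mastercard'; // 2-series range simplified
  if (/^3[47]/.test(digits)) return 'American Express';
  if (/^(6011|65)/.test(digits)) return 'Discover';
  return 'Unknown';
}
```

Wired to an `input` event listener, this lets the form highlight the matching card logo as the user types, with no "select card type" dropdown needed.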
Only 10% of our Top 40 sites did not auto-detect credit card type, but even that small a number was surprising.
The culprits were:
Growth Rock Recommendation: We have not tested this, because it falls in the category of smaller usability tweaks that may very well help but often don't "make the cut" for AB testing resources at any given time. For higher-traffic, higher-revenue sites it could very well be worth testing.
Are There Trust Symbols on the Payment Page?
One of the most common CRO “best practices” is to use trust symbols or badges, like these:
They can range from a full BBB or Norton badge to a tiny lock icon like HM.com:
Can’t find it? Exactly, it’s small.
But only 37.5% of our top 40 mobile sites had a trust symbol on the final payment page.
Is that bad? Maybe not. In a few tests we’ve done on trust symbols on checkout pages (not mobile only), we’ve seen largely no significant improvement in conversion rate by including the badges.
Our AB tests show…
Here are 2 AB tests that did not show an increase in conversion rate via trust badges.
In one test we added
- a Geotrust security badge
- a lock icon with SSL encryption copy
- credit card logos
…and saw no change in conversion rate.
This was for a site with the largest two age groups in Google Analytics demographic report being 55 – 64 and 65+ (thus, exactly the demographic you’d expect would need security badges).
In a second test, for a brand where the two largest age buckets were 25 – 34 and 35 – 44, we tested the inclusion of the following on the cart page:
- McAfee Secured badge
- Norton Secured badge
- Lock icon with “Shop with Confidence”
- A few store specific guarantees such as 20 year warranty and made in the USA
We saw no statistically significant difference in conversion rate. We tested multiple variations and the one with none of the above badges performed the worst during the test period, but the reduction in conversion rate from original was only 3.2% and statistical significance was only 68%. In other words, no statistically significant difference.
Growth Rock Recommendation: Do you need badges? Maybe not. Many brands on our list did not have them. They are very easy to test, so we suggest doing so. If for no other reason than to quell the debate about them in the office.
What is the Final Payment Submit Button Called?
Once again, we don't think the name of this button is likely to matter, but it can be fun to see what competitors are using, so here's our histogram:
Growth Rock Recommendation: Pick something and save your mental energy for other things.
Does Final Payment Submit Button Appear Above the Fold?
Another CRO “truism” is placing things above the fold. We’ve seen this work well in many contexts. Mobile checkout is not one of them.
Our AB Tests Show…
In our tests, consumers don't seem to care where the order or proceed-to-checkout buttons are; when they are ready, they know where to find them.
In our top 40 sites, slightly more than half (57.5%) did not have their final payment button above the fold.
Is There a Final Submission Confirmation or Review Page?
We find the final “review” or “confirmation” page an interesting discussion point.
Is it necessary?
Can the customer not review on the payment page?
Is it worth the extra page load and moment of pause?
The main arguments for this page are:
- To make sure there is no ambiguity for the customer before submitting their order
- Reduce customer service headaches post purchase if there are mistakes (e.g. wrong address)
- If you give them a chance to review, it will reduce errors on submission thereby increasing conversion rate
The counter-argument is of course that they may just be able to review on the final payment page itself and you don’t need to subject them to an additional page load.
We have not tested this but we think it’s interesting that there is very close to a 50/50 split in the top 40 sites.
Growth Rock Recommendation: This could very well be worth testing.
Newsletter Opt-In or Opt-Out Option?
Who likes email marketing more than ecommerce companies? No one. Well maybe email marketing software companies, but I digress.
55% of our Top 40 sites included an option to join a newsletter during checkout (almost always in the final payment step).
We did not test this (because we didn’t actually buy from all 40 sites) but it’s more than likely that 100% of the sites would put you on a newsletter after purchase even if they did not have a newsletter opt-in.
So it’s interesting to see that around 45% choose not to even give the customer a chance to “uncheck” the newsletter box and opt-out.
Several of these “NO” sites force you to sign up (Amazon, Zappos, Target) but many have a guest checkout option.
For example, here is Gap's final submit page; there was no box to join their newsletter the entire time. You can bet Gap will start sending me emails the moment I order.
All 40 Top Ecommerce Site Mobile Checkout Flows
How did we decide what the top 40 ecommerce sites were?
There are many lists ranking the top ecommerce retailers in the US and globally.
Many claim to have information on sales volume, but this is questionable, as many of the brands don't release it publicly, so it has to be inferred.
We felt the easiest way around this issue was simply to use Alexa.com's top sites list for the Shopping category. We started with 50 and removed 10 that weren't really "ecommerce" from a traditional UX perspective, or that were duplicates.
For example, Netflix is #2 in Alexa's Shopping category, but analysis of their mobile checkout flow is obviously not particularly useful for typical physical-product ecommerce retailers, so we excluded them.
Also, amazon.co.uk is largely a duplicate (from a UX perspective) of Amazon, so it was excluded.
The final list of 40 we used is below, with screenshots of their mobile checkout flows.
Top 40 eCommerce Mobile Checkout Flows
Recent AB tests we’ve done suggest that many ecommerce sites could see an increase in mobile conversion rate by adding a “bar” of navigation links at the top of their mobile homepage, instead of relying solely on the hamburger menu.
We’re calling this a “Top Nav Link Bar”, or just “Link Bar”.
The Link Bar is an alternative to the much-hated hamburger menu, which hides links behind the famous 3 bars (the "hamburger"). It's hated enough that simply Googling "hamburger menu" returns anti-hamburger-menu articles in the top 5 results!
In this article, we’ll discuss the Link Bar concept via two AB test case studies where we saw increases in visits to product pages and purchase conversion rate.
Finally, we’ll also show a set of design examples from popular ecommerce sites that implement a Link Bar concept in different ways.
Our hypothesis is that the Link Bar lets shoppers get to the product pages faster by exposing product and category page links normally hidden behind the hamburger menu. One less click is required and the links are more prominent, so it increases the chances of users proceeding “down funnel” and seeing products.
Let’s get to the two case studies.
Case Study 1: Mobile navigation Link Bar increases orders by 5% for an apparel store with 1000 products
First we have an apparel client that has over 1000 products across 9 categories (and multiple subcategories on their site).
So, pretty stereotypical ecommerce company.
What did the mobile homepage look like?
Since we anonymize clients, let’s use the mobile homepage of a well known brand that had a similar layout: Urban Outfitters.
Key characteristics of this mobile homepage (that were true of our client’s mobile homepage):
- Large image based full bleed photos that change depending on the current marketing campaign (about once a month)
- Main navigation hidden inside the hamburger menu
- If you scroll down far enough there are eventually links to categories
Here’s what we tested:
In the variation, we simply added the Link Bar to the homepage only. There were 9 categories.
Note we didn’t replace the hamburger menu, it’s still there and still is the most thorough way to navigate the different product categories.
But it’s no longer the easiest way — the Link Bar is.
The Link Bar was left-right scrollable and had arrows to help indicate that.
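The basic mechanics of a scrollable Link Bar need very little markup. A sketch (category names and class names are illustrative; the arrow affordance is left out for brevity):

```html
<!-- Sketch of a horizontally scrollable Link Bar -->
<nav class="link-bar">
  <a href="/tops">TOPS</a>
  <a href="/bottoms">BOTTOMS</a>
  <a href="/dresses">DRESSES</a>
  <a href="/accessories">ACCESSORIES</a>
</nav>
<style>
  .link-bar {
    display: flex;
    overflow-x: auto;       /* enables left-right scrolling */
    white-space: nowrap;
  }
  .link-bar a { padding: 10px 14px; flex: 0 0 auto; }
</style>
```

The hamburger menu stays in place; this bar simply sits below the navbar as an additional, more visible path into the catalog.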
Here are the results.
First, completed orders. After 28 days, we saw a 5% increase with 93% statistical significance:
Note that Optimizely's stats engine uses a more rigorous "two-tailed" statistical significance calculation, which does not consider this significant, but a traditional p-value calculation shows this:
So this is not a “runaway winner” by any means. The industry convention is to declare a winner if it reaches 95% statistical significance or higher when the test reaches your pre-determined number of visitors.
But that is, in the end, a “convention”.
With over 80,000 visitors, 2,300 conversion events per variation, and having run for exactly 4 weeks with the variation leading basically for the entire test, we felt the conclusion was “this is likely a winner and is more likely to perform better by 2% – 5% over longer periods.”
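For readers who want to reproduce this kind of calculation, here is a sketch of a two-proportion z-test that produces both one- and two-tailed p-values. The visitor and conversion counts below are illustrative stand-ins, not the exact test data:

```javascript
// Two-proportion z-test sketch: compare conversion rates of arms A and B.
function abTestSignificance(visitorsA, convA, visitorsB, convB) {
  const p1 = convA / visitorsA;
  const p2 = convB / visitorsB;
  const pooled = (convA + convB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  const z = (p2 - p1) / se;

  // Standard normal CDF via the Abramowitz & Stegun erf approximation
  const phi = (x) => {
    const t = 1 / (1 + 0.3275911 * (Math.abs(x) / Math.SQRT2));
    const poly =
      ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
        0.284496736) * t + 0.254829592) * t;
    const erf = 1 - poly * Math.exp(-(x * x) / 2);
    return x >= 0 ? (1 + erf) / 2 : (1 - erf) / 2;
  };

  const pOneTailed = 1 - phi(z);     // "variation beats original"
  const pTwoTailed = 2 * pOneTailed; // Optimizely-style two-tailed view (z > 0)
  return { z, pOneTailed, pTwoTailed };
}

// Illustrative numbers: ~40,000 visitors and ~2,300 conversions per arm,
// with a 5% relative lift in the variation.
const r = abTestSignificance(40000, 2300, 40000, 2415);
```

Because the two-tailed p-value is double the one-tailed value (for a positive lift), a test can clear a one-tailed 95% bar while failing the two-tailed version of the same bar, which is exactly the discrepancy described above.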
But that’s just one metric (albeit an important one). The story gets more interesting if you look at additional metrics.
Only the exposed listing pages showed an increase in pageviews
Pageviews of the category pages showed clear increases of 10% – 12% (with 99%+ significance), validating one of our critical hypotheses: that the Link Bar would send more users "down funnel".
For example here is the first category link on the left of the quick Link Bar we added (e.g. the “TOPS” link in the “B” mockup above):
The other two category pages showed similar results.
But those pageview increases were only seen for the exposed category links:
What about the links that were "hidden behind the scroll", in other words, the ones you needed to use the arrow or scroll to the right to reveal?
They showed no change in pageviews:
This was consistent for all the category page links that were hidden behind the scroll.
This confirms the original hypothesis of this test: Revealing links to product and category pages will increase the amount of customers reaching them.
Certainly, if category pages that were just one scroll to the right in our Link Bar didn't see an increase in pageviews, then hiding all links behind the hamburger menu does the site no favors in terms of getting shoppers to the products.
Takeaways for your mobile site:
- Test putting links to your most popular product categories at the top of your mobile homepage.
- Try making the bar scrollable and see if you can reproduce this result in your store.
- Do you see indications of an increase in completed orders like we did? Maybe your store shows a far more definitive increase in conversion rate than the slight possible lift we saw above.
Case Study 2: Health food brand sees 29% increase with a navigation Link Bar on the homepage
Next we have a very different ecommerce brand, in the health food space with 3 product flavors.
Again, the homepage had copy and images and links but you had to scroll down the page to get links to the 3 PDPs.
So we added the navigation Link Bar just like before:
The variation in this case had links directly to the PDPs of the 3 different flavors (which each had their own PDPs).
After 14 days, we saw a 29% increase in orders with 98% significance.
Traffic to this site was lower, however, so the test got only 139 vs. 107 conversions per variation. This is low; the difference is only about 30 orders, so again we have to add the qualifier that the variation "likely" performed better.
However there was no indication that it would perform worse than not having the links.
Link Bars can help expose customers to new products
In this case, of the 3 flavors, the second and third (Chocolate and Strawberry in the mockup above) saw large increases in PDP pageviews: +25% and +77% more visits, with 99.9% statistical significance and over 600 conversions per variation.
But, the most popular flavor did not see much of an increase.
In this case the site was known for its most popular flavor. Historically, it was the only flavor when the brand first launched. Referral links disproportionately went there, blog links disproportionately pointed to that flavor, and the homepage imagery and copy mostly talked about that flavor.
So in this case the Link Bar served to expose more customers to the rest of the company’s offerings.
This is a nice additional benefit of Link Bars. Note that those alternative flavors were inside the hamburger menu also, but as we saw in the first case study, having them exposed on the page (via the Link Bar) showed a definitive increase in visitors to those PDPs.
Conclusions and how to apply this to your own mobile ecommerce site
Taken together, these two tests, on two very different ecommerce stores (1000 products vs. 3 products), suggest a common theme:
Make it as easy as possible for mobile shoppers to get to your product offerings.
If you have hundreds or thousands of products, put links to your most popular categories as close to the top of the page as possible.
In the first example above, a natural iteration of the test (that has not yet been tested) would be to stack the links instead of having them be in one scrollable row.
This will give shoppers an even better overview of exactly what the store offers.
This should send even more visitors “down funnel” and perhaps give the test a more definitive win over the baseline.
If you have only a few products, create top nav links to the product detail pages.
Finally, as always, you should test this yourself. Don’t assume these results will apply to your store.
Both of the AB tests above saw definitive increases in visits to the category or product detail pages, but the increases in order rate weren’t “runaway” winners, which we define as 99%+ significance with hundreds or thousands of conversion events for each variation.
That's okay though; as we've written about before, not all ecommerce stores have the luxury of that much data. That doesn't mean you should throw up your hands and not test anything, nor does it mean you should fall back on the old-fashioned method of "debate designs in a room, loudest voice wins, implement it outright".
That’s even more dangerous.
Aside: We once had an in-house designer from a client ask if they could implement a hamburger menu on desktop because it "looked sleek". (Facepalm)
This is why testing is important, even if you don’t get picture perfect increases in conversion rate (99% significance, and thousands of conversions over many weeks).
What about desktop? Why is this mobile only?
The reason this isn’t relevant on desktop is because almost all ecommerce sites have exposed links to all categories (and often dropdowns to subcategories, aka a “mega menu”). So this is by definition almost always already implemented on desktop.
It’s just that the space constraints of mobile result in the hamburger menu.
Hopefully this article and the data we shared help you think beyond collapsing everything behind the hamburger menu and start opening up other possibilities.
On that note, our variations aren’t the only way to go about this. Here are several more examples of alternatives to the hamburger menu from different ecommerce mobile sites.
Ecommerce mobile homepage examples
Who is doing this well already?
Here are some other brands that have clear links at the top of the mobile homepage, getting rid of the complete dependence on the hamburger menu:
Gap has a lot of products and categories. They have clear links to the main categories at the top of their mobile homepage:
The use of photos is a nice touch and could possibly increase engagement with the links and clarity for certain stores.
Note that the category links are not sticky on scroll, whereas the links to Gap Inc.'s other brands at the top of the page are. Interesting.
Also in the apparel world, Abercrombie chooses to simply split by Men and Women. This is worth testing versus a deeper category split like Gap's above:
For search heavy stores, Lowes.com has a great example of both featuring search and using a suggested area to basically push some category links. We hadn’t been to Lowes.com on this device before so these were likely just categories they wanted to promote (versus a personalized list based on past visits).
Finally, here's a bolder homepage concept from Cos Clothing, which doesn't use a thin strip of suggested categories but instead dedicates the bulk of the homepage to sending shoppers to the right categories.
(Note by the time we published this article the Cosclothing homepage had changed to include a promotion at the top instead of full bleed photos linking to women and men’s departments.)
They have full bleed images for women and men followed by clear links.
We would love to test something this bold with one of our clients.
Final Aside: the homepage is often sacred ground for ecommerce organizations. People fight and negotiate over screen real estate there. So even we, as a third party optimization agency, often have severe restrictions on testing the homepage, much less radically redesigning it. Much thanks to the two clients who let us run the tests featured above.
Not presenting related items at the shopping cart step could be costing many ecommerce stores millions in potential revenue.
In particular I’ve noticed while large, well known brands do this consistently (see examples below), mid-size ecommerce stores often don’t, and that’s likely a mistake.
In this article, we'll show data from two AB tests where we added one-click upsells and cross sells.
The first increased average order value (AOV) by $55 (worth millions in annual revenue).
The second increased conversion rate by 13%, which for any 8 figure or greater ecommerce store is also worth 7 figures in extra annual revenue.
Finally, we’ll also show (and analyze) 5 live examples from well known brands of upselling and cross selling related products at the cart stage.
This way we hope you can find an upsell implementation that works for you.
Note: We are a conversion optimization agency exclusively focused on ecommerce. Want our conversion and UX experts to evaluate your upsells or optimize your conversion rate? Learn more about what we do on the homepage or contact us via the red button at the top.
How do upsells and cross selling work in ecommerce?
Some people have all sorts of specific definitions of "upsell", "cross sell", and "downsell".
For our purposes, I prefer the more general definitions of upselling and cross-selling, which just mean you're trying to get the customer to increase their order value by presenting additional items they might want.
It may be a more expensive item (upsell) or some add-on items (cross sell). Here's the most common type in ecommerce (discussed in more detail below):
Once you add to cart, Gap shows 4 additional items I can consider. We'll discuss the implementation details below (for example, here you need to click into each product detail page (PDP); you can't just add those items to cart directly), but that's the idea.
For now, let’s talk strategy.
As the two case studies in this article below show, upsells and cross sells can either:
- Increase AOV
- Increase conversion rate
(If you're curious how an upsell can increase conversion rate, scroll down to the second example.)
Let’s start with an AB test that does the former.
Upsells that increase AOV: $2 million/year extra revenue for an online furniture store
Our first example is from an online furniture store. Let’s say in this case that they sell sofas ranging from $850 to $2000+ with an AOV of $1200.
Their most popular sofas are leather, and what’s interesting in this case study is not the sale of the leather sofas, but of a particular upsell: a leather conditioning kit that helps protect the sofa, and costs between $40 – $80.
Something like this:
The conditioning kit is a perfect cross sell for a customer buying a leather sofa. It actively protects and lengthens the life of the thousand dollar or more purchase the customer is already making.
If you're already spending $1500 on a leather sofa, why not pay $60 to protect it and make it last longer?
But these complementary accessories were not easy to navigate to on the site at the time of this test, and they weren't promoted heavily.
So we hypothesized that mentioning it as an option at the cart step, and making it very easy to add to cart, would increase AOV.
Building our AB test from the hypothesis
You can turn a hypothesis into an actual UI/UX treatment in many different ways and this step is critical. Our hypothesis was:
Offering a leather conditioning kit as a one click upsell when a customer adds a sofa to cart will increase AOV and thus total revenue.
But how should we actually offer the leather conditioner in the cart?
With a photo?
As a one line item?
Do we add some copy to really “sell” it or keep it low key?
Could any of these decisions hurt sofa conversions themselves?
We opted to start low key because we felt that going from not mentioning the leather conditioner at all to mentioning it was a big enough change on its own.
Our variation design:
The pink strip is what we added.
We coded the plus icon to add the conditioning kit to cart on click. If the customer clicked the name of the conditioning kit instead, it took them to its product detail page (PDP).
Typically we run tests for around 2 – 4 weeks, but we ran this test for 41 days (nearly 6 weeks)! Why so long?
Because what we were looking for was a change in AOV, yet the current AOV was above $1,000 and the leather conditioning kit costs between $42 and $84.
So we were trying to detect a pretty small change.
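To get a feel for how small, here is a rough back-of-the-envelope sample size calculation (a sketch using the standard normal-approximation formula; the $500 order-value standard deviation is our assumption for illustration, not a figure from the test):

```python
import math
from statistics import NormalDist

def transactions_per_arm(sigma, delta, alpha=0.05, power=0.80):
    """Transactions needed per variation to detect a shift of `delta`
    in mean order value, given an order-value standard deviation
    `sigma` (two-sample z-test, normal approximation)."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return math.ceil(2 * (z * sigma / delta) ** 2)

# Assumption for illustration: order values spread with sigma ~ $500.
# Detecting a $55 shift in a $1,000+ AOV needs on the order of 1,300
# transactions per arm, i.e. thousands of orders in total.
print(transactions_per_arm(sigma=500, delta=55))
```

Note that halving the detectable shift roughly quadruples the required sample, which is why tests chasing small AOV changes need to run long.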
After 41 days, with over 4,000 transactions and $5,600,000 in revenue tracked, AOV had increased by $55, with 92% statistical significance.
The AOV increase held steady for the last 4 weeks of the test with statistical significance sitting in the 90% – 95% range the entire time.
Here is a plot of quantity sold per week of the upsell’s product SKU in Google Analytics’ ecommerce report:
Previously they were selling around 40 – 80 conditioning kits per week. Once we turned on the test (which means only 50% of users saw the variation), sales jumped immediately to 150 – 180 per week.
In fact, the warehouse ran out of leather conditioning kits when we turned this test to 100% of traffic and we had to turn it off temporarily until they could order more.
This increase in AOV, on average, was worth an extra $180,000 per month in revenue (that’s over $2,000,000 of extra revenue per year!).
Takeaways for your site
Ask yourself: Are there complementary, lower priced products that pair really well with your main product(s)?
Walk through the typical buying and checkout funnel:
- Is it obvious to customers that these products exist? It should be.
- Is it easy for them to add them to cart? It should be.
- Does the copy position them in a way that makes it clear they complement the primary products? It should.
Upsells that increase conversion rate: 13% increase in orders for a health food store
This second case study surprised even us when it happened.
Building on the success of upsell tests like the one above, we decided to test something similar for an online health food brand that sold nutrition bars (same disclaimer).
The key difference from the example above though is this: They only sold that one product in 3 different flavors.
That’s it. There were no other products. All 3 flavors had the same price point.
So how do you offer an upsell when you largely just have one product in 3 flavors?
It’s not the case that customers didn’t know about the other products: On the homepage, all 3 products were mentioned. In the navbar, all 3 products were mentioned. Even on each PDP the other 2 flavors were mentioned.
What we decided to do is this: when a customer adds a product to cart and an add-to-cart “drawer” slides in, we offer a single “pack” of one of the other flavors at a discount (A below).
Packs typically cost around $8, but customers can only buy 2, 6, 12, or 18 packs (AOV for this site was around $57).
So when a customer added one of these to cart, our upsell offered a single pack of another flavor for $6 (B and C). That’s it: you can only add 1 of the alternate flavor, but you get a slight discount on it.
We tested two variations that were functionally similar but had slightly different designs (white border versus colored background).
Results: No change in AOV but an increase in conversion rate
Our hope for this test was that it would get more customers thinking about adding multiple flavors to their cart and thus increase AOV. In other words, that they wouldn’t just stop at the single pack but would decide, “Well, let me also add more of the other flavors.”
That didn’t happen.
But what did happen was positive. We simply saw an increase in orders (“conversion rate”) on the site as a whole.
Specifically we saw a 13.4% increase with 95% significance with over 1,000 conversion events (orders).
We tested two variations over 2 weeks with a slight design difference and both showed the 13% increase over the original (no cross sell) with 95% significance.
Why did a cross-sell increase conversion rate?
Why did adding a single nutrition bar of an alternate flavor increase conversion rate?
Our hypothesis is that customers simply wanted to take advantage of the “deal” on the alternate flavor. They get to the site, browse the flavors, pick a flavor, add it to cart.
Stop and think about what your mindset, as a customer, would be at that exact moment: A part of you will have a slight doubt about your flavor choice:
“Hmm, maybe that other flavor was better?”
“I wonder what that would taste like?”
“Should I go through with it and buy this?”
In our variation, at that moment, customers saw a small $6 discount offer on one of the other flavors.
We think for some fraction of customers this was enough to push them over the edge to buy.
Basically, the cross sell acted as a discount or add-on special offer that encouraged more purchases of the main product.
Takeaways for your site
If you don’t have upsells like the first example, ones that complement the main product and could increase AOV, can you instead offer a similar product at a slight discount?
Are there multiple flavors or varieties of your product that customers likely debate about choosing?
Can you offer one of the other flavors at checkout at a slight discount?
Upsell and Cross sell examples in Ecommerce
Finally, for inspiration, here are a few upsell/cross sell examples from well known (and sometimes well optimized) ecommerce brands (in the U.S.).
Under Armour: Customers Also Bought
The most common type of upsell is in large-SKU stores (in particular apparel), where sites suggest other similar products when you add one to cart:
If you’re not testing something like this and you have a store with many products (over 20), you should test it immediately. Start without fancy algorithms and just put your most popular items there.
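A “no fancy algorithm” starting point can be as simple as counting which products are bought together in past orders and falling back to global bestsellers. A minimal sketch (the order data and product names below are hypothetical):

```python
from collections import Counter
from itertools import permutations

def build_suggestions(orders, top_n=4):
    """From past orders (each a list of product ids), precompute for
    every product the products most often bought alongside it, padded
    with global bestsellers as a fallback."""
    popularity = Counter(p for order in orders for p in order)
    co_bought = {}
    for order in orders:
        for a, b in permutations(set(order), 2):
            co_bought.setdefault(a, Counter())[b] += 1
    bestsellers = [p for p, _ in popularity.most_common()]
    suggestions = {}
    for product in popularity:
        picks = [p for p, _ in co_bought.get(product, Counter()).most_common(top_n)]
        # pad with bestsellers, skipping the product itself and duplicates
        picks += [p for p in bestsellers if p != product and p not in picks]
        suggestions[product] = picks[:top_n]
    return suggestions

# Hypothetical order history
orders = [["tee", "hoodie"], ["tee", "socks"], ["hoodie", "socks"], ["tee", "hat"]]
print(build_suggestions(orders)["tee"])
```

The whole table can be precomputed offline, so the add-to-cart drawer only does a dictionary lookup, which matters for front-end test performance.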
When testing these, try the following implementation and UI details:
Test different algorithms or logic for suggesting products. This may not be easy to do with front-end AB testing alone, but it can be done. If you’re curious how, contact us to discuss.
Test the number of items presented. Try minimizing carousels like Under Armour’s and showing 4, or even 6, items if space allows.
Test showing and not showing product prices or even titles. You may be thinking “What?!” but there is precedent for this.
Gap: No Prices in Suggested Products
Bare Necessities: No Prices or Product Names
In general, our experience is that showing more products and fewer carousel arrows is better. Requiring clicks reduces the number of users who see the products.
Wayfair: Accessories Upsells
Exactly like the first case study at the beginning, Wayfair shows very complementary accessories for large furniture items added to cart:
Try testing these UI and implementation details:
Test the number of upsell items. In our first client example at the beginning, we followed up the test profiled in this article with many other tests that presented additional upsells for the sofa. They didn’t make much of a difference. Nothing beat suggesting the conditioning kit as a single upsell.
Test different add to cart functionality. Above, Wayfair lets you choose color and quantity. In a test not profiled here, allowing multiple quantities did worse than simply suggesting a single quantity of a particular upsell added in one click. But this is very store dependent, so it should be tested.
Harry’s Razors: Modal for Details
Similar to Wayfair, but at much smaller price points, when I add a razor, I get suggestions for sensible, complementary products I can add.
A click on the plus sign doesn’t add the balm, however; it pops open a modal where I can choose details:
This is an interesting choice. I’d be curious if they have tested this versus just a one click add to cart that defaults to Quantity 1 and the most popular size.
Especially for products like shave balm, where customers aren’t expecting to be able to choose a size (unlike, say, jeans) and it’s not obvious why someone would want to add more than 1, I would think this is a must-test issue.
Want us to evaluate your upsells or optimize your conversion rate? Contact us on the homepage or via the red button at the top.
I feel like I can spend just a few paragraphs and a graph motivating why waiting for statistical significance is important in AB testing.
Here is a test we ran for an ecommerce client:
The goal we are measuring here is successful checkouts (visitors reaching the receipt page). The orange variation is beating the blue by 30.4% (2.96% vs. 2.27%), and the test shows 95% “chance to beat” or “statistical significance”!
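That 95% figure can be reproduced from the numbers above with a simple two-proportion z-test (a sketch; the conversion counts are our back-calculation from the rates shown, and real testing tools may compute this differently, e.g. with Bayesian methods):

```python
from statistics import NormalDist

def chance_to_beat(conv_a, n_a, conv_b, n_b):
    """One-sided 'chance to beat' of variation B over A via a
    two-proportion z-test (normal approximation, unpooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = (p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b) ** 0.5
    return NormalDist().cdf((p_b - p_a) / se)

# ~2.27% vs ~2.96% on ~3,000 visitors each, as in the example above:
# comes out to roughly 0.95, i.e. "95% chance to beat"
print(chance_to_beat(68, 3000, 89, 3000))
```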
This client has an 8 figure ecommerce business. Let’s say it’s $30 million a year in revenue (close enough…me hiding who they are and their revenue should give you confidence that if you work with us, I won’t go announcing your financials all over the internet).
So a 30% lift in orders is worth $9,000,000 in extra revenue a year!
(By the way, this is the power of AB testing).
This lure of extra revenue is enticing. Very much so. So much so that it can lead otherwise well meaning people to make a grave mistake: stopping a test too early.
In this case, you might be tempted to think: well, the test has reached 95% significance, which everyone says is the cutoff. It had 3000 visitors to each variation. This is convincing. Let’s stop the test and run the orange variation at 100%!
But here’s a secret: both variations were the same.
This was, in CRO parlance, an A/A test. We ran it not to “test significance”, which Craig Sullivan, in that preceding link, argues is a waste of time (and I agree), nor to give me an opportunity to look smart by writing this article, but just to check that revenue goals were measuring properly (they were, thank you for asking).
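You can see how easily an A/A setup produces “significant” results by simulating one. The sketch below (all parameters are arbitrary illustrations) runs many A/A tests where both arms share the same true conversion rate, checking significance repeatedly as data accumulates, the way an impatient tester peeks:

```python
import random
from statistics import NormalDist

def significant(c_a, c_b, n):
    """Two-sided two-proportion z-test at alpha = 0.05."""
    p = (c_a + c_b) / (2 * n)          # pooled conversion rate
    if p in (0.0, 1.0):
        return False
    se = (2 * p * (1 - p) / n) ** 0.5
    z = abs(c_b / n - c_a / n) / se
    return NormalDist().cdf(z) > 0.975

random.seed(7)
TRUE_RATE, CHECKS, BATCH, TRIALS = 0.025, 12, 500, 300

fixed_wins = peek_wins = 0
for _ in range(TRIALS):
    c_a = c_b = n = 0
    peeked = False
    for _ in range(CHECKS):                      # look at the data 12 times
        c_a += sum(random.random() < TRUE_RATE for _ in range(BATCH))
        c_b += sum(random.random() < TRUE_RATE for _ in range(BATCH))
        n += BATCH
        peeked = peeked or significant(c_a, c_b, n)
    fixed_wins += significant(c_a, c_b, n)       # only the final look
    peek_wins += peeked

print(f"fixed-horizon false positives: {fixed_wins / TRIALS:.0%}")
print(f"with peeking:                  {peek_wins / TRIALS:.0%}")
```

At a fixed horizon the false-positive rate sits near the nominal 5%, but declaring a winner the first time any peek crosses 95% typically inflates it several-fold, which is exactly how a same-vs-same test can flash an impressive “win”.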
So how in the world are you supposed to know that this test is not worth stopping? Or as CRO people say, how do you know when you’re supposed to “call” the test?
You actually need two safeguards to make sure you don’t get duped by random fluctuations of data, i.e. statistical noise.
Safeguard 1: Statistical Significance
Safeguard 2: Sample Size
In this case the test satisfied the first safeguard, statistical significance. That’s exactly why the 2nd safeguard, sample size, exists.
We’ll discuss what both mean, in English, not math — well maybe a little math, but mostly English — in future articles (you can join our newsletter here). But for now, take the above example as a warning to not just stop a test the moment it reaches 95% significance.
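A practical way to apply the sample-size safeguard is to compute, before the test starts, how many visitors each variation needs to detect the smallest lift you care about. A rough sketch using the normal-approximation formula (the baseline rate and target lift below are illustrative):

```python
import math
from statistics import NormalDist

def visitors_per_arm(base_rate, rel_lift, alpha=0.05, power=0.80):
    """Visitors needed per variation to detect a relative lift in
    conversion rate (two-sided z-test, normal approximation)."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    delta = base_rate * rel_lift     # absolute lift we want to detect
    return math.ceil(2 * z ** 2 * base_rate * (1 - base_rate) / delta ** 2)

# e.g. a 2.27% baseline conversion rate, smallest lift we care about = 10%
print(visitors_per_arm(0.0227, 0.10))
```

At a ~2.27% baseline, detecting a 10% relative lift takes on the order of 68,000 visitors per arm, which is why 3,000 visitors per variation, as in the A/A example above, proves very little even at “95% significance”.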
Finally, in addition to the two safeguards above, there are also a few details you should pay attention to when deciding whether to stop a test:
First, are you tracking multiple goals through your purchase or signup funnel?
You should be paying attention to supporting goals. Do the supporting goals show a consistent result? (They don’t have to, and they may not, but it’s good to know.) In the A/A test example above, most actually did, but some did not. For example, here is the goal that tracks clicks on the size dropdown on the PDP; the results show largely no difference at all (note the 45% and 55% figures, an essentially even chance of beating each other):
Tracking multiple goals will help give you a more holistic picture of what’s happening. I’m not saying all goals have to show the same result for a test result to be valid. The world is a complex place. But if just one or two goals are showing a big difference but the others aren’t, you should ask why that could be.
Second, pay attention to purchase cycles
Again, I have to word this in vague terms like “pay attention to”. If you’re thinking “OMG, just tell me what I should do,” your frustration is noted.
“Purchase cycles,” for most online businesses, means calendar weeks. Conversion rates and audience types vary by day of the week, so if you start a test on Tuesday and end it on Friday, you risk seeing different results than if you had run it for a full week. Like anything, it’s good to get multiple weeks in so you know one week wasn’t an anomaly.
Third, pay attention to seasonality. For a swimsuit business, things that work in July and things that work in December may be different things. (They may not, but they may be.) Again, when you run test after test after test, you learn to talk in slightly vague terms because you’ve seen so many different results.
I see a lot of clients use just these goals on ecommerce AB tests in Optimizely or VWO (Visual Website Optimizer):
- Engagement (included by default by Optimizely)
That’s a good start (except Engagement: have you ever made an actionable decision based on the engagement goal?).
But you could get a lot more information by adding upstream goals. Here are the most common that you should add:
- Add to cart button clicks
- Proceed to checkout page
- Pageview goals for your entire funnel
- All PDP pageviews
- Cart Pageviews
- Checkout pageviews (this can be multiple different pageview goals if your checkout flow is separated into different pages like /checkout/shipping, /checkout/payment/, etc.)
Here are some common scenarios showing why the funnel goals are important:
Note: If you’re curious about the difference between Optimizely vs VWO (plus Adobe Target), you can read our article here.
Scenario 1: You run a test and checkouts are up 10% with 92% significance
You are not tracking upstream goals: You are only measuring checkouts and not the upstream goals. You decide the test has been running for a while and is probably significant (this is dangerous, but that’s another topic). You stop the test and declare the variation a winner and implement it.
You are tracking upstream goals: You are measuring the funnel goals and notice that pageviews of the checkout page, pageviews of the cart page, and clicks on the proceed to checkout and add to cart buttons are down. So why are checkouts up? Something doesn’t look right so you run the test for another 2 weeks. After 2 weeks the checkout goal regresses to the other goals and is also down a few percent versus original. Phew, good thing you didn’t call the test early when checkouts were looking up.
Scenario 2: You implement a variation that everyone “just knows” is going to win.
You are not tracking upstream goals: Checkouts tank and are down 30% with 99% significance after 1 week. Uh oh. Why?! You sit around the room and debate a bunch of reasons why but in reality no one really knows.
You are tracking upstream goals: You notice in your upstream goals that add to cart and proceed to checkout goals are up 20% as expected, but everyone drops off at the checkout page. Coincidentally, your variation made some changes to that page as well. You look further: watching session recordings of the checkout page, you notice some unintended behavior on mobile caused by your variation. It pushed some important information far down the page. Culprit found! Now you can re-run the test with that mobile issue fixed.
There are more scenarios than this, but these are two common situations (one with a positive checkout result and another with a negative) where tracking upstream goals can make a huge difference (possibly costing or making the company millions).
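The diagnosis in both scenarios boils down to comparing stage-to-stage rates, not just end-to-end conversion. A minimal sketch (the funnel counts below are hypothetical, loosely echoing Scenario 2):

```python
def stage_rates(funnel_counts):
    """Given ordered (stage, visitors) pairs, return each stage's
    conversion rate relative to the previous stage."""
    rates = {}
    for (_, prev_n), (stage, n) in zip(funnel_counts, funnel_counts[1:]):
        rates[stage] = n / prev_n
    return rates

# Hypothetical counts: the variation wins at add-to-cart but
# collapses between the checkout page and the completed order
control   = [("pdp", 10000), ("add_to_cart", 1200), ("checkout", 700), ("order", 280)]
variation = [("pdp", 10000), ("add_to_cart", 1450), ("checkout", 840), ("order", 200)]

for stage in ("add_to_cart", "checkout", "order"):
    c, v = stage_rates(control)[stage], stage_rates(variation)[stage]
    print(f"{stage:12s} control {c:.1%}  variation {v:.1%}")
```

Laid out this way, the drop-off localizes itself: everything upstream is up, the checkout-to-order rate is down, so the investigation starts at the checkout page.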
In our experience when a variation is a clear winner, all or most goals will show an upward trend. That is: more add to cart clicks, more views of the checkout page, more successful checkouts, more revenue.
Not all tests will show that. But if you have a test that runs for a fixed period of time and reaches 99% significance, with hundreds or thousands of checkouts per variation over multiple purchase cycles (calendar weeks), it’s hard to argue with that, and that does happen.
But most AB tests (as anyone who has done a lot of testing will attest to) are not so textbook clean. Like anything in life, the majority of scenarios are in the grey area. When that happens, tracking more goals than just the end goal (checkouts and revenue) will help interpretation a lot.
Want help setting up goals for your ecommerce site in Optimizely? Contact us.