Not presenting related items at the shopping cart step could be costing many ecommerce stores millions in potential revenue.
In particular, I’ve noticed that while large, well-known brands do this consistently (see examples below), mid-size ecommerce stores often don’t, and that’s likely a mistake.
In this article, we’ll show data from two AB tests where we added one-click upsells and cross sells.
The first increased average order value (AOV) by $55 (worth millions in annual revenue).
The second increased conversion rate by 13%, which for any 8 figure or greater ecommerce store is also worth 7 figures in extra annual revenue.
Finally, we’ll also show (and analyze) 5 live examples from well known brands of upselling and cross selling related products at the cart stage.
This way we hope you can find an upsell implementation that works for you.
Note: We are a conversion optimization agency exclusively focused on ecommerce. Want our conversion and UX experts to evaluate your upsells or optimize your conversion rate? Learn more about what we do on the homepage or contact us via the red button at the top.
How do upsells and cross selling work in ecommerce?
Some people have all sorts of specific definitions of “upsell”, “cross sell”, and “downsell”.
Quickly, for our purposes, I prefer the more general definitions of upselling and cross-selling, which just mean you’re trying to get the customer to increase their order value by presenting additional items they might want.
It may be a more expensive item (upsell) or some add-on items (cross sell). But here’s the most common type in ecommerce (discussed in more detail below):
Once you add to cart, Gap shows 4 additional items I can consider. We’ll discuss the implementation details below (for example, here you need to click into each product detail page (PDP); you can’t just add those items to cart), but that’s the idea.
For now, let’s talk strategy.
As the two case studies in this article below show, upsells and cross sells can either:
- Increase AOV
- Increase conversion rate
(If you’re curious how an upsell can increase conversion rate, scroll down to the second example.)
Let’s start with an AB test that does the former.
Upsells that increase AOV: $2 million/year extra revenue for an online furniture store
Our first example is from an online furniture store. Let’s say in this case that they sell sofas ranging from $850 to $2000+ with an AOV of $1200.
Their most popular sofas are leather, and what’s interesting in this case study is not the sale of the leather sofas, but of a particular upsell: a leather conditioning kit that helps protect the sofa, and costs between $40 – $80.
Something like this:
The conditioning kit is a perfect cross sell for a customer buying a leather sofa. It actively protects and lengthens the life of the thousand dollar or more purchase the customer is already making.
If you’re already spending $1,500 on a leather sofa, why not pay $60 to protect it and make it last longer?
But these complementary accessories were not easy to navigate to on the site at the time of this test. They weren’t promoted heavily.
So we hypothesized that mentioning it as an option at the cart step, and making it very easy to add to cart, would increase AOV.
Building our AB test from the hypothesis
You can turn a hypothesis into an actual UI/UX treatment in many different ways and this step is critical. Our hypothesis was:
Offering a leather conditioning kit as a one click upsell when a customer adds a sofa to cart will increase AOV and thus total revenue.
But how should we actually offer the leather conditioner in the cart?
With a photo?
As a one line item?
Do we add some copy to really “sell” it or keep it low key?
Could any of these decisions hurt sofa conversions themselves?
We opted to start low key because we felt that going from not mentioning the leather conditioner at all to mentioning it was a big enough change on its own.
Our variation design:
The pink strip is what we added.
We coded the plus icon to add the conditioning kit to cart on click. If the customer clicked the name of the conditioning kit instead, it took them to its product detail page (PDP).
Typically we run tests for around 2 – 4 weeks, but we ran this test for 41 days (nearly 6 weeks)! Why so long?
Because what we were looking for here was a change in AOV, but the current AOV was above $1,000 and the leather conditioning kit costs between $42 and $84.
So we were trying to detect a pretty small change.
After 41 days, over 4,000 transactions, and $5,600,000 in tracked revenue, AOV had increased by $55, with 92% statistical significance.
The AOV increase held steady for the last 4 weeks of the test with statistical significance sitting in the 90% – 95% range the entire time.
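For readers curious what “92% statistical significance” means mechanically here: with thousands of orders per variation, you can approximate the chance the variation’s mean order value genuinely beats the original’s with a simple two-sample z-test on the order values. A minimal sketch, using simulated order values (NOT the client’s data) in roughly this test’s ballpark:

```python
import random
import statistics
from statistics import NormalDist

def mean_diff_significance(control, variation):
    """Chance the variation's mean (e.g. AOV) beats the control's,
    via a two-sample z-test (normal approximation; fine when each
    group has thousands of orders)."""
    n1, n2 = len(control), len(variation)
    m1, m2 = statistics.fmean(control), statistics.fmean(variation)
    se = (statistics.variance(control) / n1
          + statistics.variance(variation) / n2) ** 0.5
    z = (m2 - m1) / se
    return m2 - m1, NormalDist().cdf(z)

# Simulated order values: ~$1,200 AOV, with ~25% of variation
# orders adding a ~$60 conditioning kit. Illustrative only.
random.seed(1)
control = [random.gauss(1200, 450) for _ in range(2000)]
variation = [random.gauss(1200, 450) + (60 if random.random() < 0.25 else 0)
             for _ in range(2000)]

lift, confidence = mean_diff_significance(control, variation)
print(f"AOV lift: ${lift:.0f}, chance variation is better: {confidence:.0%}")
```

The large variance in order values relative to the size of the lift is exactly why this test needed so many transactions to detect a ~$55 change.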
Here is a plot of quantity sold per week of the upsell’s product SKU in Google Analytics’ ecommerce report:
Previously they were selling around 40 – 80 conditioning kits per week. Once we turned on the test (which means only 50% of users saw the variation), sales jumped immediately to 150 – 180 per week.
In fact, the warehouse ran out of leather conditioning kits when we turned this test to 100% of traffic and we had to turn it off temporarily until they could order more.
This increase in AOV, on average, was worth an extra $180,000 per month in revenue (that’s over $2,000,000 of extra revenue per year!).
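As a back-of-the-envelope check on that figure (all inputs below are approximations from the numbers quoted above; the exact $180,000/month presumably reflects full traffic and precise order counts we aren’t publishing):

```python
# Rough reconstruction of the revenue math. All inputs approximate.
transactions_41_days = 4000                         # tracked during the test
orders_per_month = transactions_41_days / 41 * 30   # ~2,900 orders/month
aov_lift = 55                                       # dollars per order

extra_monthly_revenue = orders_per_month * aov_lift
extra_annual_revenue = extra_monthly_revenue * 12

print(f"${extra_monthly_revenue:,.0f}/month, ${extra_annual_revenue:,.0f}/year")
```

Even with these rounded inputs the math lands in the same range: a ~$55 AOV lift on a few thousand orders per month compounds to seven figures per year.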
Takeaways for your site
Ask yourself: are there complementary, lower priced products that pair really well with your main product(s)?
Walk through the typical buying and checkout funnel.
- Is it obvious to customers that these products exist? It should be.
- Is it easy for them to add them to cart? It should be.
- Does the copy position them in a way that makes it clear they complement the primary products? It should.
Upsells that increase conversion rate: 13% increase in orders for a health food store
This second case study surprised even us when it happened.
Building on the success of upsell tests like the one above, we decided to test something similar for an online health food brand that sold nutrition bars (same disclaimer).
The key difference from the example above though is this: They only sold that one product in 3 different flavors.
That’s it. There were no other products. All 3 flavors had the same price point.
So how do you offer an upsell when you largely just have one product in 3 flavors?
It’s not the case that customers didn’t know about the other products: On the homepage, all 3 products were mentioned. In the navbar, all 3 products were mentioned. Even on each PDP the other 2 flavors were mentioned.
What we decided to do is this: when a customer adds a product to cart and an add-to-cart “drawer” slides in, we offer a single “pack” of one of the other flavors at a discount (A below).
Packs typically cost around $8, but customers can only buy 2, 6, 12, or 18 packs (AOV for this site was around $57).
So when a customer added one of these to cart, our upsell offered a single pack of another flavor for $6 (B and C). That’s it: you can add only 1 of the alternate flavor, but you get a slight discount on it.
We tested two variations that were functionally similar but had slightly different designs (white border versus colored background).
Results: No change in AOV but an increase in conversion rate
Our hope for this test was that it would get more customers thinking about adding multiple flavors to their cart and thus increase AOV. In other words, we hoped they wouldn’t just stop at the single pack but decide, “Well, let me also add more of the other flavors.”
That didn’t happen.
But what did happen was positive. We simply saw an increase in orders (“conversion rate”) on the site as a whole.
Specifically, we saw a 13.4% increase with 95% significance and over 1,000 conversion events (orders).
We tested two variations over 2 weeks with a slight design difference, and both showed the roughly 13% increase over the original (no cross sell) with 95% significance.
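For reference, a significance figure like that comes from comparing two conversion proportions. Here’s a minimal two-proportion z-test sketch; the visitor and order counts below are made up, only in the rough ballpark of this test:

```python
from statistics import NormalDist

def conversion_significance(conv_a, n_a, conv_b, n_b):
    """Chance variation B's conversion rate beats A's, via a pooled
    two-proportion z-test (normal approximation; fine with hundreds
    of conversions per variation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return NormalDist().cdf(z)

# Made-up counts: ~13% relative lift with ~500 orders per variation.
confidence = conversion_significance(500, 20000, 566, 20000)
print(f"Chance variation beats original: {confidence:.0%}")
```

Note how many visitors it takes before a relative lift of this size clears 95%; with only a few dozen orders per variation the same lift would be indistinguishable from noise.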
Why did a cross-sell increase conversion rate?
Why did adding a single nutrition bar of an alternate flavor increase conversion rate?
Our hypothesis is that customers simply wanted to take advantage of the “deal” on the alternate flavor. They get to the site, browse the flavors, pick a flavor, add it to cart.
Stop and think about what your mindset, as a customer, would be at that exact moment: A part of you will have a slight doubt about your flavor choice:
“Hmm, maybe that other flavor was better?”
“I wonder what that would taste like?”
“Should I go through with it and buy this?”
In our variation, at that moment, customers saw a small $6 discount offer on one of the other flavors.
We think for some fraction of customers this was enough to push them over the edge to buy.
Basically, the cross sell acted as a discount or add-on special offer that encouraged more purchases of the main product.
Takeaways for your site
If you don’t have upsells like the first example that complement the main product and could possibly increase AOV, can you instead offer a similar product at a slight discount?
Are there multiple flavors or varieties of your product that customers likely debate about choosing?
Can you offer one of the other flavors at checkout at a slight discount?
Upsell and Cross sell examples in Ecommerce
Finally, for inspiration, here are 5 upsell/cross sell examples from well known (and sometimes well optimized) ecommerce brands (in the U.S.).
Under Armour: Customers Also Bought
The most common type of upsell appears in stores with many SKUs (in particular apparel), which suggest other similar products when you add one to cart:
If you’re not testing something like this and you have a store with many products (over 20), you should test it immediately. Start without fancy algorithms and just put your most popular items there.
When testing these, try testing these implementation and UI details…
Test different algorithms or logic for suggesting products. This may not be easy to do with front end AB testing alone, but it can be done. If you’re curious how, contact us to discuss.
Test the number of items presented. Try minimizing carousels, like Under Armour does, and showing 4, or even 6, items if space allows.
Test showing and not showing product prices or even titles. You may be thinking “What?!” but this has precedent.
Gap: No Prices in Suggested Products
Bare Necessities: No Prices or Product Names
In general, our experience is that showing more products and fewer carousel arrows is better. Requiring clicks reduces the number of users who see the products.
Wayfair: Accessories Upsells
Exactly like the first case study at the beginning, Wayfair shows very complementary accessories for the large furniture items added to cart:
Try testing these UI and implementation details:
Test the number of upsell items. In our first client example at the beginning, we followed up the test profiled in this article with many other tests that presented additional upsells alongside the sofa. They didn’t make much of a difference. Nothing beat just suggesting the conditioning kit as a single upsell.
Test different add to cart functionality. Above, Wayfair lets you choose color and quantity. In a test not profiled here, allowing multiple quantities did worse than simply suggesting a single quantity of a particular upsell in one click. But this is very store dependent, so it should be tested.
Harry’s Razors: Modal for Details
Similar to Wayfair, but at much smaller price points, when I add a razor, I get suggestions for sensible, complementary products I can add.
A click on the plus sign doesn’t add the balm, however; it pops open a modal where I can choose details:
This is an interesting choice. I’d be curious if they have tested this versus just a one click add to cart that defaults to Quantity 1 and the most popular size.
Especially for products like shave balm, where customers aren’t expecting to be able to choose a size (unlike, say, jeans) and it’s not obvious why someone would want to add more than 1, I would think this is a must-test issue.
Want us to evaluate your upsells or optimize your conversion rate? Contact us on the homepage or via the red button at the top.
Let’s be upfront here, this is, in the end, a features versus benefits case study. But there are a few reasons this one is worth reading.
First, unlike every other marketing expert telling you to focus on benefits over features with no data to back it up, we actually AB tested it, so there’s data here (including tests that didn’t work).
Secondly, this case study is for a SaaS company. (We were trying to optimize their conversion rate via free trial starts from their homepage.) And the thing about software companies is that they love talking about their features.
And frankly, I don’t blame them.
Below, we show user research that supports the assertion that prospective customers’ primary barriers to signup were feature related (“How does [this or that] work?”, “Do you integrate with [such and such software]?”).
In the end we show how following the original user research led us to test adding more features, which did not improve conversion rates (free trial starts from the homepage), but how a simple list of three benefits (inspired by, of all things, a promo video) did improve free trial starts.
We hope various SaaS companies or even other startups (subscription services, other business models) could use these results to test similar benefit summaries to possibly increase their own conversion rates.
User Research: Can we optimize conversions by showing how this software works?
When I Work, a client of Growth Rock, sells employee scheduling software for brick and mortar businesses with hourly employees.
They are one of the leading platforms for this kind of software in the industry and their offering has a lot of genuinely useful features:
These features are listed out in a separate “Features” page, but not on their homepage, which only included (at the time) a high level overview of key features.
A simple hypothesis thus emerged: Could listing some of these features on the homepage, or key landing pages, improve free trial conversion rates?
Being responsible conversion optimizers, however, we turned to user research first (before running, guns blazing, into an AB test).
We asked users on the homepage:
What else do you want to know about When I Work before signing up?
We categorized their responses via our Sort and Measure Method of quantifying text-based survey responses and got this high level overview:
The “how it works” category was by far the leader.
Note: Cost is almost always a big topic in user polls. But price is price. You can play with ways to frame it, but often you have little you can do on that front. Instead, if you find out ways to increase perceived value, it helps counteract consumers’ obsession with price.
This category of responses included things like this:
- “Will this work for scheduling multiple shifts for one employee?”
- “How can my employees log in using a laptop?”
- “Can I schedule people at different locations?”
It also included a slew of questions that weren’t more specific than “So how does this thing actually work?” That is, we got the sense they just wanted to see the software in action.
Note: This is common. Customers for software want to see the software. Don’t you?
All the more reason, we figured, to show more screenshots. Screenshots were of course abundant on the feature page.
The First Approach: Give the Users What They Want, Features
So, with this hypothesis backed by some user voices, we decided to test adding features and a “how it works” explanation to some landing pages that were near replicas of the homepage (for no other reason than the fact that the homepage had other tests running at the time).
We tested this in two ways:
- via scrollable images and text,
- via a “How It Works” video with a list of features as icons below the video.
We wanted to make sure we weren’t just seeing the effects of a particular layout, format, design, or copy.
After over 30 days and hundreds of signups per variation, there was no statistical difference in conversion rate.
Too bad, no difference. (Note: For client confidentiality, we blur actual conversion rates.)
We even tried other variations to list features on various landing pages and the homepage, such as this one:
Adding this list of features reduced signups by 10%. Argh.
We should mention that these features are genuinely useful to When I Work’s customers. Customers use them, and historically they have often requested these features. So we knew they were valuable. Plus, users were asking about these things in the user research polls. And yet variation after variation of listing features in different ways did not manage to increase conversion rates.
The Second Approach: Simple benefits
So we stopped and went another route, also built on user research: Testimonial Analysis.
We analyzed video testimonials of When I Work customers talking about what they liked about the product. Keep in mind, these videos are produced by the company and heavily edited. Nonetheless, you can hear customers describe what they like about the product in their own words.
In these videos, customers did mention features they liked, but we noticed two key characteristics:
- They spent a lot of time talking about their frustrations
- Features were almost always wrapped in the benefits they provided to the business.
For example, instead of saying “We love that When I Work sends schedule updates by SMS” (feature), they’d say something like “It’s really easy to send updates to my whole team” (benefit), or “my team loves that they can easily see updates and schedule swaps by text” (benefit that mentions a feature).
Could it be this simple? Or even this cliche? (“Benefits over features!”)
We tested this approach via an experiment that added a short “Benefits” section to the top of their homepage (which was constructed similar to most paid media landing pages).
It was only one variation, and it was just adding the above section to the page. That’s it. “Spend more time growing your business” was probably the “stretch”-iest benefit on there in that it didn’t allude to any features directly, just a general mention of time savings.
Adding this section increased signups on the homepage by 10%.
(Note: Our AB testing software, Optimizely, is only showing 8% significance because it uses a statistical model that factors in sample size, which could have been larger. The p-value of this experiment, though, was 0.032, i.e. 97% statistical significance. With that plus over 1,000 conversion events per variation across multiple weeks and the variation winning the entire time, we felt comfortable calling this test.)
An observation and a guess
I found two aspects of this result interesting.
First, I love the simplicity of the benefit section. The features sections are elaborate. Take a look at their features page, you can scroll through a huge list of features, with images, zoom ins, etc. The benefits section, on the other hand, was just 3 points in text with some stock icons on top. Yet it worked.
Second, I’m guessing this result is audience specific. In particular, their target customers are small business owners (owners of cafes, restaurants, dentist offices, etc.). They are often not tech savvy. So it makes sense that they may not be attracted to a long list of tech features. It may even overwhelm them:
“This looks like it’ll take too long to learn.”
If you’re making software for developers though (e.g. even as simple as Stripe), your customers may very well want a more technical list of features rather than a high level summary of benefits. So, as always, be careful and test it on your own.
More strategies for optimizing SaaS conversion rates
This case study is actually an excerpt from a recent ebook we wrote on optimizing SaaS landing page conversion rates. You can download the PDF at that link, free, no email optin required.
Finally, if you run a SaaS (or other online) business where a 10% increase in conversions would be worth 6, 7, or 8 figures in annual revenue, hit that Get Proposal button on the top right and we can chat about getting similar results for you.
I feel like I only need to spend a few paragraphs and a graph motivating why waiting for statistical significance is important in AB testing.
Here is a test we ran for an ecommerce client:
The goal we are measuring here is successful checkouts (visitors reaching the receipt page). The orange variation is beating the blue by 30.4% (2.96% vs. 2.27%), and the test shows 95% “chance to beat” or “statistical significance”!
This client has an 8 figure ecommerce business. Let’s say it’s $30 million a year in revenue (close enough…me hiding who they are and their revenue should give you confidence that if you work with us, I won’t go announcing your financials all over the internet).
So a 30% lift in orders is worth $9,000,000 in extra revenue a year!
(By the way, this is the power of AB testing).
This lure of extra revenue is enticing. Very much so. So much so that it can lead otherwise well meaning people to make a grave mistake: stopping a test too early.
In this case, you might be tempted to think: well, the test has reached 95% significance, which everyone says is the cutoff. It had 3000 visitors to each variation. This is convincing. Let’s stop the test and run the orange variation at 100%!
But here’s a secret: both variations were the same.
This was, in CRO parlance, an A/A test. We ran it not to “test significance”, which Craig Sullivan, in that preceding link, argues is a waste of time (and I agree), and not to give me an opportunity to look smart by writing this article, but just to check whether revenue goals were measuring properly (they were, thank you for asking).
So how in the world are you supposed to know that this test is not worth stopping? Or as CRO people say, how do you know when you’re supposed to “call” the test?
You actually need two safeguards to make sure you don’t get duped by random fluctuations of data, i.e. statistical noise.
Safeguard 1: Statistical Significance
Safeguard 2: Sample Size
In this case, the test satisfied the first safeguard, statistical significance. But that’s exactly why the second safeguard, sample size, exists.
We’ll discuss what both mean, in English, not math — well maybe a little math, but mostly English — in future articles (you can join our newsletter here). But for now, take the above example as a warning to not just stop a test the moment it reaches 95% significance.
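To give a rough sense of what “enough sample size” means, here is the standard textbook formula for the visitors needed per variation to detect a given relative lift in conversion rate (normal approximation, two-sided test). The base rate and lift below are illustrative, not from any client:

```python
from statistics import NormalDist

def required_sample_size(base_rate, rel_lift, alpha=0.05, power=0.8):
    """Visitors needed per variation to detect a relative lift in a
    conversion rate (two-sided test at significance alpha, with the
    given statistical power). Normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    p1, p2 = base_rate, base_rate * (1 + rel_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = variance * ((z_alpha + z_power) / (p2 - p1)) ** 2
    return int(n) + 1

# e.g. detecting a 10% relative lift on a 2.3% base conversion rate
print(required_sample_size(0.023, 0.10))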
Finally, in addition to the two safeguards above, there are also a few details you should pay attention to when deciding whether to stop a test:
First, are you tracking multiple goals through your purchase or signup funnel?
You should be paying attention to supporting goals. Do the supporting goals show a consistent result? (They don’t have to, and they may not, but it’s good to know.) In the A/A test example above, most actually did, but some did not. For example, here is the goal that tracks clicks on the size dropdown on the PDP; the results show largely no difference at all (note the 45% and 55% even chances of beating each other):
Tracking multiple goals will help give you a more holistic picture of what’s happening. I’m not saying all goals have to show the same result for a test result to be valid. The world is a complex place. But if just one or two goals are showing a big difference but the others aren’t, you should ask why that could be.
Second, pay attention to purchase cycles
Again, I have to word this in vague terms like “pay attention to”. If you’re thinking “OMG, just tell me what I should do”, your frustration is noted.
For most online businesses, “purchase cycles” means calendar weeks. Conversion rates and audience types vary by day, so if you start a test on Tuesday and end it on Friday, you risk seeing different results than if you had run it for a full week. As with anything, it’s good to get multiple weeks in so you know one week wasn’t an anomaly.
Third, pay attention to seasonality. For a swimsuit business, what works in July and what works in December may be different things. (They may not be, but they may be.) Again, when you run test after test after test, you learn to talk in slightly vague terms because you’ve seen so many different results.
I see a lot of clients use just these goals on ecommerce AB tests in Optimizely or VWO (Visual Website Optimizer):
- Engagement (included by default by Optimizely)
That’s a good start (except engagement; have you ever made an actionable decision based on the engagement goal?).
But you could get a lot more information by adding upstream goals. Here are the most common that you should add:
- Add to cart button clicks
- Proceed to checkout page
- Pageview goals for your entire funnel
- All PDP pageviews
- Cart Pageviews
- Checkout pageviews (this can be multiple different pageview goals if your checkout flow is separated into different pages like /checkout/shipping, /checkout/payment, etc.)
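Once those goals are tracked, a quick sanity check is to lay the per-step lifts side by side. A minimal sketch; the goal names and counts below are made up, standing in for an export from your testing tool:

```python
# Hypothetical per-variation goal counts; names and numbers are
# made up for illustration.
funnel = ["pdp_views", "add_to_cart", "cart_views",
          "checkout_views", "checkouts"]

original  = {"pdp_views": 9000, "add_to_cart": 2100, "cart_views": 1700,
             "checkout_views": 900, "checkouts": 300}
variation = {"pdp_views": 9100, "add_to_cart": 2350, "cart_views": 1900,
             "checkout_views": 1020, "checkouts": 345}

# Relative lift of the variation at each funnel step.
for step in funnel:
    lift = variation[step] / original[step] - 1
    print(f"{step:15s} {lift:+.1%}")
```

A clear winner tends to show a consistent upward trend down the whole funnel like this; a step that moves sharply against the others is worth investigating before calling the test.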
Here are some common scenarios showing why the funnel goals are important:
Scenario 1: You run a test and checkouts are up 10% with 92% significance
You are not tracking upstream goals: You are only measuring checkouts and not the upstream goals. You decide the test has been running for a while and is probably significant (this is dangerous, but that’s another topic). You stop the test and declare the variation a winner and implement it.
You are tracking upstream goals: You are measuring the funnel goals and notice that pageviews of the checkout page, pageviews of the cart page, and clicks on the proceed to checkout and add to cart buttons are down. So why are checkouts up? Something doesn’t look right so you run the test for another 2 weeks. After 2 weeks the checkout goal regresses to the other goals and is also down a few percent versus original. Phew, good thing you didn’t call the test early when checkouts were looking up.
Scenario 2: You implement a variation that everyone “just knows” is going to win.
You are not tracking upstream goals: Checkouts tank and are down 30% with 99% significance after 1 week. Uh oh. Why?! You sit around the room and debate a bunch of reasons why but in reality no one really knows.
You are tracking upstream goals: You notice in your upstream goals that add to cart and proceed to checkout goals are up 20% as expected, but everyone drops off at the checkout page. Coincidentally, your variation made some changes to that page as well. You look further: watching session recordings of the checkout page, you notice some unintended behavior on mobile caused by your variation. You didn’t realize it pushed some important information far down the page. Culprit found! Now you can re-run the test with that mobile issue fixed.
There are more scenarios than this, but these are two common situations (one with a positive checkout result and another with a negative) where tracking upstream goals can make a huge difference (possibly costing or making the company millions).
In our experience when a variation is a clear winner, all or most goals will show an upward trend. That is: more add to cart clicks, more views of the checkout page, more successful checkouts, more revenue.
Not all tests will show that and if you have a test that runs for a fixed period of time, reaches 99% significance, with hundreds or thousands of checkouts per variation over multiple purchase cycles (calendar weeks), it’s hard to argue with that, and that does happen.
But most AB tests (as anyone who has done a lot of testing will attest to) are not so textbook clean. Like anything in life, the majority of scenarios are in the grey area. When that happens, tracking more goals than just the end goal (checkouts and revenue) will help interpretation a lot.
Want help setting up goals for your ecommerce site in Optimizely? Contact us.
eCommerce Copywriting Case Study: 3 Advanced Tactics We Used to Increase Product Page Conversion Rates 13.9%
We partnered with copywriter Brian Speronello of Accelerated Conversions on this project. This case study was written by Brian and edited by Devesh and Brian.
Writing powerful eCommerce copy that improves conversion rates is one of the most effective ways to generate more revenue for your online business. Just by changing the words your visitors read when they come to your site, you can generate significant increases in sales for your company.
For example, online direct-to-consumer mattress brand Amerisleep was able to achieve a multi-million dollar increase in annual revenues just by rewriting their product details page. A/B testing showed that checkouts increased by 13.9% with 98% confidence.
Note: Our agreement with Amerisleep prevents us from publishing their actual product page traffic and conversion rates but for the integrity of the test we’re showing the confidence level and hundreds of conversion events per variation. The test was also run for multiple buying cycles.
GrowthRock performed the conversion testing for this project, and Brian Speronello from Accelerated Conversions (that’s me) wrote the copy.
In this eCommerce case study, I’m going to share three advanced copywriting techniques that helped drive the 13.9% increase in orders on the new product page for Amerisleep.
eCommerce Conversion Copy Tactic 1: Go Beyond Bush-League Benefits
If you’re reading this post, I’m going to assume that you’re at least moderately interested in conversion and copywriting strategy. You should already know why it’s important to promote benefits ahead of features.
(If not, before you invest time going over this case study, you should read more about the basics of copywriting. That way you’ll be able to get the most out of the strategies presented here).
This section is NOT about benefits versus features.
It’s pretty easy to tell the difference between a benefit and a feature. The former describes a result for the reader. The latter describes a function of the product or service.
Detecting Bush-League Benefits is more difficult, because it’s a matter of tone and perspective.
Bush-League Benefits are actual benefits that the target audience will get from your product or service. But they’re benefits that the end user doesn’t care about.
Effective copy doesn’t just focus on benefits before features. Sometimes it has to go through several layers of benefits to reach the deepest desires of the audience.
As an example, let’s look at a classic marketing and copywriting analogy: the electric drill.
A New Version of a Classic Marketing Tale
As the old saying about marketing a drill goes, the benefit of a drill is a hole in the wall. That’s the immediate result (benefit) that the user gets from the features the drill has. If you wrote copy for a drill that promoted how it puts holes in your wall, you would technically be promoting its benefits, since you’re not talking about features like torque or battery life.
But is “a hole in the wall” really what the reader wants?
Does a drill customer wake up in the morning and say “man, I really need some holes in my wall?” (And if so, why not just buy a few kegs and invite over the local frat? Problem solved.)
So while “a hole in the wall” is certainly a benefit of owning a drill, it’s a Bush-League Benefit because it doesn’t tap into the real desires of the audience. It doesn’t connect with a problem or desire that the audience actively thinks about on a regular basis.
So what’s the real benefit of having the drill?
Is it having holes that are a precise size? Is it the drill’s ability to make holes in a wall with little physical effort? To find out, you have to ask yourself why the customer would want the holes in the wall.
You’d probably come up with an answer like “to hang family portraits.” This response is far more emotional than “holes in the wall.” And it makes for a more compelling benefit to the reader.
For example, you can imagine a family moving into a new home and dreaming about hanging their family portraits above the fireplace. That’s how you know you’ve gone deeper than a Bush-League Benefit. From there you could write your copy targeting “hanging family portraits” as the real benefit of your drill.
But you’d be wrong. (Sort of…)
Because while the benefit of “hanging your family portraits” is definitely more emotional and compelling to the reader than holes in their wall, you can still go deeper.
What would happen if you asked “why?” again, researching the reason a family would want to hang their family portraits? They might say something like “We just moved into a new house, and we want to make it feel like home.”
Now you’re getting somewhere.
Which drill would you rather buy?
- Drill A: Our high-powered electric drill will put precision-sized holes in your walls.
- Drill B: A new drill from our company will let you hang your family portraits so your new house finally feels like home.
No contest, you’d take Drill B. Even though Drill A mentions a benefit (holes in your walls) it’s a Bush-League Benefit because it doesn’t connect to the emotions and desires of the audience.
The Challenge with Bush-League Benefits
The challenge with going beyond Bush-League Benefits is deciding when you’ve discovered a benefit that connects with the reader’s emotions strongly enough to make them want to purchase. This is also why I said Bush-League Benefits are a matter of tone and perspective, and why you’d only be “sort of” wrong to use “hanging family portraits” as the main benefit in copy for the drill.
Maybe your audience wants their house to feel like home because they’re part of an elite social circle and care about their status or impressing their neighbors. Or maybe it’s because they’re looking to be featured in Better Homes & Gardens magazine or on HGTV.
Those desires are even more compelling than “making the house feel like home.” You could write your copy to focus on them, if you found they applied to the majority of your audience, and probably get even better conversions.
So it’s up to you to decide how deep you need to go based on your product or service and your audience. As a rule of thumb, the more expensive, complicated, or important your offer is for the reader, the more your copy needs to connect with their deepest hopes and fears.
But if you want your copy to convert well, it must connect the benefits of your product or service with needs and desires that are emotional and top-of-mind for the reader.
You can’t settle for Bush-League Benefits and expect to get a lot of buyers.
Going Beyond Bush-League Benefits with Amerisleep
If you asked novice copywriters to tell you the benefit of a new mattress, they’d probably say “getting more sleep.” And they’d be getting tricked by a Bush-League Benefit.
Don’t get me wrong, sleep is great. And because most people understand that, copy that uses sleep as the main benefit can get decent results all by itself.
However, even though sleep can be an effective benefit on its own, it’s also a Bush-League Benefit that has tricked many copywriters, precisely because it’s so easy to settle for. Probing further into the reader’s desire for sleep reveals even more powerful motivators.
One of the biggest changes that we made on the Amerisleep product page was adding the following section. It’s short, but it plays a key role in connecting the Bush-League Benefit of “more sleep” with deeper aspirations of the audience. It relates “more sleep” with two highly emotional and nearly universal desires: An improvement in physical health and better mental performance at work.
This section positions sleep as the solution to better health, better job performance, and a higher quality of life. And it’s a mistake to trust readers to make these connections on their own.
You have to remind readers about the important role sleep plays in how we feel each and every day before they start to care about getting more of it.
Only once the reader is actively craving sleep will “more sleep” move from a Bush-League Benefit to a real one. This section helps create that desire for sleep in the reader, and in turn it makes the other parts of the page where we talk about increasing sleep more persuasive.
So when it comes to your own copy, make sure that you connect the immediate benefit of your product or service to deeper and more compelling desires the audience has.
Otherwise you’ll just be peddling Bush-League Benefits.
eCommerce Copy Tactic 2: How to Shut Down the One Competitor Every Business Has in Common
There is one competitor that every single business in the entire world has in common. One competitor who is taking money out of the pockets of every company on the planet.
And my guess is that this competitor is not on your radar, even if you spend a massive amount of your time and effort analyzing your place in the market.
Can you name it?
It’s the status quo. Otherwise known as inaction or doing nothing.
If you analyzed the behavior of prospects who ultimately do not buy from you, which of the following situations do you think is more common?
- They spend their money with one of your competitors instead of you.
- They decide to save their money and not to buy from anyone.
Let’s say you run a website and you convert 10% of your visitors to customers. That’s an outstanding conversion rate, and yet 90% of your visitors still do not purchase.
Does that mean 90% of your visitors wind up buying from your competitors?
Obviously not! Otherwise their own conversion rates would be off the charts.
The market of people who explore their options but ultimately decide to do nothing is much larger than the market of people that choose your competition over you.
Rather than spending the majority of your time butting heads with your competitors, you should spend more of your time working to convert prospects who would otherwise do nothing at all. Not only is this a bigger market, but because your competition is likely focusing on trying to beat you, it’s also relatively uncontested.
Copy Techniques for Overcoming Inaction
No matter who you are or what you sell, your clients and customers are always going to prefer doing nothing to buying from you. Doing nothing is cheaper, easier, less uncertain, less mentally demanding, less socially and politically risky, and less stressful.
The only way to compete with the status quo is to vividly show the reader how doing nothing will lead to him or her being worse off. This is usually a great place to leverage loss aversion, where you show readers what they will lose or miss if they don’t take action. Another option is to ask the reader a question that follows the general outline “If you don’t do this, what will you do that is better?”
Here are some hypothetical examples from a cross-section of industries that show how you could use loss aversion and questions about alternative choices to make readers aware that doing nothing is actually worse for them:
- Appliances: If you don’t upgrade to a new, energy-efficient washing machine, it will cost you up to $1,000 more every year on your water and electricity bill. (Notice I didn’t say our energy-efficient washing machine. This is any of them. We’ll get to your company’s specific offer later.)
- Conferences: Last year people who attended our conference on average were able to add $300,000 to their bottom line by the end of the year using the techniques we teach. It’s your choice if you decide to join us or not — we’re not going to give you the hard sell. But if you don’t attend, do you have a better plan for adding $300,000 to your profits in the next 12 months?
- Dating Coach: I know the idea of working with a dating coach can make some people feel embarrassed — after all, shouldn’t we just naturally know what to do? The hard truth is that it’s not natural, and the social systems that used to guide us have gone away. So ask yourself what’s more embarrassing, working with someone to help you improve an important (maybe the most important) part of your life? Or doing nothing and continuing to get rejected by the people you’re attracted to and waking up next to an empty pillow every morning?
Overcoming Inaction with Amerisleep’s Copy
We applied this principle on the Amerisleep product page in the same segment of copy from the last example. We talked about the benefits of sleep, waking up without pain, not feeling groggy at work, and an overall better lifestyle. Then we said:
The number of daily problems that go away with a good night’s sleep is astounding — but that only happens if you actually do something to improve your sleep.
If you just keep things the same, you’ll keep getting the same disappointing results. And don’t you deserve better?
By clearly addressing how doing nothing makes readers worse off, it takes away a lot of the mental excuses they can use to justify maintaining the status quo.
It’s only after you’ve convinced readers that they need to take action to address a problem or desire they have — period — that positioning yourself as the best choice among your competitors pays off.
How to do that effectively is what we’ll be discussing next.
eCommerce Copy Tactic 3: Position Yourself as the Market Leader Using Comparisons
Once your prospects decide to take action on a problem or desire they have, their mindset changes. Before that, their number one question is “Should I even worry about this right now?”
After your prospects make up their mind to move forward though, they begin to ask “Who should I hire/How should I solve this problem?” instead.
When your prospects reach this stage in the decision-making process, it finally becomes effective for you to spend time proving how your offer is superior to your competition.
Here are a few techniques for making prospects see your company as their best choice, even if you operate in a crowded market.
Absolute Comparative Statements
Absolute comparative statements say that one thing is absolutely better than the other. Claims like better, bigger, and the best are absolute comparatives.
(If you’re wondering “doesn’t that make all comparative statements absolute?” I’ll show you how that’s not true in the next section on Faux Comparatives.)
The biggest mistake copywriters make when using comparatives is not being specific. If you’re going to use a comparative statement, you need to explain compared to what.
Too many companies will say “Our product is the best!” But that doesn’t answer the question “the best compared to what?” And without the “compared to what” piece, the comparison is ineffective.
Here are two examples of absolute comparatives from the Amerisleep product display page. I’ve bolded the comparative term and underlined the “compared to what” part of the sentence.
- On top of that, our foam is also the most environmentally friendly. Our patented foam-making process uses plants to replace some petroleum, and is the only manufacturing method that exceeds the standards of the Clean Air Act.
- Our foam is also better than traditional memory foam because it recovers its shape faster.
Absolute comparative statements are the most powerful way to make your offer appear superior to the competition. But what if you can’t say your product or service is absolutely better than the competition in any measurable way?
In that case, some copy trickery can help you give the impression you’re better than your competition, when really you’re only claiming to be equal.
I call these copy tricks “Faux Comparatives.”
How “Faux Comparatives” Let You Turn Equality into Superiority
Tell me what this sentence means:
- No other mattress is more supportive than Amerisleep’s Revere bed.
Most people would believe it says the Amerisleep Revere bed is more supportive than any other mattress…
But in reality it only says that no one is better — which means there could be many others that are equal.
Take another look at the sentence, this time with the implied meaning in parentheses:
- No other mattress is more supportive than Amerisleep’s Revere bed (but there are several others that are equally supportive).
If you’re in a market where the offers are all relatively equal, you can claim that no one is better and be factually accurate. No one can say you’re making false advertising claims.
If the audience interprets “no one is better” to mean that “we are the best,” that’s their misunderstanding. Your job is to make the case for your product or service in the most compelling — and truthful — manner possible. If that misunderstanding works out in your favor, lucky you.
In some respects all marketing and copywriting is presenting the truth in the most flattering light. As long as the statements you make are true, choosing the words that make your copy the most convincing to the audience is just doing your job as a writer.
Here’s an example of how we used this Faux Comparative strategy on the Amerisleep product display page in the headline before describing the materials inside each mattress:
This headline suggests that Amerisleep’s mattresses are more carefully engineered than any other brand. It’s a great message to introduce before explaining what goes inside each mattress. However in reality all it says is that, while there aren’t any mattresses more carefully engineered, there may be others that are of equal quality.
Another Faux Comparative method that gives the impression of superiority, when you are actually only claiming equality, is adding “one of” to an otherwise absolute comparison.
Instead of saying that you are the best, the most, or the biggest, you can say you are one of the best, the most, or the biggest in your category.
Like with the last Faux Comparative, the reader will focus on the comparison that you are the best, most, or biggest in your category. They will think you are one of a select few who are at the front of the market, but in reality you could be one of many who are all equal.
We used this approach in the sub-headline for the copy from the previous Faux Comparative:
- Our innovative and proprietary materials let us build one of the most comfortable mattresses ever
We also used it in the eco-friendly section:
In both cases, it makes Amerisleep seem like they’re at the top of their market. But in reality, it could actually mean they are just one of many companies with similar characteristics.
Increasing Conversions from Your eCommerce Copy
You’ve just learned three eCommerce copy techniques that can increase your sales by double-digit percentages. But your conversions only go up if you invest the time to rewrite your copy. If you simply move on with your day now, nothing changes and your sales will stay the same.
So before you close this page, set aside time in your calendar or add an item to your to-do list to incorporate the three copy strategies from this article. Because if you’re not going to do anything with these ideas, why did you spend 15 minutes reading this case study? Without applying the tactics it covered, all you just did was 15 minutes of mental masturbation.
(And if this section feels familiar…remember what I said about your biggest competitor being inaction?)
You can use the following checklist to apply the lessons from this case study:
- Highlight the primary benefits in your copy. Ask if they are emotionally engaging. If not, come up with benefits that your audience relates to more deeply — and if you can ask actual customers for feedback, even better.
- Add a section to your copy that discusses the costs, risks, and problems that your prospects will experience if they don’t buy from you.
- Find comparison terms, like more, better, and the best. Make sure that each one includes a “compared to what?” part of the statement.
- Where possible, add “no one is more…” and “one of the…” Faux Comparative statements so readers think aspects of your business that are only equal to the competition are actually superior.
But what if you’re legitimately focused on other priorities and can’t make the time to rewrite your copy — even though you want to? Or what if you want to go beyond these three tactics and get the maximum increase in your site’s sales and conversion rate?
An issue that’s come up with more than one client for us recently is when a large site redesign is “in the pipeline” and people in the company disagree about whether or not they should test the large redesigns or feature release.
The argument for not testing large site redesigns is usually some flavor of the following:
These changes have been things we’ve wanted to do for a long time, we know we’re going to implement them, so why test them?
Sometimes I’ve heard it veiled in phrases like “This change is too important” or “This will be too difficult to test.” But in the end, the real reason is that some people in the company just don’t want to test it.
You shouldn’t just agree blindly to this idea of not testing large redesigns, though, because if your site is bringing in enough revenue, the ROI of continuous AB testing can be significant.
Why large or inevitable site changes should still be AB tested
I’ll list my arguments for why you should still test large site changes in response to the various objections I listed above that we often hear in our work.
My hope is, if you’re the person in the organization arguing for testing, you can use these arguments to help with your battle.
“This change is going to happen regardless of the test outcome”
I have a couple responses to this, one is nicer than the other.
The nicer response: That’s okay if the change will be implemented regardless, but don’t we want to know what the effect on revenue will be?
For example (and this is common): Say an 8-figure ecommerce company is finally updating their entire site design. Their site was made 6 years ago, parts were added piecemeal over time, sales have grown tremendously, but the site hasn’t caught up. It doesn’t have modern design, the checkout process is definitely not optimal, and it’s not mobile friendly. The company knows it needs to update the site. So they hired an expensive agency to totally redesign it.
Anyone who has AB tested elements of an ecommerce site knows that seemingly simple changes to single pages can swing orders by 10%. A 10% change in orders is worth millions for an 8-figure revenue company.
Even if you’re going to implement the large change regardless of the outcome, don’t you want to know if it will reduce revenues by over a million dollars?
If it does hurt sales, you can delay the launch a bit, hypothesize what parts of the new design could be causing the decrease, retest just those elements, and isolate the problem.
The not as nice response: Rolling out a large change without testing it first is irresponsible. See arguments above for why.
“The change is too big to test. AB testing is for front-end changes, and this changes a lot more.”
You don’t have to code the test in Optimizely or whatever testing platform you use. In fact, it’s not recommended that you do this for large tests. Instead, you should code and deploy the new design on your own servers under slightly modified URLs (site.com/home, site.com/page-1, etc.) and have the AB testing platform simply redirect a share of users to each version of the site. Most platforms can do this.
This method can handle large changes as well.
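The traffic-split logic behind a redirect test is simple enough to sketch. Here’s a minimal, hypothetical Python example (the URLs and visitor IDs are made up): deterministically bucket each visitor so repeat visits land on the same variant, then map the bucket to the deployed URL.

```python
import hashlib

# Hypothetical URLs for the existing design and the redesigned version.
VARIANT_URLS = {
    "control": "https://site.com/home",
    "redesign": "https://site.com/v2/home",
}

def assign_variant(visitor_id: str) -> str:
    """Hash the visitor ID so the same visitor always lands in the same bucket."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "redesign" if int(digest, 16) % 2 == 0 else "control"

def redirect_url(visitor_id: str) -> str:
    """URL the AB testing layer would redirect this visitor to."""
    return VARIANT_URLS[assign_variant(visitor_id)]
```

In practice the testing platform stores the bucket in a cookie, but hashing a stable visitor ID gives the same stickiness, which is what prevents one visitor from bouncing between the two designs.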
“It will be confusing if customers see two different experiences at the same time”
First, every AB testing platform I’ve heard of cookies users so they’ll always see the same variation unless they clear cookies or use a different browser or device.
Second, to protect against confusion if they use separate devices (checking something at work vs. at home, etc.), on the new variation, you can always install a soft popup or bottom of page slider that says “Hey, we’re testing a new site design and would love your feedback. Let us know how you like it by…”.
Lastly, it’s just not that big of a deal. Modern companies test their sites all the time now. Sites get updated; they change. Frankly, the idea that letting a fraction of customers see an inconsistent site experience for a limited time will “hurt your brand” is old-fashioned marketing thinking.
This thinking relies more on “gut instinct” and opinions of people in conference rooms rather than data from the only opinions that matter: your customers.
Heard any other objections to testing large redesigns or features? Ask away in the comments.
If your business brings in 7 to 8-figures of annual revenue online and you’re interested in getting a conversion audit of your site to begin increasing its conversion rate, email us or fill out the form on our homepage.
I have a simple 2-step criteria for determining if AB testing has a high enough ROI to be seriously considered for your marketing team:
- Criteria #1: Your business makes $2,000,000 or more in revenue
- Criteria #2: You get 100,000 monthly unique visitors or more to your site
Yes, like any numerical “cutoff”, these numbers are somewhat arbitrary (e.g. In the U.S. at 15.5 years old, you are too young to drive, but at 16.1 years, you’re not).
But, when you zoom out, cutoffs have reasoning behind them: You wouldn’t trust an 8 year old to drive, and waiting until people are 30 to drive is also ridiculous.
Similarly, instead of debating the numbers themselves, let’s discuss the reasoning so you can adjust the above numbers as needed for your business. (Fortunately, unlike a driver’s license, our 2-step criteria above aren’t hard rules.)
Finally, I’ll also discuss a 3rd corollary rule that falls out of the reasoning behind the first two: If you fulfill the first two criteria and begin investing in AB testing but aren’t getting routine lifts in orders or sales of 5% – 10% or more, the ROI on your testing investment may also not be worth it.
Why a Revenue Cutoff? Because AB testing isn’t free.
AB testing isn’t free. Even when you include software costs, the majority of AB testing spend goes to the people needed to run the operation. This can come in two forms:
- An outside agency that charges you a monthly rate. From my unscientific survey of other CRO agencies, they typically charge between $2,000/month and $15,000/month, with the most experienced ones typically toward the middle/high end of that range: $5,000 – $10,000/month.
- Using internal employees, which also burn cash (almost always more than an agency).
I’ll soon have an entire article that dives into these costs and discusses how to choose between using in-house resources vs. outside. But for now, as an example, say your monthly AB testing costs are $5,000.
Deciding if a dedicated CRO program is “worth it” at $5,000/month is a matter of estimating:
- How much of a revenue lift you’re likely to achieve
- How long you’ll have to spend to achieve it
Before you get worked up about this, let me make this clear: it’s impossible to know the answers to these questions a priori.
But luckily, you’re not the first company to do AB testing, so we can look at what’s typical, conservative, aggressive, etc., based on past experience.
Because this is all speculation, however, I’m going to just boil things down to this general rule: assume you can get (1) a 10% increase in revenue in (2) 6 months of testing.
So, that means you’d spend $30,000 ($5,000/m for 6 months) and, if you were making $2 million in revenue before (bare minimum of the criteria above), you’ve increased annual revenues by $200,000/year.
That’s a 567% ROI from the first year of revenues alone. Pretty good.
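As a sanity check of that arithmetic, here’s the ROI formula as a small Python function, plugged with the article’s example numbers:

```python
def ab_testing_roi(annual_revenue: float, lift: float,
                   monthly_cost: float, months: int) -> float:
    """Return on the testing spend: (extra annual revenue - spend) / spend."""
    spend = monthly_cost * months
    extra_revenue = annual_revenue * lift
    return (extra_revenue - spend) / spend

# $2M revenue, 10% lift, $5,000/month for 6 months of testing
roi = ab_testing_roi(2_000_000, 0.10, 5_000, 6)  # ≈ 5.67, i.e. 567%
```

The same function reproduces the 8-figure example later in the article: `ab_testing_roi(10_000_000, 0.10, 10_000, 6)` works out to roughly 1567%.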
At this point, let me re-iterate a point I made above about arbitrariness in different words: You may not get a 10% lift in 6 months, or, you may get way more than a 10% lift in revenue in 6 months, or you may get the lift in 2 months.
I can’t say.
You can’t say.
No one can predict this.
But, I know that 10% revenue lift in 6 months is reasonable and achievable — our agency has achieved this multiple times for multiple businesses.
Now, similar to the driver’s license example, instead of arguing about whether you should get a 10% or 25% or 7.72% lift in this time period, let’s zoom out and look at some extremes.
If you told me “For a 6 month AB testing campaign, we’ll need to see a 150% increase in sitewide revenue for it to be worth it.” I’d respond with “Maybe you should focus on something else.”
I’m not saying that’s impossible.
Anything is possible.
It’s possible you could do some user research for a month or two, discover some immense barrier to purchasing that was due to on-site elements, UI/UX, or product related issues, fix them, and see an increase in revenue of 150%.
That is possible.
But it’s just not common, and I’d go as far as to say it’s not even reasonable to expect a 150% increase in revenue in a few months.
But 10%? That’s reasonable.
Examining Different Revenue Ranges
Revenue of $500,000 – $1,000,000
Let’s look at lower revenue numbers to see why I put our arbitrary criteria at $2 million.
At $500,000, 10% is $50,000 a year. That’s awfully close to the $30,000 you’d spend doing this for 6 months. Sure maybe you’d see that 10% lift in 3 months, but would you then stop your testing spend just to make sure your ROI was good? No, you’d keep going.
Or, you may only see 5% lift, which is only $25,000.
Either way, it’s not likely that at $500,000 sitewide revenue you’re going to get an outstanding, no-brainer ROI on AB testing.
After months of testing, you may only see an increase in revenue of between $50,000 and $150,000, while spending about $60,000 a year on AB testing costs. So although the ROI could be positive, it’s not a no-brainer.
More importantly, for businesses in this revenue range, there are often bigger wins.
In my experience, these bigger wins are usually SEO or paid media optimization. Those two drivers of traffic can move the needle significantly.
In multiple clients’ analytics, I’ve seen traffic double from one year to the next. If that fruit is still hanging for your business, pick it first. Don’t worry about trying to eke out 10% or even 35% lifts via CRO.
(Yes, it’d be ideal to do both, but resources are finite.)
Or, your business may be hitting the $500,000 or $1,000,000 revenue mark with some initial paid ad spend that is by no means saturated. If that’s the case, can you double paid ad spend and still maintain sufficient profitability?
The answer may be yes, the answer may be no, but that question should be asked and explored in great detail before starting up a CRO program from scratch.
Revenue Less than $500,000
Startups that are just starting to make money often ask us about AB testing.
If your company is making less than $500,000 in revenue, regardless of how much traffic you have, it’s hard to make the math work for CRO. If you make, say, $250,000/year, you could easily spend 6 months and $30,000 to get a 10% lift ($25,000) and still not make your money back.
More importantly, even if you did make your money back (or had a positive ROI) from CRO, it’s not likely to produce a step change in growth. The increase will be incremental, and your business is likely at a stage where you’re looking to find significant growth — so focusing there makes more sense. (Typically that opportunity is better product/market fit or traffic generation.)
Revenues of More than $10,000,000
Now, let’s look at 8 figure businesses and above with the same criteria.
A 10% lift, achieved in 3 – 6 months (reasonable), would yield over $1,000,000 in annual revenue.
You’re not likely going to spend anywhere near $1,000,000 for 3 – 6 months of testing. Even if you paid $10,000 a month to an agency for 6 months, your ROI is significant: 1567% on a $60,000 investment.
Alternatively, even if you increased revenues by just 5%, that’s $500,000 of extra annual revenue — a 733% ROI.
This math, combined with our observation that 8 figure businesses have more often than not spent years optimizing traffic and paid advertising (and thus either don’t have super low hanging fruit or already have a healthy operation focused on cranking out consistent wins on those two fronts), means that starting up a CRO program for an 8-figure+ business is a no brainer.
The word “starting” in the previous sentence is particularly important because, like any marketing initiative, the easiest wins are there for the taking at the beginning. A site that has never been formally “optimized” likely has easy wins that CRO (user research + hypothesis generation + AB testing) can realize in the first 3 – 6 months.
Corollary Rule #3: You also need to be seeing consistent conversion lifts to get a good ROI on testing
So we see from the above analysis that a certain revenue range is required for reasonable lifts in conversion rate to yield increases in revenue that make testing “worth it”. But that means the inverse is also true: you need to achieve reasonable lifts in conversion rate for testing to be worth it!
If you check the boxes on the first two criteria:
- We have more than $2MM in revenue
- We have more than 100,000 monthly uniques
And you start investing in AB testing, great. That means the potential for a great ROI on testing is there. But it doesn’t mean you will achieve that potential.
Continuing with the numbers in my examples above, you’d need to achieve a 10% lift in orders or sales on a consistent basis to realize your ROI potential.
If you are testing with a typical ad hoc or “conference room” approach (everyone sits around a conference room and throws out their favorite pet ideas to test), then it’s quite likely you won’t achieve that result. Feel free to reach out to us at the link below if you want to discuss ways to solve this problem in your organization.
Again, before you start drafting your comment arguing that my numbers are arbitrary, I want to emphasize (for the 15th time) that I’m fully aware they are arbitrary. Don’t use them to the exact digit. Adjust them for your business, and use the guiding principle, which is:
Before investing in CRO, first compare reasonably achievable revenue increases that CRO could produce versus the equivalent increases you could possibly get from other channels or initiatives (e.g. Improving the product, increasing traffic via SEO, paid ads, content marketing, etc.)
Want updates when articles like this come out? Join our email list (at the top of this page).
Truth be told, optimizing a website is a daunting task.
The same anxiety that makes writers stare at a blank page for hours or musicians futz around with their instruments for an afternoon is what makes it so hard. It’s this question that gnaws at you:
Where should we start?
There are big things: Let’s redesign the whole site! We need a new ecommerce platform!
There are little things: We need to reword this. What if we put the logos above the diagram?
And a million options in between.
Now, you can read case study after case study online (and they can often be great for inspiration and sparking creativity), but in my experience, random guesses only take you so far. Having a framework for tackling large projects like this can be transformative in turning a situation from “the loudest people’s opinions get tested first” into a systematic process for improving conversion rates and thus revenue.
This article isn’t going to lay out an entire framework for conversion optimization, but it will outline a foundational piece. (more…)
If you have a payment or checkout flow on your site, you should test asking for the Zip Code first and filling in the buyer’s City and State for them.
This should make the form easier to fill in and, ideally, increase checkout rate and thus revenue. It’s also not hard to build, since the brains behind it (filling in city/state from a zip code) have already been built by other people.
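The lookup itself is trivial once you have the data. Here’s a rough Python sketch using a tiny hardcoded table; a production checkout would pull from a complete ZIP-code database or a lookup service instead (the entries below are just illustrative examples):

```python
# Illustrative sample only; real implementations use a full ZIP-code database
# or a lookup service rather than a hardcoded dict.
ZIP_TO_CITY_STATE = {
    "10001": ("New York", "NY"),
    "60614": ("Chicago", "IL"),
    "94103": ("San Francisco", "CA"),
}

def prefill_city_state(zip_code: str):
    """Return (city, state) to pre-populate the checkout form, or None if unknown."""
    return ZIP_TO_CITY_STATE.get(zip_code.strip())
```

On the frontend, this would fire when the zip field changes and populate the City and State inputs, leaving them editable in case the guess is wrong.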
Let me explain… (more…)
Note: This is part of a new series of posts I’m calling Conversion Teardowns, where I walk through how to optimize sites from various business types for conversions. I outline my process in the first one. To get updates of future ones, Click Here.
This week, we’re breaking down Dealficks, a movie ticket and concession deals site that honestly seems to be a pretty cool way to score movie tickets for cheap.
What they sell is pretty straightforward: tickets and food at a discount. But their site presents some interesting challenges and questions that would make for good test hypotheses:
- Do users want to search by location or by movie?
- They run display ads; is the ad revenue worth the distraction?
- Are testimonials relevant for this product?
- Are app store download CTAs important on the web version?