Do Reviews or Star Ratings on Product Detail Pages Increase Conversion Rate? (A/B Test Case Study)

Published on July 27, 2020

We’re noticing that it’s become trendy for many fashion brands not to place reviews on their product detail pages (“PDPs”). You don’t see “stars” at the top right like you’re used to on most ecommerce sites.

A notable brand that does this is LouisVuitton.com:

No reviews anywhere for this product. This does give a brand the feeling of luxury. Louis Vuitton is basically implying: 

“Why even have reviews? This is a $2,210 backpack we’re talking about here. You know who we are.”

Why cheapen a site like that with quotes by Susan from Missouri or Steven from New York complaining that the shipping was slow or the zipper got stuck? We get it. 

But we’re increasingly noticing that smaller brands are also doing this. For example, the menswear brand buckmason.com also has no reviews: 

In this case, this is a brand that’s younger (est. 2013), digitally native, and one most people haven’t heard of. Can they get away with it too? 

Does this help conversion rate? 

Does it also give them an air of luxury? 

Or are they leaving conversions on the table by forgoing product reviews? 

These are complex questions, but recently one of our clients, which also sells apparel, ran into a similar situation. They did have product reviews, but they did not show the review-star summary at the top right of their PDPs the way most e-commerce stores, such as Amazon, do:

So we tested it by adding a review star rating summary for each product:

As per our Question Mentality framework, instead of making a hypothesis (which causes bias and reduces learning), we asked a series of questions that we wanted the test to help answer. Specifically:

  • If reviews are at the bottom of the page, does putting the star rating summary at the top of the PDP help?
  • Do customers care enough to have it affect conversion rate? 
  • Or will they just scroll down? 
  • Does this change AOV (promote purchase of more expensive products)?

Adding the review star summary (Variation B) increased conversion rate by +15% with 94% statistical significance and increased revenue per session by 17% with 97% statistical significance. 

Average order value (AOV) was only slightly higher in the variation (+2.4%), so most of the revenue increase came from the lift in conversion rate.  
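For readers curious how lifts like these are typically called "significant," the sketch below runs a standard one-sided two-proportion z-test. The session counts are hypothetical (the actual sample sizes aren't published in this case study); only the Python standard library is needed:

```python
from statistics import NormalDist

def two_proportion_test(conv_a, n_a, conv_b, n_b):
    """One-sided two-proportion z-test: is variation B's rate higher than control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under the null hypothesis
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se                                   # standardized difference
    confidence = NormalDist().cdf(z)                       # one-sided "statistical significance"
    lift = (p_b - p_a) / p_a                               # relative conversion-rate lift
    return lift, confidence

# Hypothetical numbers: 10,000 sessions per arm, 4.0% vs. 4.6% conversion (a +15% lift)
lift, confidence = two_proportion_test(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"lift: {lift:+.0%}, confidence: {confidence:.0%}")
```

Note that the same relative lift can be significant or not depending on traffic volume. As a rough sanity check on the reported numbers, revenue-per-session lift should approximately equal conversion lift times AOV lift: 1.15 × 1.024 ≈ 1.18, in the same ballpark as the reported +17%.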

So clearly, customers did care, and having the review star rating summary at the top of PDPs does affect their decision to purchase. 

But we dug further…

Why Product Star Ratings May Affect Purchases But Not Add To Carts

Again, our Question Mentality framework encourages us to ask further questions of the results so we get more learnings than typical A/B testing teams, who would pat themselves on the back and move on to the next test. 

So we wanted to know more. 

And we noticed that the add to cart rate was the same in the control and the variation. 

Why would this summary of the star rating increase purchase rate but not add to carts? 

We think it’s because ecommerce shopping behavior is not linear the way many management teams and designers commonly think about it. 

Those of us working on an e-commerce site think of the shoppers’ path like this:

Homepage > Listing page > Product page > Add to Cart > Cart page > Checkout > Purchase! 

But customers don’t do this. They don’t do each step sequentially, one neatly after the other.  In the case of this particular A/B test, a common behavior pattern we’ve seen from e-commerce shoppers is: 

Add to cart > Browse some more > Think about purchasing > Come back to the PDP for products you already added to cart to browse further and decide if you actually want this product enough to buy > Decide to purchase or not 

That step of returning to the PDP after adding to cart is common and important. We’ve seen it in many user recordings. 

In particular, this client is selling $100+ luxury fashion items. So not an essential purchase nor usually an impulse purchase. It is likely to be “mulled over” and thought about quite a bit before purchasing:

“Do I really want this? Is this the best option? Will I look good in it? What if it doesn’t fit? Should I buy now or wait? Can I afford it?”

During this mull over process, customers often come back to the PDP to look at photos, read reviews, read descriptions, and try to convince themselves to buy. 

Clicking add to cart is simply a way for shoppers to “bookmark” an item. We’ve seen this in many other clients’ sites as well. It doesn’t necessarily mean purchase intent is there…yet. That’s why add to cart rates are so much higher than purchase rates. We routinely see add to cart rates as high as 15% from the PDP when purchase rates are maybe 5%. 

What did the remaining 10% do? They added to cart and decided not to buy. They were not convinced enough. You can imagine how many of them went back to the product page (PDP) to think about it some more. 
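The arithmetic behind that gap can be made explicit. A minimal sketch, using the rough 15%/5% rates quoted above and assuming (as an upper bound) that every purchaser added to cart first:

```python
# Rough funnel rates quoted above: 15% of PDP visitors add to cart, ~5% purchase.
add_to_cart_rate = 0.15
purchase_rate = 0.05

# Assumes every purchaser added to cart first (an upper bound on abandonment).
added_but_never_bought = add_to_cart_rate - purchase_rate
share_of_adders_not_buying = added_but_never_bought / add_to_cart_rate

print(f"{added_but_never_bought:.0%} of PDP visitors add to cart but never buy")
print(f"{share_of_adders_not_buying:.0%} of add-to-carters never purchase")
```

In other words, roughly two out of three shoppers who add to cart at these rates never complete the purchase, which is the audience a PDP star summary can still win over.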

So our best interpretation of these results is that this summary of reviews for this store helped convince shoppers to purchase after they had added to cart. It emphasized the social proof aspect of the product and increased product appeal that way. 

Using Our Purposes Framework to Connect This Test to a Larger CRO Strategy for the Client

As per our Purposes Framework for ecommerce CRO, we analyze the purpose of each test and afterwards think through what the result implies for the rest of the store and for our overall CRO strategy. 

This way, each A/B test is not a one-off done in a silo, unrelated to subsequent tests. That’s how most CRO teams do A/B testing, and it prevents their tests from cohesively working together to form a larger CRO strategy.

Via our Purposes Framework, this test has purposes of “Product Appeal” and “Brand” because seeing a positive star rating increases credibility and desire for both the product and the brand. 

So, since these purposes seemed to move the needle, instead of just saying “good job” and walking away, as most CRO teams tend to do after a single A/B test, we think through what other similar Brand and Product Appeal tests we could run to further probe these purposes. For example: 

  • Can we emphasize social proof and positive product reviews elsewhere on the site? 
  • What if the homepage had featured reviews? Would this kind of social proof make a difference at that part of the site?
  • How can we better add review star emphasis at the listing page besides just the star rating for every product near its photos? 
  • Can we show a cumulative star rating for the entire brand on the homepage or listing page? 
  • What if we show the star rating for a product in the cart or during checkout? Would that further increase conversion rate? 

This is how you transform one-off, haphazard, unrelated A/B tests into a CRO strategy. We connect them via the six purposes we’ve defined, which all ecommerce A/B tests fall into, and we track over time which purposes move the needle more or less for each company we work with.

As a result, we get more learnings from this test and more test ideas rather than just moving on to the next, unrelated test. 

Implications for Your Store

This test sparks a few questions worth asking for your store: 

  • If you don’t have reviews, consider: is that really “up-leveling” your brand image? How can you test this? It’s possible to A/B test the presence of reviews on your store entirely. Email us if you want to discuss this further. 
  • If you have reviews, is the summary of the star rating high enough on the page? In particular, on mobile, is it visible above the fold for common phone sizes? 

If you’re interested in working with us on your ecommerce CRO, you can learn more about our service or fill out the form here.
