In many articles and case studies, we anonymize our clients’ brands. That means we identify the general industry they are in and use analogous products (e.g. a client may sell desks, but an article will say they sell chairs), while keeping the actual brand and actual UI hidden.
Instead of showing the actual original and variation, we draw up wireframes that are more or less exactly what we tested but don’t reveal to the reader what the exact site is.
Why do we do this?
Because much of the ecommerce industry is fiercely competitive.
Conversion rate optimization is hard work, and a key insight into customer tendencies, or a UI detail that reliably increases conversion rates, is valuable. Learning these things can take weeks, months, or even longer.
So, understandably, many brands don’t want to reveal these hard-won nuggets of wisdom to the competition.
The purpose of this blog is to teach other ecommerce managers, marketers, and store owners the lessons we learn as we work with clients, and we feel we can do that just fine while keeping the brand identities private.
The wireframes we draw up and the analogous industries we use to tell the stories still convey the key lessons.
But is the data real?
Yes, 100% of the AB test data and analytics data we report is real. Once we anonymize the client, there’s no need to make up mock data.
In addition, anyone who has done long-term conversion optimization knows the details of test data matter. That is, a “14% increase” doesn’t mean a lot if the original and variation got only 5 and 7 conversion events each, so we always try to report the sample size.
Sometimes, even then, we’ve contractually promised never to report a client’s true conversion rate (yes, even though we don’t reveal the client, many ecommerce executives still don’t want their actual conversion rates published anywhere).
That means instead of “A: 345/2356…B: 457/2431” we may say “We saw an x% increase with 97% statistical significance and a total of 4,000 orders taken over a 4-week test period.” This gives the reader a sense of the sample sizes in our experiments and how robust the test result is.
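For readers curious how a confidence figure like that 97% is derived, here is a minimal sketch of one standard approach, a two-proportion z-test, in Python. The function name is ours and testing platforms differ in the exact statistics they run, so treat this as an illustration rather than our production tooling; the counts plugged in at the bottom are just the illustrative figures from the example above, not real client data.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_significance(conv_a, visitors_a, conv_b, visitors_b):
    """Two-sided two-proportion z-test comparing conversion rates.

    Returns the relative lift of B over A and the confidence level
    (1 - p-value) that the observed difference is not just noise.
    """
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b

    # Pooled rate under the null hypothesis that A and B convert equally
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

    lift = (rate_b - rate_a) / rate_a
    return lift, 1 - p_value

# Illustrative counts from the example above, not real client numbers
lift, confidence = ab_test_significance(345, 2356, 457, 2431)
print(f"Lift: {lift:.1%} at {confidence:.1%} confidence")
```

Run the same calculation with only a dozen total conversion events, whatever the traffic denominators happen to be, and the confidence comes out nowhere near a level worth acting on, which is exactly why we report sample sizes alongside the percentage lift.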
If a sample size is low or statistical significance isn’t as strong, we call that out in the post as well and discuss how it affected our interpretation: “We only saw 92% significance on this, but the test had 12,000 conversion events and the variation was leading at this increase for 3 weeks.”