
A/B Testing

Lesson 1

Introduction to A/B Testing

Why would we want to A/B test, and how do we plan for one?
Jon Burbridge, Senior Solutions Architect at Sanity

In this course you will learn:

  • What A/B testing is
  • How to plan and run an A/B test
  • How to set up field-level experiments using @sanity/personalization-plugin
  • Adding an experiment to a field
  • Connecting an external service to fetch running experiments
  • Getting data for a running experiment
  • Running an experiment on a page

Hi, I'm Jon, a Senior Solutions Architect at Sanity.

I work with our customers to enable them to get the full value out of the Sanity platform. As part of my role I have been developing the @sanity/personalization-plugin and working with our customers on how to implement it.

Prior to joining Sanity, I was a customer and worked on implementing Sanity in new and existing projects. I made sure we were making data-driven decisions by implementing A/B testing of new features, and helped content editors test their changes.

We often want to make changes to our content, and often this is because we think this change will help our website, app, or other platform perform better. Now this could just be a hunch, but we need to check that we are actually improving the platform.

A/B testing is a method that has been used to test hypotheses for over 100 years (although it might not always have had that name), in fields from medicine to farming to advertising. An A/B test is a basic kind of randomized controlled experiment, where you compare two versions of something to figure out which performs better. There are many ways to measure the performance of a version: increased conversion rate, decreased bounce rate, scroll depth, retention rate, average order value, or customer satisfaction. What you choose will depend on what you are testing, but you are essentially seeing whether users who viewed the variant improved against your chosen metric compared with those who viewed the control. A/B testing should help us better understand our customers, make more effective choices, and increase conversions.

A/B tests can be as simple as the choice of wording on a button (“buy now” vs “add to cart”), or they can compare two completely separate versions of a page (this is split testing, which often gets lumped in with A/B testing). A/B tests aren't limited to two versions of content, either: A/B/N testing includes N versions of content in the same experiment.

It is now common practice for companies to run A/B testing on their digital platforms, with companies like Amazon, Facebook, and Google each conducting more than 10,000 experiments per year.

We need to start with a clear goal of what we want to measure, and a hypothesis for how a change we make will move that goal.

To work out whether our test is a success, we need to collect the data we want to track; this could be conversion rates, user engagement, or bounce rate, among others.
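As a rough sketch of what that data collection might look like, the snippet below records the two events every experiment needs: an exposure (a user saw a variant) and a conversion. The event shape, names, and in-memory store are all illustrative, not part of any specific analytics SDK or the plugin covered later in this course.

```typescript
// Minimal sketch of recording the events an A/B test needs.
// The `ExperimentEvent` shape and `track` function are illustrative only.
type ExperimentEvent = {
  experimentId: string
  variantId: string
  userId: string
  event: 'exposure' | 'conversion'
  timestamp: number
}

// In a real setup these would go to an analytics service, not an array.
const events: ExperimentEvent[] = []

function track(event: Omit<ExperimentEvent, 'timestamp'>): void {
  events.push({...event, timestamp: Date.now()})
}

// Record that a user saw a variant, then that they converted.
track({experimentId: 'cta-wording', variantId: 'buy-now', userId: 'u42', event: 'exposure'})
track({experimentId: 'cta-wording', variantId: 'buy-now', userId: 'u42', event: 'conversion'})
```

Whatever service you use, the key is that every exposure and conversion is tagged with the experiment and variant, so the results can be grouped later.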

Once we have that we can create the versions of content we want to test (usually a control and a variant).

We assign each user to a group, show each group a version of the content or page, and then use statistical analysis of the data we have collected to determine which version performs better.
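One common form that "statistical analysis" takes is a two-proportion z-test, which asks whether the difference between the control's and variant's conversion rates is larger than chance would explain. The sketch below uses invented sample counts purely for illustration.

```typescript
// Two-proportion z-test: is the variant's conversion rate significantly
// different from the control's? All sample counts here are invented.
function zTest(convA: number, totalA: number, convB: number, totalB: number): number {
  const pA = convA / totalA
  const pB = convB / totalB
  // Pooled conversion rate under the null hypothesis (no real difference).
  const pooled = (convA + convB) / (totalA + totalB)
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB))
  return (pB - pA) / se
}

// Control: 200 conversions from 4000 users (5%); variant: 260 from 4000 (6.5%).
const z = zTest(200, 4000, 260, 4000)
// |z| > 1.96 corresponds to p < 0.05 for a two-sided test.
const significant = Math.abs(z) > 1.96
```

In practice your analytics or experimentation platform will usually run this calculation for you; the point is that "which version performs better" is a statistical question, not just a comparison of two raw percentages.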

Normally we would assign users at random, but we need to be aware of other factors that may influence the results of the test. For example, you may be testing the wording of a button that only appears on the desktop version of your website. In this case you might split users by device first and only then assign groups within the desktop segment.
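A common way to do this assignment is to hash the user's ID, so the same user always lands in the same group across visits, with the device check applied first. The hash function and segment logic below are a simplified sketch, not how any particular experimentation tool implements it.

```typescript
// Deterministic assignment: hashing the user id means the same user always
// sees the same variant. Illustrative sketch only.
function hashString(s: string): number {
  let h = 0
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0 // keep as an unsigned 32-bit value
  }
  return h
}

type Device = 'desktop' | 'mobile'
type Group = 'control' | 'variant'

function assignVariant(userId: string, device: Device): Group {
  // Segment first: the button under test only exists on desktop, so mobile
  // users always see the control and don't dilute the experiment data.
  if (device === 'mobile') return 'control'
  return hashString(userId) % 2 === 0 ? 'control' : 'variant'
}

const group = assignVariant('user-123', 'desktop')
```

Deterministic assignment also avoids the jarring experience of a user flipping between variants on every page load.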

Now we know a bit about A/B testing, let's configure fields for experimentation using @sanity/personalization-plugin.
