multivariate testing

August 12, 2017

Optimization is a core competency for digital marketers. As customer interactions spread across fragmented touch points and consumers demand seamless and relevant experiences, content-oriented marketers have been forced to re-evaluate their strategies for engagement. But the complexity, pace and volume of modern digital marketing easily overwhelms traditional planning and design approaches that rely on historical conventions, myopic single-channel perspectives and sequential act-and-learn iteration.

SAS Customer Intelligence 360 Engage was released last year to address our clients' needs for a variety of modern marketing challenges. Part of the software's capabilities revolve around testing methods, such as A/B and multivariate testing.

Regardless of the method, testing is attractive because it is efficient, measurable and serves as a machete cutting through the noise and assumptions associated with delivering effective experiences. The question is: How does a marketer know what to test?

There are so many possibilities. Let's be honest - if there's one thing marketers are good at, it's being creative. Ideas flow out of brainstorming meetings, bright minds flourish with motivation and campaign concepts are born. As a data and analytics geek, I've worked with ad agencies and client-side marketing teams on the importance of connecting the dots between the world of predictive analytics (and more recently machine learning) and the creative process. Take a moment to reflect on the concept of ideation.

Is it feasible to have too many ideas to practically try them all? How do you prioritize? Wouldn't it be awesome if a statistical model could help?

Let's break this down:

  • Predictive analytics or machine learning projects always begin with data: specifically, training data that is fed to algorithms to address an important business question.
  • Ultimately, at the end of this exercise, a recommendation can be made prescriptively to a marketer to take action. This is what we refer to as a hypothesis. It is ready to be tested in-market.
  • This is the connection point between analytics and testing. Even when a statistical model tells us to do something slightly different, the change still needs to be tested in-market before we can celebrate.

Here is the really sweet part. The space of visual analytics has matured dramatically. Creative minds dreaming of the next digital experience cannot be held back by hard-to-understand statistical Greek. Nor can I condone the idea that if a magical analytic easy-button is accessible in your marketing cloud, one doesn't need to understand what's going on behind the scenes. That last sentence is my personal opinion, and feel free to dive into my mind here.

Want a simple example? Of course you do. I'm sitting in a meeting with a bunch of creatives. They are debating which pages of their website they should run optimization tests on. Should it be one of the top 10 most visited pages? That's an easy web analytics report to run. However, are those the 10 most important pages with respect to a conversion goal? That's where the analyst can step up and help. Here's a snapshot of a gradient boosting machine learning model I built in a few clicks with SAS Visual Data Mining and Machine Learning, leveraging sas.com website data collected by SAS Customer Intelligence 360 Discover on what drives conversions.

I know what you're thinking. Cool data viz picture. So what? Take a closer look at this...

The model prioritizes what is important. This is critical, as I have transparently highlighted (with statistical rigor, I might add) that site visitor interest in our SAS Customer Intelligence product page pops out as an important predictor of what drives conversions. Now what?

The creative masterminds and I agree we should test various ideas on how to optimize the performance of this important web page. A/B test? Multivariate test? As my SAS colleague Malcolm Lightbody stated:

"Multivariate testing is the way to go when you want to understand how multiple web page elements interact with each other to influence goal conversion rate. A web page is a complex assortment of content and it is intuitive to expect that the whole is greater than the sum of the parts. So, why is MVT less prominent in the web marketer’s toolkit?

One major reason – cost. In terms of traffic and opportunity cost, there is a combinatoric explosion in unique versions of a page as the number of elements and their associated levels increase. For example, a page with four content spots, each of which have four possible creatives, leads to a total of 256 distinct versions of that page to test.

If you want to be confident in the test results, then you need each combination, or variant, to be shown to a reasonable sample size of visitors. In this case, assume this to be 10,000 visitors per variant, leading to 2.56 million visitors for the entire test. That might take 100 or more days on a reasonably busy site. But by that time, not only will the marketer have lost interest – the test results will likely be irrelevant."
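
The back-of-the-envelope arithmetic in that quote can be sketched in a few lines. The daily traffic figure of 25,000 visitors is an assumption inferred from "100 or more days"; the other numbers come from the text:

```python
# Full-factorial MVT cost arithmetic from the example above:
# 4 spots x 4 creatives each, 10,000 visitors per variant,
# and an assumed ~25,000 visitors/day to the page.

def full_factorial_cost(spots, creatives_per_spot, visitors_per_variant, daily_traffic):
    variants = creatives_per_spot ** spots          # combinatoric explosion
    total_visitors = variants * visitors_per_variant
    days = total_visitors / daily_traffic
    return variants, total_visitors, days

variants, total, days = full_factorial_cost(4, 4, 10_000, 25_000)
print(variants, total, days)  # 256 2560000 102.4
```

Adding a fifth creative to each spot would push this to 625 variants and roughly 250 days, which is why the full-factorial approach breaks down so quickly.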

SAS Customer Intelligence 360 provides a business-user interface which allows the user to:

  • Set up a multivariate test.
  • Define exclusion and inclusion rules for specific variants.
  • Optimize the design.
  • Place it into production.
  • Examine the results and take action.

Continuing with my story, we decide to set up a test on the sas.com customer intelligence product page with four content spots, and three creatives per spot. This results in 81 total variants and an estimated sample size of 1,073,000 visits to get a significant read at a 90 percent confidence level.
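
To give a feel for where per-variant sample sizes in that ballpark come from (the product's exact calculation is not shown here), a standard normal-approximation formula for detecting a conversion-rate lift can be sketched as follows. The 5 percent baseline rate and one-point detectable lift are illustrative assumptions:

```python
from math import sqrt
from statistics import NormalDist

def sample_size_per_variant(p_base, lift, confidence=0.90, power=0.80):
    """Normal-approximation sample size to detect an absolute lift
    over a baseline conversion rate (two-sided test)."""
    p_alt = p_base + lift
    z_a = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.645 at 90%
    z_b = NormalDist().inv_cdf(power)
    var = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return (z_a + z_b) ** 2 * var / lift ** 2

# Assumed: 5% baseline conversion, +1 percentage point detectable lift.
n = sample_size_per_variant(0.05, 0.01)
print(round(n))        # per-variant sample at 90% confidence
print(round(n) * 81)   # naive total if all 81 variants were tested
```

The point is not to reproduce the tool's exact 1,073,000 figure, but to show why the required sample grows linearly with the number of variants once the per-variant size is fixed.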

Notice that Optimize button in the image? Let's talk about the amazing special sauce beneath it. Methodical experimentation has many applications for efficient and effective information gathering. To reveal or model relationships between an input, or factor, and an output, or response, the best approach is to deliberately change the former and see whether the latter changes, too. Actively manipulating factors according to a pre-specified design is the best way to gain useful, new understanding.

However, whenever there is more than one factor – that is, in almost all real-world situations – a design that changes just one factor at a time is inefficient. To properly uncover how factors jointly affect the response, marketers have numerous flavors of multivariate test designs to consider. The most common are factorial experimental designs, such as full factorial, fractional factorial, and mixed-level factorial. The challenge is that each method has strict requirements.

This leads to designs that, for example, are not orthogonal or that have irregular design spaces. Over a number of years SAS has developed a solution to this problem. It is contained within the OPTEX procedure, which supports designs for which:

  • Not all combinations of the factor levels are feasible.
  • The region of experimentation is irregularly shaped.
  • Resource limitations restrict the number of experiments that can be performed.
  • There is a nonstandard linear or a nonlinear model.

The OPTEX procedure can generate an efficient experimental design for any of these situations, and website (or mobile app) multivariate testing is an ideal candidate because it typically involves:

  • Constraints on the number of variants that are practical to test.
  • Constraints on required or forbidden combinations of content.
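
Conceptually, what an optimal-design search does can be illustrated with a toy greedy D-optimal exchange over a constrained candidate set. Everything here (the exclusion rule, the greedy strategy, the run count) is this sketch's own illustration, not how PROC OPTEX is implemented:

```python
import itertools
import numpy as np

SPOTS, LEVELS = 4, 3  # 4 content spots, 3 creatives each -> 81 combinations

def encode(variant):
    """Dummy-code a variant (tuple of creative indices) into a model row:
    intercept plus (LEVELS - 1) indicator columns per spot (main effects)."""
    row = [1.0]
    for level in variant:
        row.extend(1.0 if level == k else 0.0 for k in range(1, LEVELS))
    return row

# Candidate set: all combinations, minus an example exclusion rule
# (say creative 2 in spot 0 may never pair with creative 2 in spot 1).
candidates = [v for v in itertools.product(range(LEVELS), repeat=SPOTS)
              if not (v[0] == 2 and v[1] == 2)]

def greedy_d_optimal(candidates, n_runs, seed=0):
    """Pick n_runs variants that keep the information matrix X'X large
    (in log-determinant), via simple point-exchange."""
    rng = np.random.default_rng(seed)
    chosen = list(rng.choice(len(candidates), size=n_runs, replace=False))
    X = np.array([encode(candidates[i]) for i in chosen])
    improved = True
    while improved:
        improved = False
        for pos in range(n_runs):
            for j in range(len(candidates)):
                if j in chosen:
                    continue
                trial = X.copy()
                trial[pos] = encode(candidates[j])
                # keep the swap only if it strictly improves log det(X'X)
                if np.linalg.slogdet(trial.T @ trial)[1] > \
                   np.linalg.slogdet(X.T @ X)[1] + 1e-9:
                    chosen[pos], X = j, trial
                    improved = True
    return [candidates[i] for i in chosen]

design = greedy_d_optimal(candidates, n_runs=12)
print(len(design))  # 12 runs instead of the 81 full-factorial variants
```

The real procedure is far more sophisticated, but the core idea is the same: search a constrained candidate space for a small set of runs that still supports estimating the model of interest.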

The OPTEX procedure is highly flexible and has many input parameters and options. This means that it can cover different digital marketing scenarios, and its use can be tuned as circumstances demand. Customer Intelligence 360 provides the analytic heavy lifting behind the scenes, and the marketer only needs to make choices for business relevant parameters. Watch what happens when I press that Optimize button:

Suddenly that scary sample size of 1,073,000 has been reduced to 142,502 visits to perform my test. The immediate benefit is that the impractical multivariate test has become feasible. However, if only a subset of the combinations are being shown, how can the marketer understand what would happen for an untested variant? Simple! SAS Customer Intelligence 360 fits a model using the results of the tested variants and uses it to predict the outcomes for untested combinations. In this way, the marketer can simulate the entire multivariate test and draw reliable conclusions in the process.
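
The fit-then-predict idea can be illustrated with simulated data: fit a main-effects model on a tested subset of variants, then score all 81 combinations. The model form, the simulated effects, and the choice of subset are assumptions of this sketch, not the product's internals:

```python
import itertools
import numpy as np

SPOTS, LEVELS = 4, 3
all_variants = list(itertools.product(range(LEVELS), repeat=SPOTS))  # 81

def encode(v):
    """Intercept plus (LEVELS - 1) dummy indicators per spot."""
    row = [1.0]
    for level in v:
        row.extend(1.0 if level == k else 0.0 for k in range(1, LEVELS))
    return row

rng = np.random.default_rng(1)
# "True" main effects, used only to simulate observed conversion rates.
true_beta = rng.normal(0.0, 0.02, size=1 + SPOTS * (LEVELS - 1))
true_beta[0] = 0.05  # baseline conversion rate

tested = all_variants[::7]  # a fractional subset (12 of 81 variants)
X_tested = np.array([encode(v) for v in tested])
observed = X_tested @ true_beta + rng.normal(0, 0.002, size=len(tested))

# Fit the main-effects model on the tested subset only ...
beta_hat, *_ = np.linalg.lstsq(X_tested, observed, rcond=None)

# ... then predict conversion for every combination, tested or not.
X_all = np.array([encode(v) for v in all_variants])
predicted = X_all @ beta_hat
best = all_variants[int(np.argmax(predicted))]
print(len(tested), best)
```

With only main effects in the model, 12 well-chosen runs are enough to estimate the 9 coefficients and rank all 81 variants.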

So you're telling me we can dream big in the creative process and unleash our superpowers? That's right my friends, you can even preview as many variants of the test's recipe as you desire.

The majority of today’s technologies for digital personalization have generally failed to effectively use predictive analytics to offer customers a contextualized digital experience. Many of today’s offerings are based on simple rules-based recommendations, segmentation and targeting that are usually limited to a single customer touch point. Despite some use of predictive techniques, digital experience delivery platforms are behind in incorporating machine learning to contextualize digital customer experiences.

At the end of the day, connecting the dots between data science and testing, no matter which flavor you select, is a method I advocate. The challenge I pose to every marketing analyst reading this:

Can you tell a good enough data story to inspire the creative minded?

How does a marketer know what to test? was published on Customer Intelligence Blog.

May 11, 2017

Multivariate testing (MVT) is another “decision helper” in SAS® Customer Intelligence 360 that is geared at empowering digital marketers to be smarter in their daily job. MVT is the way to go when you want to understand how multiple different web page elements interact with each other to influence goal conversion rate. A web page is a complex assortment of content and it is intuitive to expect that the whole is greater than the sum of the parts. So, why is MVT less prominent in the web marketer’s toolkit?

One major reason – cost. In terms of traffic and opportunity cost, there is a combinatoric explosion in unique versions of a page as the number of elements and their associated levels increases. For example, a page with four content spots, each of which have four possible creatives, leads to a total of 256 distinct versions of that page to test.

If you want to be confident in the test results, then you need each combination, or variant, to be shown to a reasonable sample size of visitors. In this case, assume this to be 10,000 visitors per variant, leading to 2.56 million visitors for the entire test. That might take 100 or more days on a reasonably busy site. But by that time, not only will the web marketer have lost interest – the test results will likely be irrelevant.

A/B testing: The current standard

Today, for expedience, web marketers often choose simpler, sequential A/B tests. Because an A/B test can only tell you about the impact of one element and its variations, it is a matter of intuition when deciding which elements to start with when running sequential tests.

Running a good A/B test requires consideration of any confounding factors that could bias the results. For example, someone changing another page element during a set of sequential A/B tests can invalidate the results. Changing the underlying conditions can also reduce reliability of one or more of the tests.

The SAS Customer Intelligence 360 approach

The approach SAS has developed is the opposite of this. First, you run an MVT across a set of spots on a page. Each spot has two or more candidate creatives available. Then you look to identify a small number of variants with good performance. These are then used for a subsequent A/B test to determine the true winner. The advantage is that underlying factors are better accounted for and, most importantly, interaction effects are measured.

But, of course, the combinatoric challenge is still there. This is not a new problem – experimental design has a history going back more than 100 years – and various methods were developed to overcome it. Among these, Taguchi designs are the best known. There are others as well, and most of these have strict requirements on the type of design.

SAS Customer Intelligence 360 provides a business-user interface which allows the marketing user to:

  • Set up a multivariate test.
  • Define exclusion and inclusion rules for specific variants.
  • Optimize the design.
  • Place it into production.
  • Examine the results and take action.

The analytic heavy lifting is done behind the scenes, and the marketer only needs to make choices for business relevant parameters.

MVT made easy

The immediate benefit is that multivariate tests are now feasible. The chart below illustrates the reduction in sample size for a test on a page with four spots. The red line shows the number of variants required for a conventional test, and how this increases exponentially with the number of content items per spot.


In contrast, the blue line shows the number of variants required for the optimized version of the test. Even with three content items per spot, there is a 50 percent reduction in the number of unique variants, and this percentage grows larger as the number of items increases. We can translate these numbers into test duration by making reasonable assumptions about the required sample size per variant (10,000 visitors) and about the traffic volume for that page (50,000 visitors per day). The result is shown below.

A test that would have taken 50 days will only take 18 days using SAS' optimized multivariate testing feature. More impressively, a test that would take 120 days to complete can be completed in 25 days.
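
The duration arithmetic under the stated assumptions (10,000 visitors per variant, 50,000 visitors per day) is straightforward; the optimized variant count used below is illustrative, since the actual reduction depends on the design algorithm:

```python
# Test-duration arithmetic under the assumptions stated above:
# 10,000 visitors per variant, 50,000 visitors/day to the page.

VISITORS_PER_VARIANT = 10_000
DAILY_TRAFFIC = 50_000

def duration_days(n_variants):
    return n_variants * VISITORS_PER_VARIANT / DAILY_TRAFFIC

full = duration_days(4 ** 4)   # full factorial: 4 spots x 4 creatives
optimized = duration_days(88)  # assumed optimized design size (illustrative)
print(full, optimized)  # 51.2 17.6
```

Because duration scales linearly with the variant count, any reduction in variants translates directly into a proportionally shorter test.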

What about those missing variants?

If only a subset of the combinations are being shown, how can the marketer understand what would happen for an untested variant? Simple. SAS Customer Intelligence 360 fits a model using the results for the tested variants and uses this to predict the outcomes for untested combinations. You can simulate the entire multivariate test and draw reliable conclusions in the process.

The Top Variant Performance report in the upper half of the results summary above indicates the lift for the best-performing variants relative to a champion variant (usually the business-as-usual version of the page). The lower half of the results summary (Variant Metrics) represents each variant as a point located according to a measured or predicted conversion rate. Each point also has a confidence interval associated with the measurement. In the above example, it's easy to see that there is no clear winner for this test. In fact, the top five variants cannot reliably be separated. In this case, the marketer can use the results from this multivariate test to automatically set up an A/B test. Unlike the A/B-first approach, narrowing down the field using an optimized multivariate test homes in on the best candidates while accounting for interaction effects.
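
The "cannot reliably be separated" reasoning can be sketched with a normal-approximation (Wald) confidence interval around each variant's conversion rate. The counts below are made up for illustration:

```python
from math import sqrt
from statistics import NormalDist

def conversion_ci(conversions, visitors, confidence=0.90):
    """Wald confidence interval for a variant's conversion rate."""
    p = conversions / visitors
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    half = z * sqrt(p * (1 - p) / visitors)
    return p - half, p + half

a = conversion_ci(540, 10_000)   # variant A: 5.40% observed
b = conversion_ci(560, 10_000)   # variant B: 5.60% observed
overlap = a[1] >= b[0]
print(a, b, overlap)  # the intervals overlap: no clear winner yet
```

When the top variants' intervals overlap like this, the honest conclusion is the one the report draws: promote the leading candidates to a follow-up A/B test rather than declaring a winner.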

Making MVT your go-to option

Until now, multivariate testing has been limited to small experiments for all but the busiest websites. SAS Customer Intelligence 360 brings the power of multivariate testing to more users, without requiring them to have intimate knowledge of design of experiments theory. While multivariate testing will always require larger sample sizes than simple A/B testing, the capabilities presented here show how many more practical use cases can be addressed.

Multivariate Testing: Test more in less time was published on Customer Intelligence Blog.