A/B Testing

August 12, 2017
 

Optimization is a core competency for digital marketers. As customer interactions spread across fragmented touch points and consumers demand seamless and relevant experiences, content-oriented marketers have been forced to re-evaluate their strategies for engagement. But the complexity, pace and volume of modern digital marketing easily overwhelms traditional planning and design approaches that rely on historical conventions, myopic single-channel perspectives and sequential act-and-learn iteration.

SAS Customer Intelligence 360 Engage was released last year to address our clients' needs across a variety of modern marketing challenges. Part of the software's capabilities revolve around digital testing methods such as A/B and multivariate testing.

Regardless of the method, testing is attractive because it is efficient, measurable and serves as a machete cutting through the noise and assumptions associated with delivering effective experiences. The question is: How does a marketer know what to test?

There are so many possibilities. Let's be honest: if there's one thing marketers are good at, it's being creative. Ideas flow out of brainstorming meetings, bright minds flourish with motivation, and campaign concepts are born. As a data and analytics geek, I've worked with ad agencies and client-side marketing teams on the importance of connecting the dots between the world of predictive analytics (and, more recently, machine learning) and the creative process. Take a moment to reflect on the concept of ideation.

Is it feasible to have too many ideas to practically try them all? How do you prioritize? Wouldn't it be awesome if a statistical model could help?

Let's break this down:

  • Predictive analytics or machine learning projects always begin with data – specifically, training data that is fed to algorithms to address an important business question.
  • At the end of this exercise, a prescriptive recommendation can be made to the marketer to take action. This is what we refer to as a hypothesis, and it is ready to be tested in-market.
  • This is the connection point between analytics and testing. Even when a statistical model tells us to do something slightly different, that recommendation still needs to be tested before we can celebrate.

Here is the really sweet part. The space of visual analytics has matured dramatically. Creative minds dreaming up the next digital experience cannot be held back by hard-to-understand statistical Greek. Nor can I condone the idea that if a magical analytic easy button is accessible in your marketing cloud, you don't need to understand what's going on behind the scenes. That last sentence is my personal opinion, and feel free to dive into my mind here.

Want a simple example? Of course you do. I'm sitting in a meeting with a bunch of creatives. They are debating which pages of their website they should run optimization tests on. Should it be one of the top 10 most visited pages? That's an easy web analytics report to run. However, are those the 10 most important pages with respect to a conversion goal? That's where the analyst can step up and help. Here's a snapshot of a gradient boosting machine learning model I built in a few clicks with SAS Visual Data Mining and Machine Learning, leveraging sas.com website data collected by SAS Customer Intelligence 360 Discover, to explain what drives conversions.
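
For the analytically curious, here's a rough sketch of what such a model could look like in code form. The actual model was built through the SAS Visual Data Mining and Machine Learning point-and-click interface, so treat this as an illustration only; the CAS libref, table name and column names below are hypothetical stand-ins.

    /* Illustrative sketch only: fit a gradient boosting model on session-level
       Discover data to rank which behaviors predict conversion. Assumes a CAS
       libref named mycas; table and column names are placeholders.            */
    proc gradboost data=mycas.discover_sessions ntrees=100 seed=360;
       target converted / level=nominal;                  /* 1 = goal conversion */
       input product_page_visits support_page_visits
             time_on_site page_depth / level=interval;    /* behavioral features */
       input referrer_type device_type / level=nominal;
    run;                                                  /* reports variable importance */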

I know what you're thinking. Cool data viz picture. So what? Take a closer look at this...

The model prioritizes what is important. This is critical, as I have transparently highlighted (with statistical rigor, I might add) that site visitor interest in our SAS Customer Intelligence product page pops as an important predictor of what drives conversions. Now what?

The creative masterminds and I agree we should test various ideas on how to optimize the performance of this important web page. A/B test? Multivariate test? As my SAS colleague Malcolm Lightbody stated:

"Multivariate testing is the way to go when you want to understand how multiple web page elements interact with each other to influence goal conversion rate. A web page is a complex assortment of content and it is intuitive to expect that the whole is greater than the sum of the parts. So, why is MVT less prominent in the web marketer’s toolkit?

One major reason – cost. In terms of traffic and opportunity cost, there is a combinatoric explosion in unique versions of a page as the number of elements and their associated levels increases. For example, a page with four content spots, each of which has four possible creatives, leads to a total of 256 distinct versions of that page to test.

If you want to be confident in the test results, then you need each combination, or variant, to be shown to a reasonable sample size of visitors. In this case, assume this to be 10,000 visitors per variant, leading to 2.56 million visitors for the entire test. That might take 100 or more days on a reasonably busy site. But by that time, not only will the marketer have lost interest – the test results will likely be irrelevant."

SAS Customer Intelligence 360 provides a business-user interface which allows the user to:

  • Set up a multivariate test.
  • Define exclusion and inclusion rules for specific variants.
  • Optimize the design.
  • Place it into production.
  • Examine the results and take action.

Continuing with my story, we decide to set up a test on the sas.com customer intelligence product page with four content spots and three creatives per spot. This results in 81 total variants (3 x 3 x 3 x 3 combinations) and an estimated sample size of 1,073,000 visits to get a significant read at a 90 percent confidence level.

Notice that Optimize button in the image? Let's talk about the amazing special sauce beneath it. Methodical experimentation has many applications for efficient and effective information gathering. To reveal or model relationships between an input, or factor, and an output, or response, the best approach is to deliberately change the former and see whether the latter changes, too. Actively manipulating factors according to a pre-specified design is the best way to gain useful, new understanding.

However, whenever there is more than one factor – that is, in almost all real-world situations – a design that changes just one factor at a time is inefficient. To properly uncover how factors jointly affect the response, marketers have numerous flavors of multivariate test designs to consider. Factorial experimental designs are the most common, including full factorial, fractional factorial and mixed-level factorial designs. The challenge is that each method has strict requirements on the structure of the test.

Real-world constraints often lead to designs that, for example, are not orthogonal or that have irregular design spaces. Over a number of years SAS has developed a solution to this problem. It is contained within the OPTEX procedure, which allows you to construct efficient test designs when:

  • Not all combinations of the factor levels are feasible.
  • The region of experimentation is irregularly shaped.
  • Resource limitations restrict the number of experiments that can be performed.
  • There is a nonstandard linear or a nonlinear model.

The OPTEX procedure can generate an efficient experimental design for any of these situations, and website (or mobile app) multivariate testing is an ideal candidate because it is subject to:

  • Constraints on the number of variants that are practical to test.
  • Constraints on required or forbidden combinations of content.
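
For readers who want to see the shape of the special sauce, here is a minimal sketch of how an optimal design could be generated with the OPTEX procedure. To be clear, this is not the code running inside SAS Customer Intelligence 360; the spot names, the exclusion rule, the interaction term and the requested design size of 27 runs are all illustrative assumptions.

    /* Sketch under assumptions: 4 content spots, 3 creatives each (81 combinations).
       Build the candidate set, drop one forbidden combination, then ask PROC OPTEX
       for a smaller D-efficient subset of variants to put into the test.            */
    data candidates;
       do spot1 = 1 to 3; do spot2 = 1 to 3;
       do spot3 = 1 to 3; do spot4 = 1 to 3;
          /* hypothetical exclusion rule: creative 2 in spot 1 may not appear
             together with creative 3 in spot 4                               */
          if not (spot1 = 2 and spot4 = 3) then output;
       end; end; end; end;
    run;

    proc optex data=candidates seed=360;
       class spot1-spot4;                  /* creatives are categorical levels    */
       model spot1-spot4 spot1*spot2;      /* main effects plus one interaction   */
       generate n=27 method=fedorov;       /* illustrative: 27 of the 81 variants */
       output out=mvt_design;              /* the reduced set of variants to serve */
    run;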

The OPTEX procedure is highly flexible and has many input parameters and options. This means it can cover different digital marketing scenarios, and its use can be tuned as circumstances demand. SAS Customer Intelligence 360 provides the analytic heavy lifting behind the scenes, and the marketer only needs to make choices for business-relevant parameters. Watch what happens when I press that Optimize button:

Suddenly that scary sample size of 1,073,000 has been reduced to 142,502 visits to perform my test. The immediate benefit is that an impractical multivariate test has become feasible. However, if only a subset of the combinations is being shown, how can the marketer understand what would happen for an untested variant? Simple! SAS Customer Intelligence 360 fits a model using the results of the tested variants and uses it to predict the outcomes for untested combinations. In this way, the marketer can simulate the entire multivariate test and draw reliable conclusions in the process.
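
Conceptually, that prediction step looks like the sketch below: fit a response model to the variants that were actually served, then score every combination with it. This is only an illustration under assumed table names (tested_variants and all_variants), not the model SAS Customer Intelligence 360 fits internally.

    /* Sketch: tested_variants holds conversions and visits for each served variant;
       all_variants holds every combination of spot1-spot4, tested or not. The
       effects you fit must be estimable from the optimized design you actually ran. */
    proc logistic data=tested_variants;
       class spot1-spot4 / param=glm;
       model conversions/visits = spot1|spot2|spot3|spot4 @2;   /* mains + 2-way interactions */
       score data=all_variants out=predicted_variants;          /* appends predicted conversion rates */
    run;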

So you're telling me we can dream big in the creative process and unleash our superpowers? That's right my friends, you can even preview as many variants of the test's recipe as you desire.

The majority of today’s technologies for digital personalization have failed to effectively use predictive analytics to offer customers a contextualized digital experience. Many of today’s offerings are based on simple rules-based recommendations, segmentation and targeting that are usually limited to a single customer touch point. Despite some use of predictive techniques, digital experience delivery platforms are behind in incorporating machine learning to contextualize digital customer experiences.

At the end of the day, connecting the dots between data science and testing, no matter which flavor you select, is a method I advocate. The challenge I pose to every marketing analyst reading this:

Can you tell a good enough data story to inspire the creative minded?

How does a marketer know what to test? was published on Customer Intelligence Blog.

May 11, 2017
 

Multivariate testing (MVT) is another “decision helper” in SAS® Customer Intelligence 360 geared toward empowering digital marketers to be smarter in their daily jobs. MVT is the way to go when you want to understand how multiple web page elements interact with each other to influence goal conversion rate. A web page is a complex assortment of content, and it is intuitive to expect that the whole is greater than the sum of the parts. So, why is MVT less prominent in the web marketer’s toolkit?

One major reason – cost. In terms of traffic and opportunity cost, there is a combinatoric explosion in unique versions of a page as the number of elements and their associated levels increases. For example, a page with four content spots, each of which has four possible creatives, leads to a total of 256 distinct versions of that page to test.

If you want to be confident in the test results, then you need each combination, or variant, to be shown to a reasonable sample size of visitors. In this case, assume this to be 10,000 visitors per variant, leading to 2.56 million visitors for the entire test. That might take 100 or more days on a reasonably busy site. But by that time, not only will the web marketer have lost interest – the test results will likely be irrelevant.

A/B testing: The current standard

Today, for expedience, web marketers often choose simpler, sequential A/B tests. Because an A/B test can only tell you about the impact of one element and its variations, deciding which elements to start with in a sequence of tests becomes a matter of intuition.

Running a good A/B test requires consideration of any confounding factors that could bias the results. For example, someone changing another page element during a set of sequential A/B tests can invalidate the results. Changing the underlying conditions can also reduce the reliability of one or more of the tests.

The SAS Customer Intelligence 360 approach

The approach SAS has developed reverses this order. First, you run an MVT across a set of spots on a page. Each spot has two or more candidate creatives available. Then you look to identify a small number of variants with good performance. These are then used for a subsequent A/B test to determine the true winner. The advantage is that underlying factors are better accounted for and, most importantly, interaction effects are measured.

But, of course, the combinatoric challenge is still there. This is not a new problem – experimental design has a history going back more than 100 years – and various methods were developed to overcome it. Among these, Taguchi designs are the best known. There are others as well, and most of them have strict requirements on the type of design.

SAS Customer Intelligence 360 provides a business-user interface which allows the marketing user to:

  • Set up a multivariate test.
  • Define exclusion and inclusion rules for specific variants.
  • Optimize the design.
  • Place it into production.
  • Examine the results and take action.

The analytic heavy lifting is done behind the scenes, and the marketer only needs to make choices for business relevant parameters.

MVT made easy

The immediate benefit is that multivariate tests are now feasible. The chart below illustrates the reduction in sample size for a test on a page with four spots. The red line shows the number of variants required for a conventional test, and how this increases exponentially with the number of content items per spot.


In contrast, the blue line shows the number of variants required for the optimized version of the test. Even with three content items per spot, there is a 50 percent reduction in the number of unique variants, and this percentage grows larger as the number of items increases. We can translate these numbers into test duration by making reasonable assumptions about the required sample size per variant (10,000 visitors) and about the traffic volume for that page (50,000 visitors per day). The result is shown below.

A test that would have taken 50 days will take only 18 days using SAS’ optimized multivariate testing feature. More impressively, a test that would take 120 days to complete can be completed in 25 days.
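
The arithmetic behind the conventional (red) line is easy to sanity-check yourself using the same assumptions (four spots, 10,000 visitors per variant, 50,000 page visitors per day). The optimized variant counts on the blue line come from the design optimization itself and are not reproduced in this sketch.

    /* Full-factorial test duration for a 4-spot page, by creatives per spot.
       Assumptions from the text: 10,000 visitors per variant and 50,000 page
       visitors per day. For example, 4**4 = 256 variants -> 51.2 days.       */
    data full_factorial_duration;
       visitors_per_variant = 10000;
       visitors_per_day     = 50000;
       do creatives_per_spot = 2 to 5;
          variants = creatives_per_spot ** 4;
          days     = variants * visitors_per_variant / visitors_per_day;
          output;
       end;
    run;

    proc print data=full_factorial_duration noobs;
    run;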

What about those missing variants?

If only a subset of the combinations is being shown, how can the marketer understand what would happen for an untested variant? Simple. SAS Customer Intelligence 360 fits a model using the results for the tested variants and uses it to predict the outcomes for untested combinations. You can simulate the entire multivariate test and draw reliable conclusions in the process.

The Top Variant Performance report in the upper half of the results summary above indicates the lift for the best-performing variants relative to a champion variant (usually the business-as-usual version of the page). The lower half of the results summary (Variant Metrics) represents each variant as a point located according to its measured or predicted conversion rate. Each point also has a confidence interval associated with the measurement. In the above example, it’s easy to see that there is no clear winner for this test. In fact, the top five variants cannot reliably be separated. In this case, the marketer can use the results from this multivariate test to automatically set up an A/B test. Unlike the A/B-first approach, narrowing down the field using an optimized multivariate test homes in on the best candidates while accounting for interaction effects.

Making MVT your go-to option

Until now, multivariate testing has been limited to small experiments for all but the busiest websites. SAS Customer Intelligence 360 brings the power of multivariate testing to more users, without requiring them to have intimate knowledge of design of experiments theory. While multivariate testing will always require larger sample sizes than simple A/B testing, the capabilities presented here show how many more practical use cases can be addressed.

Multivariate Testing: Test more in less time was published on Customer Intelligence Blog.

February 20, 2017
 

Marketers today use varying adaptations of the customer journey to describe a circular, looped decision pathway with four distinct phases.

Mapping the right data to specific stages of the customer journey is all about getting to know your customers and developing initiatives to put that knowledge into action. Applying analytical models across the key customer journey phases uncovers opportunities to cultivate value generating behaviors and extend the customer’s lifetime value.

  • Initial Consideration Set (Research/Discover). Data and analytics in this phase help you gain a deeper understanding of customers and prospects. Segmentation surfaces stated and unmet customer needs and buying motivations. Reach the right prospects with look-alike acquisition models and evaluate prospects with lead scoring techniques.
  • Active Evaluation (Explore/Consider). Data and analytics in this phase help you dynamically adapt marketing efforts to customer response – in real time. Offer optimization techniques can match the appropriate offer based on historical customer response. Amazon's recommendation engine is a familiar example. Also, A/B and multivariate testing can assess various marketing variables, such as messaging and content types, before you roll out initiatives on a wider scale.
  • Moment of Purchase (Buy/Convert). Data and analytics in this phase help you understand how and when customers will purchase. Predictive techniques such as propensity models help marketers predict the likelihood that a customer will respond to a specific offer or message and convert. Expand share of wallet with cross-sell and affinity models, or understand future buying behavior through propensity models.
  • Post-purchase Experience (Use/Maintain/Advocate). Data and analytics in this phase help you uncover patterns of usage behavior and further drive customer engagement. For example, a retail site may tell you the status of your recent order the moment you land on the home page. Churn models such as uplift modeling and survival analysis can provide early warning signs of defection. Preempt customer churn with corrective actions, such as special offers or free upgrades.

Open, unified capabilities needed

Brands that build the most effective customer journeys master three interrelated capabilities: unified customer data platforms, proactive analytics and contextual interactions.

  • Unified customer data platforms: This capability unifies a company's customer data from online and offline channels to extract customer insights and steer customer experience. This includes the ability to cleanse, normalize and aggregate data from disparate systems – within the enterprise and externally – at an individual level.
  • Proactive analytics: Purpose-built data collection and analytics capabilities that incorporate both customer analytics (giving brands the customer insight necessary to provide offers that are anticipated, relevant and timely) and marketing analytics (evaluating marketing performance using metrics such as ROI, channel attribution and overall marketing effectiveness).
  • Contextual interactions: This capability involves using real-time insights about where a customer is in a journey, whether digitally (browsing product reviews) or physically (entering a retail outlet), to draw her forward into subsequent actions the company wants her to pursue.

The results are dramatic when marketers can combine data management, analytics and insights execution into a unified marketing platform.

Consider gourmet gift retailing icon Harry & David. By combining data-driven marketing with enriched customer insight, the company transformed its catalog heritage into a contemporary, digital retailing powerhouse. In the past three years, customer retention has increased by 14 percent and sales per customer have gone up 7 percent.

The largest retail group in Switzerland, Migros, used data and analytics to further optimize the customer journey.

The upshot: Change perception to reality

“If change is happening on the outside faster than on the inside, the end is in sight.” – Jack Welch

Digitally-empowered prospects and customers are calling the shots, going after what they want when they want it. With a unified view of data and analytics, brands can position themselves in front of their customers’ paths as they navigate the customer journey.

For the brands that can see the world as their customers do – and shape the customer journey accordingly – the reward is higher brand preference, revenue and cost improvements, and a lasting competitive advantage.

Assess your marketing confidence

Take stock of your digital marketing approach with the Marketing Confidence Quotient. This assessment tool quickly identifies and scores your company's strengths and weaknesses across four marketing dimensions: data management, analytics use, process integration and business alignment. It's like having your own personal marketing adviser.

A better approach: Align data and analytics across the customer journey was published on Customer Intelligence Blog.

January 27, 2017
 

There are so many ways in which a customer’s journey of experiences can be negatively affected, from forms on websites that are unclear or complicated to inconsistent or non-relevant interactions across many channels. It is important that these negative interactions are measured and reduced to maximize customer engagement and increase customer satisfaction over the long run.

You can tackle this challenge from several directions, from A/B testing and multi-armed bandit tests that optimize interactions at specific points in a journey (these are available in SAS 360), to approaches that optimize the full customer journey across many sequential touch points.

Optimizing full analytically-driven customer journeys

This year I was involved in a project for a large retailer that believed a significant number of its customer interactions had an impact on response rates – both positive (halo) effects and negative (cannibalization) effects. These are difficult to deal with using standard optimization techniques that assume independence of contacts, so full customer journey optimization was used to identify these effects and address this complexity successfully.

As always, the first step was to get consolidated data at the individual customer level. We were able to accomplish this because we had good-quality data for the project – customer-level demographic data and contact history data.

The stages of customer journey optimization

The journey optimization was then carried out in three stages:

Stage 1 – Creating analytically driven customer journeys is an important advancement toward truly effective, analytically driven omnichannel marketing. We used decision trees (in SAS Enterprise Miner) on the customer history data to map the widely varied journeys that customers were taking. Traditionally, decision trees are used on a wider set of data, but by using just the contact history, we identified the paths of significant activity that led to purchases.

Stage 2 – Next, these analytically driven journey maps were used as inputs for optimization. For every journey identified by the decision trees, a predictive model was created (using SAS Enterprise Miner) to predict spending, so that for every customer, for every journey, we could predict how much they would spend.

Stage 3 – Finally, the data was optimized using SAS Marketing Optimization, and constraints were applied to establish a final set of scenarios that the retailer agreed would be appropriate to implement.
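
As a rough illustration of Stage 1 only (the real work was done interactively in SAS Enterprise Miner, and every variable name below is a hypothetical placeholder), a decision tree grown on summarized contact-history data produces leaves that act as journey segments with distinct purchase rates.

    /* Sketch: grow a classification tree on contact-history summaries so that
       each leaf describes a distinct combination of contacts (a "journey")
       with its own purchase rate. All names are placeholders.                 */
    proc hpsplit data=contact_history maxdepth=5;
       class purchased email_offer_flag catalog_flag web_visit_flag;
       model purchased = email_offer_flag catalog_flag web_visit_flag
                         days_since_last_contact contacts_last_90d;
       output out=journey_segments;    /* leaf assignment = journey segment */
    run;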

This illustrates how decision trees can be used to map customer journeys, and how those journeys can then be optimized to replace the disjointed and disconnected results of traditional optimization methods. We are also beginning to use the cutting-edge machine learning technique of deep reinforcement learning to further optimize customer journeys. These techniques will be incorporated into SAS Customer Intelligence solutions to ensure that SAS users can explore this complicated and increasingly important area of customer intelligence.

tags: A/B Testing, customer journey, customer journey mapping, multi-armed bandit test, optimization, retail, sas customer intelligence, sas marketing optimization

Customer journey optimization: A real-world example was published on Customer Intelligence.

August 27, 2016
 

For the uninitiated, SAS 360 Engage enables organizations to interact with consumers by allowing marketers to create, manage and deliver digital content over web and mobile channels. Wait a minute. SAS does more than the analytics? That is correct. SAS 360 Engage is a marketing super force, serving as a one-stop shop from data capture all the way through to delivering highly targeted, personalized digital experiences.


Being able to dynamically place content and offers into digital channels – across devices and points in time – is nothing new for savvy marketing brands focused on optimization. As customer journeys spread across fragmented touch points while customers are demanding seamless and relevant experiences, content-oriented marketers have been forced to reevaluate their strategies for engagement. But the complexity, pace and volume of modern marketing easily overwhelms traditional planning and design approaches that rely on historical conventions, myopic single-channel perspectives and sequential act-and-learn iteration.

Presently, marketers primarily use a variety of content optimization approaches that include A/B testing and multivariate testing. A/B testing, at its simplest, is a method of website or mobile optimization in which the conversion rates of two versions of a page are compared using visitor response rates. By tracking the way visitors interact with the content – the videos they watch, the buttons they click, or whether they sign up for a newsletter – you can infer which version of the content is most effective.

Due to the popularity of this technique, SAS 360 Engage supports A/B/n testing. A/B/n testing is an extension of A/B testing, where “n” refers to the number of versions being tested – anywhere from two versions to the nth version. For example, when a brand has more than one idea for what the ideal digital experience should be, A/B/n testing can be used to compare each hypothesis and produce an optimized decision based on data, not subjectivity.
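
To make the underlying comparison concrete outside of the product, here is a minimal sketch, assuming a hypothetical visitor-level results table with a variant column (A, B, C, ...) and a converted flag (0/1).

    /* Sketch: compare conversion rates across the n variants and test whether
       the differences are larger than chance. ab_results is a placeholder name. */
    proc freq data=ab_results;
       tables variant*converted / chisq nocol nopercent;   /* rates plus chi-square test */
    run;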


Testing is attractive because it is efficient, measurable and serves as a machete cutting through the noise and assumptions associated with delivering effective experiences. In parallel, the evolving marketing landscape is driving a greater mandate for testing: to operate in more channels, handle more data and support more users. Testing must mature beyond traditional on-site experimentation to fully optimize a multifaceted customer journey.


The majority of today’s technologies for personalization have failed to effectively use data science to offer consumers a contextualized digital experience. Many of today’s offerings are based on simple rules-based segmentation to drive recommendations. Building on the benefits of multi-channel A/B/n testing, this is where SAS 360 Engage injects its analytical muscle to differentiate itself from other personalization technologies. Let's break this down:

  • At the conclusion of an A/B/n test, there is usually a winner and one or more losers.
  • Is there really one superior experience for your entire marketable audience? Is it possible that experiences should vary by segment?

Performing algorithmic segmentation sounds awesome, but who really has the time to do it? We have so many tests to run.


The time has arrived for predictive marketing to have its moment in the sun, and with Forrester recently naming SAS the leader in customer analytics, it's official: the 800-pound gorilla in advanced analytics is locked in on solving the complex issues facing data-driven marketing. Making digital personalization more relevant for target audiences is just like preparing a delicious meal – it all comes down to the ingredients and the preparation process.

When such a segment is discovered, a beautiful and interpretable visualization is generated highlighting what is unique about that segment compared to everyone else who was exposed to the test. If the brand wants to target this audience in future campaigns, a single click populates the segment in the platform for future journey orchestration.

If you look closely at the image, you will note in the upper half of the report that the winner of the A/B/n test is variant A. However, the lower half of the report showcases a newly discovered segment. It turns out that when a specific customer segment with recent purchase, stay and amenity activity interacts with this hospitality brand, variant B produces better results. How did SAS 360 Engage do this? By applying automated firepower (i.e. algorithmic clustering) to produce this prescriptive and actionable insight. To learn more about this segment, marketers can profile the audience:

SAS 360 Engage was built with the recognition that some marketing teams don't have data scientists available and have real needs for analytical automation. To improve upon the concept of A/B/n testing, augmenting it with automated, algorithmic segmentation that delivers prescriptive results addresses an important need. Let's assume you've run an A/B/n test with four versions of a page, and variant A was crowned the champion. Wouldn't it be nice to know that if a specific segment arrived at your website, an alternative experience would facilitate a better result?
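
SAS 360 Engage automates this step, but conceptually it resembles the sketch below: cluster the exposed audience on profile attributes, then re-read the variant results within each cluster. This is an illustration only, not the product's internal algorithm, and the table and variable names are hypothetical.

    /* Sketch: cluster visitors exposed to the A/B/n test on profile attributes,
       then compare variant performance within each cluster to spot segments
       that prefer a non-winning variant. All names are placeholders.            */
    proc fastclus data=test_exposures maxclusters=4 out=clustered;
       var recency_days purchase_count stay_count amenity_views;
    run;

    proc freq data=clustered;
       tables cluster*variant*converted / nocol nopercent;   /* conversion by segment and variant */
    run;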

tags: A/B Testing, Campaign Management, customer journey, data science, digital marketing, Digital Personalization, marketing analytics, predictive analytics, Predictive Marketing, Prescriptive Analytics, SAS 360 Engage, SAS Customer Intelligence 360, segmentation

SAS 360 Engage: A/B testing and algorithmic segmentation was published on Customer Intelligence.