Suneel Grover

June 20, 2018

Regardless of industry, it has become a frequent occurrence that behind every data-driven marketer is an analytical ninja. Together, they formulate recipes for addressing the customer-centric paradigm that considers the different actions that a brand can take for a specific individual, and decides on the “best” one. The goal of [...]

SAS Customer Intelligence 360: Predictive next best actions was published on Customer Intelligence Blog.

May 30, 2018

Why does your organization’s website or mobile app exist? What are you hoping to accomplish with your business by being digital? What are the most important priorities for your digital presence? Here are three goals that most organizations share: Sell more stuff. Make marketing more effective. Delight customers. Goals are [...]

SAS Customer Intelligence 360: Goals, propensity models, and machine learning was published on Customer Intelligence Blog.

May 23, 2018

Path analysis is an exploration of a chain of consecutive events that a given user or cohort performs during a set period while using a website, online game or mobile app (although other use cases can apply outside of digital analytics). As a subset of behavioral analytics, path analysis is [...]

SAS Customer Intelligence 360: Path analysis for re-engagement was published on Customer Intelligence Blog.

May 18, 2018

When data meets geography, use cases revolve around mapping and spatial analytics. But what happens when you combine digital analytics and powerful visualization for customer location analysis? Leveraging data collection mechanisms, SAS 360 Discover captures first-party behavioral information across the entire digital customer experience with a brand’s websites and mobile [...]

SAS Customer Intelligence 360: Location analytics meets digital intelligence was published on Customer Intelligence Blog.

April 6, 2018

In May of 2016, I expressed my personal excitement for the release of SAS 360 Discover to the SAS platform, and the subject of predictive marketing analytics. Since then, I have worked with brands across industries like sports, non-profits, and financial services to learn about valuable use cases. Before diving [...]

SAS 360 Discover: Enhance your marketing automation platform was published on Customer Intelligence Blog.

December 19, 2017

If you’ve ever used Amazon or Netflix, you’ve experienced the value of recommendation systems firsthand. These sophisticated systems identify recommendations autonomously for individual users based on past purchases and searches, as well as other behaviors. By supporting an automated cross-selling approach, they empower brands to offer additional products or services [...]

Customer Intelligence 360: The digital shapeshifter of recommendation systems was published on Customer Intelligence Blog.

September 26, 2017

In Part 1 and Part 2 of this blog posting series, we discussed our current viewpoints on marketing attribution and conversion journey analysis in 2017 and the selection criteria for the best measurement approach, and we introduced our vision for handling marketing attribution and conversion journey analysis. We would like to conclude this [...]

Algorithmic marketing attribution and conversion journey analysis [Part 3] was published on Customer Intelligence Blog.

September 19, 2017

In Part 1 of this blog posting series, we discussed our current viewpoints on marketing attribution and conversion journey analysis in 2017. We concluded on a cliffhanger, and would like to return to our question of which attribution measurement method we should ultimately focus on. As with all difficult questions [...]

Algorithmic marketing attribution and conversion journey analysis [Part 2] was published on Customer Intelligence Blog.

September 14, 2017

Editor's note: This blog post was authored by Malcolm Lightbody (SAS Customer Intelligence Product Management) and Suneel Grover (SAS Principal Solutions Architect).

Everyone has a marketing attribution problem, and all attribution measurement methods are wrong. We hear that all the time. Like many urban myths, it is founded in truth. Most organizations believe they can do better on attribution. They all understand that there are gaps, for example, missing touchpoint data, multiple identities across devices, arbitrary decisions on weightings for rules, and uncertainty about what actions arise from the results.

Broadly speaking, the holy grail of media measurement is to analyze the impact and business value of all company-generated marketing interactions across the complex customer journey. In this post, our goal is to take a transparent approach in discussing how SAS is building data-driven marketing technology to help customers progress beyond typical attribution methods to make the business case for customer journey optimization.

Being SAS, we advocate an analytic approach to addressing the operational and process-related obstacles that we commonly hear from customers. We want to treat them as two sides of the same coin. The output of attribution analytics informs marketers about what touch points and sequence of activities drive conversions. This leads marketers to make strategic decisions about future investment levels, as well as more tactical decisions about what activities to run. In an ideal world, the results of subsequent actions are fed back into the attribution model to increase not only its explanatory power, but also its predictive abilities, as shown below:

The diagram above shows the main parts of an attribution project. The actual analysis is just part of the process, with upstream and downstream dependencies. But this doesn’t always happen as it should. Consider a standard attribution report. Let us for the moment ignore what technique was used to generate the result and place ourselves in the shoes of the marketer trying to figure out what to do next.

In the graph above, we see the results of an attribution analysis based on a variety of measurement methods. Before answering the question of which method we should focus on, let's do a quick review of rules-based and algorithmic measurement techniques.

Last-touch and first-touch attribution

This type of attribution allocates 100 percent of the credit to either the last or first touch of the customer journey. This approach has genuine weaknesses, and ignores all other interactions with your brand across a multi-touch journey.

Linear attribution


Linear attribution arbitrarily allocates an equal credit weight to every interaction along the customer journey. Although slightly better than the last- and first-touch approaches, linear attribution will undercredit and overcredit specific interactions.

Time-decay and position-based attribution

Time-decay attribution arbitrarily biases the channel weighting based on the recency of the channel touches across the customer journey. If you support the concept of recency within RFM analysis, there is some merit to this approach. Position-based attribution places more weight on the first and last touches, while providing less value to the interactions in between.
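To make those differences concrete, here is a minimal Python sketch (illustrative only, not SAS code) that assigns conversion credit for a single hypothetical five-touch journey under each of the rules-based methods above. The channel names, half-life and endpoint share are assumptions chosen for the example.

```python
# Illustrative only: rules-based attribution credit for one converting journey.
# Channel names, half_life and endpoint_share are hypothetical examples.
journey = ["display", "email", "paid_search", "social", "email"]

def last_touch(path):
    return {path[-1]: 1.0}

def first_touch(path):
    return {path[0]: 1.0}

def linear(path):
    credit = {}
    for touch in path:
        credit[touch] = credit.get(touch, 0) + 1 / len(path)
    return credit

def time_decay(path, half_life=2):
    # More recent touches receive exponentially more weight.
    raw = [0.5 ** ((len(path) - 1 - i) / half_life) for i in range(len(path))]
    total = sum(raw)
    credit = {}
    for touch, weight in zip(path, raw):
        credit[touch] = credit.get(touch, 0) + weight / total
    return credit

def position_based(path, endpoint_share=0.4):
    # 40% each to the first and last touches, remainder split across the middle.
    credit = {path[0]: endpoint_share}
    credit[path[-1]] = credit.get(path[-1], 0) + endpoint_share
    middle = path[1:-1]
    for touch in middle:
        credit[touch] = credit.get(touch, 0) + (1 - 2 * endpoint_share) / len(middle)
    return credit

for name, rule in [("last touch", last_touch), ("first touch", first_touch),
                   ("linear", linear), ("time decay", time_decay),
                   ("position based", position_based)]:
    print(name, {channel: round(share, 2) for channel, share in rule(journey).items()})
```

Running it shows how the same journey yields very different credit splits depending on which rule you pick.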

Algorithmic attribution

In contrast, algorithmic attribution (sometimes referred to as custom models) assigns data-driven conversion credit across all touch points preceding the conversion, and uses math typically associated with predictive analytics or machine learning to identify where credit is due. It analyzes both converting and non-converting consumer paths across all channels. Most importantly, it uses data to uncover the correlations and success factors within marketing efforts. Here is a video summarizing a customer case study example to help demystify what we mean.
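As a rough illustration of the underlying idea (and not the specific technique SAS uses), the sketch below fits a logistic regression over a handful of made-up converting and non-converting journeys and turns the positive channel coefficients into a normalized share of credit:

```python
# Illustrative sketch only: a simple data-driven attribution approach using
# logistic regression over channel exposures. Channel names and journeys are
# invented; the point is that converting AND non-converting paths inform credit.
import numpy as np
from sklearn.linear_model import LogisticRegression

channels = ["display", "email", "paid_search", "social"]

# Toy journeys: which channels each visitor touched, and whether they converted.
journeys = [
    ({"display", "email"}, 1),
    ({"paid_search"}, 1),
    ({"display"}, 0),
    ({"social", "email"}, 0),
    ({"email", "paid_search"}, 1),
    ({"social"}, 0),
]

X = np.array([[1 if c in touched else 0 for c in channels] for touched, _ in journeys])
y = np.array([converted for _, converted in journeys])

model = LogisticRegression().fit(X, y)

# Turn positive coefficients into a normalized share of conversion credit.
weights = np.clip(model.coef_[0], 0, None)
credit = weights / weights.sum() if weights.sum() > 0 else weights
for channel, share in zip(channels, credit):
    print(f"{channel}: {share:.0%}")
```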

Why doesn’t everyone use algorithmic attribution?

Although many marketers recognize the value and importance of algorithmic attribution, adopting it hasn’t been easy. There are several reasons:

  • Much-needed modernization. The volume of data that you can collect is massive and may overwhelm outdated data management and analytical platforms, especially when you need to integrate multiple data sources. Organizations have a decision to make regarding modernization.
  • Scarcity of expertise. Some believe the talent required to unlock the marketing value in data is scarce. However, there are more than 150 universities offering business analytics and data science programs. Talent is flooding into the industry. The synergy between analysts and strategically minded marketers is the key to unlocking this door.
  • Effective use of data. Organizations are rethinking how they collect, analyze and act on important data sources. Are you using all your crucial marketing data? How do you merge website and mobile app visitor data with email and display campaign data? If you accomplish all of this, how do you take prescriptive action between data, analytics and your media delivery end points?
  • Getting business buy-in. Algorithmic attribution is often perceived as a black box, which vested interest groups can use as a reason to maintain the status quo.

Returning to our question of which method we should ultimately focus on, the answer is: it depends. An attribution report on its own cannot decide this. And it doesn’t even matter if the attribution report is generated using the most sophisticated algorithmic techniques. There are four things that the report won't tell you:

  1. The elasticity of a single touch point.
  2. The interdependencies between different touch points.
  3. Cause-and-effect and timing dependencies.
  4. Differences between distinct groups of customers.

In Part 2 of this blog posting series, we will dive into specific detail within these areas, as well as introduce our vision within SAS Customer Intelligence 360 on handling algorithmic marketing attribution and conversion journey analysis.

Algorithmic marketing attribution and conversion journey analysis [Part 1] was published on Customer Intelligence Blog.

August 12, 2017

Optimization is a core competency for digital marketers. As customer interactions spread across fragmented touch points and consumers demand seamless and relevant experiences, content-oriented marketers have been forced to re-evaluate their strategies for engagement. But the complexity, pace and volume of modern digital marketing easily overwhelms traditional planning and design approaches that rely on historical conventions, myopic single-channel perspectives and sequential act-and-learn iteration.

SAS Customer Intelligence 360 Engage was released last year to address our clients' needs for a variety of modern marketing challenges. Part of the software's capabilities revolve around testing digital experiences.

Regardless of the method, testing is attractive because it is efficient, measurable and serves as a machete cutting through the noise and assumptions associated with delivering effective experiences. The question is: How does a marketer know what to test?

There are so many possibilities. Let's be honest - if there's one thing marketers are good at, it's being creative. Ideas flow out of brainstorming meetings, bright minds flourish with motivation and campaign concepts are born. As a data and analytics geek, I've worked with ad agencies and client-side marketing teams on the importance of connecting the dots between the world of predictive analytics (and more recently machine learning) and the creative process. Take a moment to reflect on the concept of ideation.

Is it feasible to have too many ideas to practically try them all? How do you prioritize? Wouldn't it be awesome if a statistical model could help?

Let's break this down:

  • Predictive analytic or machine learning projects always begin with data. Specifically, training data that is fed to algorithms to address an important business question.
  • Ultimately, at the end of this exercise, a recommendation can be made prescriptively to a marketer to take action. This is what we refer to as a hypothesis. It is ready to be tested in-market.
  • This is the connection point between analytics and testing. Even if a statistical model tells us to do something slightly different, that recommendation still needs to be tested before we can celebrate.

Here is the really sweet part. The space of visual analytics has matured dramatically. Creative minds dreaming of the next digital experience cannot be held back by hard-to-understand statistical Greek. Nor can I condone the idea that if a magical analytic easy-button is accessible in your marketing cloud, one doesn't need to understand what's going on behind the scenes. That last sentence is my personal opinion, and feel free to dive into my mind here.

Want a simple example? Of course you do. I'm sitting in a meeting with a bunch of creatives. They are debating which pages of their website they should run optimization tests on. Should it be on one of the top 10 most visited pages? That's an easy web analytic report to run. However, are those the 10 most important pages with respect to a conversion goal? That's where the analyst can step up and help. Here's a snapshot of a gradient boosting machine learning model I built in a few clicks with SAS Visual Data Mining and Machine Learning, using sas.com website data collected by SAS Customer Intelligence 360 Discover, to explore what drives conversions.

I know what you're thinking. Cool data viz picture. So what? Take a closer look at this...

The model prioritizes what is important. This is critical, as I have transparently highlighted (with statistical rigor, I might add) that site visitor interest in our SAS Customer Intelligence product page is popping as an important predictor of what drives conversions. Now what?
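For readers who want to see the mechanics, here is a small, self-contained Python analogue (not the SAS Visual Data Mining and Machine Learning model, and running on synthetic data with hypothetical feature names) showing how a gradient boosting model ranks behavioral predictors of conversion:

```python
# Illustrative analogue only: rank page-interest features by gradient boosting
# importance on synthetic data. Feature names and effect sizes are made up.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2_000

# Hypothetical behavioral features: visit counts to a few site sections.
features = ["ci_product_page", "pricing_page", "blog_page", "careers_page"]
X = rng.poisson(lam=[2.0, 1.0, 3.0, 0.5], size=(n, len(features)))

# Synthetic ground truth: interest in the product page drives conversion most.
logit = 1.2 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = GradientBoostingClassifier().fit(X, y)
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")
```

The ranked importances are what let an analyst tell the creative team which pages deserve testing attention first.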

The creative masterminds and I agree we should test various ideas on how to optimize the performance of this important web page. A/B test? Multivariate test? As my SAS colleague Malcolm Lightbody stated:

"Multivariate testing is the way to go when you want to understand how multiple web page elements interact with each other to influence goal conversion rate. A web page is a complex assortment of content and it is intuitive to expect that the whole is greater than the sum of the parts. So, why is MVT less prominent in the web marketer’s toolkit?

One major reason – cost. In terms of traffic and opportunity cost, there is a combinatoric explosion in unique versions of a page as the number of elements and their associated levels increase. For example, a page with four content spots, each of which have four possible creatives, leads to a total of 256 distinct versions of that page to test.

If you want to be confident in the test results, then you need each combination, or variant, to be shown to a reasonable sample size of visitors. In this case, assume this to be 10,000 visitors per variant, leading to 2.56 million visitors for the entire test. That might take 100 or more days on a reasonably busy site. But by that time, not only will the marketer have lost interest – the test results will likely be irrelevant."
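The arithmetic behind that explosion is easy to verify. The short sketch below uses the numbers quoted above, with the 10,000 visitors per variant treated as an assumed per-variant sample size:

```python
# Worked arithmetic from the quote above: a full-factorial multivariate test.
content_spots = 4
creatives_per_spot = 4
visitors_per_variant = 10_000            # assumed sample size per variant

variants = creatives_per_spot ** content_spots       # 4**4 = 256 page versions
total_visitors = variants * visitors_per_variant      # 2,560,000 visits needed
print(variants, f"{total_visitors:,}")
```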

SAS Customer Intelligence 360 provides a business-user interface which allows the user to:

  • Set up a multivariate test.
  • Define exclusion and inclusion rules for specific variants.
  • Optimize the design.
  • Place it into production.
  • Examine the results and take action.

Continuing with my story, we decide to set up a test on the sas.com customer intelligence product page with four content spots, and three creatives per spot. This results in 81 total variants and an estimated sample size of 1,073,000 visits to get a significant read at a 90 percent confidence level.

Notice that Optimize button in the image? Let's talk about the amazing special sauce beneath it. Methodical experimentation has many applications for efficient and effective information gathering. To reveal or model relationships between an input, or factor, and an output, or response, the best approach is to deliberately change the former and see whether the latter changes, too. Actively manipulating factors according to a pre-specified design is the best way to gain useful, new understanding.

However, whenever there is more than one factor – that is, in almost all real-world situations – a design that changes just one factor at a time is inefficient. To properly uncover how factors jointly affect the response, marketers have numerous flavors of multivariate test designs to consider. The most common are factorial experimental designs, such as full factorial, fractional factorial, and mixed-level factorial designs. The challenge here is that each method has strict requirements.

In practice, those requirements are rarely met, which leads to designs that, for example, are not orthogonal or that have irregular design spaces. Over a number of years, SAS has developed a solution to this problem. It is contained within the OPTEX procedure, which supports the construction of designs for which:

  • Not all combinations of the factor levels are feasible.
  • The region of experimentation is irregularly shaped.
  • Resource limitations restrict the number of experiments that can be performed.
  • There is a nonstandard linear or a nonlinear model.

The OPTEX procedure can generate an efficient experimental design for any of these situations, and website (or mobile app) multivariate testing is an ideal candidate because it typically involves:

  • Constraints on the number of variants that are practical to test.
  • Constraints on required or forbidden combinations of content.

The OPTEX procedure is highly flexible and has many input parameters and options. This means that it can cover different digital marketing scenarios, and its use can be tuned as circumstances demand. Customer Intelligence 360 provides the analytic heavy lifting behind the scenes, and the marketer only needs to make choices for business-relevant parameters. Watch what happens when I press that Optimize button:

Suddenly that scary sample size of 1,073,000 has been reduced to 142,502 visits to perform my test. The immediate benefit is that the impractical multivariate test has become feasible. However, if only a subset of the combinations is being shown, how can the marketer understand what would happen for an untested variant? Simple! SAS Customer Intelligence 360 fits a model using the results of the tested variants and uses it to predict the outcomes for untested combinations. In this way, the marketer can simulate the entire multivariate test and draw reliable conclusions in the process.
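To give a feel for what an optimal-design search does, here is a rough, hypothetical Python sketch of D-optimal candidate selection over the 81 variants, assuming a main-effects model, dummy coding, and a made-up budget of 12 variants. It is not the OPTEX procedure itself, just the general idea behind it:

```python
# Illustrative sketch, not PROC OPTEX: greedily pick a D-optimal-ish subset of
# variants from the full 3^4 = 81 candidate pages, using a main-effects model.
import itertools
import numpy as np

levels = 3      # creatives per content spot
spots = 4       # content spots on the page
candidates = list(itertools.product(range(levels), repeat=spots))  # 81 variants

def design_row(variant):
    """Dummy-code a variant: intercept + (levels - 1) indicators per spot."""
    row = [1.0]
    for level in variant:
        row += [1.0 if level == k else 0.0 for k in range(1, levels)]
    return row

X_all = np.array([design_row(v) for v in candidates])
runs = 12                              # assumed budget of variants actually served

def d_criterion(idx):
    X = X_all[idx]
    return np.linalg.det(X.T @ X)      # larger is better for parameter precision

# Greedy exchange: start from a random subset, keep swaps that raise det(X'X).
rng = np.random.default_rng(0)
chosen = list(rng.choice(len(candidates), size=runs, replace=False))
for _ in range(200):
    i, j = rng.integers(runs), rng.integers(len(candidates))
    trial = chosen.copy()
    trial[i] = j
    if d_criterion(trial) > d_criterion(chosen):
        chosen = trial

print(f"{len(chosen)} of {len(candidates)} variants selected, "
      f"D-criterion = {d_criterion(chosen):.1f}")
```

The point is the same as in the product: a carefully chosen fraction of the 81 variants can still support estimation of every main effect, which is why the required traffic drops so sharply.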

So you're telling me we can dream big in the creative process and unleash our superpowers? That's right my friends, you can even preview as many variants of the test's recipe as you desire.

The majority of today’s technologies for digital personalization have generally failed to effectively use predictive analytics to offer customers a contextualized digital experience. Many of today’s offerings are based on simple rules-based recommendations, segmentation and targeting that are usually limited to a single customer touch point. Despite some use of predictive techniques, digital experience delivery platforms are behind in incorporating machine learning to contextualize digital customer experiences.

At the end of the day, connecting the dots between data science and testing, no matter which flavor you select, is a method I advocate. The challenge I pose to every marketing analyst reading this:

Can you tell a good enough data story to inspire the creative minded?

How does a marketer know what to test? was published on Customer Intelligence Blog.