Many businesses recognise the value of using customer decisioning models. Most also recognise that the fresher the model, the better the decision. But how important is it to keep the models fresh, and is it worth investing in automation? Many clients struggle with building that business case. Below are some [...]
The analytical community is increasingly interested in the concept of attribution. And while much of this interest focuses on digital marketing attribution, I want to take a step back to describe the wider application of attribution and the traditional techniques that can be used to solve a range of attribution challenges.
I’ll use as an example a UK bank I worked with that was collecting debt from customers who had defaulted on loans. For many businesses, there is a clear result in customer engagements (e.g., the customer responded, purchased and so on). In this instance, while it’s known which customers made a payment, it might be unclear which action (or combination of actions) triggered the payment.
Collections teams often have paths of activities that are analogous to customer journeys (how’s that for lateral thinking?). It may be that a high-risk customer will receive a text message five days after they have defaulted, a letter at 10 days, and a telephone call at 15 days. Collections teams will often have different paths for high-risk and low-risk customers. A low-risk customer may receive the same contact escalation path (text/letter/phone call) but on a longer cycle (e.g., at 10/15/20 days).
Multivariate testing helps reveal hidden relationships
My banking client had wisely created a test of the two different paths using two similar sets of customers. The more aggressive path collected more debt, but the costs were higher. However, there were options to improve upon this, and we chose to build two customer-level predictive models – one for each path – to predict the likelihood of payment.
This led to the creation of four segments that enabled much more effective decisions:
- Segment 1: High probability to pay on both paths
  - Action: Apply the low-risk path to save cost
- Segment 2: Low probability to pay on both paths
  - Action: Apply the low-risk path to save cost (but also test an accelerated strategy)
- Segment 3: High probability to pay on the high-risk path (but low probability to pay on the low-risk path)
  - Action: Apply the high-risk path to maximise uplift
- Segment 4: Low probability to pay on the high-risk path (but high probability to pay on the low-risk path)
  - Action: Apply the low-risk path
These models uncovered ways to increase collected debt by 2 percent with no increase in cost – a significant performance improvement. More importantly, the exercise highlights the value of attributing outcomes to actions, because many collections departments assign collection paths based on perceived risk and never attribute collected debt to specific activities.
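The four-segment decision logic above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration – the scores, the threshold and the path names are invented for the sketch, not the bank’s actual decision rules:

```python
# A minimal sketch of the two-model (conditional) segmentation described
# above. p_high and p_low are hypothetical payment probabilities from the
# two path-specific models; the 0.5 threshold is illustrative.

def assign_path(p_high: float, p_low: float, threshold: float = 0.5) -> str:
    """Choose a collections path from two path-specific payment scores."""
    if p_low >= threshold:
        # Likely to pay under the low-risk path, so the cheaper path
        # suffices (covers Segments 1 and 4).
        return "low-risk path"
    if p_high >= threshold:
        # Only responds to the more aggressive path: spend to get the
        # uplift (Segment 3).
        return "high-risk path"
    # Unlikely to pay either way: default to the cheaper path and flag
    # as a candidate for testing an accelerated strategy (Segment 2).
    return "low-risk path (test accelerated strategy)"

print(assign_path(0.8, 0.7))  # likely payer on both paths
print(assign_path(0.8, 0.2))  # responds only to the aggressive path
print(assign_path(0.1, 0.1))  # unlikely payer on both paths
```

In practice the two scores would come from the path-specific predictive models built on the test data, and the threshold would be tuned against collection cost.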
Lessons for marketers
This example is a good first step towards a detailed attribution solution because, by treating the path as one action, the halo and cannibalisation effects of the different actions are accounted for. Of course, there are opportunities to dive deeper and understand which of the underlying activities drive the behaviour, and multivariate testing can support this.
There are a few clear takeaways here for marketers:
- Good test data is essential to achieve good attribution results.
- Customer-level attribution can deliver results that are applicable across all channels.
- Traditional modelling techniques can support building conditional models or net-lift models using different paths.
- Simplifying the problem (as much as possible) will give you results that are more easily incorporated into your campaign strategies.
To learn more about how SAS Customer Intelligence 360 uses multivariate testing to create better customer experiences, read this blog post by Suneel Grover.
“How is debt collection like attribution modelling?” was published on Customer Intelligence Blog.
Marketers today use varying adaptations of the customer journey to describe a circular, looped decision pathway with four distinct phases.
Mapping the right data to specific stages of the customer journey is all about getting to know your customers and developing initiatives to put that knowledge into action. Applying analytical models across the key customer journey phases uncovers opportunities to cultivate value-generating behaviors and extend the customer’s lifetime value.
- Initial Consideration Set (Research/Discover). Data and analytics in this phase help you gain a deeper understanding of customers and prospects. Segmentation surfaces stated and unmet customer needs and buying motivations. Reach the right prospects with look-alike acquisition models and evaluate prospects with lead scoring techniques.
- Active Evaluation (Explore/Consider). Data and analytics in this phase help you dynamically adapt marketing efforts to customer response – in real-time. Offer optimization techniques can match the appropriate offer based on historical customer response. Amazon’s recommendation engine is a familiar example. Also, A/B and multivariate testing can assess various marketing variables, such as messaging and content types before you roll out initiatives on a wider scale.
- Moment of Purchase (Buy/Convert). Data and analytics help you understand how and when customers will purchase. Predictive techniques such as propensity models help marketers predict the likelihood that a customer will respond to a specific offer or message and convert. Expand share of wallet with cross-sell and affinity models, or understand future buying behavior through propensity models.
- Post-purchase experience (Use/Maintain/Advocate). Data and analytics in this phase help you uncover patterns of usage behavior and further drive customer engagement. For example, a retail site may tell you the status of your recent order the moment you land on the home page. Churn models such as uplift modeling and survival analysis can provide early warning signs of defection. Preempt customer churn with corrective actions, such as special offers or free upgrades.
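As a concrete illustration of the propensity models mentioned above, here is a minimal logistic-regression sketch in plain numpy. The features, coefficients and data are entirely simulated for illustration – a real model would use actual customer history:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical, simulated customer features (scaled to [0, 1]):
# column 0 = intercept, column 1 = recency, column 2 = frequency.
X = np.column_stack([np.ones(n),
                     rng.uniform(0.0, 1.0, n),
                     rng.uniform(0.0, 1.0, n)])
true_w = np.array([-1.0, -2.0, 3.0])          # illustrative "true" effects
p = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.uniform(size=n) < p).astype(float)   # simulated conversions

# Fit logistic regression by plain gradient descent.
w = np.zeros(3)
for _ in range(2000):
    grad = X.T @ (1.0 / (1.0 + np.exp(-(X @ w))) - y) / n
    w -= 0.5 * grad

scores = 1.0 / (1.0 + np.exp(-(X @ w)))       # propensity to convert
top_decile = np.argsort(scores)[-n // 10:]    # customers to target first
print("mean score, targeted:", scores[top_decile].mean())
print("mean score, overall: ", scores.mean())
```

The scores feed directly into campaign decisions: target the highest-propensity customers at the moment of purchase, or route low-propensity customers to nurture programs instead.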
Open, unified capabilities needed
Brands that build the most effective customer journeys master three interrelated capabilities: unified customer data platforms, proactive analytics and contextual interactions.
- Unified customer data platforms: This capability unifies a company's customer data from online and offline channels to extract customer insights and steer customer experience. This includes the ability to cleanse, normalize and aggregate data from disparate systems – within the enterprise and externally – at an individual level.
- Proactive analytics: Purpose-built data collection and analytics capabilities that incorporate both customer analytics (giving brands the insight necessary to provide offers that are anticipated, relevant and timely) and marketing analytics (evaluating marketing performance using metrics such as ROI, channel attribution and overall marketing effectiveness).
- Contextual interactions: This capability involves using real-time insights about where a customer is in a journey, whether digitally (browsing product reviews) or physically (entering a retail outlet), to draw her forward into subsequent actions the company wants her to pursue.
The results are dramatic when marketers can combine data management, analytics and insights execution into a unified marketing platform.
Consider gourmet gift retailing icon Harry & David. By combining data-driven marketing with enriched customer insight, the company transformed its catalog heritage into a contemporary, digital retailing powerhouse. In the past three years, customer retention has increased by 14 percent and sales per customer have gone up 7 percent.
The largest retail group in Switzerland, Migros, used data and analytics to further optimize the customer journey.
The upshot: Change perception to reality
“If change is happening on the outside faster than on the inside, the end is in sight.” – Jack Welch
Digitally-empowered prospects and customers are calling the shots, going after what they want when they want it. With a unified view of data and analytics, brands can position themselves in front of their customers’ paths as they navigate the customer journey.
For the brands that can see the world as their customers do – and shape the customer journey accordingly – the reward is higher brand preference, revenue and cost improvements, and a lasting competitive advantage.
Assess your marketing confidence
Take stock of your digital marketing approach with the Marketing Confidence Quotient. This assessment tool quickly identifies and scores your company's strengths and weaknesses across four marketing dimensions: data management, analytics use, process integration and business alignment. It's like having your own personal marketing adviser.
Ronald Snee and Roger Hoerl have written a book called Strategies for Formulations Development. It is intended to help scientists and engineers be successful in creating formulations quickly and efficiently.
The following tip is from this new book, which focuses on providing the essential information needed to successfully conduct formulation studies in the chemical, biotech and pharmaceutical industries:
Although most journal articles present mixture experiments and models that only involve the formulation components, most real applications also involve process variables, such as temperature, pressure, flow rate and so on. How should we modify our experimental and modeling strategies in this case? A key consideration is whether the formulation components and process variables interact. If there is no interaction, then an additive model, fitting the mixture and process effects independently, can be used:
c(x,z) = f(x) + g(z)     (1)

where f(x) is the mixture model and g(z) is the process variable model. Independent designs could also be used. However, in our experience, there is typically interaction between mixture and process variables. What should we do in this case? Such interaction is typically modeled by replacing the additive model in Equation 1 with a multiplicative model:
c(x,z) = f(x)*g(z)     (2)
Note that this multiplicative model is actually non-linear in the parameters. Most authors, including Cornell (2002), therefore suggest multiplying out the individual terms in f(x) and g(z) from Equation 2, creating a linear hybrid model. However, this tends to be a large model, since the number of terms in the linearized version of c(x,z) will be the number in f(x) times the number in g(z). In Cornell’s (2002) famous fish patty experiment, there were three mixture variables (7 terms) and three process variables (8 terms), but the linearized c(x,z) had 7*8 = 56 terms, requiring a 56-run hybrid design.
Recent research by Snee et al. (2016) has shown that by considering hybrid models that are non-linear in the parameters, the number of terms required, and therefore the size of designs required, can be significantly reduced, often on the order of 50%. For example, if we fit Equation 2 directly as a non-linear model, then the number of terms to estimate is the number in f(x) plus the number in g(z): 7 + 8 = 15 in the fish patty case. Snee et al. (2016) showed using real data that this approach can often provide reasonable models, allowing use of much smaller fractional hybrid designs. We therefore recommend an overall sequential strategy involving initial use of fractional designs and non-linear models, but with the option of moving to linearized models if necessary.
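To make the parameter-count argument concrete, here is a small numpy sketch that fits the multiplicative model in Equation 2 directly. It uses alternating least squares – one simple way to handle the non-linearity, not necessarily the method of Snee et al. – on simulated mixture and process data invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60

# Simulated formulation data: three mixture components per run (each row
# sums to 1) and two process variables plus an intercept for g(z).
x = rng.dirichlet([2.0, 2.0, 2.0], size=n)
z = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, (n, 2))])

a_true = np.array([4.0, 2.0, 1.0])    # illustrative mixture coefficients
b_true = np.array([1.0, 0.5, -0.3])   # illustrative process coefficients
c = (x @ a_true) * (z @ b_true)       # response under c(x,z) = f(x)*g(z)

# Alternating least squares: hold g(z) fixed and solve for f(x)'s
# coefficients by ordinary least squares, then swap. Each subproblem is
# linear even though c(x,z) = f(x)*g(z) is non-linear in the parameters.
a, b = np.ones(3), np.ones(3)
for _ in range(50):
    g = z @ b
    a = np.linalg.lstsq(x * g[:, None], c, rcond=None)[0]
    f = x @ a
    b = np.linalg.lstsq(z * f[:, None], c, rcond=None)[0]

rmse = np.sqrt(np.mean(((x @ a) * (z @ b) - c) ** 2))
print("parameters fitted:", a.size + b.size)            # 3 + 3 = 6
print("linearized model would need:", a.size * b.size)  # 3 * 3 = 9
print("rmse on noise-free data:", rmse)
```

Even in this toy case, the direct non-linear fit needs 6 parameters where the multiplied-out linear hybrid model would need 9 – the same additive-versus-multiplicative count that gives 15 versus 56 in the fish patty example.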
In JMP 12, an interactive HTML Profiler was added, as I had previously blogged about. That change mainly updated the existing Flash functionality to HTML5 technology, making it available on mobile devices like an iPad, but it also introduced a few new features. Among these was the option of exporting the Fit Model Least Squares platform report as a whole with an interactive Profiler embedded within it.
After users tried this tool, the response was overwhelmingly positive. They found it a great way to explore cross-sections of predicted responses across multiple factors with people who don’t have JMP. However, users also said they would like to see Profilers available in other platforms.
In JMP 13, three more platforms have embedded Profilers that are available in interactive HTML.
In JMP, you can analyze your data using neural networks in the Neural platform. I will use the Diabetes data set from the sample data library to illustrate some of the differences between this platform and Generalized Regression below. Note the curved responses for Age, BMI, and BP as well as the elongated report (only the first five factors out of 10 are shown).
Embedded Profilers in Generalized Regression are supported for export from JMP Pro 13. This example also shows an additional enhancement for Interactive HTML in JMP 13 that lets you pick how many plots are displayed in a row when you have a lot of factors. You'd do this in JMP by selecting the red triangle, going to Appearance and selecting Arrange in Rows to provide the number you want before exporting. This allows you to explore many factors in Interactive HTML with a nice layout (which can be useful on a mobile device with a smaller screen). You can see the same factors analyzed as in the Neural platform above, but more are visible in the same width display due to this feature.
Generalized Linear Model
Generalized Linear Model is the third platform to support interactive HTML embedded Profilers in JMP 13.
In addition to making embedded Profilers in those three platforms available in interactive HTML, JMP 13 includes new features to make exploring your data a little easier. That's what I'll cover in the following sections.
Adapt Y Axis
In JMP 12, you could explore data outside of the initial range of the numeric factors by typing in a value in the edit box below the curve. But what if this causes the curve to move outside the initial range of the response? You could see the value displayed in red on the Y axis, but no longer see the curve itself. Now there is an option to have the Y axis automatically adapt to show the min and max values of the curve. Simply click the menu button above the Profiler and check “Adapt Y Axis”.
Some data requires analyzing a formatted X factor such as a date, time, or geographic location. In JMP 12, you could click or drag anywhere within the Profiler to change the value, but there was no way to provide a precise value for this type of data. Now X variables in these formats are displayed as a button that, when clicked, launches a dialog to enter the individual fields of the format.
Similarly, in JMP 12, if you tried to precisely set a Profiler with a mixture constraint to a set of values that you knew satisfied the constraint, you couldn’t do it; every time you set one value, the others were altered to satisfy the mixture. In JMP 13, mixture values are applied by clicking an apply button.
For example, the amounts of three ingredients used to make a plastic in the following Profiler must sum to 1 and stay within the ranges shown. The values 0.7, 0.1, and 0.2 sum to 1 exactly. So, by entering these values in the edit boxes and then clicking apply, the Profiler is set to those precise values.
The images shown here as well as a few other examples are available as live interactive HTML files to explore on the web.
JMP offers a wide variety of math functions, special features and powerful algorithms that haven’t all been implemented in HTML, so not every Profiler will come out interactively. If you need to share work with someone who doesn’t have JMP and export your reports to Interactive HTML, we’ve added messages to the log to try to indicate why a particular Profiler has come out as a static image. Armed with this knowledge, we hope you will try your own Profilers and give us feedback on what features and platforms you want to see in the future.
The post Interactive HTML: Profilers in 3 more platforms in JMP 13 appeared first on JMP Blog.
From time to time, the addition of new features requires a review of how capabilities are organized and presented in JMP. Are they located where it makes the most sense and where users would expect to find them? For example, in JMP 12 there was enough new material combined with […]
The post JMP 13 Preview: Improvements to the Analyze menu for a better user experience appeared first on JMP Blog.
Scientists and engineers who work with high-density sensors face data problems that can make getting insights from data more challenging. Whether you’re trying to make sense of information from industrial devices living on the Internet of Things or monitoring health and fitness parameters, JMP provides an ideal sandbox for sifting […]
I had the privilege of participating in JMP’s Analytically Speaking series a couple of weeks ago (June 8, 2016). While I was able to answer many questions submitted during the live broadcast, there were additional questions that are answered in this blog post. In addition, look for future blog posts […]
Dr. Karen Copeland will be our featured guest on Analytically Speaking on June 8. She is the owner of Boulder Statistics, a successful consultancy to a wide array of industry sectors around the world — medical device, diagnostics, chemicals, marketing, environmental, consumer and food products, pharmaceuticals, and web analytics, among […]
I’ve posted here in the JMP Blog about the American Marketing Association’s Advanced Research Techniques (ART) Forum and the impressive work that’s presented there every year. As co-chair, I am doubly excited for this year’s conference, which will take place June 26 – 29 in Boston, MA. We had an […]
The post AMA Advanced Research Techniques (ART) early registration ends Thursday! appeared first on JMP Blog.