If you’ve ever used Amazon or Netflix, you’ve experienced the value of recommendation systems firsthand. These sophisticated systems identify recommendations autonomously for individual users based on past purchases and searches, as well as other behaviors. By supporting an automated cross-selling approach, they empower brands to offer additional products or services [...]
In Part 1 and Part 2 of this blog posting series, we discussed our current viewpoints on marketing attribution and conversion journey analysis in 2017, reviewed the selection criteria for the best measurement approach, and introduced our vision for handling marketing attribution and conversion journey analysis. We would like to conclude this [...]
In Part 1 of this blog posting series, we discussed our current viewpoints on marketing attribution and conversion journey analysis in 2017. We concluded on a cliffhanger, and would like to return to our question of which attribution measurement method we should ultimately focus on. As with all difficult questions [...]
Everyone has a marketing attribution problem, and all attribution measurement methods are wrong. We hear that all the time. Like many urban myths, it is founded in truth. Most organizations believe they can do better on attribution. They all understand that there are gaps, for example, missing touchpoint data, multiple identities across devices, arbitrary decisions on weightings for rules, and uncertainty about what actions arise from the results.
Broadly speaking, the holy grail of media measurement is to analyze the impact and business value of all company-generated marketing interactions across the complex customer journey. In this post, our goal is to take a transparent approach in discussing how SAS is building data-driven marketing technology to help customers progress beyond typical attribution methods to make the business case for customer journey optimization.
Being SAS, we advocate an analytic approach to addressing the operational and process-related obstacles that we commonly hear from customers. We want to treat them as two sides of the same coin. The output of attribution analytics informs marketers about what touch points and sequence of activities drive conversions. This leads marketers to make strategic decisions about future investment levels, as well as more tactical decisions about what activities to run. In an ideal world, the results of subsequent actions are fed back into the attribution model to increase not only its explanatory power, but also its predictive abilities, as shown below:
The diagram above shows the main parts of an attribution project. The actual analysis is just part of the process, with upstream and downstream dependencies. But this doesn’t always happen as it should. Consider a standard attribution report. Let us for the moment ignore what technique was used to generate the result and place ourselves in the shoes of the marketer trying to figure out what to do next.
In the graph above, we see the results of an attribution analysis based on a variety of measurement methods. Before answering the question of which method we should focus on, let's do a quick review of rules-based and algorithmic measurement techniques.
Last-touch and first-touch attribution
This type of attribution allocates 100 percent of the credit to either the last or first touch of the customer journey. This approach has genuine weaknesses, and ignores all other interactions with your brand across a multi-touch journey.
Linear attribution
Linear attribution arbitrarily allocates an equal credit weight to every interaction along the customer journey. Although slightly better than the last- and first-touch approaches, linear attribution will undercredit and overcredit specific interactions.
Time-decay and position-based attribution
Time-decay attribution arbitrarily biases the channel weighting based on the recency of the channel touches across the customer journey. If you support the concept of recency within RFM analysis, there is some merit to this approach. Position-based attribution places more weight on the first and last touches, while providing less value to the interactions in between.
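To make these rules concrete, here is a minimal Python sketch of how each method splits one unit of conversion credit across an ordered journey. The channel names, decay rate and 40/20/40 position split are illustrative assumptions, not SAS code:

```python
def allocate_credit(path, method="linear", decay=0.5):
    """Split one unit of conversion credit across an ordered list of
    touch points, using the rules-based methods described above."""
    n = len(path)
    if n == 0:
        return {}
    if method == "last_touch":
        weights = [0.0] * (n - 1) + [1.0]
    elif method == "first_touch":
        weights = [1.0] + [0.0] * (n - 1)
    elif method == "linear":
        weights = [1.0 / n] * n
    elif method == "time_decay":
        # more recent touches get exponentially more weight
        raw = [decay ** (n - 1 - i) for i in range(n)]
        weights = [w / sum(raw) for w in raw]
    elif method == "position_based":
        # a commonly assumed 40/20/40 split: first and last get 40% each
        if n == 1:
            weights = [1.0]
        elif n == 2:
            weights = [0.5, 0.5]
        else:
            weights = [0.4] + [0.2 / (n - 2)] * (n - 2) + [0.4]
    else:
        raise ValueError(method)
    credit = {}
    for touch, w in zip(path, weights):
        credit[touch] = credit.get(touch, 0.0) + w
    return credit

journey = ["display", "email", "search", "direct"]
print(allocate_credit(journey, "last_touch"))  # all credit to the final touch
print(allocate_credit(journey, "linear"))      # equal credit, 0.25 each
```

Whichever rule you pick, the weights are chosen by convention rather than learned from data, which is exactly the weakness algorithmic attribution addresses.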
In contrast, algorithmic attribution (sometimes referred to as custom models) assigns data-driven conversion credit across all touch points preceding the conversion, and uses math typically associated with predictive analytics or machine learning to identify where credit is due. It analyzes both converting and non-converting consumer paths across all channels. Most importantly, it uses data to uncover the correlations and success factors within marketing efforts. Here is a video summarizing a customer case study example to help demystify what we mean.
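As a hedged illustration of the data-driven idea (not the actual algorithm inside any SAS product), here is a tiny logistic regression on channel-presence indicators, trained on both converting and non-converting paths; channels with larger coefficients earn more conversion credit:

```python
import math

def fit_attribution_model(paths, converted, channels, lr=0.5, epochs=2000):
    """Tiny logistic regression via gradient descent. Each coefficient
    indicates how the presence of a channel shifts conversion odds."""
    X = [[1.0 if c in p else 0.0 for c in channels] for p in paths]
    w = [0.0] * len(channels)
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * len(channels)
        gb = 0.0
        for x, y in zip(X, converted):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y               # gradient of log-loss
            gb += err
            for i, xi in enumerate(x):
                gw[i] += err * xi
        b -= lr * gb / len(X)
        w = [wi - lr * gi / len(X) for wi, gi in zip(w, gw)]
    return dict(zip(channels, w))

# toy journeys: sets of channels touched, with conversion outcomes
paths = [{"email", "search"}, {"display"}, {"search"}, {"email"},
         {"display", "email"}, {"search", "display"}]
converted = [1, 0, 1, 0, 0, 1]
weights = fit_attribution_model(paths, converted, ["email", "search", "display"])
# "search" appears in every converting path and no non-converting one,
# so it earns the largest coefficient
```

The point is not this particular model; it is that the credit weights fall out of the data rather than out of a rulebook.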
Why doesn’t everyone use algorithmic attribution?
Although many marketers recognize the value and importance of algorithmic attribution, adopting it hasn’t been easy. There are several reasons:
- Much-needed modernization. The volume of data that you can collect is massive and may overwhelm outdated data management and analytical platforms, especially when you need to integrate multiple data sources. Organizations have a decision to make regarding modernization.
- Scarcity of expertise. Some believe the talent required to unlock the marketing value in data is scarce. However, there are more than 150 universities offering business analytics and data science programs, and talent is flooding into the industry. The synergy between analysts and strategically minded marketers is the key to unlocking this door.
- Effective use of data. Organizations are rethinking how they collect, analyze and act on important data sources. Are you using all your crucial marketing data? How do you merge website and mobile app visitor data with email and display campaign data? If you accomplish all of this, how do you take prescriptive action between data, analytics and your media delivery end points?
- Getting business buy-in. Algorithmic attribution is often perceived as a black box, which vested interest groups can use as a reason to maintain the status quo.
Returning to our question of which method we should ultimately focus on, the answer is: it depends. An attribution report on its own cannot decide this. And it doesn’t even matter if the attribution report is generated using the most sophisticated algorithmic techniques. There are four things that the report won't tell you:
- The elasticities of a single touch point.
- The interdependencies between different touch points.
- Cause and effect and timing dependencies.
- Differences between different groups of customers.
In Part 2 of this blog posting series, we will dive into specific detail within these areas, as well as introduce our vision within SAS Customer Intelligence 360 on handling algorithmic marketing attribution and conversion journey analysis.
Optimization is a core competency for digital marketers. As customer interactions spread across fragmented touch points and consumers demand seamless and relevant experiences, content-oriented marketers have been forced to re-evaluate their strategies for engagement. But the complexity, pace and volume of modern digital marketing easily overwhelms traditional planning and design approaches that rely on historical conventions, myopic single-channel perspectives and sequential act-and-learn iteration.
SAS Customer Intelligence 360 Engage was released last year to address our client needs for a variety of modern marketing challenges. Part of the software's capabilities revolve around:
Regardless of the method, testing is attractive because it is efficient, measurable and serves as a machete cutting through the noise and assumptions associated with delivering effective experiences. The question is: How does a marketer know what to test?
There are so many possibilities. Let's be honest - if there's one thing marketers are good at, it's being creative. Ideas flow out of brainstorming meetings, bright minds flourish with motivation and campaign concepts are born. As a data and analytics geek, I've worked with ad agencies and client-side marketing teams on the importance of connecting the dots between the world of predictive analytics (and more recently machine learning) and the creative process. Take a moment to reflect on the concept of ideation.
Is it feasible to have too many ideas to practically try them all? How do you prioritize? Wouldn't it be awesome if a statistical model could help?
Let's break this down:
- Predictive analytic or machine learning projects always begin with data - specifically, training data that is fed to algorithms to address an important business question.
- Ultimately, at the end of this exercise, a recommendation can be made prescriptively to a marketer to take action. This is what we refer to as a hypothesis. It is ready to be tested in-market.
- This is the connection point between analytics and testing. Just because a statistical model informs us to do something slightly different, it still needs to be tested before we can celebrate.
Here is the really sweet part. The space of visual analytics has matured dramatically. Creative minds dreaming of the next digital experience cannot be held back by hard-to-understand statistical Greek. Nor can I condone the idea that if a magical analytic easy button is accessible in your marketing cloud, one doesn't need to understand what's going on behind the scenes. That last sentence is my personal opinion, and feel free to dive into my mind here.
Want a simple example? Of course you do. I'm sitting in a meeting with a bunch of creatives. They are debating which pages they should run optimization tests on within their website. Should it be one of the top 10 most visited pages? That's an easy web analytics report to run. However, are those the 10 most important pages with respect to a conversion goal? That's where the analyst can step up and help. Here's a snapshot of a gradient boosting machine learning model I built in a few clicks with SAS Visual Data Mining and Machine Learning, leveraging sas.com website data collected by SAS Customer Intelligence 360 Discover, on what drives conversions.
I know what you're thinking. Cool data viz picture. So what? Take a closer look at this...
The model prioritizes what is important. This is critical, as I have transparently highlighted (with statistical rigor, I might add) that site visitor interest in our SAS Customer Intelligence product page is popping as an important predictor of what drives conversions. Now what?
The creative masterminds and I agree we should test various ideas on how to optimize the performance of this important web page. A/B test? Multivariate test? As my SAS colleague Malcolm Lightbody stated:
"Multivariate testing is the way to go when you want to understand how multiple web page elements interact with each other to influence goal conversion rate. A web page is a complex assortment of content and it is intuitive to expect that the whole is greater than the sum of the parts. So, why is MVT less prominent in the web marketer’s toolkit?
One major reason – cost. In terms of traffic and opportunity cost, there is a combinatoric explosion in unique versions of a page as the number of elements and their associated levels increase. For example, a page with four content spots, each of which have four possible creatives, leads to a total of 256 distinct versions of that page to test.
If you want to be confident in the test results, then you need each combination, or variant, to be shown to a reasonable sample size of visitors. In this case, assume this to be 10,000 visitors per variant, leading to 2.56 million visitors for the entire test. That might take 100 or more days on a reasonably busy site. But by that time, not only will the marketer have lost interest – the test results will likely be irrelevant."
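The combinatorics behind that quote are easy to verify; the daily traffic figure below is an assumption for illustration:

```python
spots, creatives = 4, 4
variants = creatives ** spots           # 4^4 = 256 distinct page versions
visitors_needed = variants * 10_000     # 2,560,000 visitors for the whole test

daily_traffic = 25_000                  # assumed visits/day to this page
days = visitors_needed / daily_traffic  # over 100 days: the test outlives its relevance
```

Adding just one more creative to each spot would push the count to 5^4 = 625 variants, which is why unoptimized multivariate tests rarely get off the ground.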
SAS Customer Intelligence 360 provides a business-user interface which allows the user to:
- Set up a multivariate test.
- Define exclusion and inclusion rules for specific variants.
- Optimize the design.
- Place it into production.
- Examine the results and take action.
Continuing with my story, we decide to set up a test on the sas.com customer intelligence product page with four content spots, and three creatives per spot. This results in 81 total variants and an estimated sample size of 1,073,000 visits to get a significant read at a 90 percent confidence level.
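SAS Customer Intelligence 360 computes the required sample size for you. As a rough illustration of where such numbers come from, here is a textbook normal-approximation estimate; the baseline rate, margin of error and z-value are assumptions for the sketch, not the product's internal calculation:

```python
import math

def sample_size_per_variant(p=0.05, margin=0.005, z=1.645):
    """Visitors needed per variant to estimate a conversion rate p
    within +/- margin at roughly 90 percent confidence (normal
    approximation). All parameter values here are illustrative."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

per_variant = sample_size_per_variant()  # visitors per variant
total = 81 * per_variant                 # scales with the 81 variants above
```

The exact totals depend on the baseline rate and the precision you demand, but the structure is the same: a per-variant requirement multiplied by a variant count that grows combinatorially.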
Notice that Optimize button in the image? Let's talk about the amazing special sauce beneath it. Methodical experimentation has many applications for efficient and effective information gathering. To reveal or model relationships between an input, or factor, and an output, or response, the best approach is to deliberately change the former and see whether the latter changes, too. Actively manipulating factors according to a pre-specified design is the best way to gain useful, new understanding.
However, whenever there is more than one factor – that is, in almost all real-world situations – a design that changes just one factor at a time is inefficient. To properly uncover how factors jointly affect the response, marketers have numerous flavors of multivariate test designs to consider. Factorial experimental designs are more common, such as full factorial, fractional factorial, and mixed-level factorial. The challenge here is each method has strict requirements.
This leads to designs that, for example, are not orthogonal or that have irregular design spaces. Over a number of years, SAS has developed a solution to this problem. It is contained within the OPTEX procedure, which supports designs for which:
- Not all combinations of the factor levels are feasible.
- The region of experimentation is irregularly shaped.
- Resource limitations restrict the number of experiments that can be performed.
- There is a nonstandard linear or a nonlinear model.
The OPTEX procedure can generate an efficient experimental design for any of these situations, and website (or mobile app) multivariate testing is an ideal candidate because it involves:
- Constraints on the number of variants that are practical to test.
- Constraints on required or forbidden combinations of content.
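PROC OPTEX itself uses far more sophisticated exchange algorithms, but the flavor can be sketched with a greedy D-optimality search over a constrained candidate set. Everything below (the spot counts, the forbidden combination, the ridge term) is a toy illustration, not OPTEX internals:

```python
def det(m):
    # determinant via Gaussian elimination with partial pivoting
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[piv][i]) < 1e-12:
            return 0.0
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def xtx(rows, ridge=1e-6):
    # X'X with a tiny ridge so the empty/rank-deficient case is defined
    k = len(rows[0])
    mat = [[ridge * (i == j) for j in range(k)] for i in range(k)]
    for x in rows:
        for i in range(k):
            for j in range(k):
                mat[i][j] += x[i] * x[j]
    return mat

def greedy_d_optimal(candidates, n_runs):
    """Greedily pick runs that maximize det(X'X): a toy stand-in for
    the exchange algorithms behind optimal-design tools like OPTEX."""
    chosen = []
    for _ in range(n_runs):
        best = max(candidates, key=lambda c: det(xtx(chosen + [c])))
        chosen.append(best)
    return chosen

# 3 spots x 2 creatives, main-effects model: [intercept, spot1, spot2, spot3]
combos = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
feasible = [v for v in combos if not (v[0] == 1 and v[1] == 1)]  # forbidden pair
rows = [[1.0, *map(float, v)] for v in feasible]
design = greedy_d_optimal(rows, n_runs=5)  # 5 runs instead of all 8 combos
```

The key property: only feasible combinations ever enter the design, and the selected runs still span the model space, so main effects remain estimable.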
The OPTEX procedure is highly flexible and has many input parameters and options. This means that it can cover different digital marketing scenarios, and its use can be tuned as circumstances demand. Customer Intelligence 360 provides the analytic heavy lifting behind the scenes, and the marketer only needs to make choices for business-relevant parameters. Watch what happens when I press that Optimize button:
Suddenly that scary sample size of 1,073,000 has been reduced to 142,502 visits to perform my test. The immediate benefit is that the impractical multivariate test has become feasible. However, if only a subset of the combinations is being shown, how can the marketer understand what would happen for an untested variant? Simple! SAS Customer Intelligence 360 fits a model using the results of the tested variants and uses it to predict the outcomes for untested combinations. In this way, the marketer can simulate the entire multivariate test and draw reliable conclusions in the process.
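The prediction idea can be sketched with a main-effects linear model and made-up conversion rates; nothing below reflects the actual model SAS Customer Intelligence 360 fits:

```python
def solve(a, b):
    # Gaussian elimination with partial pivoting for a small linear system
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n + 1):
                m[r][c] -= f * m[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][c] * x[c] for c in range(i + 1, n))) / m[i][i]
    return x

def features(variant):
    # main-effects encoding: intercept plus one indicator per spot (B vs. A)
    return [1.0] + [float(level) for level in variant]

def fit_and_predict(tested, rates, untested):
    """Least-squares main-effects fit over the tested variants, then
    predictions for combinations that were never shown."""
    X = [features(v) for v in tested]
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * y for r, y in zip(X, rates)) for i in range(k)]
    beta = solve(xtx, xty)
    return [sum(b * f for b, f in zip(beta, features(v))) for v in untested]

# two spots with creatives A(0)/B(1); variant (1, 1) was never shown
tested = [(0, 0), (0, 1), (1, 0)]
rates = [0.040, 0.052, 0.046]              # invented observed conversion rates
predicted = fit_and_predict(tested, rates, [(1, 1)])
# additive model: 0.040 + 0.006 + 0.012 = 0.058 for the untested variant
```

A real implementation would also carry confidence intervals, and interaction terms where the design supports them; the sketch shows only the core inference step.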
So you're telling me we can dream big in the creative process and unleash our superpowers? That's right my friends, you can even preview as many variants of the test's recipe as you desire.
The majority of today’s technologies for digital personalization have generally failed to effectively use predictive analytics to offer customers a contextualized digital experience. Many of today’s offerings are based on simple rules-based recommendations, segmentation and targeting that are usually limited to a single customer touch point. Despite some use of predictive techniques, digital experience delivery platforms are behind in incorporating machine learning to contextualize digital customer experiences.
At the end of the day, connecting the dots between data science and testing, no matter which flavor you select, is a method I advocate. The challenge I pose to every marketing analyst reading this:
Can you tell a good enough data story to inspire the creative minded?
How can you tell if your marketing is working? How can you determine the cost and return of your campaigns? How can you decide what to do next? An effective way to answer these questions is to monitor a set of key performance indicators, or KPIs.
KPIs are the basic statistics that give you a clear idea of how your website (or app) is performing. KPIs vary by predetermined business objectives, and measure progress towards those specific objectives. In the famous words of Avinash Kaushik, KPIs should be:
- Uncomplex.
- Relevant.
- Timely.
- Instantly useful.
An example that fits this description, with applicability to profit, nonprofit, and e-commerce business models, would be the almighty conversion rate. In digital analytics this metric is interpreted as the proportion of visitors to a website or app who take action to go beyond a casual content view or site visit, as a result of subtle or direct requests from marketers, advertisers, and content creators.
Although successful conversions can be defined differently based on your use case, it is easy to see why this KPI is uncomplex, relevant, timely, and useful. We can even splinter this metric into two types:
Micro conversion – An indicator that a visitor is moving towards a macro conversion (like progressing through a multi-step sales funnel to eventually make you some money).
Macro conversion – The completion of the primary objective itself (like a purchase, registration or lead submission).
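As a quick sketch, both flavors of conversion rate fall out of a visit log in the same way; the funnel steps and visit data below are assumptions for illustration:

```python
# toy visit log: the funnel steps each visitor completed
visits = [
    {"product_view"},
    {"product_view", "add_to_cart"},
    {"product_view", "add_to_cart", "purchase"},
    {"product_view"},
    {"product_view", "add_to_cart"},
]

def rate(step):
    # share of visits that completed a given funnel step
    return sum(step in v for v in visits) / len(visits)

micro_rate = rate("add_to_cart")  # progress toward the money: 3 of 5 visits
macro_rate = rate("purchase")     # the conversion that pays: 1 of 5 visits
```

Tracking the micro rate alongside the macro rate tells you where the funnel leaks, which is exactly what the "how can we improve it" question needs.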
Regardless of the conversion type, I have always found that reporting on this KPI is a popular request for analysts from middle management and executives. However, it isn't difficult to anticipate what is coming next from the most important person in your business world:
"How can we improve our conversion rate going forward?"
You can report, slice, dice, and segment away in your web analytics platform, but needles in haystacks are not easily discovered unless we adapt. I know change can be difficult, but allow me to make the case for machine learning and hyperparameters within the discipline of digital analytics. A trendy subject for some, a scary subject for others, but my intent is to lend a practitioner's viewpoint. Analytical decision trees are an excellent way to begin because of their frequent usage within marketing applications, primarily due to their approachability, and ease of interpretation.
Whether your use case is for supervised segmentation or propensity scoring, this form of predictive analytics can be labeled as machine learning due to the algorithm's approach to analyzing data. Have you ever researched how trees actually learn before arriving at a final result? It's beautiful math. However, it doesn't end there. We are living in a moment where more sophisticated machine learning algorithms have emerged that can comparatively increase predictive accuracy, precision and, most importantly, marketing-centric KPIs, while being just as easy to construct.
Using the same data inputs across different analysis types like Forests, Gradient Boosting, and Neural Networks, analysts can compare model fit statistics to determine which approach will have the most meaningful impact on your organization's objectives. Terms like cumulative lift or misclassification may not mean much to you, but they are the keys to selecting the math that best answers how conversion rate can be improved by transparently disclosing accurate views of variable importance.
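For readers new to those terms, here is how misclassification and cumulative lift fall out of a set of model scores; the scores and labels are toy data, and any scoring model would plug in the same way:

```python
def misclassification(scores, labels, cutoff=0.5):
    # share of visitors whose predicted class disagrees with the outcome
    preds = [1 if s >= cutoff else 0 for s in scores]
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

def cumulative_lift(scores, labels, depth=0.2):
    """Conversion rate in the top `depth` fraction of visitors ranked
    by model score, relative to the overall conversion rate."""
    ranked = [y for _, y in sorted(zip(scores, labels), key=lambda t: -t[0])]
    n_top = max(1, int(len(ranked) * depth))
    top_rate = sum(ranked[:n_top]) / n_top
    base_rate = sum(labels) / len(labels)
    return top_rate / base_rate

scores = [0.9, 0.8, 0.7, 0.4, 0.35, 0.3, 0.2, 0.15, 0.1, 0.05]
labels = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
lift_at_20 = cumulative_lift(scores, labels)  # top 20% are all converters here
```

A lift of 3 at 20 percent depth means the model's top fifth of visitors converts at three times the site average, which translates directly into smarter targeting.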
So is that it? Can I just drag and drop my way through the world of visual analytics to optimize against KPIs? Well, there is a tradeoff to discuss here. For some organizations, simply using a machine learning algorithm enabled by an easy-to-use software interface will improve conversion rate tactics for a mobile app screen experience, compared to not using an analytic method at all. But no algorithm can be expected to perform well as a one-size-fits-all approach for every type of business problem, so it is reasonable to ask whether opportunity is being left on the table, and whether analysts should be motivated to refine the math to the use case. Learning how an algorithm arrives at a final result should not be scary just because it can get a little technical. It's actually quite the opposite, and I love learning how machine learning can be elegant. This is why I want to talk about hyperparameters!
Anyone who has ever built a predictive model understands the iterative nature of adjusting various property settings of an algorithm in an effort to optimize the analysis results. As we endlessly try to improve predictive accuracy, the process becomes painfully repetitive and manual. Given the time an analyst can spend on this task alone (hours, days or longer), the approach defies our ability as humans to practically arrive at an optimized final solution. Sometimes referred to as autotuning, hyperparameter tuning addresses this issue by exploring different combinations of algorithm options, training a model for each combination in an effort to find the best model. Imagine running thousands of iterations of a website conversion propensity model across different property threshold ranges in a single execution. As a result, these models can improve significantly across important fit statistics that relate directly to your KPIs.
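A minimal sketch of the autotuning idea, using random search over a toy objective; the hyperparameter names and the scoring function below are assumptions for illustration, not the SAS implementation:

```python
import random

def autotune(train_and_score, grid, n_trials=200, seed=1):
    """Random-search sketch of autotuning: sample hyperparameter
    combinations, score each candidate model, keep the best one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in grid.items()}
        score = train_and_score(params)   # e.g. validation accuracy or lift
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# stand-in objective: pretend validation accuracy peaks at depth=6, lr=0.1
def toy_score(p):
    return 1.0 - abs(p["depth"] - 6) * 0.05 - abs(p["learning_rate"] - 0.1)

grid = {"depth": [2, 4, 6, 8, 10], "learning_rate": [0.01, 0.05, 0.1, 0.5]}
params, score = autotune(toy_score, grid, n_trials=200)
```

In practice `train_and_score` would fit a real model per combination (the expensive part), and smarter strategies than pure random search exist; the loop structure is the same.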
At the end of running an analysis with hyperparameters, the best recipe will be identified. Just like any other modeling project, the ability to action off of the insight is no different, from traditional model score code to next-best-action recommendations infused into your mobile app's personalization technology. That's genuinely exciting, courtesy of recent innovations in distributed analytical engines with feature-rich building blocks for machine-learning activities.
If the subject of hyperparameters is new to you, I encourage you to watch this short video.
This will be one of the main themes of my presentations at Analytics Experience 2017 in Washington, DC. Using digital data collected by SAS Customer Intelligence 360 and analyzing it with SAS Visual Data Mining and Machine Learning on SAS Viya, I want to share the excitement I am feeling about digital intelligence and predictive personalization. I hope you'll consider joining the SAS family for an awesome agenda between September 18th-20th in our nation's capital.
Think big, start small, take the analytics-driven approach
You want to be a customer-first organisation, but is it worth the effort? Forrester reports that customer experience leaders enjoy a 17 percent compound annual growth rate (CAGR), as opposed to laggards at 3 percent.
Organisations of all shapes and sizes are embarking on digital transformation – a term that’s become synonymous with putting a slick digital front-end on traditional processes. In reality, true digital transformation is about adapting business culture and processes to work with new technology. This isn’t simple and presents many challenges that must be overcome in order to put the customer first, including:
- Functional silos: Beneath the glossy front-end of the customer experience machine sit functional and data silos created because many companies organise themselves around products or channels, not the customer.
- Legacy systems: Systems of record and channel-specific technologies, often with their own rules and logic, and little ability to talk to each other, fragment customer journeys.
- Cultural change: The various departments that contribute to creating a customer-first organisation have different objectives and key performance indicators. This undermines the collaboration and cultural change necessary to put the customer at the core.
Unfortunately, customers don’t care that your organisation is built on complex legacy structures in the back-end. When they interact with you they expect accurate and timely responses and decisions, regardless of the channel through which they engage.
What time is real time?
These days, organisations need to be able to respond to changing customer expectations and provide a seamless joined-up customer experience at every point of interaction, often in real time. The issue is that "real time" means different things to different organisations.
Many believe that a good real-time customer experience constitutes the ability to react immediately to what the customer is doing right now in a specific channel. Displaying a banner ad based on where a customer clicks on your website, or triggering an encouraging email when someone abandons their cart are nice tactics, but fall short of delivering a customer-first experience.
Excellent real-time customer experiences can only be delivered when you truly understand your customers: and their wants and needs; their price sensitivity and preferences; their propensity to buy; their lifetime value; and their service expectations.
Being a true customer-first organisation requires the capability to collect and analyse the data that customers make available to you, then use it (responsibly) to deliver value back to them. Today, these sources are expanding to include structured and unstructured data from social and multimedia feeds, streaming data from beacons and devices, voice calls, transactions and browsing histories.
Better faster, real-time decisioning
Once you’ve analysed the data to uncover valuable insights about your customers, you need a decisioning framework that allows analytical insights to be applied to both historical and real-time contextual data. It must encompass your organisational goals, all the potential offers and actions that a customer could be presented with, and eligibility, budgetary and other constraints, in order to infuse deep customer understanding into the decision-making process for each individual customer. Only then will you be empowered to make highly accurate decisions across your business about the right next action, next offer, next content or next recommendation, and to deliver it in real time. Not having these capabilities could signal the loss of competitive ground.
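A single arbitration step of such a decisioning framework can be sketched as follows; the offers, eligibility rules, suppression logic and propensities below are invented for illustration, not a SAS decision-hub API:

```python
def next_best_action(customer, offers, budget):
    """Pick the highest-expected-value eligible offer for one customer,
    respecting a shared budget: a toy sketch of decision-hub arbitration."""
    eligible = [
        o for o in offers
        if o["cost"] <= budget
        and customer["segment"] in o["segments"]
        and o["id"] not in customer["recent_offers"]  # suppression rule
    ]
    if not eligible:
        return None
    # expected value = this customer's propensity x the offer's margin
    return max(eligible, key=lambda o: customer["propensity"][o["id"]] * o["margin"])

offers = [
    {"id": "upgrade", "cost": 5, "margin": 120, "segments": {"gold", "silver"}},
    {"id": "retention", "cost": 2, "margin": 80, "segments": {"gold"}},
    {"id": "cross_sell", "cost": 1, "margin": 40, "segments": {"silver"}},
]
customer = {"segment": "gold", "recent_offers": {"upgrade"},
            "propensity": {"upgrade": 0.3, "retention": 0.4, "cross_sell": 0.1}}
action = next_best_action(customer, offers, budget=10)  # -> the retention offer
```

The analytics supply the propensities; the decision hub layers goals, eligibility and constraints on top so that every channel asks the same engine the same question.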
Leading retailers, financial services, telco and media organisations have seen significant improvements in customer experience, profitability and reduced costs by using a customer decision hub.
Where do you start?
Choose a use case; a business challenge you would like to overcome. Once you have achieved your intended goals, replicate the model across other use cases or business problems. This is best illustrated with some of the work we have implemented with a leading European broadcaster and for a well-known insurer.
The broadcaster wanted to use analytical-driven decisions to increase conversion rates. Within weeks its customer decision hub was up and running and over a 6-week period the organisation saw a significant increase in the uptake of online upsell recommendations.
A global insurer used a customer decision hub approach to automate complex claims decisions that were being handled in the call centre. They were able to cut average settlement decisions from 28 days to real time, and saw a 26 percent improvement in decision-making accuracy while also providing a superior real-time experience for customers.
We can help you brainstorm your first project and get started with less risk.
Find out how we can help you to become a customer-first enterprise - read Customer intelligence for the always-on economy.
 Customer Experience Drives Revenue Growth, Forrester Research, Inc., June 2016
Multivariate testing (MVT) is another “decision helper” in SAS® Customer Intelligence 360 that is geared toward empowering digital marketers to be smarter in their daily jobs. MVT is the way to go when you want to understand how multiple web page elements interact with each other to influence goal conversion rate. A web page is a complex assortment of content and it is intuitive to expect that the whole is greater than the sum of the parts. So, why is MVT less prominent in the web marketer’s toolkit?
One major reason – cost. In terms of traffic and opportunity cost, there is a combinatoric explosion in unique versions of a page as the number of elements and their associated levels increases. For example, a page with four content spots, each of which have four possible creatives, leads to a total of 256 distinct versions of that page to test.
If you want to be confident in the test results, then you need each combination, or variant, to be shown to a reasonable sample size of visitors. In this case, assume this to be 10,000 visitors per variant, leading to 2.56 million visitors for the entire test. That might take 100 or more days on a reasonably busy site. But by that time, not only will the web marketer have lost interest – the test results will likely be irrelevant.
A/B testing: The current standard
Today, for expedience, web marketers often choose simpler, sequential A/B tests. Because an A/B test can only tell you about the impact of one element and its variations, it is a matter of intuition when deciding which elements to start with when running sequential tests.
Running a good A/B test requires consideration of any confounding factors that could bias the results. For example, someone changing another page element during a set of sequential A/B tests can invalidate the results. Changing the underlying conditions can also reduce reliability of one or more of the tests.
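Deciding whether an A/B difference is real rather than noise typically comes down to a two-proportion z-test; the sketch below uses assumed traffic and conversion counts for illustration:

```python
import math

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B conversion comparison.
    Returns the z statistic; |z| > 1.645 is significant at roughly
    the 90 percent level (two-sided, normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# assumed example: 5.0% vs 5.7% conversion on 10,000 visitors per arm
z = ab_z_test(conv_a=500, n_a=10_000, conv_b=570, n_b=10_000)
```

Note what this test cannot do: it evaluates one element in isolation, so any confounding change made mid-test, or any interaction with other page elements, is invisible to it. That limitation is exactly what the MVT-first approach below addresses.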
The SAS Customer Intelligence 360 approach
The approach SAS has developed is the opposite of this. First, you run an MVT across a set of spots on a page. Each spot has two or more candidate creatives available. Then you look to identify a small number of variants with good performance. These are then used for a subsequent A/B test to determine the true winner. The advantage is that underlying factors are better accounted for and, most importantly, interaction effects are measured.
But, of course, the combinatoric challenge is still there. This is not a new problem – experimental design has a history going back more than 100 years – and various methods were developed to overcome it. Among these, Taguchi designs are the best known. There are others as well, and most of them have strict requirements on the type of design.
SAS Customer Intelligence 360 provides a business-user interface which allows the marketing user to:
- Set up a multivariate test.
- Define exclusion and inclusion rules for specific variants.
- Optimize the design.
- Place it into production.
- Examine the results and take action.
The analytic heavy lifting is done behind the scenes, and the marketer only needs to make choices for business relevant parameters.
MVT made easy
The immediate benefit is that multivariate tests are now feasible. The chart below illustrates the reduction in sample size for a test on a page with four spots. The red line shows the number of variants required for a conventional test, and how this grows steeply with the number of content items per spot.
In contrast, the blue line shows the number of variants required for the optimized version of the test. Even with three content items per spot, there is a 50 percent reduction in the number of unique variants, and this percentage grows larger as the number of items increases. We can translate these numbers into test duration by making reasonable assumptions about the required sample size per variant (10,000 visitors) and about the traffic volume for that page (50,000 visitors per day). The result is shown below.
A test that would have taken 50 days will only take 18 days using SAS’ optimized multivariate testing feature. More impressively, a test that would take 120 days to complete can be completed in 25 days.
What about those missing variants?
If only a subset of the combinations is being shown, how can the marketer understand what would happen for an untested variant? Simple. SAS Customer Intelligence 360 fits a model using the results for the tested variants and uses it to predict the outcomes for untested combinations. You can simulate the entire multivariate test and draw reliable conclusions in the process.
The Top Variant Performance report in the upper half of the results summary above indicates the lift for the best-performing variants relative to a champion variant (usually the business-as-usual version of the page). The lower half of the results summary (Variant Metrics) represents each variant as a point located according to a measured or predicted conversion rate. Each point also has a confidence interval associated with the measurement. In the above example, it's easy to see that there is no clear winner for this test. In fact, the top five variants cannot reliably be separated. In this case, the marketer can use the results from this multivariate test to automatically set up an A/B test. Unlike the A/B-first approach, narrowing down the field using an optimized multivariate test homes in on the best candidates while accounting for interaction effects.
Making MVT your go-to option
Until now, multivariate testing has been limited to small experiments for all but the busiest websites. SAS Customer Intelligence 360 brings the power of multivariate testing to more users, without requiring them to have intimate knowledge of design-of-experiments theory. While multivariate testing will always require larger sample sizes than simple A/B testing, the capabilities presented here show how many more practical use cases can be addressed.
In the world of digital marketing, one of the more controversial moves I’ve seen recently was from U.K. car insurer Admiral. The company recently announced that it would begin offering car insurance discounts to less risky customers based on voluntarily provided social media data. The insurer would analyze Facebook likes […]
Digital footprints in the sand … a source of rich behavioural data was published on SAS Voices.
As data-driven marketers, you are now challenged by senior leaders to have a laser focus on the customer journey and optimize the path of consumer interactions with your brand. Within that journey there are three trends (or challenges) to focus on:
- Deeply understanding your target audience to anticipate their needs and desires.
- Meeting customers’ expectations (although aiming higher can help differentiate your brand from the pack).
- Addressing their pain points to increase your brand's relevance.
No matter who you chat with, or what marketing conference you recently attended, it's safe to say that the intersection of digital marketing, analytics, optimization and personalization is a popular subject of conversation. Let's review the popular buzzwords at the moment:
- Predictive personalization
- Data science
- Machine learning
- Self-learning algorithms
- Segment of one
- Contextual awareness
- Real time
- Artificial intelligence
There’s a lot of confusion created by these terms and what they mean. For instance, there is hubbub around so-called ‘easy button’ solutions that marketing cloud companies are selling for customer analytics and data-driven personalization. In reaction to this, I set off on a personal quest to research questions like:
- Does every technology perform analytics and personalization equally?
- What are the benefits and drawbacks to analytic automation?
- What are the downstream impacts to the predictive recommendations marketers depend on for personalized interactions across channels?
- Should I be comfortable trusting a black-box algorithm and how it impacts the facilitated experiences my brand delivers to customers and prospects?
- Do you need a data scientist to be successful in modern marketing?
- Is high quality analytic talent extremely difficult to find?
- How valid is the complaint of a data science talent shortage?
- How do I balance the needs of my marketing organization with recent analytic technology trends?
Have I piqued your interest? If yes, check out this on-demand webcast.
It's time to dig into these questions. During the video, I share the results of my investigation, along with my reactions to what I found. In addition, you will be introduced to new SAS Customer Intelligence 360 technology addressing these challenges. I believe in a future where approachable technology and analytically curious people come together to deliver intelligent customer interactions. Analytically curious people can be data scientists, citizen data scientists, statisticians, marketing analysts, digital marketers, creative super forces and more. Building teams of these individuals armed with modern customer analytics software tools will help you differentiate and compete in today's marketing ecosystem.