March 31, 2022

Editor's note: This is the first in a series of articles.

According to the McKinsey Global Survey on the state of AI in 2021, the adoption of AI is continuing to grow at an unprecedented rate. Fifty-six percent of all respondents reported AI adoption – including machine learning (ML) – in at least one function, up from 50% in 2020.

Businesses are deploying ML models for everything from data exploration, prediction and learning business logic to more accurate decision making and policymaking. ML is also solving problems that have stumped traditional analytical approaches, such as those involving unstructured data from graphics, sound, video, computer vision and other high-dimensional, machine-generated sources.

But as organizations build and scale their use of models, governance challenges increase as well. Most notably, they struggle with:

    Managing data quality and exploding data volumes. Most large enterprises host their data in a mix of modern and legacy databases, data warehouses, and ERP and CRM services – both on-premises and in the cloud. Unless organizations have ongoing data management and quality systems supporting them, data scientists may inadvertently use inaccurate data to build models.
    Collaborating across business and IT departments. Model development requires multidisciplinary teams of data scientists, IT infrastructure and line-of-business experts across the organization working together. This can be a difficult task for many enterprises due to poor workflow management, skill gaps between roles, and unclear divisions of roles and responsibilities among stakeholders.
    Building models with existing programming skills. A programming language can take years to master, so if developers can build models using the skills they already have, they can deploy new models faster. Modern machine learning services must empower developers to build in their language of choice and provide a low-code/no-code user interface that nontechnical employees can use to build models.
    Scaling models. Enterprises must have the ability to deploy models anywhere – in applications, in the cloud or on the edge. To ensure the best performance, models need to be deployed as lightweight as possible and have access to a scalable compute engine.
    Efficiently monitoring models. Once deployed, ML models will begin to drift and degrade over time due to external real-world factors. Data scientists must be able to monitor models for degradation, and quickly retrain and redeploy the models into production to ensure companies are returning maximum productivity.
    Using repeatable, traceable components. To minimize the time a given model is out of production during rescoring and training, models must be built using repeatable and traceable components. Without a component library and documented version history, there is no way to understand which components were used to build a model, which means it must be rebuilt from scratch.
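
The model-drift challenge above can be made concrete with a simple statistical check. Below is a minimal, vendor-neutral sketch in Python (not a SAS API) of the population stability index (PSI), a common drift measure; the function name, thresholds and synthetic data are all illustrative:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time (expected) and production-time (actual)
    score distribution. Common rule of thumb: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 major drift (retrain)."""
    # Bin edges come from the baseline distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production scores
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
stable   = rng.normal(0.0, 1.0, 10_000)   # same population: PSI near zero
drifted  = rng.normal(0.8, 1.2, 10_000)   # shifted population: PSI well above 0.25

print(population_stability_index(baseline, stable))
print(population_stability_index(baseline, drifted))
```

A check like this, scheduled against scored production data, is one simple way to trigger the retrain-and-redeploy loop described above.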

To help you address these challenges, SAS develops its services – and integrations in the Microsoft Cloud – with the analytics life cycle in mind. The analytics life cycle enables businesses to move seamlessly from questions to decisions by connecting DataOps, artificial intelligence and ModelOps in a continuous and deeply interrelated process (see Figure 1). Let us take a closer look at each of these elements:

    DataOps. Borrowing from agile software development practices, DataOps provides an agile approach to data access, quality, preparation and governance. It enables greater reliability, adaptability, speed and collaboration in your efforts to operationalize data and analytics workflows.
    Artificial intelligence. Data scientists use a combination of techniques to understand the data and build predictive models. They use statistics, machine learning, deep learning, natural language processing, computer vision, forecasting, optimization, and other techniques to answer real-world questions.
    ModelOps. ModelOps focuses on getting AI models through validation, testing and deployment phases as quickly as possible while ensuring quality results. It also focuses on ongoing monitoring, retraining and governance to ensure peak performance and transparent decisions.

Figure 1

So how can we apply the analytics life cycle to help us solve the challenges we listed above? To answer that, we will have to take a closer look at ModelOps.

Based on longstanding DevOps principles, the SAS ModelOps process allows you to move to validation, testing and deployment as quickly as possible while ensuring quality results. It enables you to manage and scale models to meet demand, and continuously monitor them to spot and fix early signs of degradation.

ModelOps also increases confidence in ML models while reducing risk through an efficient and highly automated governance process. This ensures high-quality analytics results and the realization of expected business value. At every step, ModelOps ensures that deployment-ready models are regularly cycled from the data science team to the IT operations team. And, when needed, model retraining occurs promptly based on feedback received during model monitoring.

    Managing data quality and exploding data volumes. ModelOps ensures that the data used to train models aligns with the operational data that will be used in production. Managing data in a data warehouse, such as Azure Synapse Analytics, helps you ingest data from multiple sources and perform all ELT/ETL steps, so data is ready to explore and model.
    Collaborating across business and IT departments. ModelOps empowers data scientists, IT infrastructure and line-of-business experts to work in harmony thanks to a mutual understanding of their counterparts and ultimate end users.
    Building models with existing programming skills. Make it easier for everyone on your team to build models using their preferred programming language including SAS, Python and R, in addition to visual drag-and-drop tools for a faster building experience.
    Scaling models. Deploy your models anywhere in the Microsoft Cloud including applications, services, containers and edge devices.
    Efficiently monitoring models. Models are developed with a deployment mindset and deployed with a monitoring mindset so data scientists and analysts can monitor and quickly retrain models as they degrade.
    Using repeatable, traceable components. There are no black box models anymore because the business always knows the data it uses to train the model, monitors that model for efficacy, tracks the history of the code used in training the models, and uses automation for deployment and repeatability.
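
The traceability idea in the last bullet can be illustrated with a toy version registry: hash the training code and data alongside the metrics, so any deployed model can be traced back to exactly what produced it. A deliberately simplified sketch – the record fields and names are invented for illustration, not a SAS interface:

```python
import datetime
import hashlib
import json

def register_model_version(registry, name, code_text, training_rows, metrics):
    """Append an immutable version record so a deployed model can always be
    traced to the exact code and data snapshot that produced it."""
    record = {
        "model": name,
        "version": len(registry.get(name, [])) + 1,
        "code_sha256": hashlib.sha256(code_text.encode()).hexdigest(),
        "data_sha256": hashlib.sha256(
            json.dumps(training_rows, sort_keys=True).encode()).hexdigest(),
        "metrics": metrics,
        "registered_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    registry.setdefault(name, []).append(record)
    return record

registry = {}
v1 = register_model_version(registry, "churn_model",
                            "tree = fit(X, y)", [[1, 0], [0, 1]],
                            {"auc": 0.81})
print(v1["version"], v1["code_sha256"][:8])
```

Because identical code and data always hash to the same values, a model never has to be rebuilt from scratch just to discover what went into it.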

Next time, you will learn how SAS and Microsoft together empower each step of your ModelOps process.

To learn more about ModelOps and our partnership with Microsoft, see our whitepaper: ModelOps with SAS Viya on Azure.

How ModelOps addresses your biggest machine learning challenges was published on SAS Users.

February 29, 2016

Consider the last email or digital ad you received from a favorite retailer. It may have included an offer to save 20 percent on your next online purchase, or an invitation to shop in store during an exclusive sale. You don't think too much about this brand’s customer relationship management (CRM) or marketing capabilities, because you don’t have to.

Why? Because the most sophisticated brands employ tools that can tailor an email or a social media post to their buyer’s sweet spot. Powered by data and analytics, these CRM tools do the heavy lifting for marketers to engage their customers in more personalized, authentic ways.

CRM Watchlist 2016

Often recognized as a forerunner in CRM software, SAS Customer Intelligence has added a new accolade to its trophy case as a winner on the 2016 CRM Watchlist. The annual list – curated by leading CRM industry analyst Paul Greenberg – includes the dominant companies to watch in the CRM market. As Greenberg notes in his announcement blog post on ZDNet, the competition was especially stiff this year, with 131 vendors vying for a winner's spot. Greenberg reads and scores each submission against weighted criteria, then follows up with extensive research analyzing the vendor in the markets it addresses.

One important distinction of the Watchlist is the winner's impact within the CRM space. Greenberg cites that “the impact has to be obvious, both in the prior year and in the anticipated next two or three years.” And “that there is no doubt at all that your company is making a major impression on a market and actually changing or strengthening that market by its presence.”

The impact comes not only from the strength of our SAS customer intelligence offerings, but from SAS as a whole company. Greenberg states, “To have an impact, the company has to be pretty much a complete company who has been doing this long enough to have established a rhythm that leads to impact. The company has to be well rounded -- it has financial stability, solid management, excellent products and services, culture, and a strong partner ecosystem to help sustain its efforts.”

The SAS customer intelligence team is honored to earn a spot on the winners list for 2016, demonstrating SAS's commitment to helping brands deliver customer experiences that matter.

tags: CRM, customer intelligence, marketing

SAS named a winner on the 2016 CRM Watchlist was published on Customer Intelligence.

January 19, 2016

Marketers have used segmentation as a technique to target customers for communications, products and services since the introduction of customer relationship management (i.e., CRM) and database marketing. Within the context of segmentation, there are a variety of applications, ranging from consumer demographics and geography to behavior, psychographics, events and cultural backgrounds. Over time, segmentation has proven its value, and brands continue to use this strategy across every stage of the customer journey:

  • Acquisition
  • Upsell/cross-sell
  • Retention
  • Winback

Let's provide a proper definition for this marketing technique. As my SAS peer and friend Randy Collica stated in his influential book on this subject:

"Segmentation is in essence the process by which items or subjects are categorized or classified into groups that share similar characteristics. These techniques can be beneficial in classifying customer groups. Typical marketing activities seek to improve their relationships with prospective and current customers. The better you know about your customer's needs, desires, and their purchasing behaviors, the better you can construct marketing programs designed to fit their needs, desires, and behaviors."

Moving beyond the academic interpretation, in today's integrated marketing ecosystem, SAS Global Customer Intelligence director Wilson Raj provides a modern viewpoint:

"In an era of big data, hyperconnected digital customers and hyper-personalization, segmentation is the cornerstone of customer insight and understanding across the modern digital business. The question is: Is your segmentation approach antiquated or advanced?"

This provides a nice transition to review the types of segmentation methods I observe with clients. It ultimately boils down to two categories:

  1. Business rules for segmentation (i.e., non-quantitative)
  2. Analytical segmentation (i.e., quantitative)

Let's dive deeper into each of these...

Business Rules For Segmentation

This technique centers on a qualitative (non-quantitative) approach, leveraging customer attributes conceptualized through conversations with business stakeholders and customer focus groups. This information represents consumers' experiential behavior, and analysts assign subjective segments for targeted campaign treatments. Although directionally useful, in this day and age of data-driven marketing it is my opinion that this approach will produce suboptimal results.

Analytical Segmentation

Within this category, there are two approaches marketing analysts can select from:

  1. Supervised (i.e., classification)
  2. Unsupervised (i.e., clustering)

Supervised segmentation is typically referred to as a family of pattern analysis approaches. Supporters of this method stress that the actionable deliverable from the analysis classifies homogeneous segments that can be profiled, and informs targeting strategies across the customer lifecycle. The use of the term supervised refers to specific data mining (or data science) techniques, such as decision trees, random forests, gradient boosting or neural networks.  One key difference in supervised approaches is that the analysis requires a dependent (or target) variable, whereas no dependent variable is designated in unsupervised models. The dependent variable is usually a 1-0 (or yes/no) flag-type variable that matches the objective of the segmentation. Examples of this include:

  • Product purchase to identify segments with higher probabilities to convert on what you offer.
  • Upsell/cross-sell to identify segments who are likely to deepen their relationship with your brand.
  • Retention to identify segments most likely to unsubscribe, attrite, or defect.
  • Click behavior to identify segments of anonymous web traffic likely to click on your served display media.
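
As a concrete illustration of the supervised approach, here is a minimal sketch in Python with scikit-learn rather than the SAS tooling discussed in this post; the customer attributes and the synthetic conversion rule are hypothetical. A shallow decision tree is trained against a 1-0 conversion flag, and each leaf becomes an interpretable, profileable segment:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 5_000
# Hypothetical customer attributes (RFM-style)
recency_days   = rng.integers(1, 365, n)
frequency      = rng.integers(1, 50, n)
monetary_value = rng.uniform(10, 5_000, n)
X = np.column_stack([recency_days, frequency, monetary_value])

# 1-0 dependent variable: did the customer convert on the last offer?
# (synthetic rule: recent, frequent buyers convert more often)
p = 1 / (1 + np.exp(0.01 * recency_days - 0.1 * frequency))
y = rng.random(n) < p

# A shallow tree keeps segments few and explainable
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=200).fit(X, y)
segments = tree.apply(X)               # leaf id doubles as the segment label
scores = tree.predict_proba(X)[:, 1]   # conversion probability per customer
print(f"{len(np.unique(segments))} segments")
```

Each leaf's decision path (e.g., "recency under 60 days and frequency above 10") is exactly the kind of profile that informs targeting strategies across the customer lifecycle.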

After applying these techniques, analysts can deliver a visual representation of the segments to help explain the results to nontechnical stakeholders. Here is a video demonstration example of SAS Visual Analytics within the context of supervised segmentation being applied to a brand's digital traffic through the use of analytical decision trees:


Critics of this approach argue that the resulting model is actually a predictive model rather than a segmentation model because of the probability prediction output. The distinction lies in how the model is used. Segmentation classifies customer bases into distinct groups based on multidimensional data, and is used to suggest an actionable roadmap for designing relevant marketing, product and customer service strategies that drive desired business outcomes. As long as we stay focused on this premise, there is nothing to debate.

On the other hand, unsupervised approaches – such as clustering, association/apriori, principal components or factor analysis – are a subset of multivariate segmentation techniques that group consumers based on similar characteristics. The goal is to explore the data to find intrinsic structures. K-means cluster analysis is the most popular technique I see with clients for interdependent segmentation, in which all applicable data attributes are simultaneously considered, and there is no splitting of dependent (or target) and independent (or predictor) variables. Here is a video demonstration example of SAS Visual Statistics for unsupervised segmentation being applied to a brand's digital traffic (including inferred attributes sourced from a digital data management platform) through the use of K-means clustering:


Keep in mind that unsupervised applications are not given training examples (i.e., there isn't a 1-0 or yes/no flag-type variable to bias the formation of the segments). Subsequently, it is fair to interpret the results of a K-means clustering analysis as more data driven – hence more natural and better suited to the underlying structure of the data. This advantage is also its major drawback: it can be difficult to judge the quality of clustering results in a conclusive way without running live campaigns.
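Short of a live campaign, an internal validity measure such as the silhouette coefficient offers one pragmatic, if imperfect, quality check – it also gives a defensible way to choose the number of clusters. A minimal sketch in Python with scikit-learn (not the SAS tooling shown above); the behavioural features and cluster structure are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Hypothetical behavioural attributes (visits, pages/visit, spend);
# note there is no dependent variable anywhere in this analysis
X = np.vstack([
    rng.normal([2, 20, 100],  [1, 5, 30],  size=(300, 3)),  # light browsers
    rng.normal([10, 5, 900],  [2, 2, 150], size=(300, 3)),  # big-ticket buyers
    rng.normal([25, 40, 300], [4, 8, 80],  size=(300, 3)),  # frequent mid-spend
])
X_std = StandardScaler().fit_transform(X)  # K-means is distance-based: scale first

# Choose k by the silhouette coefficient (higher is better-separated)
best_k, best_score = max(
    ((k, silhouette_score(X_std, KMeans(n_clusters=k, n_init=10,
                                        random_state=0).fit_predict(X_std)))
     for k in range(2, 7)),
    key=lambda t: t[1],
)
print(best_k, round(best_score, 3))
```

The silhouette score only measures geometric separation, so it complements – rather than replaces – validating segments against live campaign outcomes.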

Naturally, the question is which technique is better to use in practice – supervised or unsupervised approaches for segmentation? In my opinion, the answer is both (assuming you have access to data that can be used as the dependent or target variable). When you think about it, I can use an unsupervised technique to find natural segments in my marketable universe, and then use a supervised technique (or more than one via champion-challenger applications) to build unique models on how to treat each cluster segment based on goals defined by internal business stakeholders.
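The combined approach can be sketched in a few lines: cluster first to find natural segments, then fit a separate supervised model per segment. Again, this is a generic Python/scikit-learn illustration with synthetic data, not the SAS implementation; the feature and target definitions are invented:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(3_000, 4))  # hypothetical customer features
# 1/0 target, e.g. "responded to last offer" (synthetic linear rule plus noise)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=3_000)) > 0

# Step 1 (unsupervised): find natural segments in the marketable universe
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Step 2 (supervised): a separate response model per segment, so the
# treatment strategy can differ between clusters
models = {s: LogisticRegression().fit(X[segments == s], y[segments == s])
          for s in np.unique(segments)}
print({s: round(m.score(X[segments == s], y[segments == s]), 2)
       for s, m in models.items()})
```

In a champion-challenger setting, each per-segment model slot would hold several competing techniques rather than a single logistic regression.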

Now, let me pose a question I have been receiving more frequently from clients over the past couple of years.

"Our desired segmentation strategies are outpacing our ability to build supporting analytic models. How can we overcome this?"

Does this sound familiar? For many of my clients, this is a painful reality limiting their potential. That's why I'm personally excited about new SAS technology to address this challenge. SAS Factory Miner allows marketers to dream bigger when it comes to analytical segmentation. It fosters an interactive, approachable environment that supports working relationships between strategic visionaries and analysts/data scientists. The benefit for the marketing campaign manager is the ability to expand segmentation strategies from 5 or 10 segments to hundreds or thousands, while remaining actionable within the demands of today's modern marketing ecosystem. The advantage for the supporting analyst team is the ability to be more efficient and to exploit modern analytical methods and processing power, without the need for incremental resources.

Here is a video demonstration example of SAS Factory Miner for supersizing your data-driven segmentation capabilities:


I'll end this posting by revisiting a question we shared in the beginning:

Is your segmentation approach antiquated or advanced?

Dream bigger my friends. The possibilities are inspiring!

If you enjoyed this article, be sure to check out my other work here. Lastly, if you would like to connect on social media, link with me on Twitter or LinkedIn.


tags: Clustering, CRM, Data Driven Marketing, Data Mining, data science, Decision Trees, marketing analytics, personalization, segmentation

Analytical segmentation for data-driven marketing was published on Customer Analytics.

August 19, 2015

I've spent a great deal of time in my consulting career railing against multiple systems of record, data silos and disparate versions of the truth. In the mid-1990s, I realized that Excel could only do so much. To quickly identify and ultimately ameliorate thorny data issues, I had to up […]

The post Big data integration: The case against an "all-in" approach appeared first on The Data Roundtable.

June 9, 2015

You've probably heard many times about the fantastic untapped potential of combining online and offline customer data. But relax, I’m going to cut out the fluff and address this matter in a way that makes the idea plausible and its objectives achievable. The reality is that much has been written about the benefits of online customer intelligence, yet it far outweighs what’s actually happening in most organisations today. In fact, considering how beneficial tapping this data can be, I don’t think enough has been written about which types of online customer behaviour should be tracked and how they could be used to create a better customer experience across all touch points.

So where do you begin? 

It all starts with the objectives you have defined for your digital presence – are they for visitors to register, make a transaction, sign up for a newsletter, or interact with a certain content object, whether internal or third party? Those are generally the key objectives I see organisations setting in order to understand the customer journey leading up to these events, and to track and ‘remember’ when the customer interacts with all the organisation’s available channels to market. A key aspect is to monitor and understand how external campaigns, in-site promotions and search contribute towards those goals, and how this breaks down into behavioural segments/profiles.

Recognising a customer

The next important consideration is: how do we recognise visitors/customers we should know from previous interactions, even if they haven’t identified themselves on this occasion? Identification doesn’t have to depend on a log-in. It could be through an email address we can match with a satisfactory level of confidence, or a tracking code from another digital channel where customers had earlier identified themselves. It’s of much greater value if we can match their behaviour as unknown visitors once they identify themselves, rather than having to start building our knowledge from scratch at the moment of identification.

This leads to the point where we need to explore our options for weaving a visitor’s online behaviours into our offline knowledge about them, and how – at the enterprise level – we can best exploit the capabilities of our broader data-driven marketing ecosystem. We should ask ourselves: Is it valuable to be able to send a follow-up email to those who abandoned a specific form? Can our call centre colleagues enrich their conversations by knowing which customers downloaded particular content? How important is it to us as an organisation to be able to analyse text from in-site searches and combine it with insights derived from complaint data in our CRM system? What are the attributes of the various parts of the journey leading up to completing an objective?

Perhaps you wonder what I mean by the capabilities of the ‘broader data-driven marketing ecosystem’. Well, my point is that it puzzles me that most organisations today can’t integrate, report on or visualise online customer intelligence in the systems that already comprise the backbone of their information infrastructure. They don’t use their existing campaign management systems to decide what’s relevant for the individual and to drive online personalisation, which increases online conversion rates and can at the same time be used across channels. Organisations rarely take ownership of online customer data or use their advanced analytical engines and existing analytical skills to drive next-level insights.

Not taking full advantage of campaign management systems already in place is opportunity missed because the deliverables of integrated online and offline customer intelligence are very real. We should be looking for them every day.

This post first appeared on

Take the Customer Intelligence Assessment

tags: business intelligence, CRM, customer experience, customer intelligence, Data, data management, data visualisation, marketing

All customer intelligence must be woven into CRM programs – online and offline was published on Left of the Date Line.

September 15, 2014

Data-driven marketing is all about how marketers can harness data and analytics to create a more customer-centric, fact-based approach to customer engagement. This, combined with quality execution, leads to better customer experiences and improved customer equity.

However, when looking at customer-brand interactions in silos – such as in the call centre and, separately, online – we don’t gain an accurate view of the customer lifecycle. We need to be able to look at the entire lifecycle and every touch point it involves across all channels, including call centres, in-store and online.

Now, it’s easy to say all of this in theory. However, when it comes to actually implementing and obtaining data on all these touch points – especially when you have thousands, if not millions, of customers – the process becomes somewhat complex. It’s no wonder that some marketers struggle to get a holistic view of how they should be engaging with their customers.

This is where customer intelligence comes into play.

Data plays a huge role in mapping out the customer journey, which is one of the key enablers for understanding individual preferences and motivations to create that feeling of appreciation and recognition. Not only does this build long term value into the conversation, it ensures consistency in messaging across sales and services efforts.

It’s important to ask ourselves, as professional marketers, how we could go about our job differently in an attempt to further enhance our perception and knowledge of customer lifecycles.

It is now widely accepted that customer centricity is a priority in the boardroom. Recognising that customers are the most important asset to any organisation is crucial, and many have come to this conclusion through practice. However, the challenge is determining how to develop this holistic relationship with individual customers. If marketing is unable to gain this understanding, they will struggle to position themselves strategically in the boardroom.

So, how do you avoid the struggle and embed customer centricity in the company’s DNA?

Think of how you approach personal interactions and relationships – when and how did you last speak with that person? What was the context? Was it on the phone, email or in person? Was it a defining moment for them, and if so was it positive or negative? All of these are valid questions that can, and should be considered when creating a mutually valuable interaction with another individual, whether it be personal or with a customer. Customer intelligence helps accomplish this. It allows you to scale that understanding into a holistic communication program across multiple channels, ultimately enabling you to take advantage of those ‘defining moments’ and make tailored offers or decisions that are relevant and timely to that individual.

The relationship you form with a customer needs to reflect, or even match, the interpersonal relationships you have formed in your own life. So things like common interests, past conversations, preferences – all that jazz – have to be accessible and utilised in order to build a long-term relationship with that particular customer. Customer intelligence does exactly that: it provides readily available information that can be used at one’s own discretion to specifically target that persona’s needs.

tags: analytics, CRM, customer experience, customer intelligence, decision support, marketing