
January 7, 2022
 

Welcome to the sixth installment in my series Getting Started with Python Integration to SAS Viya. In previous posts, I discussed how to connect to the CAS server, how to execute CAS actions, and how to work with the results. Now it's time to generate simple descriptive statistics of a CAS table.

Let's begin by confirming the cars table is loaded into memory. With a connection to CAS established, execute the tableInfo action to view the available in-memory tables. (If necessary, you can run code in SAS Studio to load the sashelp.cars table into memory first.)

conn.tableinfo(caslib="casuser")

The results show the cars table is loaded into memory and available for processing. Next, reference the cars table in the variable tbl. Then use the print function to show the value of the variable.

tbl = conn.CASTable('cars', caslib='casuser')
print(tbl)
CASTable('cars', caslib='casuser')

The results show that the tbl variable references the cars table in the CAS server.

Preview the CAS Table

First things first. Remember, the SWAT package blends the world of Pandas and CAS into one. So you can begin with the traditional head method to preview the CAS table.

tbl.head()

The SWAT head method returns five rows from the CAS server to the client as expected.

The Describe Method

Next, let's retrieve descriptive statistics of all numeric columns by using the familiar describe method on the CAS table.

tbl.describe()

The SWAT describe method returns the same descriptive statistics as the Pandas describe method. The only difference is that the SWAT version uses the CAS API to convert the describe method into CAS actions behind the scenes to process the data on the distributed CAS server. CAS processes the data and returns summarized results back to the client as a SASDataFrame, which is a subclass of the Pandas DataFrame. You can now work with the results as you would a Pandas DataFrame.
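Because a SASDataFrame is a subclass of the pandas DataFrame, the results behave exactly like pandas objects once they reach the client. A quick sketch with plain pandas (the values below are made up stand-ins for two of the cars columns, not the real data) shows the kind of post-processing you can do:

```python
import pandas as pd

# Stand-in for the table behind tbl.describe(); the numbers are
# invented, but the access patterns on the result are identical.
df = pd.DataFrame({'MPG_City': [17, 24, 22, 29],
                   'MPG_Highway': [23, 31, 29, 37]})

stats = df.describe()                  # same tabular shape a SASDataFrame comes back with
print(stats.loc['mean', 'MPG_City'])   # row/column indexing works as usual → 23.0
```

Anything you would do with a pandas DataFrame, such as slicing with .loc or exporting with .to_csv, works the same way on the returned SASDataFrame.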

Summary CAS Action

Instead of using the familiar describe method, let's use a CAS action to do something similar. Here I'll use the summary CAS action.

tbl.summary()


The results of the summary action return a CASResults object (Python dictionary) to the client. The CASResults object contains a single key named Summary with a SASDataFrame as the value. The SASDataFrame shows a variety of descriptive statistics.  While the summary action does not return exactly the same statistics as the describe method, it can provide additional insights into your data.
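Since CASResults behaves like an ordinary Python dictionary, you retrieve the result table by its key. The sketch below emulates that shape with a plain dict holding a pandas DataFrame (the key name Summary matches the action's output; the statistics shown are made up):

```python
import pandas as pd

# CASResults is a dictionary of result tables keyed by name.
# Emulating the shape of tbl.summary()'s output with a plain dict:
results = {'Summary': pd.DataFrame({'Column': ['MPG_City', 'MPG_Highway'],
                                    'Mean': [20.1, 26.8]})}

summary_df = results['Summary']        # pull the result table out by its key
print(list(results.keys()))            # → ['Summary']
print(summary_df['Column'].tolist())   # → ['MPG_City', 'MPG_Highway']
```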

What if we don't want all the statistics for all of the data?

Selecting Columns and Summary Statistics with the Summary Action

Let's add additional parameters to the summary action. I'll add the inputs parameter to specify the columns to analyze in the CAS server.

tbl.summary(inputs = ['MPG_City','MPG_Highway'])

The results show only the MPG_City and MPG_Highway columns were analyzed.

Next, I'll use the subSet parameter to specify the summary statistics to produce. Here I'll obtain the MEAN, MIN and MAX.

tbl.summary(inputs = ['MPG_City','MPG_Highway'],
                       subSet = ['mean','min','max'])

The results processed only the MPG_City and MPG_Highway columns, and returned only the specified summary statistics to the client.

Creating a Calculated Column

Lastly, let's create a calculated column within the summary action. There are a variety of ways to do this; I like to add it as a parameter to the CASTable object. You can do that by specifying the tbl object, then the computedVarsProgram parameter. Within computedVarsProgram you can use SAS assignment statements with most SAS functions. Here we will create a new column named MPG_Avg that takes the mean of MPG_City and MPG_Highway. Lastly, add the new column to the inputs parameter.

tbl.computedVarsProgram = 'MPG_Avg = mean(MPG_City, MPG_Highway);'
tbl.summary(inputs = ['MPG_City','MPG_Highway', 'MPG_Avg'],
                       subSet = ['mean','min','max'])

In the results I see the calculated column and requested summary statistics.
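On the SAS side, mean(MPG_City, MPG_Highway) is a row-wise mean of its arguments. The equivalent client-side computation in plain pandas (sample values invented) looks like this:

```python
import pandas as pd

df = pd.DataFrame({'MPG_City': [17, 24], 'MPG_Highway': [23, 31]})

# SAS mean(MPG_City, MPG_Highway) averages the two columns within each row:
df['MPG_Avg'] = df[['MPG_City', 'MPG_Highway']].mean(axis=1)
print(df['MPG_Avg'].tolist())   # → [20.0, 27.5]
```

The difference, of course, is that computedVarsProgram evaluates the expression on the distributed CAS server rather than on the client.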

Summary

The SWAT package blends the world of Pandas and CAS. You can use many of the familiar Pandas methods within the SWAT package, or the flexible, highly optimized CAS actions like summary to easily obtain summary statistics of your data in the massively parallel processing CAS engine.

Additional and related resources

Getting Started with Python Integration to SAS® Viya® - Index
SWAT API Reference
CAS Action Documentation
SAS® Cloud Analytic Services: Fundamentals
SAS Scripting Wrapper for Analytics Transfer (SWAT)
CAS Action! - a series on fundamentals
Execute the following code in SAS Studio to load the sashelp.cars table into memory

Getting Started with Python Integration to SAS® Viya® - Part 6 - Descriptive Statistics was published on SAS Users.

January 6, 2022
 

This article was co-written by Nick Johnson, Product Marketing Manager, Microsoft Partnership. Check out his blog profile for more information.

As employees continue to adapt to the reality of remote work — collaborating with teams near and far — it is vital that organizations have the right collaboration and productivity tools at their fingertips to support teams remotely. With over 250 million monthly active users, Microsoft Teams has become the collaboration tool of choice for thousands of organizations, changing the way meetings are conducted and how teams access the documents and data that support their business operations.

SAS and Microsoft are partnering to inspire greater trust and confidence in every decision, by driving innovation and proven AI in the cloud. With a combined product roadmap, SAS and Microsoft are working tirelessly to improve offerings and connectivity between SAS Viya and Microsoft. That’s why we’re especially excited to announce SAS Conversation Designer is now generally available in Microsoft Teams.

Conversational AI enables humans to interact with machines using natural language – text or voice – and instantly get a human-like, intelligent response. And ChatOps – a way of collaborating that connects people with process, tools and automation into a transparent workflow – can enable your teams to work together on complex analytics processes without writing a single line of code. Conversational AI is creating new opportunities for finding insights in your data by simply asking a question in natural language to a chatbot.

Now, you can ask questions of your SAS and open-source data directly from the Microsoft Teams toolbar and share insights directly with your teammates without jumping between application interfaces. Chat-enabled analytics does the work for you by providing data, reports and visualizations through a chat interface in Microsoft Teams.

With SAS Conversation Designer in Teams you can:

    • Build and deploy a chatbot with ease using a low-code visual interface.
    • Get answers and complete tasks using SAS’ industry-leading natural language processing.
    • Access data, reports and visualizations via chat – even run advanced analytics and AI.

Follow the quick start guide below to see how easy it is to build and deploy natural language chatbots in your Microsoft Teams environment.

Get started:

Step 1: To get started, log onto SAS Viya through a web browser using Azure AD for simplified access.

Step 2: SAS Conversation Designer’s visual interface is where you build the chatbot. Key words and phrases, intents, and dialog are laid out on a visual pipeline that forms the structure of the chatbot.

Step 3: Now that the critical elements of the chatbot are in place, the chatbot can be published and is ready for interaction.

Step 4: Let’s put this bot into Microsoft Teams. Gather the information within SAS Viya for configuring a manifest file, then enter the information into the App Studio in Microsoft Teams.

Step 5: Start a conversation! Your chatbot is ready to provide insights and accelerate collaboration for you and your colleagues.

Learn more

Looking for more on chatbots and our partnership with Microsoft? Check out the resources below:

SAS Conversation Designer

Microsoft Partnership

Make collaboration your superpower with SAS Conversation Designer in Microsoft Teams was published on SAS Users.


December 7, 2021
 

This post is written in the hopes of easing the SAS Viya deployment process for novices like me. Deploying SAS Viya, like most enterprise software packages, isn't a skill we're innately born with. We're going to need a little help, some good documentation, and time to absorb the intricacies of the task.

There are many parts and pieces to standing up SAS Viya, depending on what you’re trying to accomplish and how you’d like to go about doing it. Know that the documentation and process can seem colossal and overwhelming, so take your time and don’t rush things. You got this.

Scope of the post

What this blog is and is not

This post will not walk you through the entirety of a deployment. Instead, it’ll point you to the right resources, guide you away from pitfalls, and show you how to accomplish certain tasks the documentation may not entirely cover. Many of these nuances were hard-earned lessons either by me or by people who have been kind enough to show me the way.

Please note the following

  • my experience is limited, and mostly pertains to AWS and Azure
  • the information is current at the time of this writing (December, 2021)

Please feel free to reach out to me if you have any suggestions, comments, or spot any mistakes. Many thanks!

Santa’s Workshop

Deploying SAS Viya is akin to creating toy trains in Santa’s workshop.

At its core, each toy train requires an engine, several cars, and a track. Likewise, each SAS Viya deployment requires a CAS engine that lives on a Kubernetes cluster together with several other servers (e.g., Compute, Connect, Stateful/Stateless), and storage.

Each toy train can be modified in numerous ways depending on the person’s preferences, whether it’s a steam locomotive or a bullet train. Or maybe it’s something more trivial, like merely the color of the train. Regardless of the need, Santa’s workshop contains a plethora of tools, materials, and plenty of knowledgeable elves who have different expertise and insights to customize the pipelines and trains.

Once again, each instance of SAS Viya can also be modified greatly depending on the customer’s needs. There are many hosts, flavors of servers, storage options, and common customizations. A SAS Viya deployment has its own kitchen sink full of tools, pipelines, and methods. And just like in Santa’s workshop, there are plenty of people who are experienced with deploying SAS Viya (and have specialties in different aspects of the deployment) who will assist if you run into issues.

Links Galore

There’s never a shortage of links required to complete deployments. I find myself with multiple windows filled with tabs (for referencing info) while I’m deploying so here’s a list of some I have found helpful.

      1. Setting up SAS Viya Monitoring for K8S
      2. Azure
      3. AWS
      4. GCP
      5. SAS Viya 4 Resource Guide

Setting Up Your Environment

There are several required tools for deployment. These include, but are not necessarily limited to:

  • kubectl v1.19.9
  • kustomize v3.7.0
  • Docker

Ensure your environment is set up precisely the way the docs recommend. For example, if you’re going the Terraform route from Viya4-IaC-AWS, you’re going to need:

  • Terraform v1.0.0
  • kubectl v1.19.9
  • jq v1.6
  • AWS CLI v2.1.29

The documentation is rather specific about the required versions, so please read carefully.

Starting Off

To start off, there are a few required readings to get a better understanding of SAS Viya architecture and requirements. Review these pages as often as you need to ensure understanding and avoid missing any steps.

  1. The Getting Started portion of the Viya Operations documentation (linked previously).
  2. In the System Requirements section, pay special attention to the “Kubernetes Client Machine Requirements” (under Virtual Infrastructure Requirements) to ensure you have the right tools and versions installed.
  3. When you’re done reading the above, it’s time to set up the IaC.

Choose the corresponding link for “Help with Cluster Setup” (under Virtual Infrastructure Requirements) based on your cloud host of choice.

IaC

IaC stands for Infrastructure as Code: scripts that let you build and provision your cloud infrastructure through code instead of through the GUI. Several things to note here:

  1. I prefer cloning the IaC repo alongside the other folders, not within them, so they’re better organized. It looks something like this:
     Viya4 <– Parent directory
     |– IaC
     |– Deploy
  2. Grab a sample .tfvars file under /examples and paste it into the root IaC directory. I recommend the “sample-input-minimal.tfvars” file if you’re just practicing.
     • Rename this file to “terraform.tfvars” (or a name you prefer; just be aware that the doc’s instructions assume you have named it “terraform.tfvars”).
     • This file has several important values to keep in mind and fill in:
       1. It contains the cluster configuration and details everything that will be created.
       2. “prefix” is essentially the name given to all your resources.
       3. “default_public_access_cidrs” lists the CIDRs you’d like to allow access to your cluster.
       4. “tags” should include {“resourceowner”=”your_Email”} so that people can tell who owns the resource. Note that the preferred syntax depends on the cloud provider; check the docs to be sure.
       5. “postgres_servers” should only be uncommented if you require an external database server (more expensive). If you don’t, and you’re just practicing, leave it commented and an internal one will be created.
  3. I highly recommend going the Docker route instead of Terraform (I have personally run into fewer problems through Docker, especially during teardown, compared to Terraform).
  4. It takes a while to create the cloud resources, so have patience (about 15 minutes at most).
  5. Once the resources exist, copy the [prefix]-eks-kubeconfig.conf file into your $(pwd), as well as into ~/.kube/config if you’d like to keep it. The command to copy the conf file to your ~/.kube location is cp <.conf file> ~/.kube/config
  6. After you’re done with the above, make sure you run export KUBECONFIG=<.conf file>
  7. Test that your cluster is actually up: kubectl get nodes

Post-IaC

The next section covers additional SAS Viya requirements for the cluster after standing it up. There are a few things I’d recommend building after ensuring the cluster is up.

  • Ingress Controller
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
  • Cert Manager
    kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml
  • Helm/nfs-provisioner (this part is specifically for AWS)
    What’s happening here is that we’re getting the elastic load balancer URL from ingress-nginx, getting the EFS file system ID, and installing the NFS server provisioner.

    kubectl get service -n ingress-nginx

    ELBURL=$(kubectl get svc -n ingress-nginx ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    echo $ELBURL

    EFSFSID=$(aws efs describe-file-systems --region $AWS_DEFAULT_REGION --query "FileSystems[*].FileSystemId" --output text)
    echo $EFSFSID

    helm repo add stable https://charts.helm.sh/stable --force-update
    helm install nfs-provisioner stable/nfs-server-provisioner

    kubectl get storageclass # to check if the NFS server is up
  • Create a namespace where your SAS Viya deployment lives in the cluster – kubectl create ns <namespace>. It is critical to go through the System Requirements entirely to ensure you don’t miss any steps (just be sure you’re following the portions meant for your cloud host). Examples in the Hardware and Resource Requirements page:
    • Azure – There’s an “Additional PVC Requirements for Microsoft Azure” section: a link for “Specify PersistentVolumeClaims to Use ReadWriteMany StorageClass” where you’re required to add a file in the /site-config directory and an additional portion under “transformers” in the kustomization.yaml file.
    • AWS – Under “File System and Shared Storage Requirements”, refer to the notes on installing a provisioner for the EBS volumes. (The instructions are in the code block above.)

Installation

This section sets up the parameters and additional customizations to be included in the $deploy folder. It falls specifically under the Deployment tab of the SAS Viya Operations documentation.

After retrieving the required files (under the desired version!), the certificates, the license, and all assets, and untarring them, take a good look at the section named “Directory Structure” so you have an understanding of your desired file structure.

Under “Installation -> Initial kustomization.yaml file”, once you’ve created your kustomization.yaml file, there are a few things of note to change:

  1. {{ NAME-OF-NAMESPACE }}
     • If you haven’t already created the namespace where SAS Viya will live, do so now (see the Post-IaC section above).
     • Once you have a namespace, replace the entire placeholder, including the {{}}, with the name you have chosen.
     • You can always check what namespaces your cluster has by running kubectl get ns
  2. {{ NAME-OF-INGRESS-HOST }} and {{ PORT }} (note that there are multiple references in the kustomization.yaml file)
     • Run kubectl get service -n ingress-nginx and use the external-ip of the output.
     • The port is 80.

There are plenty of instructions beneath the kustomization.yaml file example; be sure to read through them and follow them thoroughly.
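As a rough sketch of what the edited sections end up looking like (the namespace, host, and overlay list here are hypothetical; a real file contains many more entries per the documentation), a filled-in kustomization.yaml might contain:

```yaml
namespace: viya4                 # was {{ NAME-OF-NAMESPACE }}

resources:
  - sas-bases/base
  # ... additional overlays per the documentation ...

configMapGenerator:
  - name: ingress-input
    behavior: merge
    literals:
      - INGRESS_HOST=a1b2c3.elb.amazonaws.com   # external-ip from kubectl get service -n ingress-nginx
      - INGRESS_PORT=80
```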

Additionally, configure TLS. Then build and apply:

  1. kustomize build -o site.yaml
     kubectl apply --kubeconfig=kubeconfig-file --selector="sas.com/admin=cluster-wide" -f site.yaml
     kubectl wait --kubeconfig=kubeconfig-file --for condition=established --timeout=60s -l "sas.com/admin=cluster-wide" crd
     kubectl apply --kubeconfig=kubeconfig-file --selector="sas.com/admin=cluster-local" -f site.yaml --prune
     kubectl apply --kubeconfig=namespace-kubeconfig-file --selector="sas.com/admin=namespace" -f site.yaml --prune

OR
kustomize build . | kubectl apply -f -
(Note that this is the shortcut of building and piping the results to be applied in kubectl. It does not output a site.yaml file.)

There are a few false-positive errors that may appear during the process (the documentation outlines them pretty clearly).

Post-Deployment

You may run the readiness service to check for when your deployment is ready. Note that this process is lengthy and the fastest I’ve seen a deployment go up is about 15-20 mins. (Now’s a good time to go for a walk or get a cup of coffee).

I highly recommend using Lens to visualize the deployment process and to take a look at the pods and their logs (mini section below).

While all of these steps are possible in Lens, it’s good to know the commands required to inspect and manipulate pods.

kubectl get pods -n <namespace>  # Take a look at all the pods; add a -w flag to watch them as they update
kubectl describe pod <pod-name> -n <namespace>  # To describe a specific pod
kubectl logs <pod-name> -n <namespace>  # To see the logs of a specific pod
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force  # To force deletion of a pod; pods automatically restart after being deleted

Important pods to look at:

  • Logon
  • Consul
  • Cache

These pods are pre-requisites for many other pods to come up. If they’re stuck, go ahead and delete them to initiate a restart. This seems to work frequently.

If the pods look good, try going to http://<name-of-ingress-host>:<port>/SASDrive. You should see a blue background and a SAS login screen.

Hooray! Now you just have to follow the “Sign In as the sasboot User” instructions and complete the other post-deployment tasks (“Post-Installation Tasks,” “Validating the Deployment,” etc.) that are pertinent to your use case.

Quick aside: Lens

K8s Lens is an incredibly useful IDE to visualize what is going on in your Kubernetes cluster.
Here are two quick screenshots to help you get situated when you’re looking at pods.

First, you need your .conf file to connect to your cluster. Upon entry, click on Workloads -> Pods to look at the pods. Also click on your namespace for all of the pods for the SAS Viya Deployment to show up.

There are times when you’ll see a yellow triangle with an exclamation mark. While this is technically a warning, it may indicate an error your pod is suffering from. (If you see an HTTP 503 Readiness Probe error, it may just mean that the pod is starting up.)

Click on the pod and the lines on the top right in order to see the logs for the chosen pod.

Conclusion

Hopefully this post was helpful for your start in deploying SAS Viya.

Please remember there’s a lot more to it than is covered here. Don’t be disheartened if this wasn’t particularly easy, it certainly wasn’t for me.
Know there are plenty of customizations as well as a constant stream of changes (updates, product related etc.), new methods, and places to deploy.
So there’s always plenty to learn.

Please feel free to reach out and let me know if you have any questions or suggestions for this post.

Acknowledgements

Many thanks to my colleagues Ali Aiello and Jacob Braswell for answering my incessant questions and helping me on this journey!

A Novice Perspective on SAS Viya Deployment was published on SAS Users.

12月 072021
 

This post is written in the hopes of easing the SAS Viya deployment process for novices like me. Firstly, deploying SAS Viya, like most enterprise software packages, isn't a skill we're innately born with. We're going to need a little help, some good documentation, and time to absorb the intricoes of the task.

There are many parts and pieces to standing up SAS Viya, depending on what you’re trying to accomplish and how you’d like to go about doing it. Know that the documentation and process can seem colossal and overwhelming, so take your time and don’t rush things. You got this.

Scope of the post

What this blog is and is not

This post will not walk you through the entirety of a deployment. Instead, it’ll point you to the right resources, guide you away from pitfalls, and show you how to accomplish certain tasks the documentation may not entirely cover. Many of these nuances were hard-earned lessons either by me or by people who have been kind enough to show me the way.

Please note the following

  • my experience is limited, and mostly pertains to AWS and Azure
  • the information is current at the time of this writing (December, 2021)

Please feel free to reach out to me if you have any suggestions, comments, or spot any mistakes. Many thanks!

Santa’s Workshop

Deploying SAS Viya is akin to creating toy trains in Santa’s workshop.

At its core, each toy train requires an engine, several cars, and a track. Likewise, each SAS Viya deployment requires a CAS engine that lives on a Kubernetes cluster together with several other servers (e.g., Compute, Connect, Stateful/Stateless), and storage.

Each toy train can be modified in numerous ways depending on the person’s preferences, whether it’s a steam locomotive or a bullet train. Or maybe it’s something more trivial, like merely the color of the train. Regardless of the need, Santa’s workshop contains a plethora of tools, materials, and plenty of knowledgeable elves who have different expertise and insights to customize the pipelines and trains.

Once again, each instance of SAS Viya can also be modified greatly depending on the customer’s needs. There are many hosts, flavors of servers, storage options, and common customizations. A SAS Viya deployment has its own kitchen sink full of tools, pipelines, and methods. And just like in Santa’s workshop, there are plenty of people who are experienced with deploying SAS Viya (and have specialties in different aspects of the deployment) who will assist if you run into issues.

Links Galore

There’s never a shortage of links required to complete deployments. I find myself with multiple windows filled with tabs (for referencing info) while I’m deploying so here’s a list of some I have found helpful.

      1. Setting up SAS Viya Monitoring for K8S
      2. Azure
      3. AWS
      4. GCP
      5. SAS Viya 4 Resource Guide

      Setting Up Your Environment

      There are several required tools for deployment. These include but are not necessarily limited to:

      • kubectl v1.19.9
      • kustomize 3.7.0
      • Docker

      Ensure your environment is set up precisely the way the docs recommend. For example, if you’re going the Terraform route from Viya4-IaC-AWS, you’re going to need this:

      • Terraform v1.0.0
      • kubectl v1.19.9
      • jq v1.6
      • AWS CLI v2.1.29

      The documentation is rather specific in terms of the required version, so please read carefully.

      Starting Off

      To start off, there are a few required readings to get a better understanding of SAS Viya architecture and requirements. Please review these webpages as often as you’d like to ensure understanding and avoid missing any steps.

      1. Getting Started portion of the Viya Operations documentation (linked previously).
      2. In the System Requirements section, please pay special attention to the “Kubernetes Client Machine Requirements” (under Virtual Infrastructure Requirements) to ensure you have the right tools and versions installed.
      3. When you’re done reading the above, it’s time to set up the IaC.

      Choose the corresponding link for “Help with Cluster Setup” (under Virtual Infrastructure Requirements) based on your cloud host of choice.

      IaC

      IaC stands for Infrastructure as Code. These are essentially scripts allowing you to build your cloud infrastructure and provision them through code instead of through the GUI. Several things to note here:

      1. I prefer cloning the IaC repo alongside the other folders, not within them so they’re better organized. It looks something like this:
        Viya4 <– Parent directory
        |– IaC
        |– Deploy
      2. Grab a sample .tfvars file under /examples and paste it into the root IaC directory. I recommend the “sample-input-minimal.tfvars” file if you’re just practicing.
        • Rename this file to “terraform.tfvars” (or preferred name, just be aware that the doc’s instructions assume that you have named it “terraform.tfvars”)
        • This file has several important values to keep in mind / input.
          1. This file contains the cluster configuration and details what all will be created
          2. “prefix” is essentially the name given to all your resources
          3. “default_public_access_cidrs” are CIDRs that you’d like to allow access to your cluster.
          4. “tags” you should include are {“resourceowner”=”your_Email”} (this is to ensure that people will be able to tell who owns the resource. Also, note that the preferred syntax is dependent on the cloud provider, please check the docs to be sure)
          5. “postgres_servers” should only be uncommented if you require an external db server (more expensive), if you don’t and you’re just practicing, leave it commented and it should create an internal one
      3. I highly recommend going the Docker route instead of Terraform (I have personally run into fewer problems through Docker, especially the tearing down process as compared to Terraform).
      4. It takes a while to create the cloud resources so have patience (takes about 15 mins at most).
      5. Once the resources exist, ensure you copy the [prefix]-eks-kubeconfig.conf file into your $(pwd) as well as your ~/.kube/config file if you’d like to keep it. The command to copy the conf file to your ~/.kube location is cp &lt;.conf file&gt; ~/.kube/config
      6. After you’re done with the above, make sure you run export KUBECONFIG=&lt;.conf file&gt;
      7. Test that your deployment is actually up: kubectl get nodes
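Steps 5-7 above can be sketched as a short shell sequence. This is a hedged illustration, not the IaC tooling itself: the prefix "viya4" and the generated file are hypothetical stand-ins, and the stand-in kubeconfig is created here only so the sequence is self-contained. Against a real cluster, the file comes from the IaC run and you would finish with kubectl get nodes.

```shell
# Sketch of steps 5-7, assuming a hypothetical prefix "viya4".
# For illustration we create a stand-in for the IaC-generated kubeconfig;
# in a real run this file is produced by the IaC tooling.
touch viya4-eks-kubeconfig.conf

# Keep a copy under ~/.kube if you want kubectl to pick it up by default.
mkdir -p ~/.kube
cp viya4-eks-kubeconfig.conf ~/.kube/config

# Point kubectl at the cluster for this shell session.
export KUBECONFIG="$(pwd)/viya4-eks-kubeconfig.conf"
echo "KUBECONFIG is set to: $KUBECONFIG"

# Against a real cluster, verify the nodes are up:
# kubectl get nodes
```

Note that export only affects the current shell session; add it to your shell profile if you want it to persist.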

      Post-IaC

      The next section covers additional SAS Viya requirements for the cluster after standing it up. There are a few things I’d recommend building after ensuring the deployment is up.

      • Ingress Controller
        kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
      • Cert Manager
        kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml
      • Helm/nfs-provisioner (this part is specifically for AWS)
        Here we grab the elastic load balancer (ELB) URL from the ingress-nginx service and the EFS file system ID, then install the NFS server provisioner:

        kubectl get service -n ingress-nginx
         
        ELBURL=$(kubectl get svc -n ingress-nginx ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
        echo $ELBURL
         
        EFSFSID=$(aws efs describe-file-systems --region $AWS_DEFAULT_REGION --query "FileSystems[*].FileSystemId" --output text)
        echo $EFSFSID
         
        helm repo add stable https://charts.helm.sh/stable --force-update   # stable chart repository
        helm install nfs-server-provisioner stable/nfs-server-provisioner   # Helm 3 requires a release name
         
        kubectl get storageclass # to check if the NFS server is up
      • Create a namespace where your SAS Viya deployment lives in the cluster – kubectl create ns <name-of-namespace>. It is critical to go through the System Requirements entirely to ensure you don’t miss any steps (just be sure that you’re following the portions meant for your cloud host). Examples in the Hardware and Resource Requirements page:
        • Azure – There’s an “Additional PVC Requirements for Microsoft Azure”: a link for “Specify PersistentVolumeClaims to Use ReadWriteMany StorageClass” where you’re required to add a file in the /site-config directory and an additional portion under “transformers” in the kustomization.yaml file
        • AWS – Under “File System and Shared Storage Requirements” refer to the notes on installing a provisioner for the EBS volumes. (The instructions are in the code block above)

      Installation

      This section sets up the parameters and additional customizations to be included in the $deploy folder. It falls specifically under the Deployment tab of the SAS Viya Operations documentation.
      After retrieving the required files (under the desired version!), that is, the certificates, license, and all assets, and untarring them, take a good look at the section named "Directory Structure" so you have an understanding of your desired file structure.

      Under “Installation -> Initial kustomization.yaml file”, once you’ve created your kustomization.yaml file, there are a few things of note here to change:

      1. {{ NAME-OF-NAMESPACE }}
        • If you haven’t already created the namespace where SAS Viya will live, do so now (see the namespace step in the Post-IaC section above)
        • Once you have a namespace, replace the entire thing including the {{}} with the name you have chosen.
        • You can always check what namespaces your cluster has by running kubectl get ns
      2. {{ NAME-OF-INGRESS-HOST }} and {{ PORT }} (note that there are multiple references in the kustomization.yaml file)
        • Run kubectl get service -n ingress-nginx and use the EXTERNAL-IP value from the output
        • The port is 80
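As a hedged sketch of the placeholder replacement, the snippet below writes a miniature stand-in for kustomization.yaml and substitutes the three placeholders with sed. The namespace, host, and port values are made-up examples; in practice the host comes from the kubectl get service -n ingress-nginx output described above.

```shell
# Hypothetical values for illustration only.
NAMESPACE=viya4
INGRESS_HOST=a1b2c3d4.elb.amazonaws.com   # would come from the ingress-nginx service
PORT=80

# Miniature stand-in for the real kustomization.yaml template:
printf 'namespace: {{ NAME-OF-NAMESPACE }}\nhost: {{ NAME-OF-INGRESS-HOST }}:{{ PORT }}\n' > kustomization.yaml

# Replace every occurrence of each placeholder
# (the real file has multiple references, hence the /g flag).
sed -i.bak \
    -e "s/{{ NAME-OF-NAMESPACE }}/$NAMESPACE/g" \
    -e "s/{{ NAME-OF-INGRESS-HOST }}/$INGRESS_HOST/g" \
    -e "s/{{ PORT }}/$PORT/g" \
    kustomization.yaml

cat kustomization.yaml
```

With these example values, the cat at the end prints "namespace: viya4" and "host: a1b2c3d4.elb.amazonaws.com:80", with no {{ }} placeholders left behind.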

      There are plenty of instructions beneath the kustomization.yaml file example; be sure to read through them and follow them thoroughly.

      Additionally, complete the “Configure TLS” steps in the documentation.

    1. kustomize build -o site.yaml
       kubectl apply --kubeconfig=kubeconfig-file --selector="sas.com/admin=cluster-wide" -f site.yaml
       kubectl wait --kubeconfig=kubeconfig-file --for condition=established --timeout=60s -l "sas.com/admin=cluster-wide" crd
       kubectl apply --kubeconfig=kubeconfig-file --selector="sas.com/admin=cluster-local" -f site.yaml --prune
       kubectl apply --kubeconfig=namespace-kubeconfig-file --selector="sas.com/admin=namespace" -f site.yaml --prune

OR
kustomize build . | kubectl apply -f -
(Note that this is the shortcut of building and piping the results to be applied in kubectl. It does not output a site.yaml file.)

There are a few false-positive errors that may appear during the process (the documentation outlines them pretty clearly).

Post-Deployment

You may run the readiness service to check for when your deployment is ready. Note that this process is lengthy and the fastest I’ve seen a deployment go up is about 15-20 mins. (Now’s a good time to go for a walk or get a cup of coffee).

I highly recommend using Lens to visualize the deployment process and to take a look at the pods and their logs (mini section below).

While all of these steps are possible in Lens, it’s good to know the commands required to inspect and manipulate pods.

kubectl get pods -n <namespace>                          # Take a look at all the pods; add a -w flag to watch them as they update
kubectl describe pod <pod-name> -n <namespace>           # To describe specific pods
kubectl logs <pod-name> -n <namespace>                   # To see the logs of specific pods
kubectl delete pod <pod-name> --grace-period=0 --force   # To force deletion of a pod; pods automatically restart after being deleted

Important pods to look at:

  • Logon
  • Consul
  • Cache

These pods are pre-requisites for many other pods to come up. If they’re stuck, go ahead and delete them to initiate a restart. This seems to work frequently.

If the pods look good, try going to this website: http://<name-of-ingress-host>:<port>/SASDrive. You should see a blue background and a SAS login screen.

Hooray! Now you just have to follow the Sign In as the sasboot User instructions and complete other post-deployment tasks (Post-Installation Tasks, Validating the Deployment, etc.) that are pertinent to your use case.

Quick aside: Lens

K8s Lens is an incredibly useful IDE to visualize what is going on in your Kubernetes cluster.
Here are two quick screenshots to help you get situated when you’re looking at pods.

First, you need your .conf file to connect to your cluster. Upon entry, click on Workloads -> Pods to look at the pods. Also select your namespace so that all of the pods for the SAS Viya deployment show up.

There are times where you’ll see a yellow triangle with an exclamation mark. While this is technically a warning, it may be an indicator of an error your pod is suffering from. (If you see an HTTP 503 Readiness Probe error, it may just mean that the pod is still starting up.)

Click on the pod and the lines on the top right in order to see the logs for the chosen pod.

Conclusion

Hopefully this post was helpful for your start in deploying SAS Viya.

Please remember there’s a lot more to it than is covered here. Don’t be disheartened if this wasn’t particularly easy, it certainly wasn’t for me.
Know there are plenty of customizations as well as a constant stream of changes (updates, product related etc.), new methods, and places to deploy.
So there’s always plenty to learn.

Please feel free to reach out and let me know if you have any questions or suggestions for this post.

Acknowledgements

Many thanks to my colleagues Ali Aiello and Jacob Braswell for answering my incessant questions and helping me on this journey!

A Novice Perspective on SAS Viya Deployment was published on SAS Users.

12月 022021
 

It’s a hard time to be a decision maker. Unexpected externalities like global pandemics, natural disasters and climate change make it harder to predict – and react to – everyday events. And that’s not just true for the world around us. The organizations we work within are more complex than ever, too.

The volume of communications and channels where we must meet customers and employees has grown exponentially – demanding our attention and reducing our focus. Not to mention added organizational complexity blurring the lines of roles and responsibilities according to geography, product and function.

Gaining control of such complexity requires rapid, streamlined and agile decision making. Technology that enables decision making needs to identify problems and take corrective action in real time to move quickly from questions to decisions.

SAS and Microsoft empower you to make better, faster decisions with unique enterprise decision management with SAS Intelligent Decisioning and Microsoft Power Automate using the SAS Decisioning connector – giving you the ability to design, deploy and manage automated decisions to improve the customer, employee and partner experience.

Enterprise decision management from SAS and Microsoft allows you to automate with a deliberate focus on decisions. You can combine business rules management with digital process automation and ModelOps, including model management and analytics, to accelerate the decision making process.

Together, Intelligent Decisioning and Power Automate unlock a breadth of use cases across the enterprise, including:

  • Insurance: Claims processing. Improve customer satisfaction and process claims faster. Receive insurance claims via Microsoft Power Apps and use Microsoft Power Automate to seamlessly ingest the claim into SAS Intelligent Decisioning. Using neural network models, SAS Intelligent Decisioning can analyze images of damage and compare with policies. If more information is required, Power Automate can trigger a flow to connect with a representative in Dynamics 365 Omnichannel for Customer Service. Once the decision is rendered, Power Automate can trigger process flows to notify the customer and deposit money into the bank account on file.
  • Banking: Credit decisioning. Reduce lender risk, improve decisioning response times and increase your bottom line. Build risk profiles in SAS Intelligent Decisioning by creating score cards and decision tables based off external data points, such as credit score, that assign each customer a risk rating. Use risk ratings to render decisions like home equity and line of credit approvals, and determine the loan amount. Once a decision has been made Power Automate flows can be used to communicate the loan amount to the customer and help them complete the loan agreement.
  • Retail/Banking: Fraud detection. Enable more secure transactions, reduce losses due to fraud and improve customer trust in your organization. SAS Intelligent Decisioning can identify fraudulent purchases and determine an appropriate course of action based on the level of confidence that a purchase is fraudulent. Power Automate can trigger automated reactions like alerting associated parties, denying a purchase at the point of sale, alerting the vendor, or sending notifications to the card holder.
  • Retail: Contextual Marketing. Increase marketing influence and become more customer centric by curating relevant and timely offers based on individual preferences. Use SAS Intelligent Decisioning to build a profile of tastes and preferences via geolocation, recommendation engines and market basket analysis. Use this profile to trigger Power Automate flows to send specific offers that align with important events, like birthdays or anniversaries, and send emails or push notifications to customers with unique, context-specific offers.

To learn more about what SAS Intelligent Decisioning and Microsoft Power Automate can help you achieve, visit sas.com/microsoft.

4 ways to make better, faster decisions with enterprise decision management from SAS Viya on Azure was published on SAS Users.

11月 042021
 

Think about what a modern implementation of SAS looks like for a customer. Programmers rely on robust environments to run the models and programs that answer business questions. These environments can be different for platforms like SAS® 9 and SAS® Viya®. They can be deployed across distributed servers, either on premises or using a cloud provider (sometimes both at the same time). These environments could even be set up across geographic regions for programmers across time zones. And we’re just thinking about the SAS servers—not counting data sources and third-party servers. All of these systems have their own suites of monitoring tools, which only show small slices of the big picture.

Observing all environments

SAS Enterprise Session Monitor aims to be the single point of contact for observing distributed systems. It brings unparalleled visibility to understanding environments using detailed system and application-level metrics for every session of SAS that is launched. This goes beyond traditional monitoring and into observability—aggregating, correlating, and analyzing a steady stream of constant data from systems to effectively troubleshoot or debug environments and sessions. Sessions in this case are those that come from SAS 9, SAS Viya, build servers, testing environments, and more. SAS Enterprise Session Monitor receives that data, displays it live in the tool, and stores the data for historical review in an embedded database.

SAS Enterprise Session Monitor is extensible and customizable: administrators can build patterns using regular expressions to track third-party sessions or custom in-house applications. If a process runs on a Windows or Linux server, SAS Enterprise Session Monitor can be configured to record metrics about it.

What metrics are collected?

SAS Enterprise Session Monitor collects and stores system metrics and logs, as many monitoring tools do. Here is where things begin to get interesting, however: SAS Enterprise Session Monitor collects application-level metrics about SAS user sessions. The size of the SASWORK area is monitored, as is the amount of space in the CAS_DISK_CACHE. Users of SAS Enterprise Session Monitor are able to see within DATA and PROC steps as code executes within SAS 9 or SAS Compute Server sessions. SAS Viya users can see the CAS actions that execute within their CAS sessions.

This information is presented in the form of spans which appear on a time-series graph along with session information such as CPU usage, memory usage and disk usage. This user activity is tracked for all user sessions, across all platforms. This code-level analysis can help to understand which SAS Procedures are used, which (and how frequently) datasets are opened, and which users are using the environments at different times.

Grand central admin-station

Administrators use SAS Enterprise Session Monitor to make sure their environments are stable and performant. Historical data can be used to profile workloads, charge back departments or help promote jobs between development, testing and production environments. Critical system resources are tracked to better understand when peak usage time is and to understand where resource constraints occur. This stored historical data can also be used for troubleshooting purposes, and all sessions and jobs can be searched for error events to help in problem analysis. Profiles of scheduled batch jobs can be graphed to see when large numbers of sequential programs could be redesigned to run in parallel. SAS Enterprise Session Monitor knows when distributed workloads should be linked together – in a SAS Grid or MPP CAS deployment.

Lower total cost of ownership

Administrators can use SAS Enterprise Session Monitor to accurately right-size their infrastructure with all the metrics collected — whether that is in the cloud or on premises. Accurate user counts and licensing can be determined for concurrent users in all distributed environments. And with accurate information coming in from distributed environments and multiple nodes, potential problems can be identified, and administrators can accelerate time to resolution and reduce system downtime on production or business-critical systems.

A drag-and-drop interface also allows for workloads from different teams to generate cost allocation rules so that costs can be charged back to departments depending on their usage of system resources. This allows for accurate tracking of cooperative resource sharing.

Empowering development teams

Developers (data scientists, analysts or programmers) use SAS Enterprise Session Monitor in real time to monitor or view progress of their code as it runs. This improves the developer experience and closes the feedback loop as they can see issues before something is promoted to production. Developers can use it to prioritize jobs and have insight into what is happening during their program execution.

This empowers individual programmers, as well as teams of developers: teams can be configured to have access to their other team members’ sessions in SAS Enterprise Session Monitor. Privileged users can also be configured to allow team leads or power users to terminate sessions and view SAS program logs from the SAS Enterprise Session Monitor interface in a secure and audited way.

Other tidbits

I mentioned how SAS Enterprise Session Monitor can analyze batch job flows, visualizing them into graphs that display total runtime and dependencies. Taking this a step further, the batch job flows can be viewed through Relative Comparisons — a feature where two defined time spans can be compared. Simply put, this means that one set of scheduled work can be compared to a previous run. This can give detailed information when evaluating whether to change a program or model, or when performing root-cause analysis of issues that impact the runtime of the scheduled work.

Lastly, developers can use real-time custom chart annotations that show up on the time-series graph. The %esmtag statement generates these annotations and can be used much like %put statements. These can be used as status checkpoints or observation counts, providing feedback in real time as the developer watches the program execute. These annotations are searchable in SAS Enterprise Session Monitor.

Summary

I hope you can feel my excitement about this tool and are able to see a few reasons to check this offering out — the potential for what can be monitored is almost endless. Here’s a quick recap:

    • Enterprise Session Monitor provides visibility into many different types of SAS workloads. Servers and microservices across multiple SAS 9 and SAS Viya environments can be monitored in one place. Even third-party tools and data sources can be monitored with a little customization.

    • Developers use it to close the feedback loop when developing new SAS programs.

    • Administrators use it to solve platform issues—through session management, live data and historical data about SAS processes and system resources.

Additional resources

SAS Enterprise Session Monitor documentation

Configuration and Usage of SAS Enterprise Session Monitor

SAS Enterprise Session Monitor - Obsessing over Observability was published on SAS Users.

10月 302021
 

In a September 10 post on the SAS Users blog, we announced that SAS Analytics Pro is now available for on-site or containerized cloud-native deployment. For our thousands of SAS Analytics Pro customers, this provides an entry point into SAS Viya.

SAS Analytics Pro consists of three core elements of the SAS system: Base SAS®, SAS/GRAPH® and SAS/STAT®. The containerized deployment option adds the full selection of SAS/ACCESS engines, making it even easier to work with data from virtually any source.

Even better, the containerized deployment option now adds new statistical capabilities that are not available in SAS/STAT on SAS9. Thanks to SAS Viya’s continuous delivery approach, we are able to provide this additional functionality so soon after the initial release.

Below are highlights of these additional capabilities (you can find more details by following the links):

Causal Inference Procedures

Bayesian Analysis Procedures

  • Model multinomial data with cumulative probit, cumulative logit, generalized link, or other link functions in PROC BGLIMM.
  • Specify fixed scale values in a generalized linear mixed-effects model, and use an improved CMPTMODEL statement in PROC MCMC and PROC NLMIXED to fit compartment models.

Survey Procedures

Additional Capabilities:

For those SAS customers already on SAS Viya, or those considering the move, SAS Analytics Pro provides one more example of the new powers you will enjoy!

Additional statistical capabilities in the containerized deployment of SAS Analytics Pro was published on SAS Users.

10月 282021
 

From articles I've read on the web, it is clear that data is gold in the twenty-first century. Loading, enriching, manipulating and analyzing data is something in which SAS excels. Based on questions from colleagues and customers, it is clear end-users are willing to display data handled by SAS outside of the user interfaces bundled with the SAS software.

I recently completed a series of articles on the SAS Community library where I shed some light on different techniques for feeding web applications with SAS data stored in a SAS Viya environment. The series includes a discussion of options for extracting data, building a React application, and how to build web applications using SAS Viya, SAS Cloud Analytic Service (CAS), SAS Compute Server, and SAS Micro Analytic Service (MAS).

I demonstrate the functionality and discuss project details in the video Develop Web Application to Extract SAS Data, found on the SAS Users YouTube Channel.

I'm tying everything together in this post as a reference point. I'll provide a link to each article along with a brief description. The Community articles have all the detailed steps for developing the application. I'm excited to bring you this information, so let's get started.

Part 1 - Develop web applications series: Options for extracting data

In this first article, I explain when to use SAS Micro Analytic Service, SAS Viya Jobs, SAS Cloud Analytic Service, and SAS Compute Server.

Part 2 - Develop web applications series: Creating the React based application

To demonstrate the different options, in the second article, I create a simple web application using React JavaScript library. The application also handles authentication against SAS Viya. The application is structured in such a way to avoid redundant code and each component has a well-defined role. From here, we can build the different pages to access CAS, MAS, Compute Server or SAS Viya Jobs.

The image below offers a view of the application, which starts in Part 2 and continues throughout the series.

Part 3 - Develop web applications series: Build a web application using SAS Viya Jobs

In this article, I walk you through the steps to retrieve data from the SAS environment using SAS Viya Jobs. We build out the Jobs tab and, on the page, display two dropdown boxes to select a library and table. The final piece is a submit button to retrieve the data to populate a table.

Part 4 - Develop web applications series: Build a web application using SAS Cloud Analytic Service

In article number 4, we go through the steps to build a page similar to the one in the previous article, but this time the data comes directly from the SAS Cloud Analytic Service (CAS). We reuse the application structure which was created in Part 2. We focus on the CAS tab. As for the SAS Viya Jobs, we display two dropdown boxes to select a library and table. We finish again with a submit button to retrieve the data to populate a table.

Part 5 - Develop web applications series: Build a web application using SAS Compute Server

In the next article, we go through the steps to build a page similar to the ones from previous articles, but this time the data comes directly from the SAS Compute Server. We reuse the application structure created in the Part 2 article. The remainder of the article focuses on the Compute tab. As with the CAS content, we display two dropdown boxes to select a library and table, finishing off again with the submit button to retrieve the data to populate a table.

Part 6 - Develop web applications series: Build a web application using SAS Micro Analytic Service

For the final article, you discover how to build a page to access data from the SAS Micro Analytic Service. We reuse the same basic web application built in Part 2. However, this time it will require a bit more preparation work as the SAS Micro Analytic Service (MAS) is designed for model scoring.

Bonus Material - SAS Authentication for ReactJS based applications

In this addendum to the series, I outline the authorization code OAuth flow. This is the recommended means of authenticating to SAS Viya and I provide technical background and detailed code.

Conclusion

If you followed along with the different articles in this series, you should now have a fully functional web application for accessing different data source types from SAS Viya. This application is not for use as-is in production. You should, for example, add functionality to handle token expiration. You can, of course, tweak the interface to get the look and feel you prefer.

See all of my SAS Communities articles here.

Creating a React web app using SAS Viya was published on SAS Users.

10月 122021
 

This article was co-written by Jane Howell, IoT Product Marketing Leader at SAS. Check out her blog profile for more information.

As artificial intelligence comes of age and data continues to disrupt traditional industry boundaries, the need for real-time analytics is escalating as organizations fight to keep their competitive edge. The benefits of real-time analytics are significant. Manufacturers must inspect thousands of products per minute for defects. Utilities need to eliminate unplanned downtime to keep the lights on and protect workers. And governments need to warn citizens of natural disasters, like flooding events, providing real time updates to save lives and protect property.

Each of these use cases requires a complex network of IoT sensors, edge computing, and machine learning models that can adapt and improve by ingesting and analyzing a diverse set of high-volume, high-velocity data.

SAS and Microsoft are partnering to inspire greater trust and confidence in every decision, by innovating with proven AI and streaming analytics in the cloud and on the edge. Together, we make it easier for companies to harness hidden insights in their diverse, high volume, high velocity IoT data, and capitalize on those insights in Microsoft Azure for secure, fast, and reliable decision making.

To take advantage of all the benefits that real-time streaming analytics has to offer, it’s important to tailor your streaming environment to your organization’s specific needs. Below, we’ll dive into how to understand the value of IoT in parallel to your organization’s business objectives and then strategize, plan, and manage your streaming analytics environment with SAS Viya on Azure.

Step 1: Understand the value of IoT

While you may already know that IoT and streaming analytics are the right technologies to enable your business’ real time analytics strategy, it is important to understand how it works and how you can benefit. You can think of streaming analytics for IoT in three distinct parts: sense, understand and act.

Sense: Sensors by design are distributed, numerous, and collect data at high fidelity in various formats. The majority of data collected by sensors has a short useful life and requires immediate action. Streaming analytics is well-suited to this distributed sensor environment to collect data for analysis.
Understand: A significant number of IoT use cases requires quick decision-making in real time or near-real time. To achieve this, we need to apply analytics to data in motion. This can be done by deploying AI models that detect anomalies and patterns as events occur.
Act: As with any analytics-based decision support, it is critical to act on the insight generated. Once a pattern is detected this must trigger an action to reach a desired outcome. This could be to alert key individuals or change the state of a device, possibly eliminating the need for any human intervention.

The value in IoT is driven by the reduced latency to trigger the desired outcome. Maybe that’s improving production quality in the manufacturing process, recommending a new product to a customer as they shop online, or eliminating equipment failures in a utility plant. Whatever it is, time is of the essence and IoT can help get you there.

Step 2: Strategize

Keeping the “sense, understand, act” framework in mind, the next step is to outline what you hope to achieve. To get the most out of your streaming analytics with SAS and Microsoft, keep your objectives in mind so you can stay focused on the business outcome instead of trying to act on every possible data point.

Some important questions to ask yourself are:

1. What are the primary and secondary outcomes you are hoping to achieve? Increase productivity? Augment safety? Improve customer satisfaction?
2. What patterns or events of interest do you want to observe?
3. If your machines and sensors show anomalous behavior what actions need to be taken? Is there an existing business process that reflects this?
4. What data is important to be stored as historical data and what data can expire?
5. What kind of infrastructure exists from the point where data is generated (edge) to the cloud? Is edge processing an option for time-critical use cases, or does processing need to be centralized in the cloud?
6. What are your analytics and application development platforms? Do you have access to high performance streaming analytics and cloud infrastructure to support this strategy?

Once you’ve identified your outcomes, define which metrics and KPIs you can measure to show impact. Make sure to have some baseline metrics to start from that you can improve upon.

Step 3: Plan and adopt

Now it’s time to take your strategy and plan the adoption of streaming analytics across your business.

Adoption will look different if you already have an IoT platform in place or if you are working to create a net-new solution. If you are going to be updating or iterating upon an existing solution, you will want to make sure you have access to key historical data to measure improvement and use institutional knowledge to maximize performance. If you are working with a net-new solution, you will want to give yourself some additional time to start small and then scale your operations up over time so you can tackle any unforeseen challenges.

In both cases it is important to have key processes aligned to the following considerations:

Data variety, volume and accuracy: Focus here on the “sense” part of the “sense, understand, act” framework. Accessing good data is the foundation to the success of your streaming projects. Make sure you have the right data needed to achieve your desired business outcome. Streaming analytics helps you understand the signals in IoT data, so you can make better decisions. But if you can’t access the right data, or your data is not clean, your project will not be successful. Know how much data you will be processing and where. Data can be noisy, so it is important to understand which data will give you the most insight.
Reliability: Ensure events are only processed once so you’re not observing the same events multiple times. When equipment fails or defects occur on the production line, ensure there are processes in place to auto-start to maximize uptime for operations.
Scalability: Data science resources are scarce, so choose a low-code, no-code solution that can address your need to scale. When volume increases, how are you going to scale up and out? Azure simplifies scale with its PaaS offerings, including the ability to auto-scale SAS Viya on Azure.
Operations: Understand how you plan to deploy your streaming analytics models, govern them and decide which processes can be automated to save time.
Choose the right partners and tools: This is critical to the success of any initiative. SAS and Microsoft provide a best-in-class solution for bringing streaming analytics on the most advanced platform for integrated cloud and edge analytics.

Now that you have created your plan, it is time to adopt. Remember to start small and add layers of capability over time.

Step 4: Manage

To get the most value from IoT and streaming analytics, organizations must implement processes for continuous iteration, development, and improvement. That means having the flexibility to choose the most powerful models for your needs – using SAS, Azure cloud services, or open source. It also means simplifying DevOps processes for deploying and monitoring your streaming analytics to maximize uptime for your business systems.

With SAS Viya on Azure, it is easy to do this and more. Seamlessly move between your SAS and Microsoft environment with single sign on authentication. Develop models with a host of no-code, low-code tools, and monitor the performance of your SAS and open-source models from a single model management library.

Maximizing value from your IoT and streaming analytics systems is a continuous, agile process. That is why it is critical to choose the most performant platform for your infrastructure and analytics needs. Together, SAS and Microsoft make it easier for organizations of all sizes and maturity to rapidly build, deploy, and scale IoT and streaming analytics, maximizing up time to better serve customers, employees, and citizens.

If you want to learn more about SAS and streaming analytics and IoT capabilities as well as our partnership with Microsoft, check out the resources below:

• Learn about SAS Viya’s IoT and streaming analytics capabilities
• Discover all the exciting things SAS and Microsoft are working to achieve together at SAS.com/Microsoft
• See how SAS and Microsoft work together to help the town of Cary, North Carolina warn citizens of flood events: Smart city uses analytics and IoT to predict and manage flood events

Your guide for analyzing real time data with streaming analytics from SAS® Viya® on Azure was published on SAS Users.