Joe Furbee

September 4, 2019

Editor’s note: This article is a continuation of the series by Conor Hogan, a Solutions Architect at SAS, on SAS and database and storage options on cloud technologies. Access all the articles in the series here.

In a previous article in this series, Accessing Databases in the Cloud – SAS Data Connectors and Amazon Web Services, I covered SAS and database as a service (DBaaS) and storage offerings from Amazon Web Services (AWS). Today, I cover the various storage options available on AWS and how to connect to and interact with them from SAS.

Object Storage

Amazon Simple Storage Service (S3) is low-cost, scalable cloud object storage for any type of data in its native format. Individual Amazon S3 objects can range in size from 1 byte all the way to 5 terabytes (TB). Amazon S3 organizes these objects into buckets. A bucket name is globally unique, and you can access the bucket directly through an API from anywhere in the world, if granted permissions. By default, a bucket grants least-privilege access. Amazon advertises 11 9's, or 99.999999999%, durability, meaning data loss is extremely unlikely. Data replicates automatically across availability zones to meet this durability. You can reduce the number of replicas or use one of the various tiers of archive services to reduce your object storage cost. Costs are calculated based on terabytes of storage per month, with added costs for requests and transfers of data.

SAS and S3

Support for Amazon Web Services S3 as a Caslib data source for SAS Cloud Analytic Services (CAS) was added in SAS Viya 3.4. This data source enables you to access SASHDAT files and CSV files in S3. You can use the CASLIB statement or the table.addCaslib action to add a Caslib for S3. SAS is currently exploring native object storage integration with AWS S3 for more file types. For other file types you can copy the data from S3 and then use a SAS Data Connector to load the data into memory. For example, if I had Excel data in S3, I could use PROC S3 to copy the data locally and then load the data into CAS using the SAS Data Connector to PC Files.
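
To make that concrete, here is a rough sketch of both approaches. The bucket name, object paths, and credentials are placeholders, and the exact options can vary by SAS Viya release, so treat this as an illustration rather than a recipe.

/* Define a caslib pointing at an S3 bucket and load a CSV object into CAS */
caslib s3data datasource=(
   srctype="s3",
   accessKeyId="YOUR_ACCESS_KEY",
   secretAccessKey="YOUR_SECRET_KEY",
   region="US_East",
   bucket="my-company-bucket",
   objectPath="/sales/"
);

proc casutil incaslib="s3data" outcaslib="s3data";
   load casdata="transactions.csv" casout="transactions";
run;

/* For other file types, copy the object down with PROC S3, then load it through
   a path-based caslib using the SAS Data Connector to PC Files */
proc s3 keyid="YOUR_ACCESS_KEY" secret="YOUR_SECRET_KEY" region="useast";
   get "/my-company-bucket/sales/q2.xlsx" "/tmp/q2.xlsx";
run;

caslib pcfiles datasource=(srctype="path") path="/tmp";
proc casutil incaslib="pcfiles";
   load casdata="q2.xlsx" casout="q2sales";
run;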

Block Storage

Amazon Elastic Block Store (EBS) is the block storage service designed for use with Amazon Elastic Compute Cloud (EC2). The storage is accessible only when attached to an operating system. Storage volumes can be treated as independent disk drives controlled by a server operating system: you mount an EBS volume to an operating system as if it were a physical disk. EBS volumes are valuable because they are the storage that persists when you terminate your compute instance. You can choose from four different volume types that supply performance levels at corresponding costs.

SAS and EBS

EBS is used as the permanent SAS data storage and persists through a restart of your SAS environment. The choices you make when selecting from the different EBS volume types have a direct impact on the performance you get from SAS. One thing to consider is using compute instances that have enhanced EBS performance or dedicated solid state drive instance storage. For example, the SAS Viya on AWS QuickStart uses Storage Optimized and Memory Optimized compute instances with local NVMe-based SSDs that are physically connected to the host server; this storage is tied to the lifetime of the instance, but its locality is beneficial for performance.

SAS Cloud Analytic Services (CAS) is an in-memory server that relies on the CAS Disk Cache as the virtual memory storage backend. This is especially true if you are reading data from a database. In this case, make sure you have enough block storage, in the form of EBS volumes for use as the CAS Disk Cache.

File Storage

Amazon Elastic File System (EFS) provides access to data through a shared file system. EFS is an elastic network file system that grows and shrinks as you add or remove files, so you only pay for the storage you consume. Users create, delete, modify, read, and write files organized logically in a directory structure for intuitive access. This allows simultaneous access for multiple users to a common set of file data managed with user and group permissions. Amazon FSx for Lustre is the high-performance file system service.

SAS and EFS

EFS shared file system storage can be a powerful tool when using a SAS Grid architecture. If you have a requirement in your SAS architecture for a shared location that any node in a group can access and write to, then EFS could meet your requirement. To access the data stored in your network file system, you must mount the EFS file system. You can mount your Amazon EFS file systems on any EC2 instance, or on any on-premises server connected to your Amazon VPC.
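
As a minimal sketch (the mount point /mnt/efs and the directory names are hypothetical), once the EFS file system is mounted on each host, SAS sees it as an ordinary path:

/* A Base SAS library on the shared EFS mount */
libname shared "/mnt/efs/sasdata";

/* A path-based caslib on the same mount, usable by any CAS node that mounts it */
caslib efsdata datasource=(srctype="path") path="/mnt/efs/casdata";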

BONUS: Serverless

Amazon Athena is a query service for Amazon S3. This service makes it easy to submit queries against the objects stored in S3. You can run analysis on this data using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries you run. Amazon Athena uses Presto with ANSI SQL support and works with a variety of standard data formats, including CSV, JSON, ORC, Avro, and Parquet.

SAS and Athena

Amazon Athena is ODBC/JDBC compliant, which means I can use SAS/ACCESS Interface to ODBC or SAS/ACCESS Interface to JDBC to connect from SAS. Download an Amazon Athena ODBC driver and submit code from SAS just like you would for any other ODBC data source. Athena is a great tool if you want to use the serverless computing power of Amazon to query data in S3.
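
As a rough sketch, assuming you have installed the Athena ODBC driver and configured a DSN (the DSN name, schema, and table below are placeholders), the connection from SAS could look like this:

/* LIBNAME access to Athena through a configured ODBC DSN */
libname athena odbc datasrc="AthenaDSN" schema="default";

proc sql;
   /* Athena scans the underlying S3 objects and returns the result to SAS */
   select count(*) from athena.weblogs;
quit;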

Finally

Many times, we do not get to choose the technologies we use or the infrastructure on which they sit. Luckily, if you use AWS, integration with SAS is not a concern. I've now covered databases and storage for AWS. In future articles, I'll cover the same topics for Microsoft Azure and Google Cloud Platform.

Additional Resources

Storage in the Cloud – SAS and Amazon Web Services was published on SAS Users.

August 22, 2019

Editor’s note: This is the first article in a series by Conor Hogan, a Solutions Architect at SAS, on SAS and database and storage options on cloud technologies. This article covers the SAS offerings available to connect to and interact with the various database options available in Amazon Web Services.

As companies move their computing to the cloud, they are also moving their storage to the cloud. Just like compute in the cloud, data storage in the cloud is elastic and responds to demand, and you pay only for what you use. As more technologies move to a cloud-based architecture, companies must consider questions like: Where is my data going to be stored? Do I want a hybrid solution? What cloud storage options do I have? What storage solution best meets my business needs? Another question requiring an answer is: Is the software I use cloud-ready? The answer in the case of SAS is, YES! SAS offers various cloud deployment patterns on various cloud providers and supports integration with cloud storage services.

This is part one in a series covering database as a service (DBaaS) and storage offerings from Amazon Web Services (AWS). Microsoft Azure and Google Cloud Platform will be covered in future articles. The goal is to supply a breakdown of these services to help you better understand how these offerings map to business requirements and how they relate to SAS. I will focus primarily on SAS Data Connectors as part of SAS Viya, but all the same functionality is available using a SAS/ACCESS Interface in SAS 9.4. SAS In-Database technologies in SAS Viya are called SAS Data Connect Accelerators and are synonymous with the SAS Embedded Process.

SAS integration with AWS

SAS has extended SAS Data Connectors and SAS In-Database Technologies support to Amazon Web Services database variants. A database running in AWS is much like your on-premises database, except that Amazon manages the software and hardware. Amazon's DBaaS offerings take care of the scalability and high availability of the database with minimal user input. SAS integrates with your cloud database even if SAS is running on-premises or with a different cloud provider.

AWS databases

Amazon offers database service technologies familiar to users. It is important to understand the new terminology and how the different database services best meet the demands of your specific application. Many common databases already in use are being refactored and provided as service offerings to customers in AWS. The advantages for customers are clear: no hardware to manage and no software to install. Databases that scale automatically to meet demand, and software that updates itself and creates backups automatically, mean customers can spend more time creating value from their data and less time managing their infrastructure.

For the rest of this article I cover various database management systems, the AWS offering for each database type, and SAS integration. First let's consider the diagram below depicting a decision flow chart to determine integration points between AWS database services and SAS.

Integration points between AWS database services and SAS

Trace your path in the diagram and read on to learn more about connection details.

Relational Database Management System (RDBMS)

In the simplest possible terms, an RDBMS is a collection of managed tables with rows and columns. You can divide relational databases into two functional groups: online transaction processing (OLTP) and online analytical processing (OLAP). These two methods serve two distinct purposes and are optimized depending on how you plan to use the data in the database.

Transactional Databases (OLTP)

Transactional databases are good at processing reads, inserts, updates, and deletes. These queries usually have minimal complexity but arrive in large volumes. Transactional databases are not optimized for business intelligence or reporting. Data processing typically involves gathering input information, processing the data, and updating existing data to reflect the collected and processed information. Transactional databases use locking to prevent two users from modifying the same data concurrently. Examples include order entry, retail sales, and financial transaction systems. Amazon offers several types of transactional database services. You can organize Amazon Relational Database Service (RDS) into three categories: enterprise licenses, open source, and cloud native.

Enterprise License

Many customers already have workloads built around enterprise databases. Amazon provides a turn-key enterprise solution for customers not looking to break their relationship with enterprise vendors or refactor their existing workflows. AWS offers Oracle and Microsoft SQL Server as turn-key enterprise solutions in RDS. Both offerings include the required software license; however, Oracle also allows you to "Bring Your Own License" (BYOL). SAS has extended SAS Data Connector support for both cloud variants. You can use your existing license for SAS Data Connector to Oracle or SAS Data Connector to Microsoft SQL Server to interact with these RDS databases.
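
For example, a caslib defined with the SAS Data Connector to Oracle against an RDS endpoint might look like the following sketch; the endpoint, service name, and credentials are placeholders, and the options mirror what you would use for any other Oracle database:

caslib orards datasource=(
   srctype="oracle",
   uid="sasuser",
   pwd="MyPassword",
   path="//myoracle.abc123xyz.us-east-1.rds.amazonaws.com:1521/ORCL"
);

proc casutil incaslib="orards";
   list files;                               /* show tables visible through the connection */
   load casdata="CUSTOMERS" casout="customers";
run;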

Remember you can install and manage your own database on a virtual machine if there is not an available database as a service offering. The traditional backup and update responsibilities are left to the customer in this case. For example, both SAS Data Connector to Teradata and SAS Data Connect Accelerator for Teradata are supported for Teradata installed on AWS.

Open Source

Amazon provides service offerings for common open source databases like MySQL, MariaDB, and PostgreSQL. SAS has extended SAS Data Connector support for all these cloud variants. You can use your existing license for SAS Data Connector to MySQL to connect to either RDS MySQL or RDS MariaDB, and SAS Data Connector to PostgreSQL to interface with RDS PostgreSQL.
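
A hedged example for the PostgreSQL side (the server endpoint, database, and credentials are placeholders, and the RDS instance is assumed to listen on the default port):

caslib pgrds datasource=(
   srctype="postgres",
   server="mypostgres.abc123xyz.us-east-1.rds.amazonaws.com",
   database="sales",
   username="sasuser",
   password="MyPassword"
);

proc casutil incaslib="pgrds";
   load casdata="orders" casout="orders";
run;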

Cloud Native

Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud, combining the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. SAS has extended SAS Data Connector support for Amazon Aurora. You can use your existing license for SAS Data Connector to MySQL to connect to Aurora MySQL, or SAS Data Connector to PostgreSQL to interface with Aurora PostgreSQL.

Analytical Databases (OLAP)

Analytical databases are optimized for read performance. These databases work best for complex queries in smaller volumes. When working with an analytical database you are typically doing analysis on multidimensional data interactively from multiple perspectives. Redshift is the analytical database service offered by Amazon. SAS has a dedicated product called SAS Data Connector to Amazon Redshift that was purpose-built for analytics workloads running in the Amazon cloud.
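
A sketch of a Redshift caslib using SAS Data Connector to Amazon Redshift (the cluster endpoint, database, schema, and credentials are placeholders):

caslib rslib datasource=(
   srctype="redshift",
   server="mycluster.abc123xyz.us-east-1.redshift.amazonaws.com",
   database="analytics",
   schema="public",
   username="sasuser",
   password="MyPassword"
);

proc casutil incaslib="rslib";
   load casdata="web_sessions" casout="web_sessions";
run;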

NoSQL Databases

A non-relational or NoSQL database is any database not conforming to the relational database model. These databases are more easily scalable to a cluster of machines. NoSQL databases are a more natural fit for the cloud because their loose dependencies make the data easier to distribute and scale. Each type of NoSQL database is designed to solve a specific business problem. Some of the most common data structures are key-value, column, document, and graph databases. Here is a brief overview of each.

Key-Value Database

A key-value database stores data as a collection of key-value pairs. The key acts as a unique identifier for each record. Amazon’s key-value database as a service is DynamoDB. SAS interacts with DynamoDB using industry standard ODBC or JDBC drivers.
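
As an illustration of the ODBC/JDBC pattern, a SAS/ACCESS Interface to JDBC connection might look like the sketch below. The jar path, driver class, and JDBC URL are placeholders for whichever third-party DynamoDB driver you install; check that vendor's documentation for the real values.

/* JDBC connection to DynamoDB via a third-party driver (all driver details are hypothetical) */
libname dydb jdbc
   classpath="/opt/drivers/dynamodb-jdbc.jar"
   class="com.example.dynamodb.jdbc.Driver"
   url="jdbc:dynamodb://dynamodb.us-east-1.amazonaws.com"
   user="YOUR_ACCESS_KEY"
   password="YOUR_SECRET_KEY";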

Columnar Database

Data in a traditional relational database is stored and retrieved by row. Columnar databases instead store and retrieve data by column, which saves valuable time and network I/O on analytical scans. Redshift is the columnar database service offered by Amazon. SAS has a dedicated product called SAS Data Connector to Amazon Redshift that was purpose-built for this database.

Document Database

A document database queries data in documents, typically stored in JSON format. DocumentDB is the document database service offering from Amazon. SAS interacts with DocumentDB using industry standard ODBC or JDBC drivers. DocumentDB is MongoDB-compatible, which means existing MongoDB drivers and tools work with DocumentDB. SAS is currently building out functionality to support SAS Data Connector to MongoDB, and you should expect that to expand further into DocumentDB as well.

Graph Database

Amazon Neptune is the graph database service designed to work with a complex hierarchy of interconnected data. These databases are designed to query relationships in the data directly, reducing the number of table joins. SAS interacts with Amazon Neptune using industry standard ODBC or JDBC drivers.

Hadoop

The traditional deployment of Hadoop is changing dramatically with the cloud. Traditional Hadoop vendors may have a tough time keeping up with the service offerings available in the cloud. Hadoop still offers reliable replicated storage across nodes and powerful parallel processing of large jobs without much data movement. Amazon offers Elastic MapReduce (EMR) as its Hadoop as a service offering. Amazon EMR supports both SAS Data Connector to Hadoop and SAS Data Connect Accelerator for Hadoop.
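
A hedged sketch of a caslib using SAS Data Connector to Hadoop against an EMR cluster; the master node address, the paths to the collected Hadoop jars and configuration files, and the schema are placeholders:

caslib emrlib datasource=(
   srctype="hadoop",
   server="emr-master.example.com",
   username="hadoop",
   hadoopJarPath="/opt/sas/hadoopjars",
   hadoopConfigDir="/opt/sas/hadoopconfig",
   schema="default"
);

/* If SAS Data Connect Accelerator for Hadoop is licensed and the SAS Embedded
   Process is deployed on the cluster, adding dataTransferMode="parallel" to the
   datasource options pushes the data transfer work out to the EMR nodes */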

Finally

It is important to think about the use case for your database and the type of data that you plan to store before you select an AWS database service. Understanding your workloads is critical to getting the right performance and cost. When dealing with cloud databases, always remember that you are charged not only for the storage you use but also for the data you move out of the database. Doing analysis and reporting on your data may require data transfer. Be aware of these costs and think about how you can lower them by caching frequently accessed data or keeping it on-premises.

Additional Resources

  1. Support for Databases in SAS® Viya® 3.4
  2. Support for Cloud and Database Variants in SAS® 9.4

Accessing Databases in the Cloud – SAS Data Connectors and Amazon Web Services was published on SAS Users.

July 26, 2019

In a previous post, Zero to SAS in 60 Seconds- SAS Machine Learning on SAS Analytics Cloud, I documented my experience with a SAS free trial on the SAS Analytics Cloud. Well, the engineers at SAS have been busy and created another free trial. The new trial covers SAS Event Stream Processing (ESP).

This time last year (when just starting at SAS), I only knew ESP as extrasensory perception. I'm more enlightened now. Working through this exercise showed me how event stream processing is a powerful and effective tool that applies machine learning and streaming analytics to data to uncover insights for real-time decision making. In a nutshell, you create a model, stream your data, process the results, and make timely decisions based on the results.

The trial uses SAS ESPPy, allowing you to embed an ESP project inside a Python pipeline. To see ESPPy in action take a look at this video. To learn more about ESP and IoT see this article on the SAS Communities Library. In this article I chronicle my journey through the trial while introducing key concepts and operations of ESP.

Register and get started

The registration and initial login process is identical to the one in the machine learning article. You must have a SAS Profile to participate in the trial. The only difference is you need to follow this link to sign up for the ESP trial. Please refer to the machine learning article for detailed steps of signing up and logging in.

The use case

SAS Solar Farm in Cary

The SAS Solar Farm sits on almost 12 acres of SAS Headquarters property. There are 10,276 solar panels producing more than 3.6 million kilowatt hours annually. That’s enough power for more than 325 average sized U.S. homes.

As part of the environment management, it is important to continuously monitor the operation of the solar panels to optimize configuration parameters, detect potential equipment failure, and accurately forecast the amount of energy generated. Factors considered include panel angles, time of day, seasons, and weather patterns, as the energy generated depends directly on the amount of sun available to the panels.

The ESP project in this demo is pre-loaded in the trial and is run through a Jupyter notebook. The project monitors the energy (kWh) and power (kW) generated during a specific time interval, eliminating localized outlier effects and triggering alerts when there is a pre-defined difference in the energy generated between subsequent time intervals.

Solar Farm Data represented as digital art

Take two minutes and watch this video on how SAS uses SAS software to create a work of art with solar farm data.

Disclaimer: no sheep were harmed during data collection or writing of this article.

Navigating the trial

Once logged into the trial, you see the Applications screen.

ESP trial Applications screen

The Data and Team options in the left pane behave exactly as those in the machine learning trial. These sections allow you to access data and manage a multi-user system. Select the SAS Event Stream Processing icon to start a JupyterLab session.

JupyterLab home screen

I will not go into the details of JupyterLab here. The left pane contains menus, file management, and other options. The pane on the right displays three options:

Python 3 Notebook - a blank Jupyter notebook - documents that combine live, runnable code with narrative text (Markdown), equations (LaTeX), images, interactive visualizations and other rich output
Python 3 Console - a blank Python console - code consoles enable you to run code interactively in a kernel
Text File - basic text editor - enables you to edit text files in JupyterLab

For this article we're going to follow along and interact with the pre-loaded demo Solar Farm ESP project. To locate the Jupyter notebook double click the demo directory from the left pane.

Select the demo directory from the left pane

Next select Event_Stream_Processing. Before proceeding with the demo, I'd highly suggest opening the README.ipynb file.

Contents of the README notebook

Here you will find overview and environment organization information for the trial. The trial uses SAS ESPPy for designing, testing, and deploying projects on ESP Servers.

Step through the demo

Before starting the trial, I needed a little background on event stream processing. I located the SAS ESP product documentation. I recommend referring to it for details on the ESP model, objects, and workflow.

To access the demo, double click the demo directory from the left pane. The trial comes with five pre-loaded demos. Feel free to try any/all of them. Double click on ESP Basic Project - Solar Farm.ipynb to display the Solar Farm notebook. The notebook walks you through the ESP model creation and execution. To run a command place the cursor in a command cell and select the 'Run' button (triangle-shaped button at the top of the notebook). If no response returns when running the cell block, assume the commands ran successfully.

Below is a brief description of the steps in the project:

  1. Create the project and query used - this creates dedicated space and objects where the ESP process takes place
  2. Create input and aggregate windows - this action extracts desired data and creates data subsets from the stream
  3. Add a join window - this brings together lag and current values into the project
  4. Add a compute window - this calculates the difference between the previous and current event
  5. Add a filter window - this action filters occurrences outside a threshold value; this creates an alert for potential mechanical issues
  6. Define workflow connections - this defines the workflow between the various windows in the project
  7. Save the project - this generates an XML file for the project
  8. Load the project to the ESP Server - this loads the project and produces a graphical representation of the workflow

    Solar Farm project workflow

  9. Start streaming data - in this example, rather than streaming data in real time, the stream derives from the solar farm table data
  10. View solar farm data - this creates a graphical representation of streaming data

    Solar Farm graph for kW and kWh

While not included in the demo, the streaming data would pass through the filter and if a threshold breach occurs, an alert is created. Considering the graph above, alerts could very well have occurred just before 1:15 pm (IntkW drops from 185 to 150) and just before 2:30 pm (IntkW drops from 125 to 35).

Your turn

Now that you have a taste of ESP, feel free to step through the rest of the demos. You may also load your own data and create your own ESP models. Feel free to share your experience and what you create by leaving a comment.

SAS Event Stream Processing on SAS Analytics Cloud - my journey was published on SAS Users.

June 11, 2019

This article is a follow-on to a recent post from Jeff Owens, Getting started with SAS Containers. In that post, Jeff discussed building and running a single container for a SAS Viya runtime/IDE. Today we will go through how to build and run the full SAS Viya stack - visual components and all - in Kubernetes. Step 1 is building the container images and Step 2 is running the containers. For both steps, you can go to the sas-container-recipes GitHub repo for more detail and to obtain the tools needed to accomplish this task. An in-depth guide and more information is located on the wiki page in the repository.

The project development team at SAS has done an incredible job of making this new and intuitive way to dynamically create large collections of containers easy and foolproof, despite my long-winded explanation...

Building the Container Images

Keeping with the recipes theme, we are going to need to prepare a few ingredients to make this work. Of course, you will need a valid SAS_Viya_deployment_data.zip file containing your ordered products.

Build Machine

First, you need a Build Machine. This can be a lightweight server, but it needs to be running Linux. The build machine in this example is 2cpu x 8GB RAM, running RHEL 7.6. Hint – 2 cores is the minimum but the more you use for the build the better (faster). I have installed Docker version 18.09.5 here and I have a 100GB volume attached to my docker root (by default this is /var/lib/docker but you can easily change the location in your /etc/docker/daemon.json file).

You can review the full system requirements in the GitHub repository here. This article covers the "multiple" or "full" deployment types, so focus on that column in the table. This build machine executes the build script, which builds each of your containers, pushes them to your Docker registry, and creates the corresponding Kubernetes manifest files needed to launch your deployment.

Make sure you have cloned the sas-container-recipes repository to this machine.

Docker Registry

You will need access to a Docker registry. Your build machine must be able to push images into it, and your Kubernetes machines must be able to pull images from it. Prior to building, make sure you run the docker login myregistry.com command using the build uid. This docker login ensures a file is present at /home/.docker/config.json. This is a requirement whether or not you secure the registry with a form of authentication. Note, if your registry does not respond to pings you will need to add the --skip-docker-url-validation parameter to the build command.

Mirror Repo (Optional)

Similar to the single containers build, it is a good idea to create a mirror repository to host your SAS rpms. A local mirror gives you consistent performance during installation and a consistent build. However, if your containers are able to connect to ses.sas.download then you can skip the mirror step. Beware of the network implications and the fluid nature of these repos.

LDAP

Just like any other SAS Viya environment, all users/groups/authentication/authorization are managed by connecting to an external LDAP. This could be a quick-and-dirty OpenLDAP server we stand up ourselves, or a corporate Active Directory server. Regardless, we will have to be able to make this connection if we want to use SAS Viya's visual interfaces. The easiest and best way to handle this connection is with a sitedefault.yml file. Below is a sample sitedefault.yml that would hypothetically connect to host.com's corporate LDAP. You need to construct your own sitedefault file using values for your LDAP. Consult SAS documentation (linked above) for further information.

config:
    application:
        sas.logon.initial.password: sasboot
        sas.identities.providers.ldap.connection:
            host: myldap.host.com
            port: 368
            userDN: 'CN=ldapadmin,DC=host,DC=com'
            password: ldappassword
        sas.identities.providers.ldap.group:
            baseDN: OU=Groups,DC=host,DC=com
        sas.identities.providers.ldap.user:
            baseDN: DC=host,DC=com
        sas.identities:
            administrator: youruserid

Additionally, we will need to make sure a few of our containers have "host integration" with this same LDAP (specifically, the CAS container and the programming container). The way we do that is with a standard sssd.conf file. You should hopefully be able to track down a valid sssd.conf file for your site from an administrator. Hint – it may be necessary to add homedir (/home/%u) and default shell (/bin/bash) overrides to this file depending on your LDAP configuration.

The way one would apply these two files here is:

  1. place sssd.conf in the addons/auth-sssd directory and include the --addons "addons/auth-sssd" option when you run build.sh, as we do in the example later.
  2. place sitedefault.yml in the top level of sas-container-recipes. If the recipe sees a sitedefault.yml file here, it will base64 encode it and embed it as a value in the consul.yml config map. If you didn't do this beforehand, you can add your sitedefault.yml file later: the step below is optional, runs post-build, and is only necessary if you did not include sitedefault.yml pre-build.
    cat sitedefault.yml | base64 --wrap=0

    Next, copy and paste the output into your consul.yml configmap (by default you can find this in builds/full/manifests/kubernetes/configmaps/consul.yml). You want to add a new key/value similar to the following:

    consul_key_value_data_enc: Y29uZmlnOgogICAgYXBwbGlj......XNvZW1zaXRlLERDPWNvbQo=

Ingress

Ingress is a crucial component to make this come together because the only way to access your SAS Viya environment is through your Ingress. The recipe gives us an Ingress resource (one of the generated Kubernetes manifest files); however, an Ingress resource is simply an internal HTTP routing rule. We will need to make sure we have manually installed a valid Ingress controller inside of our Kubernetes environment, which can be a little tricky if you are new to Kubernetes. The Ingress controller reads and applies routing rules (Ingress resources) such as the ones created by the recipes.

Traefik and NGINX are the two most popular industry options. Or you might use the native Ingress controllers offered by AWS, Azure, or GCP if you are running your Kubernetes cluster in the cloud. But to reiterate, you will need an Ingress controller up and running.

Once your Ingress controller is up, you need to edit the provided manifests_usermods.yml. You should set SAS_K8S_INGRESS_DOMAIN to be the DNS name that resolves to a Kubernetes node that can reach your Ingress controller. And while you have this file open, you can also set a unique name for the Kubernetes namespace you want these resources deployed into (the default is "sas-viya"). This manifests_usermods.yml file is available in the util/ directory, so if you are going to use this then you will first make a copy of that file in the top-level sas-container-recipes directory and edit it there.

Kubernetes namespace

Build.sh

With all this in place we are ready to build. To summarize, the "pre-build" config needed here consists of the files we touched in this sas-container-recipes project:

Relevant pre-build files

So, we can go ahead and launch the build script. I prefer using environment variables for easier readability along with copying and pasting when things change - new registries, mirrors, tags, etc.

SAS_VIYA_DEPLOYMENT_DATA_ZIP=/path/to/SAS_Viya_deployment_data.zip
MIRROR_URL=mymirror.com/myrepo #optional
DOCKER_REGISTRY_URL=myregistry.com
SAS_RECIPE_TYPE=full
DOCKER_REGISTRY_NAMESPACE=viya
SAS_DOCKER_TAG=prod
 
# --mirror-url is optional; omit it to pull directly from the SAS repositories
./build.sh --type $SAS_RECIPE_TYPE \
--mirror-url $MIRROR_URL \
--docker-registry-url $DOCKER_REGISTRY_URL \
--docker-registry-namespace $DOCKER_REGISTRY_NAMESPACE \
--zip $SAS_VIYA_DEPLOYMENT_DATA_ZIP \
--tag $SAS_DOCKER_TAG \
--addons "addons/auth-sssd"

Once complete:

  1. We store container images (30-40 of them depending on the software you have ordered) locally in the build host's docker images directory.
  2. All these images also are tagged and pushed to our Docker Registry. For your organizational reference, the naming convention used is:
    $DOCKER_REGISTRY_URL/$DOCKER_REGISTRY_NAMESPACE/<image-name>:$SAS_DOCKER_TAG
  3. All our Kubernetes manifest files are available on the build machine in sas-container-recipes/builds/full/manifests/kubernetes. These fully configured manifest files are ready to use. They reference the images we have built and pushed.
  4. The build log gives us instructions for how to apply these resources to Kubernetes. These are simple commands you should be able to copy and paste to stand up our Viya environment.

Build log instructions

For the curious
The list below is what happened during the build process. Feel free to skip this section, you do not need to know how any of this works to use the recipes:

  1. You, the builder, invoke build.sh. This is a wrapper script around the greater build framework. This script creates a "builder container." Check out the Dockerfile in the top level of the recipes directory. This builder container builds from a golang base image because the build process is written in a few Go files (new as of April 2019). Several files from the sas-container-recipes project copy into this container, including said Go files.
    • Note, we did not have to install Go on our build machine since Go is running inside a container.
    • If you are interested in seeing what the builder container looks like, you can run this command: docker run -it --rm --entrypoint /bin/bash sas-container-recipes-builder:$SAS_DOCKER_TAG.
    • A 'sas' user is created inside of this container - this user has the same uid as the user who invoked build.sh on the host.
  2. build.sh also created a new subdirectory on the host called 'builds/<buildtype>-<timestamp>'. This will contain logs, manifests, and various templates used during this specific build.
  3. build.sh then runs that builder container and the real work gets underway. The entry point for the builder is:  go run main.go container.go order.go.  All those arguments you specified when invoking build.sh pass right into this Go program.  Also, the newly created "builds" directory mounts into the container at /sas-container-recipes/builds.
    • The host's /var/run/docker.sock file mounts into this container - this allows the builder container to run docker (docker in docker)
  4. This Go program then:
    • Generates a playbook from your deployment data file (SOE zip) using the sas-orchestration tool (https://support.sas.com/en/documentation/install-center/viya/deployment-tools/34/command-line-interface.html).
    • Creates Kubernetes manifests for the images set to build.
    • Gathers sets of Ansible roles to install in each container, based on the entitlement of your software order.
    • Generates a Dockerfile for each container, where each applicable Ansible role installs in a new Docker layer
    • Creates a "build context" for each container with the generated Dockerfile and the Ansible role files.
    • Starts a docker build process for each container. The Dockerfile installs ansible and executes the playbook "locally" (inside of each container).
    • Pushes these images into your registry as each build finishes.
    • Note, this happens inside of containers, and the builds execute concurrently. Recall this build machine has 2 cores, so only 2 containers build at a time and it took several hours.  If we used a 16-core machine, this whole build would go faster.  In another terminal, look at docker stats during the build.  Another significant “performance” impact is the network bandwidth between your build machine and your registry.

Running the Containers

We are going to run these containers inside of a Kubernetes environment. Here are the finishing touches needed to give us a completely containerized SAS Viya environment running in Kubernetes. Note that by default this deploys into a new namespace inside of your Kubernetes cluster and isolates the resources from anything else running.

Kubernetes Environment

Since we built the full stack, we'll need to make sure we have sufficient resources to run all of these containers at the same time. We'll need a minimum of 8 cores and 80GB RAM available. Remember CAS is a multithreaded, in-memory runtime, so the more cores and RAM you provide, the more horsepower you'll have for doing actual analytical work with SAS and CAS.

Kubectl

Hopefully, if you've gotten this far, you are familiar with kubectl, the client tool/interface used with a Kubernetes cluster. Consider it a CLI wrapper around the Kubernetes API. But for thoroughness, you will launch your SAS Viya deployment from whatever machine you are running kubectl on. If this happens to be the same machine you built on, then you can stay inside of the sas-container-recipes directory you started in and copy and paste those kubectl apply -f... commands. Or you can copy your manifest files somewhere else and modify those commands accordingly. In either case, once those commands run, your environment is up, and you should be able to access SAS Environment Manager and other SAS web apps. If you added your userid as an administrator in the sitedefault.yml file, then you can log in as yourself with admin access.
Apply the manifests:

Apply the manifests

And after a few minutes your pods should be up (the first time takes the longest since images must be pulled). Note that a pod showing as running doesn't mean all your SAS Viya services are running. It may take up to 30 minutes for all services to be up and stabilized.

Pods list

With your Ingress and DNS rules set up correctly, you should be able to reach your environment:

SAS login screen

Based on properly configured sitedefault.yml and sssd.conf files, you should be able to log in as an LDAP user.

Miscellaneous Notes

Scaling

Once your SAS Viya environment is up and running in Kubernetes, the following kubectl command adds CAS worker nodes to scale out the capacity of our CAS server.

kubectl scale deployment sas-viya-cas-worker --replicas=5 -n sas-viya-prod

Note, there isn’t any value in adding any more workers than you have physical nodes in your cluster.

Performance

SAS is a powerful programming language designed to handle heavy workloads on large data. General hardware performance has historically been a chief concern to customers implementing SAS. Containers bring a whole new wrinkle to the concept of performance given the general notion of hardware abstraction. One performance-related question is: how can we ever guarantee the IO provided by the underlying filesystem (SASWORK, CAS_DISK_CACHE)? As with Kubernetes and storage/state in general, no easy answer exists. It falls to the Kubernetes operator to make high-performance filesystems (i.e. local SSD) available on all nodes a SAS programming or CAS container might land on, and to manually edit the corresponding manifest files to leverage those host disks. Alternatively, we can try to limit the burden on these scratch disk spaces. For CAS, this means ensuring we have more RAM available than data in use.

Amnesia

See the summary section below for a caveat about this deployment methodology – this is not quite a complete implementation for "production" types of environments, at least not without understanding customer configuration requirements. You should have a discussion with your sales team about some of these details. But please be aware that building and deploying as we did here leaves us with an "Amnesiac Viya" (this useful term coined by an astute SAS employee). That is, there is no state here. If and when you take your environment down, or scale pods to 0 across services, you get a "brand new" or "fresh" environment once it is brought back up. The good news is this also means if we run into any issues, we can easily delete the whole namespace and restart. If you want to persist any user data, config, reports, or code, you will have to manually attach storage to a few locations.

Full vs Multiple

Note, here we used SAS_DEPLOYMENT_TYPE=full. This built the entire Viya stack, visual interfaces, microservices and all. Alternatively, if we set the deployment type to "multiple" we get three container images – programming, httpproxy, and cas. This would be all we need if we wanted to write SAS code, whether we wanted to use SAS Studio or an external IDE like Jupyter. And we could still scale out our CAS cluster the same way as we did in our full environment.

Summary

Like everyone else's, the SAS container strategy is quickly evolving. SAS Viya, as a scalable, highly available, services-oriented architecture, is a perfect fit to run in containers inside of the Kubernetes orchestration framework. Kubernetes brings tremendous operational benefits to the table for this type of software: smoother deployments, higher uptime, instant scale, and much more efficient hardware usage, to name a few.

As you will see in the build log when running the recipe, this is an "EXPERIMENTAL" deployment process. The recipes are an excellent way to get your hands on a Kubernetes version of SAS Viya early. Future releases of SAS Viya will be fully "containerized" and "kubernetes-ized" so customers won’t be building their own containers in this manner. Rather, SAS will provide a Helm chart to customers that will pull container images straight from SAS and apply them into their Kubernetes environments appropriately. Further, many aspects of SAS Viya’s infrastructure will be redesigned to be more "Kubernetes native," but the general feel of this model is what sysadmins/operators should see from SAS going forward.

Deploying the Full SAS Viya Stack in Kubernetes was published on SAS Users.

April 8, 2019


As word spreads that SAS integrates with open source technologies, people are beginning to explore how to connect to, interact with, and use SAS in new ways. More and more users are examining the possibilities, and with this come questions like: How do I code A, integrate B, and accomplish C?

Documentation is plentiful but is undergoing a makeover. People aren’t sure where to go for help – and that's why we're launching the SAS Developers Community, where you can gather to ask questions and get answers.

The community will mirror the activities in existing SAS Communities: Q&A, library articles, tips, technical discussions, etc. We migrated some content from other boards. For example, we moved the content from the Coding on SAS Viya board to the new community. Additionally, we scoured other boards for content that may be better aligned with developers and moved it. We also created some original content. Any good community needs participation by all, so read on and get the 411 on the new Developers Community.

Who is the target audience?

Developers – data scientists, application developers, analysts, programmers and administrators – who need to access SAS resources and/or run SAS procedures. This audience may or may not have SAS programming skills but need to access and analyze data using SAS.

What can developers expect to find?

The Developers Community provides a forum for collaboration, Q&A, and knowledge and resource sharing. The focus will be on developers using open source languages and technology. The community will create synergy between communities.sas.com, developer.sas.com, and github.com/sassoftware. SAS employees and external users will post how-to articles and other items of interest in the library section of the community. This community will not replace the SAS Programming Communities; rather, it will fill a void for non-SAS programmers who have a need or desire to interact with SAS.

When will the community launch?

The Developers Community is live! The site is public, and we've moved existing artifacts to the community. I am attending SAS Global Forum and will be available to answer questions about the new community from our booth in the Quad. Come by and see me!

Where will the community live?

The Developers Community exists on communities.sas.com, under the Developers Category.

Why do we need a community for developers?

Developers need a centralized place to share ideas, ask and answer questions, and discover resources. Currently developers lack a forum to work through things such as authentication, coding, API use, and integration issues. The community will encourage communication, engagement and leadership. Also, the Developers Community will be tightly integrated with the SAS Developers web site and SAS GitHub resources.

How do we go about creating the community?

After seeding the SAS Developer Community with existing discussions, we'll build out a group of SAS developer experts to help monitor the community. The true magic will happen as questions are asked, discussions transpire, and ideas are shared. But we need your help too. Here is your call to action.

Share the community with your networks, buddies and even family members who may get something out of chatting it up about how to develop in SAS. The livelihood of the community hinges on user interaction. Our current and future users will thank you for it. And you may make a friend while you're at it.

Launching the Developers Community in SAS Communities was published on SAS Users.

April 5, 2019

You are a data scientist, in your office, doing data scientist-y things when your manager's manager's manager makes an impossible request. She wants you to take a raw data set from the stem cell research team, scrub the data, create and score models, and be ready to rescore when new data becomes available. And she wants it in a week. WHAT?! Your company doesn't own an analytics software license, and a spreadsheet is not going to work on this data with millions of records. Even if you received funding, how could you ever create and maintain an environment under your tight deadline? Take a deep breath, conjure your inner data scientist acumen, and realize SAS has the answer.

SAS Machine Learning on SAS Analytics Cloud provides on-demand programming access to machine learning algorithms in the cloud. No downloads, no install, no infrastructure, no maintenance. This solution provides a multithreaded, multiuser environment for concurrent access to data in memory. The solution is designed for data scientists (and others) coding in SAS or Python and allows them on-demand programmatic access to SAS Viya. You can find more details on Analytic Cloud in the fact sheet. You can even try it for free! The rest of this article will walk you through the features of this new SAS offering and outline how it can help you complete the task bestowed upon you.

Register and get started

Literally, to sign up for the trial, all you need are a SAS Profile, an email address, and a PC. You will be coding in SAS in less than a minute. From the SAS Cloud Analytics page, select the Get Free Trial button. This takes you to the SAS Profile login page (note you can create your SAS Profile here if you do not have one).

SAS Profile log in or creation

Agree to the Terms and Conditions on the License Agreement page and select the Continue button:

Trial License Agreement

You will receive an email containing a URL much like the following:

email confirmation with trial URL

Logging in

Select the link or paste it into your browser (Google Chrome 64-bit recommended) and you will see the log in screen. Enter your SAS Profile credentials and click the Sign In button.

Sign In screen

The Home screen (Applications) appears.

Home Page

We'll discuss the Data and Team pages in further detail later on in this article. You have two options for applications: SAS Studio (for SAS programming) and JupyterLab (for Python programming). This article focuses on SAS Studio. A follow up article will cover the JupyterLab use case. Select the SAS Studio button, a new tab opens to SAS Studio, and we're ready to start coding.

SAS Studio

You are familiar with the SAS language, but you need to brush up a little. Have no fear, support documentation is easily accessible. Also, the SAS Data Mining and Machine Learning Community is a great place to discover additional resources and ask questions. Finally, embedded in SAS Studio are code snippets. You decide to explore the latter.

Code snippets

In SAS Studio select the Snippets twisty in the left pane. Navigate to the SAS Viya Machine Learning section. Here you find code samples you will use to prep and analyze your data. When opening a snippet, you see code and detailed comments on what the code will accomplish. You will use these snippets as a guide when you load and prep your data and preform your analysis. Below is an image of the Prepare and Explore Data snippet. Notice each code step has accompanying comments.

You read through each snippet in the Machine Learning section. The command and structure of the code comes back to you pretty quickly and you're now ready to try it all out on your own data.

Uploading data

Now that you have an idea of what code you need to write, you need to load the data from the research department. You accomplish this by selecting the Server Files and Folders twisty and navigating to the Folder Shortcuts section. In this instance you want to upload your file into the shared/data directory (I'll explain why I chose this location in a moment). Use the Upload button to upload the research data file.

Upload file to the data directory

You're not alone

Files uploaded to shared/data are now visible to others logged into the environment. Wait, did I forget to mention this is a multi-user environment?! Well, yes, it is. You can invite others to collaborate on the project. To add and manage users, return to the Home screen (leaving SAS Studio open). Select the Team section in the left pane. The Team page lists users and displays an Invite button, used to send an invitation for system access to others.

Teams page

To invite others, click the button and enter the email address of the new user. This generates and sends an invitation email. The new user accepts the invite and now has access to the system. Using the URL provided in the email, the new user logs in with their own SAS Profile credentials. The default role for new users is 'User.' A user with admin privileges can change the role to 'Admin.' In the free trial, you are permitted to have a total of five users.

Shared data

You may have guessed by now the Data section lists directories and files located in the shared directory in SAS Studio.

Data page

You also notice here you have 5 GB of storage space. This includes shared and non-shared files.

I love this. How do I get more?

Now you know your way around the system and are ready to start coding. Return to SAS Studio, open a new program, and commence your analysis of the stem cell data. When you successfully deliver the project and impress your management chain, you can mention how the SAS Analytics Cloud solution made it all possible (and simple). You now have a case for the departmental procurement of the solution opening your organization up to add more users, access more storage, and gain more power to run advanced machine learning algorithms on your data.

Your turn

In this article I've outlined how to easily register for the SAS Machine Learning trial and start coding in the matter of a minute. Try it out yourself. Register, load your data, get coding, and solve your problem.

Related Resources

For more details on the development of SAS Analytics Cloud, check out Missy Hannah's interview with two UI developers on the project.

Zero to SAS in 60 Seconds- SAS Machine Learning on SAS Analytics Cloud was published on SAS Users.

March 7, 2019

As of December 2018, any customer with a valid SAS Viya order is able to package and deploy their SAS Viya software in Docker containers. SAS has provided a fully documented and supported project (or “recipe”) for easily building these containers. So how can you start? You can simply stop reading this article and go directly to the GitHub repository and follow the instructions there. Otherwise, in this article, Jeff Owens, a solutions architect at SAS, provides a little color commentary around the process in case it is helpful…

First of all, what is the point of these containers?

Well, at its core, remember SAS and its massively parallel, in-memory counterpart, Cloud Analytic Services (CAS), make up a powerful runtime for data processing and analytics. A runtime is simply an engine responsible for processing and executing a particular type of code (i.e., SAS code). Traditionally, the SAS runtime would live on a centralized server somewhere and users would submit their "jobs" to that SAS runtime (server) in a variety of ways. The SAS server supports a number of different products, tasks, etc. – but for this discussion let's just focus on the scenario where a job is a ".sas" file, perhaps developed in an IDE like Enterprise Guide or SAS Studio, and submitted to the SAS runtime engine via the IDE itself, a bash shell, or maybe even SAS' enterprise grade scheduler and job management solution – SAS Grid. In these cases, the SAS and CAS servers are on dedicated, always-on physical servers.

The brave new containerized world in which we live provides us a new deployment model: submit the job and create the runtime server at the same time. Plus, only consume the exact resources from the host machine or the Kubernetes cluster the specific job requires. And when the job finishes, release those resources for others to use. Kubernetes and PaaS clusters are quite likely shared environments, and one of the major themes in the rise of the containers is the further abstraction between hardware and software. Some of that may be easier said than done, particularly for customers with very large volumes of jobs to manage, but it is indeed possible today with SAS Viya on Docker/Kubernetes.

Another effective (and more immediate) usage of this containerized version of SAS Viya is simply an ad hoc, on-demand, temporary development environment. The container package includes SAS Studio, so one can quickly spin up a full SAS Viya programming sandbox – SAS Studio as well as the SAS and CAS runtimes. Here they can develop and test SAS code, and just as quickly tear the environment down when no longer needed. This is useful for users that: (a) don't have access to an "always-on" environment for whatever reason, (b) want to try out experimental code that could potentially consume resources from a shared "always-on" SAS environment, and/or (c) have a Kubernetes cluster with many more resources available than their always-on environment and want to try a BIG job.

Yes, it is possible to deploy the entire SAS Viya stack (microservices and all) via Kubernetes but that discussion is for another day. This post focuses strictly on the SAS Viya programming components and running on a single machine Docker host rather than a Kubernetes cluster.

Build the container image

I will begin here with a fresh single machine RHEL 7.5 server running on OpenStack. But this machine could have been running on any cloud or VM platform, and I could use any (modern enough) flavor of Linux thanks to how Docker works. My machine here has 8 CPUs, 16GB RAM, and a 50GB root volume. Less or more is fine. A couple of notes to help understand how to configure an instance:

  • The final docker container image we will end up with will be ~10GB in size and like all docker images will live in /var/lib/docker/images by default.
    • Yes, that is large for a container. Most of this size is just static bins and libs that support the very developed SAS language. Compare to an Anaconda image which is ~3.6GB.
  • As for RAM, remember any tables loaded to CAS are loaded to memory (and will swap to disk as needed). So, your memory choice should be directly dependent on the data sizes you expect to work with.
  • Similar story for cores – CAS code is multithreaded, so more cores = more parallelization.

The first step is to install Docker.

Following along with sas-container-recipes now, the first thing I should do is mirror the repo for my order. Note, this is not a required step – you could build this container directly from SAS repos if you wanted, but we’ll mirror as a best practice. We could simply mirror and serve it over the local filesystem of our build host, but since I promised color I’ll serve it over the web instead. So, these commands run on a separate RHEL server. If you choose to mirror on your build host, make sure you have the disk space (~30GB should be plenty). You will also need your SAS_Viya_deployment_data.zip file available on the SAS Customer Support site. Run the following code to execute the setup.

$ wget https://support.sas.com/installation/viya/34/sas-mirror-manager/lax/mirrormgr-linux.tgz
$ tar xf mirrormgr-linux.tgz
$ rm -f mirrormgr-linux.tgz
$ mkdir -p /repos/viyactr
$ mirrormgr mirror --deployment-data SAS_Viya_deployment_data.zip --path /repos/viyactr --platform x64-redhat-linux-6 --latest
$ yum install httpd -y
$ systemctl start httpd
$ systemctl enable httpd
$ ln -s /repos/viyactr /var/www/html/sas_repo

Next, I go ahead and clone the sas-containers-recipes repo locally and upload my SAS-Viya-deployment-data.zip file and I am ready to run the build command. As a bonus, I am also going to use my site’s (SAS’) sssd.conf file so my container will use our corporate Active Directory for authentication. If you do not need or want that integration you can skip the “vi addons/sssd.conf” line and change the “--addons” option to “addons/auth-demo” so your container seeds with a single “sasdemo:sasdemo” user:password instead.

$ # upload SAS_Viya_deployment_data.zip to this machine somehow
$ git clone https://github.com/sassoftware/sas-container-recipes.git
$ cd sas-container-recipes/
$ vi addons/sssd.conf # <- paste in your site's sssd.conf file
$ ./build.sh \
--type single \
--zip ~/SAS_Viya_deployment_data.zip \
--mirror-url http://jo.openstack.sas.com/sas_repo \
--addons "addons/auth-sssd"

The build should take about 45 minutes and produce a single container image (there might be a few intermediate images, but the result is one image with a thin layer or two on top). You might want to give this image a new name (docker tag) and push it into your own private registry (docker push). Aside from that, we are ready to run it.
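
For example (the registry host name below is hypothetical; substitute your own):

$ docker tag sas-viya-programming:xxxxxx registry.example.com/sas-viya-programming:xxxxxx
$ docker push registry.example.com/sas-viya-programming:xxxxxx
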
If you are curious, look in the addons directory for the other optional layers you can add to your container. Several tools are available for easily configuring connections to external databases.

Run the container

Here is the run command we can use to launch the container. Note the image name I use here is “sas-viya-programming:xxxxxx” – this is the image that has my sssd layer built on top of it.

$ docker run \
--detach \ 
--rm \ 
--env CASENV_CAS_VIRTUAL_HOST=$(hostname -f) \ 
--env CASENV_CAS_VIRTUAL_PORT=8081 \ 
--publish 5570:5570 \ 
--publish 8081:80 \ 
--name sas-viya-programming \ 
--hostname sas-viya-programming \ 
sas-viya-programming:xxxxxx
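
Once the container is launched, you can confirm it is running and watch the services come up before you try to connect:

$ docker ps --filter name=sas-viya-programming
$ docker logs --follow sas-viya-programming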

Connect to the container

And now, in a web browser, I can go to http://<docker-host>:8081/SASStudio and I will end up in SAS Studio, where I can sign in with my internal SAS credentials. To stop the container, use the name you gave it: "docker stop sas-viya-programming". Because we used the "--rm" flag, the container will be removed (completely destroyed) when we stop it.

Note we are explicitly mapping the HTTP port (8081:80) so we know exactly how to get to SAS Studio. If you want to start another container on the same host, you will need to use a different port or you will get an "address already in use" error. Also note that we might want to connect directly to this CAS server from something other than SAS Studio (localhost) – a remote Python client, for example. We can use the other port we mapped (5570:5570) to connect to the CAS server.

Persist the data

Running this container with the above command means anything and everything done inside the container (configuration changes, code, data) will not persist if the container stops and a new one is started later. Luckily, this is a standard and easy-to-solve scenario with Docker and Kubernetes. Here are a couple of targets inside the container you might be interested in mounting a volume to:

  • /tmp – this is where CAS_DISK_CACHE lives by default, not to mention SASWORK. These are the scratch spaces used by the runtimes. If you are working with small data and don’t care too much about performance, there is no need to worry about this. But to optimize your container, we suggest mounting a Docker volume to this location (or, ideally, bind mounting a high-performance storage device here). Note that Docker generally prefers Docker volumes over bind mounts, but that is more about manageability, security, and portability than performance.
  • /data – this directory doesn’t necessarily exist, but when you mount a volume into a container the target location is created. So you could call this target whatever you want, assuming it doesn’t exist yet. Bind mounting is tempting here and OK to do, but consider the scenario where another user wants to run your container following instructions you provided – it is better to use a Docker volume than to force them to create the directory on the host. If you have an NFS location, bind mounting that makes sense.
  • /code – same spiel as with /data. Once you are in the container you can save your work here and it will persist in the Docker volume from run to run of your container.

Here is what an updated docker run command might look like with these volumes included:

$ docker run \ 
--detach \ 
--rm \ 
--env CASENV_CAS_VIRTUAL_HOST=$(hostname -f) \ 
--env CASENV_CAS_VIRTUAL_PORT=8081 \ 
--volume mydata:/data \ 
--volume /nfsdata:/nfsdata \ # example syntax for bind mount instead of docker volume mount 
--volume mycode:/code \ 
--volume sastmp:/tmp \ 
--publish 5570:5570 \ 
--publish 8081:80 \ 
--name sas-viya-programming \ 
--hostname sas-viya-programming \ 
sas-viya-programming:xxxxxx
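
If you later need to find where Docker placed those named volumes on the host (to back them up, for example), you can list and inspect them:

$ docker volume ls
$ docker volume inspect mydata    # the Mountpoint field shows the location on the Docker host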

Can I run this on my laptop?

Yes. You just need to install Docker on your laptop (go to docker.com for that). You can certainly follow the instructions above to build and run locally. You can even push the container image to an internal registry so other users can skip the build and just run it.

So far, we have only talked about the "ad hoc" or "sandbox" development use case for this container. A later article may cover how to run in batch mode, or maybe we will move straight to multi-container deployments and Kubernetes. In the meantime, though, below is one way to submit a .sas program as a batch job to the single container we have built.
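
The sketch below uses docker exec against the running container. The path to the SAS executable is an assumption based on a typical SAS Viya layout (verify the actual path inside your own image), and it assumes the mycode volume from the run command above holds your program:

$ docker exec sas-viya-programming \
    /opt/sas/viya/home/SASFoundation/sas \
    -sysin /code/myprogram.sas \
    -log /code/myprogram.log \
    -print /code/myprogram.lst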

Give it a try!

Try creating your own image and deploying a container. Feel free to comment on your experience.

More info:

SAS Communities article: Running SAS Analytics in a Docker container
SAS Global Forum paper: Docker Toolkit for Data Scientists – How to Start Doing Data Science in Minutes!
SAS Global Forum Tech Talk video: Deploying and running SAS in Containers

Getting Started with SAS Containers was published on SAS Users.

January 25, 2019
 

Need to authenticate on REST API calls

In my blog series regarding SAS REST APIs (article 1, article 2, article 3) I outlined how to integrate SAS analytical capabilities into applications. I detailed how to construct REST calls, build body parameters and interpret the responses. I've not yet covered authentication for the operations, purposefully putting the cart before the horse. If you're not authenticated, you can't do much, so this post will help to get the horse and cart in the right order.

While researching authentication I ran into multiple, informative articles and papers on SAS and OAuth. A non-exhaustive list includes Stuart Rogers' article on SAS Viya authentication options, one of which is OAuth. Also, I found several resources on connecting to external applications from SAS with explanations of OAuth. For example, Joseph Henry provides an overview of OAuth and using it with PROC HTTP and Chris Hemedinger explains securing REST API credentials in SAS programs in this article. Finally, the SAS Viya REST API documentation covers details on application registration and access token generation.

Consider this post a quick guide to summarize these resources and shed light on authenticating via authorization code and passwords.

What OAuth grant type should I use?

Choosing the grant method to get an access token with OAuth depends entirely on your application. You can get more information on which grant type to choose here. This post covers two grant methods: authorization code and password. Authorization code grants are generally used with web applications and considered the safest choice. Password grants are most often used by mobile apps and applied in more trusted environments.

The process, briefly

Getting an external application connected to the SAS Viya platform requires the following steps:

  1. Use the SAS Viya configuration server's Consul token to obtain an ID Token to register a new Client ID
  2. Use the ID Token to register the new client ID and secret
  3. Obtain the authorization code
  4. Acquire the OAuth access token for the Client ID using the authorization code
  5. Call the SAS Viya API using the access token for authentication

Registering the client (steps 1 and 2) is a one-time process. You will need a new authorization code (step 3) if the access token is revoked. The access and refresh tokens (step 4) are created once and only need to be refreshed if/when the token expires. Once you have the access token, you can call any API (step 5) if your access token is valid.

Get an access token using an authorization code

Step 1: Get the SAS Viya Consul token to register a new client

The first step in registering the client is to get the Consul token from the SAS server. As a SAS administrator (sudo user), read the Consul token and display it with the following commands:

$ export CONSUL_TOKEN=`cat /opt/sas/viya/config/etc/SASSecurityCertificateFramework/tokens/consul/default/client.token`
$ echo $CONSUL_TOKEN
64e01b03-7dab-41be-a104-2231f99d7dd8

The Consul token is then used to obtain an access token for registering the new application. Use the following cURL command to obtain that token:

$ curl -k -X POST "https://sasserver.demo.sas.com/SASLogon/oauth/clients/consul?callback=false&serviceId=app" \
     -H "X-Consul-Token: 64e01b03-7dab-41be-a104-2231f99d7dd8"
 {"access_token":"eyJhbGciOiJSUzI1NiIsIm...","token_type":"bearer","expires_in":35999,"scope":"uaa.admin","jti":"de81c7f3cca645ac807f18dc0d186331"}

The returned token can be lengthy. To assist in later use, create an environment variable from the returned token:

$ export IDTOKEN="eyJhbGciOiJSUzI1NiIsIm..."

Step 2: Register the new client

Change the client_id, client_secret, and scope values in the code below. The scope should always include "openid" along with any other groups this client needs to appear in its access tokens. You can specify "*", but then the user is prompted to approve all of their groups, which is tedious. The example below uses one group named "group1".

$ curl -k -X POST "https://sasserver.demo.sas.com/SASLogon/oauth/clients" \
       -H "Content-Type: application/json" \
       -H "Authorization: Bearer $IDTOKEN" \
       -d '{
        "client_id": "myclientid", 
        "client_secret": "myclientsecret",
        "scope": ["openid", "group1"],
        "authorized_grant_types": ["authorization_code","refresh_token"],
        "redirect_uri": "urn:ietf:wg:oauth:2.0:oob"
       }'
{"scope":["openid","group1"],"client_id":"app","resource_ids":["none"],"authorized_grant_types":["refresh_token","authorization_code"],"redirect_uri":["urn:ietf:wg:oauth:2.0:oob"],"autoapprove":[],"authorities":["uaa.none"],"lastModified":1547138692523,"required_user_groups":[]}

Step 3: Approve access to get authentication code

Place the following URL in a browser. Change the hostname and myclientid in the URL as needed.

https://sasserver.demo.sas.com/SASLogon/oauth/authorize?client_id=myclientid&response_type=code

The browser redirects to the SAS login screen. Log in with your SAS user credentials.

SAS Login Screen

On the Authorize Access screen, select the openid checkbox (and any other required groups) and click the Authorize Access button.

Authorize Access form

After submitting the form, you'll see an authorization code. For example, "lB1sxkaCfg". You will use this code in the next step.

Authorization Code displays

Step 4: Get an access token using the authorization code

Now that we have the authorization code, we use it in the following cURL command to get an access token from SAS.

$ curl -k https://sasserver.demo.sas.com/SASLogon/oauth/token -H "Accept: application/json" -H "Content-Type: application/x-www-form-urlencoded" \
     -u "myclientid:myclientsecret" -d "grant_type=authorization_code&code=lB1sxkaCfg"
{"access_token":"eyJhbGciOiJSUzI1NiIsImtpZ...","token_type":"bearer","refresh_token":"eyJhbGciOiJSUzI1NiIsImtpZC...","expires_in":35999,"scope":"openid","jti":"b35f26197fa849b6a1856eea1c722933"}

We use the returned token to authenticate and authorize the calls made between the client and SAS. We also get a refresh token we use to issue a new token when the current one expires. This way we can avoid repeating all the previous steps. I explain the refresh process further down.

We will again create environment variables for the tokens.

$ export ACCESS_TOKEN="eyJhbGciOiJSUzI1NiIsImtpZCI6ImxlZ..."
$ export REFRESH_TOKEN="eyJhbGciOiJSUzI1NiIsImtpZC..."

Step 5: Use the access token to call SAS Viya APIs

The prep work is complete. We can now send requests to SAS Viya and get some work done. Below is an example REST call that returns user preferences.

$ curl -k https://sasserver.demo.sas.com/preferences/ -H "Authorization: Bearer $ACCESS_TOKEN"
{"version":1,"links":[{"method":"GET","rel":"preferences","href":"/preferences/preferences/stpweb1","uri":"/preferences/preferences/stpweb1","type":"application/vnd.sas.collection","itemType":"application/vnd.sas.preference"},{"method":"PUT","rel":"createPreferences","href":"/preferences/preferences/stpweb1","uri":"/preferences/preferences/stpweb1","type":"application/vnd.sas.preference","responseType":"application/vnd.sas.collection","responseItemType":"application/vnd.sas.preference"},{"method":"POST","rel":"newPreferences","href":"/preferences/preferences/stpweb1","uri":"/preferences/preferences/stpweb1","type":"application/vnd.sas.collection","responseType":"application/vnd.sas.collection","itemType":"application/vnd.sas.preference","responseItemType":"application/vnd.sas.preference"},{"method":"DELETE","rel":"deletePreferences","href":"/preferences/preferences/stpweb1","uri":"/preferences/preferences/stpweb1","type":"application/vnd.sas.collection","itemType":"application/vnd.sas.preference"},{"method":"PUT","rel":"createPreference","href":"/preferences/preferences/stpweb1/{preferenceId}","uri":"/preferences/preferences/stpweb1/{preferenceId}","type":"application/vnd.sas.preference"}]}

Use the refresh token to get a new access token

To use the refresh token to get a new access token, simply send a cURL command like the following:

$ curl -k https://sasserver.demo.sas.com/SASLogon/oauth/token -H "Accept: application/json" \
     -H "Content-Type: application/x-www-form-urlencoded" -u "myclientid:myclientsecret" -d "grant_type=refresh_token&refresh_token=$REFRESH_TOKEN"
{"access_token":"eyJhbGciOiJSUzI1NiIsImtpZCI6ImxlZ...","token_type":"bearer","refresh_token":"eyJhbGciOiJSUzI1NiIsImtpZCSjYxrrNRCF7h0oLhd0Y","expires_in":35999,"scope":"openid","jti":"a5c4456b5beb4493918c389cd5186f02"}

Note that the access token is new while the refresh token remains the same. Use the new token for future REST calls, making sure to replace the ACCESS_TOKEN variable with the new value. The access token has a default life of ten hours before it expires. Most applications deal with expiring and refreshing tokens programmatically. If you wish to change the default expiry of an access token in SAS, adjust the JWT properties in the SAS configuration.
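
If you script the refresh step, you can capture the new access token directly into the environment variable rather than copying it by hand. This is a minimal sketch; it assumes the jq utility is available on the client machine:

$ export ACCESS_TOKEN=$(curl -sk https://sasserver.demo.sas.com/SASLogon/oauth/token \
     -H "Accept: application/json" -H "Content-Type: application/x-www-form-urlencoded" \
     -u "myclientid:myclientsecret" \
     -d "grant_type=refresh_token&refresh_token=$REFRESH_TOKEN" | jq -r '.access_token')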

Get an access token using a password

The steps to obtain an access token with a password grant are largely the same as with the authorization code, so I highlight only the differences below without repeating all the steps.

The process for obtaining the ID token and using it to register the client is the same as described earlier. The first difference appears when registering the client: in the code below, notice the authorized_grant_types key has a value of password rather than authorization_code.

$ curl -k -X POST https://sasserver.demo.sas.com/SASLogon/oauth/clients -H "Content-Type: application/json" \
       -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZ..." \
       -d '{
        "client_id": "myclientid", 
        "client_secret": "myclientsecret",
        "scope": ["openid", "group1"],
        "authorized_grant_types": ["password","refresh_token"],
        "redirect_uri": "urn:ietf:wg:oauth:2.0:oob"
        }'
{"scope":["openid","group1"],"client_id":"myclientid","resource_ids":["none"],"authorized_grant_types":["refresh_token","authorization_code"],"redirect_uri":["urn:ietf:wg:oauth:2.0:oob"],"autoapprove":[],"authorities":["uaa.none"],"lastModified":1547801596527,"required_user_groups":[]}

The client is now registered on the SAS Viya server. To get the access token, we send a command like the one we used with the authorization code, this time supplying a username and password.

curl -k https://sasserver.demo.sas.com/SASLogon/oauth/token \
     -H "Content-Type: application/x-www-form-urlencoded" -u "myclientid:myclientsecret" -d "grant_type=password&username=sasdemo&password=mypassword"
{"access_token":"eyJhbGciOiJSUzI1NiIsImtpZCI6Imx...","token_type":"bearer","refresh_token":"eyJhbGciOiJSUzI1NiIsImtpZ...","expires_in":43199,"scope":"DataBuilders ApplicationAdministrators SASScoreUsers clients.read clients.secret uaa.resource openid PlanningAdministrators uaa.admin clients.admin EsriUsers scim.read SASAdministrators PlanningUsers clients.write scim.write","jti":"073bdcbc6dc94384bcf9b47dc8b7e479"}

From here, sending requests and refreshing the token steps are identical to the method explained in the authorization code example.

Final thoughts

At first, OAuth seems a little intimidating; however, after registering the client and creating the access and refresh tokens, the application handles all of the authentication components. The process runs smoothly if you plan and make decisions up front. I hope this guide clears up any questions you may have on securing your application with SAS. Please leave questions or comments below.

Authentication to SAS Viya: a couple of approaches was published on SAS Users.

December 22, 2018
 

This post rounds out the year and my series of articles on SAS REST APIs. The first two articles in the series, Using SAS Viya REST APIs to access images from SAS Visual Analytics and Using SAS Cloud Analytics Service REST APIs to run CAS Actions, examined how to use the SAS Viya and SAS CAS REST APIs to access SAS data from external resources. Follow the links for a quick detour to get some background. This article takes things a step further and outlines how to use a simple application to interact with SAS Viya using REST APIs.

What do chocolate and toffee have to do with optimization? Read on and find out.

The application

When deciding on an example for this article, I wanted to focus on the interaction between the application and SAS, not on application complexity. I decided to use an application created by my colleague, Deva Kumar. His OptModel1 application is built on the restAF framework and demonstrates how SAS REST APIs can be used to build applications that exploit various SAS Viya capabilities. This application optimizes the quantities of chocolate and toffee to purchase based on a budget entered by the user.

Think of the application as comparable to the guns-and-butter economic model. The idea is that the more you spend on the military (guns), the less you can spend on domestic programs and civilian goods (butter). As President Johnson stated in 1968, "That bitch of a war, killed the lady I really loved -- the Great Society." In this article, I'll stick to chocolate and toffee, a much less debatable (and tastier) subject.

The OptModel1 application uses the runOptmodel CAS action to solve the optimization problem. After the application launches and authenticates the user, it requests a budget. Based on the amount entered, it returns a purchase recommendation for chocolate and toffee. The user may also request a report based on the returned values. OptModel1 and SAS interact through REST API calls. Refer to the diagram below for the application code workflow.

Create the application

To create the application yourself, access the source code and installation instructions on the SAS GitHub page. I recommend cloning, or at least accessing, the repository, since I refer to code snippets from multiple files throughout the article.

Application Workflow

Represented below is the OptModel1 workflow. Each API call is highlighted in yellow.

OptModel1 workflow

The following sections outline each step in the workflow, with corresponding numbers from the diagram.

Launch the application

Enter the URL http://localhost:5006/optmodel in a browser to access the login screen.

OptModel1 app login page

1. Login

Enter proper credentials and click the 'Sign In' button. The OptModel1 application initiates authentication in the logon.html file with this code:

        <script>
            function logonButton() {
                let store = restaf.initStore();
                store.logon(LOGONPAYLOAD)
                    .then(msg => console.log(msg))
                    .catch(err => alert(err));
            }
        </script>

Application landing page

After successfully logging in, the application's main page appears.

Application landing page

Notice how the host and access token are part of the resulting URL. For now, this is as far as I'll go on authentication. I will cover this topic in depth in a future article.

As I stated earlier, this is the simplest of applications. I want to keep the focus on what is going on under the covers and not on a flashy application.

2a. Application initialization

Once the app confirms authentication, the application initialization steps begin. The app needs to be available to multiple users at once, so each session gets its own copy of the template Visual Analytics (VA) report. This avoids users stepping on each other's changes. This is accomplished through a series of API calls, as explained below. The code for these calls is in vaSetup.js and reportViewer.js.

2b. Copy data

The app copies data from the Public caslib to a temporary worklib – a worklib is a standard caslib like casuser. The CASL code below is submitted to the CAS server for execution. The code that makes the API call to CAS is in vaSetup.js. The relevant snippet of JavaScript code is:

  // create casl statements
    let casl = `
        /* Drop the table in memory */
        action table.dropTable/
        caslib='${appEnv.work.caslib}' name='${appEnv.work.table}' quiet=TRUE;
 
        /* Delete the table from the source */
        action table.deletesource / 
        caslib='${appEnv.work.caslib}' source='${appEnv.work.table}.sashdat' quiet=TRUE;
 
        /* Run data step to copy the template table to worklib */
        action datastep.runCode /
            code='
            data ${appEnv.work.caslib}.${appEnv.work.table}; 
            set ${appEnv.template.caslib}.${appEnv.template.table};
            run;';
 
        /* Save the new work table */
        action table.save /
            caslib  = '${appEnv.work.caslib}'
            name    = '${appEnv.work.table}'
            replace = TRUE
            table= {
                caslib = '${appEnv.work.caslib}'
                name   = '${appEnv.work.table}'
            };
 
        /* Drop the table to force report to reload the new table */
        action table.dropTable/
            caslib='${appEnv.work.caslib}' name='${appEnv.work.table}' quiet=TRUE;
 
 
    `;
 
    // run casl statements on the server via REST API
    let payload = {
        action: 'sccasl.runCasl',
        data: {code: casl}
    }
    await store.runAction(session, payload);

2c. Does report exist?

This step checks to see if the personal copy of the VA report already exists.

2d. Delete temporary report

If the personal report exists, it is deleted so that a new one can be created from the latest VA report template.

// If temporary report exists delete it - allows for potential new template report
    let reportsList = await getReport( store, reports, `${APPENV.work.report}`);
    if ( reportsList !== null ) {
        await store.apiCall(reportsList.itemsCmd(reportsList.itemsList(0), 'delete'));
      };

2e. Create new report

A new personal report is created. This new report is associated with the table that was created in step 2b.

// make the service call to create the temporary report
    let changeData = reportTransforms.links('createDataMappedReport');
    let newReport = await store.apiCall(changeData, p);

2f. Save report info

The URI of the newly created report is saved (as part of an iframe src value) so the application can display the report later.

// create src parameter for the iframe
    let options = "&appSwitcherDisabled=true&reportViewOnly=true&printEnabled=true&sharedEnabled=true&informationEnabled=true&commentEnabled=true&reportViewOnly=true";
    let href = `${appEnv.host}/SASReportViewer/?reportUri=${reportUri}${options}`;
 
    // save href in appEnv to use for displaying VA report in an iframe
    appEnv.href = href;

3. Enter budget

Enter a budget in the space provided (I use $10,000 in this example) and click the Optimize button. This action instructs the application to calculate the quantities of chocolate and toffee to purchase based on the model.

Enter budget and optimize

4. & 5. Generate and execute CASL code

The code that loads the CAS action set, runs the CAS action, and stores the results in a table is in the genCode.js file:

  /* Assumption: All necessary input tables are in memory */
	pgm = "${pgm}";
	/*Load action set and run optimization*/
	loadactionset 'optimization';
		action optimization.runOptmodel / 
		code=pgm printlevel=0; 
		run; 
 
	/* save result of optimization for VA to use */
	action table.save /
		caslib  = '${appEnv.work.caslib}'
		name    = '${appEnv.work.table}'
		replace = TRUE
		table= {
			caslib = '${appEnv.work.caslib}'
			name   = '${appEnv.work.table}'
		};
 
	/* fetch results to return for the UI to display */
	action table.fetch r=result /
		table= {caslib = '${appEnv.work.caslib}' name = '${appEnv.work.table}'};
	run;
 
	/* drop the table to force report to reload the new table */
	action table.dropTable/
		caslib='${appEnv.work.caslib}' name='${appEnv.work.table}' quiet=TRUE;

Note: The drop table step at the end of the preceding code is important to force VA to reload the data for the report.

6. Get the results - table form

The results return to the application in table form. We now know to buy 370 units of chocolate and 111 units of toffee with our $10,000 budget. Please refer to the casTableViewer for the code details of this step.

Data view in table format

6. Get the results - report form

Select the View Graph button. This action instructs OptModel1 to display the interactive report with the new data (the report we created in step 2f). Please refer to the onReport function in index.html for code details of this step.

Data view in report format

Now that we know how much chocolate and toffee to buy, we can make enough treats for all of the holiday parties just around the corner. More importantly, we have seen how to integrate SAS REST APIs into an application. This completes the series on using SAS REST APIs. The conversation is not over, however. I will continue to search out and report on other topics related to SAS, open source languages, and agile technologies. Happy Holidays!

SAS REST APIs: a sample application was published on SAS Users.

November 17, 2018
 

Disclaimer: this article does not cover or promote any political views. It’s all about data and REST APIs.

I am relieved, thankful, elated, glad, thrilled, joyful (I could go on with more synonyms from my thesaurus.com search for 'happy') that November 6, 2018 has come and gone. Election day is over. This means no more political ads on TV, and those signs lining the streets will be coming down! It is a joy to now watch commercials about things that matter: injury lawyers who are on your side, or the discovery that a copper-colored pan will cook my food better than a black one.

The data in this article pertains to advertising expenditures in the 2018 elections. This is the second of three articles in a series outlining the use of REST APIs and SAS. In the first article, Using SAS Viya REST APIs to access images from SAS Visual Analytics, I used SAS Viya REST APIs to download an image from a flight data SAS report. In this article, I use Cloud Analytics Service (CAS) REST APIs to run statistical methods on political ad spending data. The third article will bring both APIs together in an application.

The data

In the closing days of the election season, while being inundated with political advertising, I thought about how much money is spent during each cycle. The exact numbers vary depending on the source, but the range for this year's mid-term elections is between four and five billion dollars.

A little research reveals that, outside of the candidates themselves, the biggest spenders on political ads are political action committees, aka PACs. The Center for Responsive Politics compiled the data set used in this article, which derives from a larger data set released by the Federal Election Commission. The data set provides a breakdown of PAC contributions to campaign finances.

CAS REST APIs

As I explained in the previous article, SAS publishes two sets of APIs. Which APIs to use depends on the service, the data organization, or the intended use of the data. Please refer to the SAS Viya REST API article for more information on each set of APIs.

CAS REST APIs use CAS actions to perform statistical methods across a variety of SAS products. You can also use the CAS REST APIs to configure and maintain the SAS Viya environment. Here, I focus on the CAS actions. Calling CAS actions via the REST API allows users to access SAS data and procedures and integrate them into their applications.

The process

How to construct the API call

I start with the API documentation for information on how to construct and use the CAS REST APIs. The REST API can submit actions and return their results. Parameters and result data are in JSON format. To specify your parameters, encapsulate the attributes in a JSON object, then submit a POST method on the action. The URL for your action includes the UUID of your session in the format /cas/sessions/{uuid}/actions/{action}. Replace {uuid} and {action} with the appropriate values.
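
As a sketch of the general pattern (the placeholder values are mine, not literal values from the documentation), a call to any action looks like this:

$ curl -X POST "http://sasserver.demo.sas.com:8777/cas/sessions/{uuid}/actions/{action}" \
     -H 'Authorization: Bearer <access-token-goes-here>' \
     -H 'Content-Type: application/json' \
     -d '{"parameter1": "value1", "parameter2": "value2"}'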

Create a session

The first requirement is to create a session. I use the following cURL command to create the session.

curl -X POST http://sasserver.demo.sas.com:8777/cas/sessions \
    -H 'Authorization: Bearer <access-token-goes-here>'

The response is a JSON object with a session ID:

{
    "session": "16dd9ee7-3189-1e40-8ba7-934a4a257fd7"
}

I’ll use the UUID for the session to build the URLs for the remainder of the REST calls.
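
If you prefer not to copy and paste the UUID, you can capture it directly into an environment variable. This is just a convenience sketch and assumes the jq utility is installed on the client machine:

$ export CAS_SESSION=$(curl -s -X POST http://sasserver.demo.sas.com:8777/cas/sessions \
     -H 'Authorization: Bearer <access-token-goes-here>' | jq -r '.session')
$ echo $CAS_SESSION
16dd9ee7-3189-1e40-8ba7-934a4a257fd7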

Build the CAS REST API call body

Now we know the general structure of the CAS REST API call. We can browse the CAS actions by name to determine how to build the body text.

Using the simple.summary action definition, I build a JSON body that reads the PAC spending data from a CASTable, creates a new table grouped by political view, and calculates total spending. The resulting code is below:

{
	"table":{"caslib":"CASUSER(sasdemo)","name":"politicalspending2018","groupBy":{"name":"view"}},
	"casout":{"caslib":"CASUSER(sasdemo)","name":"spendingbyaffiliation","promote":true},
	"inputs":"total",
	"subset":["SUM","N"]
}

Each line of the JSON body above contributes to running the CAS action:

  1. Defines the table to use and how to group the data
  2. Directs the output of the API call into a new CASTable
  3. Specifies the column to summarize
  4. Lists the statistical method(s) to include in the result table; in this case, I want to sum the Total column and count the number of PACs in each group

Send the CAS REST API

Next, I send the request with the cURL call below. Notice the session ID obtained earlier is now part of the URL, and the JSON parameters shown above are passed in the request body (saved here to a file I'll call summary.json):

curl -X POST http://sasserver.demo.sas.com:8777/cas/sessions/16dd9ee7-3189-1e40-8ba7-934a4a257fd7/actions/simple.summary \
  -H 'Authorization: Bearer <access-token-goes-here>' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d @summary.json

The REST call creates a new CASTable, SPENDINGBYAFFILIATION. Refer to the screen shot below.

New table

SAS CASTable created by the simple.summary action

I also have the option of returning the data to create the SPENDINGBYAFFILIATION table in JSON format. To accomplish this, remove the casout{} line from the preceding call. Below is a snippet of the JSON response.

JSON response

JSON response to the simple.summary REST call

Once parsed, the JSON response is ready for use by a web application, software program, or script.
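
If you are working at the command line, a quick way to inspect that response is to pipe the cURL call through jq (assuming jq is installed); once you know which keys hold the values you need, you can drill into them with a jq filter:

$ curl -s -X POST http://sasserver.demo.sas.com:8777/cas/sessions/16dd9ee7-3189-1e40-8ba7-934a4a257fd7/actions/simple.summary \
     -H 'Authorization: Bearer <access-token-goes-here>' \
     -H 'Content-Type: application/json' \
     -d @summary.json | jq .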

Moving on

The Thanksgiving Day holiday is fast approaching here in the United States. I plan to eat a lot of turkey and sweet potato pie, welcome the out-of-town family, and watch football. It will be refreshing to not hear the back-and-forth banter and bickering between candidates during commercial breaks. Oh, but wait, Thanksgiving is the start of the holiday season. This means one thing: promotions on Black Friday deals for items I may not need will start airing and last through year's end. I guess if it is not one thing filling the advertising air waves, it is another. I'll just keep the remote handy and hope I can find another ball game on.

What’s next?

I understand and appreciate political candidates' need to communicate their stance on issues and promote their agendas. This takes money. I don't see the spending trend changing direction in the coming years. I can only hope the funds will be used to promote candidates' qualifications, beliefs, and ideas, rather than to bash or belittle their opponents.

My next article will demonstrate how to use both the SAS Viya and the CAS REST APIs under the umbrella of one web application. And I promise, no politics.

Using SAS Cloud Analytics Service REST APIs to run CAS Actions was published on SAS Users.