
October 31, 2018

This article is the first in a series of three posts to address REST APIs and their use in, and with, SAS. Today, I'll present a basic example using SAS Viya REST APIs to download an image from a report in SAS Visual Analytics.

The second article will show an example of the Cloud Analytics Services (CAS) REST APIs. My third planned article will show a simple application that accesses SAS Viya using both sets of REST APIs.

The inspiration for this example: a visualization of air traffic data

I ran across a great post from Mike Drutar: How to create animated line charts that "grow" in SAS Visual Analytics. I followed the steps in Mike's example, which creates a visualization of airline traffic. The result was an animated line chart. For this article, I removed the animation, as it will serve me better in my use case.

SAS Viya APIs and CAS APIs: Two entry points into SAS Viya

The first thing I'd like to cover is why SAS Viya offers two sets of REST APIs. Let's consider who is using the APIs and what they are trying to accomplish. SAS Viya APIs target enterprise application developers (who may or may not be versed in analytics) who intend to build on the work of model builders and data scientists. These developers want to deliver apps based on SAS Viya technology -- for example, to call an analytical model to score data. The CAS REST API, on the other hand, is used by data scientists and programmers (who are decidedly adept at analytics) and administrators who need to interact with CAS directly and are knowledgeable about CAS actions. CAS actions are the building blocks of analytical work in SAS Viya.

How to get started with SAS Viya REST APIs

The best place to start working with SAS Viya REST APIs is on the SAS Developer's web site. There, you will find links to the API documentation.

The REST APIs are written to make it easy to integrate the capabilities of SAS Viya into your applications and scripts. The APIs are based on URLs and use HTTP authentication and HTTP verbs. The API documentation page is split into multiple categories. The following list outlines the breakdown:

  • Visualization: Provides access to reports and report images
  • Compute: Acts on SAS compute and analytic servers, including Cloud Analytic Services (CAS)
  • Text Analytics: Provides analysis and categorization of text documents
  • Data Management: Enables data manipulation and data quality operations on data sources
  • Decision Management: Provides access to machine scoring and business rules
  • Core Services: Provides operations for shared resources such as files and folders

The REST API documentation page is divided into multiple sections.

SAS Viya REST API doc

  1. The categories are listed in the upper-left side.
  2. Once you select a category, related services and functions are listed in the lower-left pane.
  3. The service appears in the center pane with a description, parameters, responses, and error codes.
  4. The right pane displays how to form a sample request, any optional or required body text, and sample response code.

The REST API call process

The example outlined in this article shows how to access a report image from SAS Visual Analytics. To try this out yourself, you will need: a SAS Viya environment (with SAS Visual Analytics configured), an access token, and a REST client. The REST client can be cURL (command line), Postman (a popular REST API environment), Atom with the rest-client plugin -- or any other REST-capable tool or scripting language of your choice. Even if you do not have access to an environment right now, read on! As a SAS developer, you're going to want to be aware of these capabilities.
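Before calling any endpoint, you need an OAuth access token from the SAS Logon service. Here is a hedged sketch of one way to get one (the client ID and secret are placeholders for a client you have registered in your own environment; your site's authentication flow may differ):

curl -X POST http://sasserver.demo.sas.com/SASLogon/oauth/token \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -u 'my_client_id:my_client_secret' \
  -d 'grant_type=password&username=sasdemo&password=<password-goes-here>'

The access_token field in the JSON response supplies the bearer token used in the Authorization headers below.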

Get a list of reports from SAS Visual Analytics

Run the following curl command to get a list of reports on the SAS server:

curl -X GET http://sasserver.demo.sas.com/reports/reports \
  -H 'Authorization: Bearer <access-token-goes-here>' \
  -H 'Accept: application/vnd.sas.table.column+json'

Alternatively, use Postman to enter the command and parameters:

GET Report List API call from Postman

From the JSON response, find the report object and grab the id of the desired report:

GET Report List Response

Create a job

The next step is to create an asynchronous job to generate the SVG image from the report. I use the following HTTP POST request against the /reportImages/jobs endpoint:

curl -X POST http://sasserver.demo.sas.com/reportImages/jobs \
  -H 'Authorization: Bearer <access-token-goes-here>' \
  -H 'Accept: application/vnd.sas.report.images.job+json' \
  -H 'Content-Type: application/vnd.sas.report.images.job.request+json'

I use the following sample body text:

{
  "reportUri" : "/reports/reports/b555ea27-f204-4d67-9c74-885311220d45",
  "layoutType" : "entireSection",
  "selectionType" : "report",
  "size" : "400x300",
  "version" : 1
}

Here is the sample response:

POST Job Creation Response

The job creation kicks off an asynchronous action. The response indicates whether the job is completed at response time, or whether it's still pending. As you can see from the above response, our job is still in a 'running' state. The next step is to poll the server for job completion.

Poll for job completion

Using the 'id' value from the job creation POST, the command to poll is:

curl -X GET http://sasserver.demo.sas.com/reportImages/jobs/f7a12533-ac40-4acd-acda-e0c902c6c2c1 \
  -H 'Authorization: Bearer <access-token-goes-here>' \
  -H 'Accept: application/vnd.sas.report.images.job+json'

And the response:

GET Poll Job Creation Response

Once the job comes back with a 'completed' state, the response will contain the information we need to fetch the report image.

Get the image

I am now ready to get the image. Using the image file name (href field) from the response above, I run the following command:

curl -X GET http://sasserver.demo.sas.com/reportImages/images/K1870020424B498241567.svg \
  -H 'Authorization: Bearer <access-token-goes-here>' \
  -H 'Accept: image/svg+xml'

Postman automatically interprets the response as an image. If you use the curl command, you'll need to redirect the output to a file.

SAS Visual Analytics Graph for Air Traffic

What's Next?

SAS Visual Analytics is usually considered an interactive, point-and-click application. With these REST APIs we can automate parts of SAS Visual Analytics from a web application, a service, or a script. This opens tremendous opportunities for us to extend SAS Visual Analytics report content outside the bounds of the SAS Visual Analytics app.

I'll cover more in my next articles. In the meantime, check out the Visualization APIs documentation to see what's possible. Have questions? Post in the comments and I'll try to address them in future posts.

Using SAS Viya REST APIs to access images from SAS Visual Analytics was published on SAS Users.

October 31, 2018

An important step of every analytics project is exploring and preprocessing the data.  This transforms the raw data to make it useful and of high quality.  It might be necessary, for example, to reduce the size of the data or to eliminate some columns. All these actions accelerate the analytical project that comes right after.  But equally important is how you "productionize" your data science project.  In other words, how you deploy your model so that business processes can make use of it.

SAS Viya can help with that.  Several SAS Viya applications have been engineered to directly add models to a model repository including SAS® Visual Data Mining and Machine Learning, SAS® Visual Text Analytics, and SAS® Studio. While the recent post on publishing and running models in Hadoop on SAS Viya outlined how to build models, this post will focus on the process to deploy your models with SAS Model Manager to Hadoop.

SAS Visual Data Mining and Machine Learning on SAS Viya contains a pipeline interface to assist data scientists in finding the most accurate model.  In that pipeline interface, you can do several tasks such as import score code, score your data, download score API code or download Base SAS scoring code.  Or you may decide, once you have a version ready, to store the model outside the development environment by registering your analytical model in a model repository.

Registered models will show up in SAS Model Manager and are copied to the model repository.   That repository provides long-term storage and includes version control.  It's a powerful tool for managing and governing your analytical models.  A registered version of your model will never get lost, even if it's deleted from your development environment.   SAS models are not the only kind of models that SAS Model Manager can handle:  Python, R, and MATLAB models can also be imported.

SAS Model Manager can read, write, and manage the model repository and provide actions for model editing, comparing, testing, publishing, validating, monitoring, lineage, and history of the models.  It also allows you to easily demonstrate your compliance with regulations and policies. You can organize models into different projects.   Within a project it's feasible to test, deploy and monitor the performance of the registered models.

Deploying your models

Deploying, a key step for any data scientist and model manager, can assist in bringing the models into production processes. Kick off deployment by publishing your models.  SAS Model Manager can publish models to systems being used for batch processing or publish to applications where real-time execution of the models is required.   Let's have a look at how to publish an analytical model to a Hadoop cluster and run it there.  In doing so, you can score the data where it resides and avoid any data movement.

  1. Create the Hadoop public destination.

The easiest way to do this is via the Visual Interface.  Go to SAS Environment Manager and click on the Publish destinations icon:

Click on the new destination icon:

Important:

October 29, 2018

CASL is a language specification that can be used by the SAS client to interact with and provide easy access to Cloud Analytic Services (CAS).  CASL is a statement-based scripting language with many uses and strengths including:

  • Specifying CAS actions to submit requests to the CAS server to perform work and return results.
  • Evaluating and manipulating the results returned by an action.
  • Creating user-defined actions and functions and creating the arguments to an action.
  • Developing analytic pipelines.

CASL uses PROC CAS, which enables you to program and execute CAS actions from the SAS client and use the results to prepare the parameters for a subsequent action.  A single PROC CAS statement can contain several CASL programs.  With the CAS procedure you can run any CAS action supported by the server, load new action sets into the server, use multiple sessions to perform asynchronous execution, and operate on parameters and results as variables using the function expression parser.
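Here is a minimal sketch of a CASL program run through PROC CAS (the caslib is illustrative; it assumes at least one table is loaded in the Casuser caslib):

proc cas;
   /* run an action and capture its result in the CASL variable r */
   table.tableInfo result=r / caslib="casuser";
   /* the result is a dictionary; print the table it contains */
   print r.TableInfo;
quit;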

CASL, and the CAS actions, provide the most control, flexibility and options when interacting with CAS.  One can use DATA Step, CAS-enabled PROCS and CASL for optimal flexibility and control.  CASL works well with traditional SAS interfaces and the Base SAS language.

Each CAS action belongs to an action set.  Each action set is further categorized by product (e.g., VA, VS, VDMML).  In addition to the many CAS actions supplied by SAS, as of SAS® Viya™ 3.4, you can create your own actions using CASL.  Developing and utilizing your own CAS actions allows you to further customize your code and increase your ability to work with CAS in a manner that best suits you and your organization.

About user-defined action sets

A user-defined action set is a CASL program that is stored on the CAS server for processing.  Because the action set is stored on the CAS server, the CASL statements can be written once and executed by many users, which reduces the need to exchange files containing common code.  Note that you cannot add, remove, or modify a single user-defined action: you must redefine the entire action set.

Before creating any user-defined actions, test your routines and functions first to ensure they execute successfully in CAS when submitted from the programming client.  To create user-defined actions, use the defineActionSet action in the builtins action set and add your code.  You also need to modify your code to use CASL functions such as SEND_RESPONSE, so the resulting objects on the server are returned to the client.

Developing new actions by combining SAS-provided CAS actions

One method for creating user-defined CAS actions is to combine one or more SAS-provided CAS actions into a user-defined CAS action.  This allows you to execute just one PROC CAS statement that calls all the user-defined CAS actions, which is beneficial if you repeatedly run many of the same actions against a CAS table.  An example of this is shown below. If you would like a copy of the actual code, feel free to leave a reply below.

In this example, four user-defined CAS actions named listTableInfo, simplefreq, detailfreq, and corr have been created by using the corresponding SAS-provided CAS actions tableInfo, freq, freqTab, and correlation.  These four actions return information about a CAS table, simple frequency information, detailed frequency and tabulate information, and Pearson correlation coefficients respectively.  These four actions are now part of the newly created user-defined action set myActionSet.  When this code is executed, the log will display a note that the new action set has been added.
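The original post shows the code as an image. As a hedged sketch of the pattern, shortened to two of the four actions (the parameter names and inner action calls are my own illustration, not the author's exact code):

proc cas;
   builtins.defineActionSet /
      name="myActionSet"
      actions={
         {name="listTableInfo"
          desc="Show information about a CAS table"
          parms={{name="tbl" type="string" required=TRUE}}
          definition="table.tableInfo result=r / name=tbl;
                      send_response(r);"},
         {name="simplefreq"
          desc="Simple frequency for one column"
          parms={{name="tbl" type="string" required=TRUE}
                 {name="col" type="string" required=TRUE}}
          definition="simple.freq result=r / table=tbl inputs={col};
                      send_response(r);"}
      };
quit;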

Once the new action set and actions have been created, you can call all four or any combination of them via a PROC CAS statement.  Specify the user-defined action set, user-defined action(s), and parameters for each.
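For example, a call of the two sketched actions above could look like this:

proc cas;
   myActionSet.listTableInfo / tbl="CLASS";
   myActionSet.simplefreq / tbl="CLASS" col="sex";
quit;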

Developing new actions by writing your own code

Another way to create user-defined CAS actions is to apply user-defined code, functions, and statements instead of SAS-provided CAS actions.

In this example, two user-defined CAS actions have been created, bdayPct and sos.  These actions belong to the new user-defined action set myFunctionSet.
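The original code is again shown as an image. Purely as an illustration of the pattern, here is a hypothetical sos action that returns a sum of squares (my own sketch, not the author's exact logic; bdayPct is omitted):

proc cas;
   builtins.defineActionSet /
      name="myFunctionSet"
      actions={
         {name="sos"
          desc="Sum of squares of a list of numbers"
          parms={{name="vals" required=TRUE}}
          definition="total = 0;
                      do v over vals;
                         total = total + v*v;
                      end;
                      send_response({sos=total});"}
      };
quit;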

To call one or both actions, specify the user-defined action set, user-defined action(s), and parameters for each.
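Calling the sketched action follows the same pattern as before:

proc cas;
   myFunctionSet.sos / vals={1, 2, 3, 4};
quit;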

The results for each action are shown in the log.

Save and load custom actions across CAS sessions

User-defined action sets only exist in the current CAS session.  If the current CAS session is terminated, the program to create the user-defined action set must be executed again unless an in-memory table is created from the action set and the in-memory table is subsequently persisted to a SASHDAT file.  Note: SASHDAT files can only be saved to path-based caslibs such as Path, DNFS, HDFS, etc.  To create an in-memory table and persist it to a SASHDAT file, use the actionSetToTable and save CAS actions.
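A hedged sketch of the save step (table, file and caslib names are illustrative, and Casuser stands in for a path-based caslib):

proc cas;
   /* materialize the action set as an in-memory table */
   builtins.actionSetToTable / actionSet="myFunctionSet"
      casOut={name="myFunctionSet", caslib="casuser"};
   /* persist the table to a SASHDAT file */
   table.save / table={name="myFunctionSet", caslib="casuser"}
      name="myFunctionSet.sashdat" caslib="casuser";
quit;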

To use the user-defined action set, it needs to be restored from the saved SASHDAT file.  This is done with the actionSetFromTable action.
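Continuing the sketch above, a later session could restore and use the action set like this:

proc cas;
   /* load the SASHDAT file back into memory */
   table.loadTable / path="myFunctionSet.sashdat" caslib="casuser"
      casOut={name="myFunctionSet", caslib="casuser"};
   /* rebuild the action set from the table */
   builtins.actionSetFromTable / table={name="myFunctionSet", caslib="casuser"}
      name="myFunctionSet";
quit;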

More about CASL programming and CAS actions

Check out these resources for further information on programming in the CASL language and running actions with CASL.

How to use CASL to develop and work with user-defined CAS actions was published on SAS Users.

October 22, 2018

This blog post was also written by Reece Clifford.

Who’s responsible for x, y, z sales territory? What’s the largest number of people they engaged with in a month? What type of location leads to the best response from the meeting?

To get the complete answer to these sales team-related questions, you need to trust your data. You need to be able to cut and slice high-quality data to prepare for analytics to drive innovation in your company. With SAS Data Preparation alongside SAS Decision Manager, you can do all this. Its many features allow you to perform out-of-the-box column and row transformations to increase your data quality and build the foundations for data-driven innovation.

This blog will discuss how you can leverage SAS Decision Manager to enrich data when preparing it through SAS Data Preparation.

The use case

As posed above, we want to create a SAS Data Preparation plan to map a sales person to a postcode area. We use a SAS Decision Manager rule to find the sales person for a postcode area and map the person to the address. To trigger the rule, we are going to call it from SAS Data Preparation.

In SAS Decision Manager we import a csv file to create a Lookup Table mapping a sales person to a postcode area. Lookup Tables are tables of key-value pairs and provide the ability to access reference data from business rules.

Next, we create a rule to map a postcode and sales person. A rule specifies conditions to be evaluated and actions to be taken if those conditions are satisfied. Rules are grouped together into rule sets. Most rules correspond to the form:

if condition_expressions then action_expressions

For our rule, we are going to have an incoming postcode plus a record ID. The postcode is assumed to be a UK postcode. We extract the first two characters of the postcode and look up the sales person for that area in the Lookup Table we have just imported.

The rule outputs the sales person (representative) and the record ID. When we have tested and published the rule, it's ready to be used in a SAS Data Preparation Plan.
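Rules are authored in the SAS Decision Manager interface rather than in code, but conceptually the rule behaves like this DATA step sketch (the work.addresses input and the sales mapping are hypothetical stand-ins for the real tables):

/* stand-in for the Lookup Table created from the csv file */
data work.sales_lookup;
   length area $2 representative $40;
   input area $ representative $;
   datalines;
NE Smith
LS Jones
;
run;

/* map each record's postcode area to a sales person */
data work.mapped;
   if _n_ = 1 then do;
      declare hash h(dataset:"work.sales_lookup");
      h.defineKey("area");
      h.defineData("representative");
      h.defineDone();
   end;
   set work.addresses;                      /* hypothetical: ID, Postcode */
   length area $2 representative $40;
   area = upcase(substr(postcode, 1, 2));   /* first two postcode chars   */
   if h.find() ne 0 then call missing(representative);
   keep id representative;
run;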

In SAS Data Preparation, we load a table with address data that we want to enrich by the appropriate sales person.

  1. We need to make sure the table column names and rule input parameter names match. Therefore, we rename the field ADDRESS_ID to ID, as ID is the rule input name. The second rule input parameter is Postcode, which is the same as in the table, so no action is needed.

  2. We can then call the previously-created SAS Decision Manager rule to map a sales person to an area. This is done by adding some CASL code to the Code node in the SAS Data Preparation plan, shown below with a brief explanation of the parameters. As the rule has two output parameters, we receive only two columns when executing the code step.

CASL Code

loadactionset "ds2";
action runModel submit /
	modelTable={name="MONITORRULES", caslib="DCMRULES"}     /* table the rule set was published to */
	modelName="Mon_Person"                                  /* decision flow to execute            */
	table={name=_dp_inputTable, caslib=_dp_inputCaslib}     /* input table passed in by the plan   */
	casout={name=_dp_outputTable, caslib=_dp_outputCaslib}; /* output table returned to the plan   */

Parameter settings for the CASL call:

  • modelTable name: Name of the table where the rule set was published.
  • modelTable caslib: Name of the caslib where the rule set was published.
  • modelName: Name of the decision flow to execute.
  • table name: Table name of the decision flow input data (set to _dp_inputTable).
  • table caslib: Caslib name of the decision flow input data (set to _dp_inputCaslib).
  • casout name: Table name of the decision flow output data (set to _dp_outputTable).
  • casout caslib: Caslib name of the decision flow output data (set to _dp_outputCaslib).

Decision Manager Publishing Dialogue

  3. We then bring back the columns from the input table. We do this by joining the output in the SAS Data Preparation plan to the original table (again) on the rule output field ID and the table's field ADDRESS_ID.

Conclusion

We have answered our initial question of which sales person is mapped to which region by enriching our data in a user-friendly, efficient process in SAS Data Preparation. We can now begin to gain further insight from our data to answer more of the questions posed at the beginning of the blog and help drive innovation. This can be done with additional insight from SAS Decision Manager or functions in SAS Data Preparation in the current plan, or by using the output table in another plan. Ultimately, this will facilitate data-driven innovation via reporting or advanced analytics in your organisation.

Using SAS Decision Manager to enrich the data prep process was published on SAS Users.

October 5, 2018

In my earlier blog, I described how to create maps in SAS Visual Analytics 8.2 if you have an ESRI shapefile with  granular geographies, such as counties, that you wish to combine into regions. Since posting this blog in January 2018, I received a lot of questions from users on a range of mapping topics, so I thought a more general post on using – and troubleshooting - custom polygons in SAS Visual Analytics on Viya was in order. Since version 8.3 is now generally available, this post is tailored to the 8.3 version of SAS Visual Analytics, but the custom polygon functionality hasn’t really changed between the 8.2 and 8.3 releases.

What are custom polygons?

Custom polygons are geographic boundaries that enable you to visualize data as shaded areas on the map; maps shaded this way are often referred to as choropleth maps. For example, say you work for a non-profit organization which is trying to decide where to put a new senior center. So you create a map that shows the population of people over 65 years of age by US census tract. The darker polygons suggest a larger number of seniors, and thus a potentially better location to build a senior center:

SAS Visual Analytics 8.3 includes a few predefined polygonal shapes, including countries and states/provinces. But if you need something more granular, you can upload your own polygonal shapes.

How do I create my own polygonal shapes?

To create a polygonal map, you need two components:

  1. A dataset with a measure variable and a region ID variable. For example, you may have population as a measure, and census tract ID as a region ID. A simple frequency can be used as a measure, too.
  2. A “polygon provider” dataset, which contains the same region ID as above, plus geographic coordinates of each vertex in each polygon, a segment ID and a sequence number.

So where do I get this mysterious polygon provider? Typically, you will need to search for a shapefile that contains the polygons you need, and do a little bit of data preparation. Shapefile is a geographic data format supported by ESRI. When you download a shapefile and look at it on the file system, you will see that it contains several files. For example, my 2010 Census Tract shapefile includes all these components:

Sometimes you may see other components present as well. Make sure to keep all components together.

To prepare this data for SAS Visual Analytics, you have two options.

Preparing shapefile for SAS Visual Analytics: The long way

One method to prepare the polygon provider is to run PROC MAPIMPORT to convert the shapefile into a SAS dataset, add a sequence ID field and then load into the Cloud Analytic Services (CAS) server in SAS Viya. The sequence ID is mandatory, as it helps SAS Visual Analytics to draw the lines connecting vertices in the correct order.

A colleague recently reached out for help with a map of Census block groups for Chatham County in North Carolina. Let’s look at his example:

The shapefile was downloaded from here. We then ran the following code on my desktop:

libname geo 'C:\...\Data';
 
proc mapimport datafile="C:\...\Data\Chatham_County__2010_Census_Block_Groups.shp"
out=work.chatham_cbg;
run;
 
data geo.chatham_cbg;
set  chatham_cbg;
seqno=_n_;
run;

We then manually loaded the geo.chatham_cbg dataset in CAS using self-service import in SAS Visual Analytics. If you are not sure how to upload a dataset to CAS, please check the documentation.

Preparing shapefile for SAS Visual Analytics: The short way

A quicker option is the %shpimprt macro. The macro will automatically run PROC MAPIMPORT, create a sequence ID variable and load the table into CAS. Here's an example:

%shpimprt(shapefilepath=/path/Chatham_County__2010_Census_Block_Groups.shp, id=GEOID, outtable=Chatham_CBG, cashost=my_viya_host.com,   casport=5570, caslib='Public');

For this macro to work, the shapefile must be copied to a location that your SAS Viya server can access, and the code needs to be executed in an environment that has SAS Viya installed. So, it wouldn’t work if I tried to run it on my desktop, which only has SAS 9.4 installed. But it works beautifully if I run it in SAS Studio on my SAS Viya machine.

Configuring the polygon provider

The next step is to configure the polygon provider inside your report. I provided a detailed description of this in my earlier blog, so here I’ll just summarize the steps:

  • Add your data to the SAS Visual Analytics report, locate the region ID variable, right-click and select New Geography
  • Give it a name and select Custom Polygonal Shapes as geography type
  • Click on the Custom Polygon Provider box and select Define New Polygon Provider
  • Configure your polygon provider by selecting the library, table and ID column. The values in your ID column must match the values of the region ID variable in the dataset you are visualizing. The ID column, however, does not need to have the same name as in the visualization dataset.
  • If necessary, configure advanced options of the polygon provider (more on that in the troubleshooting section of this blog).

If all goes well, you should see a preview of your polygons and a percentage of regions mapped. Click OK to save your geographic item, and feel free to use it in the Geo Map object.

I followed your instructions, but the map is not working. What am I missing?

I observed a few common troubleshooting issues with custom maps, and all are fairly easy to fix. The list below summarizes the symptoms and solutions.

Symptom: In the Geographic Item preview, 0% of the regions are mapped.
Solution: Check that the values in the region ID variable match between the main dataset and the polygon provider dataset.

Symptom: I successfully created the map, but the colors of the polygons all look the same. I know I have a range of values, but the map doesn’t convey the differences.
Solution: In your main dataset, you probably have missing region ID values or region IDs that don’t exist in the polygon provider dataset. Add a filter to your Geo Map object to exclude region IDs that can’t be mapped.

Symptom: Only a subset of regions is rendered.
Solution: You may have too many points (vertices) in your polygon provider dataset. SAS Visual Analytics can render up to 250,000 points. If you have a large number of polygons represented in a detailed way, you can easily exceed this limit. You have two options, which you can mix and match: (1) filter the map to show fewer polygons, or (2) reduce the level of detail in the polygon provider dataset using PROC GREDUCE. See example here. Also, if you imported data using the %shpimprt macro, it has an option to reduce the dataset.

Symptom: In the Geographic Item preview, the note shows that 100% of the regions are mapped, but the regions don’t render, or the regions are rendered in the wrong location (e.g., in the middle of the ocean) and/or at an incorrect scale.
Solution: This is probably the trickiest issue, and the most likely culprit is an incorrectly specified coordinate space code (EPSG code). The EPSG code corresponds to the type of projection applied to the latitude and longitude in the polygon provider dataset (and the originating shapefile). Projection is a method of displaying points from a sphere (the Earth) on a two-dimensional plane (flat surface). See this tutorial if you want to know more about projections.

There are several projection types and numerous flavors of each type. The default EPSG code used in SAS Visual Analytics is EPSG:4326, which corresponds to the unprojected coordinate system. If you open the advanced properties of your polygon provider, you can see the current EPSG code.

Finding the correct EPSG code can be tricky, as not all shapefiles have consistent and reliable metadata built in. Here are a couple of things you can try:

(1) Open your shapefile as a layer in a mapping application such as ArcMap (licensed by ESRI) or QGIS (open source) and view the properties of the layer. In many cases the EPSG code will appear in the properties.

(2) Go to the location of your shapefile and open the .prj file in Notepad. It will show the projection information for your shapefile, although it may look a bit cryptic. Take note of the unit of measure (e.g., feet), datum (e.g., NAD 83) and projection type (e.g., Lambert Conformal Conic). Then, go to https://epsg.io/ and search for your geography. Going back to the example for Chatham county, I searched for North Carolina. If more than one code is listed, select a few codes that seem to match your .prj information the best, then go back to SAS Visual Analytics and change the polygon provider Coordinate Space property. You may have to try a few codes before you find the one that works best.

Symptom: I ruled out a projection issue, and the note in the Geographic Item preview shows that 100% of the regions are mapped, but the regions still don’t render.
Solution: Take a look at your polygon provider preparation code and double-check that the order of observations didn’t accidentally get changed. The order of records may change, for example, if you use a PROC SQL join when you prepare the dataset. If you accidentally changed the order of the records prior to assigning the sequence ID, it can result in an illogical order of points which SAS Visual Analytics will have trouble rendering. Remember, the sequence ID is needed so that SAS Visual Analytics can render the outlines of each polygon correctly.

You can validate the order of records by mapping the polygon provider using PROC GMAP, for example:

proc gmap map=geo.chatham_cbg data=geo.chatham_cbg;
   id geoid;
   choro geoid / nolegend levels=1;
run;

For example, in image #1 below, the records are ordered correctly. In image #2, the order of records is clearly wrong, hence the lines going crisscross.

As you can see, custom regional maps in SAS Visual Analytics 8.3 are pretty straightforward to implement. The few "gotchas" I described will help you troubleshoot some of the common issues you may encounter.

P.S. I would like to thank Falko Schulz for his help in reviewing this blog.

Troubleshooting custom polygon maps in SAS Visual Analytics 8.3 was published on SAS Users.

September 13, 2018

With the release of SAS Viya 3.4, you can easily build large-scale machine learning models and seamlessly publish and run them in Hadoop, or in other external databases such as Teradata, without the data ever leaving the Hadoop environment. In this process, SAS Viya:

1) Converts the model into MapReduce Code.

2) Executes the MapReduce code.

3) Returns a new, scored dataset in Hadoop.

SAS Viya is a new, distributed in-memory product that allows users to easily build predictive models at scale. Using the SAS Model Studio interface, I can build complex models without the need to write large amounts of underlying code.

For this blog post, I'll go through the steps to build my model using a telecommunications dataset to predict customer churn. Under the “Data” tab, I can see all of my variables, assign the proper roles, and view the dataset.

With the data prepared, I build a pipeline to perform data preprocessing steps such as imputation and binning and build several predictive models, including Regression, Neural Networks and Gradient Boosting. Pipelines are powerful because they automate the heavy lifting of the model building process, allowing you to solve problems faster. In addition, pipelines are re-usable across different users and datasets, allowing the adoption of best practices across an organization.

After building the models, I combine the models into one ensemble model with ease, and compare their performance on the validation sample. I determine that the gradient boosting model is the most accurate based on the misclassification rate. You can pick from a large number of accuracy criteria, including KS Statistic, AUC, MCR or F1.

After having identified the best model, I  publish the model to Hadoop. This allows me to perform future scoring at the data source, meaning data does not have to leave Hadoop. I could have configured the system to publish the model directly from SAS Model Studio; however, I publish and score the model via SAS code for maximum flexibility. With SAS Studio, I can easily control, and change, where I write my resulting models and datasets in Hadoop.

In the “Compare Models” tab, I then download the score code, which provides me with the following:

  1. A .sas file containing DS2 code that performs all the data preprocessing steps, such as binning and imputation, in the pipeline above. I load this .sas file into a location that can be viewed in SAS Studio.
  2. A .sashdat file in the “Models” caslib that is a binary representation of our model, called an ASTORE, used to score our model in Hadoop.

Opening up the dm_epscore.sas file in SAS Studio, the comments at the top tell me the ASTORE file needed to publish the model.

This scoring file allows the data preparation within the pipeline above to be published to Hadoop as well. In this case, the file is binning the variables before building the Gradient Boosting.

The scoring file then invokes the ASTORE file needed to score the model in Hadoop.

Now, I switch to SAS Studio to publish and score my model in Hadoop.  The full code can be found here.

Below is the syntax to publish the model.  I'm sure to set the classpath variable to the appropriate jar and config files for my Hadoop cluster. Note that you will need permission to read and write from the modeldir directory. Publishing the model converts the .sas scoring code and the ASTORE file into MapReduce code for execution in the cluster.
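The original post shows the publishing code as screenshots. Purely as a rough, hypothetical sketch (I'm assuming the SAS Scoring Accelerator's PROC SCOREACCEL here; statement and parameter names vary by release and target, so treat this as a shape, not a recipe, and consult the SAS in-database documentation):

cas mysess;

proc scoreaccel sessref=mysess;
   publishmodel
      target=hadoop
      modelname="telco_churn"
      modeldir="/user/ankram"                /* HDFS directory, per the post  */
      programfile="/path/dm_epscore.sas"     /* DS2 preprocessing and scoring */
      storetables="models.gradboost_astore"  /* hypothetical ASTORE table     */
      classpath="&classpath"                 /* Hadoop jars and config files  */
      ;
quit;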

This publishing code will create a directory called “telco_churn” in my home directory in HDFS, /user/ankram. In a SAS Viya environment co-located with Hadoop, the “CASUSERHDFS” Caslib is by default pointed to this location, allowing me to ensure the “telco_churn” file was successfully published.

The next step is to score the model in Hadoop. The code below scores the “looking_glass_v4” table in Hive and creates a new table called “looking_glass_v4_scored”, without the data ever leaving Hadoop.
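Again as a hedged sketch with the same caveats as the publishing step (the table and schema names follow the post; the parameter names are assumptions):

proc scoreaccel sessref=mysess;
   runmodel
      target=hadoop
      modelname="telco_churn"
      modeldir="/user/ankram"
      schema="default"                      /* Hive schema         */
      intable="looking_glass_v4"            /* input Hive table    */
      outtable="looking_glass_v4_scored"    /* scored output table */
      ;
quit;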

If everything is configured properly, the log should show that the SAS Embedded Process executed correctly.

Using a previously setup Caslib called “Hivelib” that points to the default schema in the Hive Server, I can now load the “Looking_Glass_v4_scored” dataset into CAS to view the table.

Using the Table Viewer, I can then see the predicted probabilities of churn for each individual.

To conclude, many organizations have very large datasets, oftentimes terabytes or larger, and find that minimizing data movement is critical to successfully putting models into production. The in-database technologies for Hadoop on SAS Viya allow you and your fellow data scientists to easily prepare data and score large-scale models entirely in Hadoop, with the data never leaving the environment. You can now focus on solving more problems and are no longer at the mercy of large datasets and network latency.

Publishing and running models to Hadoop in SAS Viya was published on SAS Users.

September 5, 2018

Typically, when filters are applied in SAS Visual Analytics, they affect all the records and aggregations in linked objects. For example, in the typical sales report below, when filters are applied, they change all the measures of the linked objects.

With this kind of filtering, it becomes difficult to calculate measures which require a different level of aggregation. In the above image, the expectation is that ‘Total Customers’ should not change irrespective of the ‘Region’, ‘State’, ‘Category’ and ‘Subcategory’ control selections. ‘Total Customers (Geo)’ should change only based on the ‘Region’ and ‘State’ control selections. ‘Total Customers (Geo and Prod)’ should change based on all the controls mentioned above. In the above example, only the ‘Total Customers (Geo and Prod)’ calculation is correct.

We will learn to create measures with different levels of aggregation by using ‘Customer Penetration’ measure as an example.

          Customer Penetration = Distinct customers at selected geography and product level / Distinct customers at selected geography level

Selective filtering may be used for creating similar reports like: Dealer Participation, Sales Contribution, etc. The below section exemplifies the creation of a customer penetration report with selective filtering.

Customer penetration using SAS Visual Analytics 8.2 (selective filtering)

Customer penetration is used to analyze whether marketing and sales strategies are working or not. Managers often uses customer penetration or dealer participation measures along with other measures to measure the popularity of a product, category or brand.

This report requirement is such that the numerator in the ‘Customer Penetration’ formula should be filtered based on the region and state list control selections, while the denominator should be filtered based on the region, state, category and subcategory list control selections. This is not the same as filtering the whole table through common list controls. In general, if you link a table with any control, all the measures in that table will be filtered per the selected value(s) in the controls. However, our requirement is not like that. Instead of linking controls and tables, we will use control parameters to achieve our objective.

Assume we have a customer transaction table with the following variables:

Before we move on, be ready with the basic report as per the image below:

Once you are ready with the report as per the above image, create parameters for ‘Region’, ‘State’, ‘Category’, ‘SubCategory’:

Region Parameter

State Parameter

Category Parameter

SubCategory Parameter

Now create the following two calculated items derived from ‘Customer_ID’:

Geo_Customer_ID
Equivalent to ‘Customer_ID’, but populated only for the selected geography levels; the rest are set to missing.

Geo_and_Prod_Customer_ID
Equivalent to ‘Customer_ID’, but populated only for the selected geography and product levels; the rest are set to missing.
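The original expression screenshots are not reproduced here. As a rough pseudo-expression sketch of the logic (the exact SAS Visual Analytics expression syntax for referencing parameters differs slightly):

Geo_Customer_ID:
   If 'Region'n In (Region Parameter) And 'State'n In (State Parameter)
   Return 'Customer_ID'n
   Else missing

Geo_and_Prod_Customer_ID extends the same test with 'Category'n against (Category Parameter) and 'SubCategory'n against (SubCategory Parameter).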

Create the following two aggregated measures:

Total Customers (Geo)
You need to subtract the distinct count related to missing ‘Geo_Customer_ID’, which is 1.

Total Customers (Geo and Prod)
You need to subtract the distinct count related to missing ‘Geo_and_Prod_Customer_ID’, which is 1.
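Expressed as pseudo-formulas (the missing-value bucket counts as one distinct value, hence the subtraction of 1):

Total Customers (Geo) = Distinct count of 'Geo_Customer_ID' - 1
Total Customers (Geo and Prod) = Distinct count of 'Geo_and_Prod_Customer_ID' - 1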

Now you can create an aggregated measure ‘Customer Penetration’.

Customer Penetration = Total Customers (Geo and Prod) / Total Customers (Geo)

Final report will look like this:

Comparative images with default and selective filtering implementation:

If you compare the above images, you will find the difference in the highlighted measures: in the first image the aggregation level reflects selective filtering, while in the second image the aggregation level is uniform.

Note – ‘Total Customers’ is the count of distinct ‘Customer_ID’; i.e., the total customer count is independent of geography and product hierarchy selections.

Conclusion

This process allows you to use control parameters in ‘If Then Else…’ statements to create a variable (calculated item) with character values. You can utilize this feature in several other applications – this is just one way you can use parameters to fulfil a business requirement.

Selective filtering in SAS Visual Analytics 8.2 was published on SAS Users.

August 30, 2018

A combination of SAS Grid Manager and SAS Viya can change the game for IT leaders looking to take on peak computing demands without sacrificing reliability or driving higher costs.  Maybe that’s why we fielded so many questions about SAS Grid Manager and SAS Viya in our recent webinar  about how the two can work together to process massive volumes of data – fast.

Participants asked us so many great questions that we wanted to share the answers here, assuming that you may have the same questions.  This is the first of two blog posts focusing on some of the very best questions we received.  Stay tuned for more soon – and if you don’t see your own burning questions posed here, just post your question in the comments and we’ll respond.

1. Do SAS Grid Manager and SAS Viya need to be collocated in the same data center?

And if they’re in different data centers, can SAS Grid Manager and SAS Viya communicate with one another? As to the question of how well SAS Grid Manager and SAS Viya can communicate if they’re in different data centers, it shouldn’t be a functional concern. If there’s network connectivity between the two data centers, they can communicate.  Just be mindful of a few things:

  • The size of data being processed in each environment.
  • How much data is going back and forth between the two.
  • The impact all that data movement can have on response times and overall performance.

In addition to performance, there's a greater sensitivity around data handling in physically separated deployments. The emergence of stricter data protection regulations increases the complexity of compliance when moving data between locations with different legal jurisdictions. It will be important to consider the additional performance implications of encryption of the transferred data as well. Ultimately, having compute as close as possible to the data it needs results in less complexity and better performance.

In this case it is important to remember the old saying “just because you can doesn’t mean that you should.”  When SAS Grid Manager and SAS Viya are collocated, they can share the same data. To be clear, there are implications of sharing the same data: a data set, for example, cannot be open by both a process running on the grid and by the SAS Viya analytics server at the same time. If your business processes can accommodate this requirement, then sharing the same physical copies of data in storage may save your organization money as well as ease compliance efforts.  I also cannot sufficiently stress the need to complete a proof of concept with production data volumes and job complexity, comparing the performance of hosting SAS Grid Manager and SAS Viya in the same data center to having them in geographically separated data centers.

2. Should SAS Viya and SAS Grid Manager 9.4M5 run on the same OS for integration purposes?

From an integration perspective, they can definitely run on different operating systems. In fact, as of version SAS 9.4m5, you can access a CAS (Cloud Analytics Services) server from Solaris, AIX, or 64-bit Windows. The CAS server itself will run on Linux – and soon we’ll roll out SAS Viya for Windows server.

Short version: Crossing operating systems does not present a functional problem.  Those of you who have used SAS in heterogeneous environments know that there are performance implications when processing data that is not native to the running session.  You should carefully consider the performance implications of deploying in a heterogeneous topology before committing to a mixed environment.

3. How do SAS Viya and SAS Grid Manager compare in terms of complexity – particularly in the context of platform administration?

They’re actually very comparable.  Some level of detail varies but in many cases the underlying concepts and effort required are similar – especially when you reach the level of multi-node administration, keeping multiple hosts patched, and those sorts of issues.

4. SAS Grid Manager requires high-performance storage. Do we need to have that same level of storage (such as IBM General Parallel File System) for SAS Viya? 

No – SAS Viya relies on Cloud Analytic Services (CAS), so it doesn’t have the same storage requirements as SAS Grid Manager.  It’s more like what you’d find in a Hadoop environment: the CAS reference architecture is mainly a collection of nodes with local storage, and SAS Viya performs memory mapping to disk so that jobs needing resources larger than the total RAM can use the disk cache to continue running. SAS Viya can ingest data serially or in parallel, so for customers that have the ability to use cost-efficient distributed file systems, movement of data into CAS can be done in parallel.

Note, the shared file system that is part of an existing SAS Grid Manager environment could be further leveraged as a means to share data between the SAS Grid and SAS Viya environments.

SAS’ recommended IO throughput rates for SAS Grid Manager deployments are based upon years of experience with customers who were unsatisfied with the performance of their chosen storage.  The resulting best practice is one that minimizes performance complaints and allows customers to process very large data in the timeliest manner. If SAS Viya is deployed with a multi-node analytics server (MPP mode), then a shared file system is required. The SAS Viya Cloud Analytic Server (CAS) has been designed with Network File System (NFS) in mind.

Our customers get the best value from environments built with a blend of storage solutions, including both shared files systems for job/user/application concurrency as well as less expensive distributed storage for workloads that may not require concurrency like large machine learning and AI training problems. The latter is where SAS Viya shines.

These were all great questions that we thought deserved more detail than we could offer in a webinar – and there are more!  Soon we’ll post a second set of questions that you can use to inform your work with SAS Grid Manager and SAS Viya.  In the meantime, feel free to post any further questions in the comment section of this post.  We’ll answer them quickly.

4 FAQs about SAS Grid Manager and SAS Viya was published on SAS Users.

August 15, 2018

SAS Viya 3.4 has some new functionality that provides real help for those who want to transition from SAS Visual Analytics on 9.4 to SAS Viya. In prior releases of SAS Viya you could promote reports and explorations (and a few other supporting objects). In SAS Viya 3.4, promotion support is added for many additional SAS 9.4 resources, making it easier to make the leap to SAS Viya. In this blog, I will review this new functionality.

In SAS Viya 3.4, the following objects participate in promotion from SAS 9.4.

  • Configuration
    • Identities
    • Authorization
    • Data definitions
  • Content
    • Folders
    • Reports
    • Explorations
    • Stored processes
    • Supporting resources (such as themes, images, graphs templates)

The details of support for each resource are unique and are discussed below.

Identities

User and group promotion from SAS 9.4 to SAS Viya is used to support the transition to the target environment of authorization settings that are associated with content.  Metadata is exported to support the mapping of SAS 9.4 identity metadata (Users and Groups) to SAS Viya identities (Users, Groups and Custom groups).

During promotion of identity metadata:

  • User connections are mapped from the metadata DefaultAuth:logonid to the SAS Viya identity ID
  • Metadata-only groups from SAS 9.4 are converted to SAS Viya Custom groups (except SAS General Servers and SAS System Services)
  • If custom groups of the same name (or sometimes the same purpose but a different name) exist in the target, the group is preserved and any mapped members from the source system are added to the group.

Authorization

Identities are “promoted” to support re-implementation of authorization. You do not have to explicitly export authorization, as it is included with libraries, tables, folders and reports when they are exported. Promotion of authorization is optional: if you don’t wish to include authorization, but rather re-implement it in SAS Viya, you can switch this functionality off at import time.

SAS Viya has two authorization systems, the general authorization system for folders and content, and the CAS authorization system for data. These authorization systems are different than the metadata authorization model in SAS 9.4. So what happens when you promote content that includes authorization?

General Authorization (folders and content)

Promotion will attempt to convert SAS 9.4 authorization to rules in the General authorization system.  During the process:

  • Explicit Access Control Entries are converted to SAS Viya Rules
  • Access Control Entries with denials are discarded
  • Access Control Templates are not promoted

In addition, if an object (folder/report):

  • does not exist in the target environment, relevant authorization is set for the object and the access control entries from the source are implemented as rules on the object.
  • exists in the target environment, access control entries from the source are merged with any pre-existing authorizations in the target environment.

CAS Authorization

The CAS authorization system covers CASlibs and data.  Promotion will attempt to convert SAS 9.4 authorization on libraries and tables to access controls in the CAS authorization system. During the process:

  • Access Control Entries are not promoted unless they are applied directly to a library or table.
  • Access Control Entries are converted to CAS access controls.
  • Row-level permissions are preserved.
  • If an object exists in the target environment no authorization settings are imported.
  • Access Control Templates are not promoted.

For details of how individual permissions for both data and content are mapped from SAS 9.4 to SAS Viya, see the documentation, which has great coverage of the steps to follow.

The Process

To finish off, I'll share a few observations on the process of exporting from SAS 9.4 and importing in SAS Viya. Like SAS 9.4 promotion, you need to import in a specific order. This allows the software to make the relevant connections to dependent resources. For example, if the CASLIB already exists in the target, then imported tables can be mapped to it. Typically, the order is: identities > library definitions > tables > reports and folders. To support this process, make sure, during export, you have a separate package for each resource type. Some considerations for the export process follow.

You should export:

  • Identities (users and groups) from the security folders in SAS 9.4 metadata to a separate package.
  • Only groups that you need in the target environment (you can subset any irrelevant SAS 9 groups at export time).
  • LASR and Base Libraries and tables directly from the library definition in the folder tree (this prevents extraneous folders being created in the target environment).
  • Libraries in a separate package from tables so that they may be imported first and be available for mapping when the tables are imported.
  • Content and reports from the base of the folder tree so that all directly applied access control entries will be included in the package.
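As an illustration, separate packages per resource type with the SAS 9.4 batch export tool might look something like this (a hypothetical sketch: the connection profile, folder paths and object names are placeholders for your own metadata layout):

ExportPackage -profile "SAS94Profile" -package "libraries.spk" -objects "/Shared Data/Sales Library(Library)"
ExportPackage -profile "SAS94Profile" -package "tables.spk" -objects "/Shared Data/Sales Library/CUSTOMERS(Table)"
ExportPackage -profile "SAS94Profile" -package "content.spk" -objects "/Shared Data/Reports(Folder)"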

Prior to importing, make sure that users and groups are configured correctly in LDAP. As I already mentioned, physical data is not promoted so ensure that required data and formats are accessible to the SAS Viya environment.

The new functionality for promotion is a great start in helping with the transition from SAS 9.4 to SAS Viya. Look for more functionality in future releases.

New functionality for transitioning from SAS Visual Analytics on 9.4 to SAS Viya was published on SAS Users.

August 13, 2018

Data in the cloud makes it easily accessible, and can help businesses run more smoothly. SAS Viya runs its calculations on Cloud Analytics Service (CAS). David Shannon of Amadeus Software spoke at SAS Global Forum 2018 and presented his paper, Come On, Baby, Light my SAS Viya: Programming for CAS. (In addition to being an avid SAS user and partner, David must be an avid Doors fan.) This article summarizes David's overview of how to run SAS programs in SAS Viya and how to use CAS sessions and libraries.

If you're using SAS Viya, you're going to need to know the basics of CAS to be able to perform calculations and use SAS Viya to its potential. SAS 9 programs are compatible with SAS Viya, and will run as-is through the CAS engine.

Using CAS sessions and libraries

Use a CAS statement to kick off a session, then use CAS libraries (caslibs) to store data and resources. To start the session, simply code "cas;" Each CAS session is given its own unique identifier (UUID) that you can use to reconnect to the session.
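For example, starting a named session also records its UUID in a macro variable (the session name is illustrative; the &speedyanalytics_uuid variable is the one referenced below):

cas speedyanalytics;   /* start a CAS session named speedyanalytics */
%put UUID for this session: &speedyanalytics_uuid;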


There are a few significant codes that can help you to master CAS operations. Consider these examples, based on a CAS session that David labeled "speedyanalytics":

  • What CAS sessions do I have running?
    cas _all_ list;
  • Get the version and license specifics from the CAS server hosting my session:
    cas speedyanalytics listabout;
  • I want to sign out of SAS Studio for now, so I will disconnect from my CAS session, but return to it later…
    cas speedyanalytics disconnect;
  • ...later in the same or different SAS Studio session, I want to reconnect to the CAS session I started earlier using the UUID I previously grabbed from the macro variable or SAS log:
    cas uuid="&speedyanalytics_uuid";
  • At the end of my program(s), shutdown all my CAS sessions to release resources on the server:
    cas _all_ terminate;

Using CAS libraries

CAS libraries (caslib) are the method to access data that is being stored in memory, as well as the related metadata.

From the library, you can load data into CAS tables in a couple of different ways:

  1. A DATA step can read a data set, calculate a new measure and store the output in memory (see the sketch below)
  2. Proc COPY can bring existing SAS data into a caslib
  3. Proc CASUTIL loads tables into caslibs
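Here is a minimal sketch of the first two methods (sashelp tables serve as stand-in data; casuser is the personal caslib):

cas speedyanalytics;
caslib _all_ assign;   /* create librefs for the available caslibs */

/* 1. DATA step: calculate a new measure and store the output in memory */
data casuser.class_bmi;
   set sashelp.class;
   bmi = (weight / (height*height)) * 703;
run;

/* 2. PROC COPY: bring existing SAS data into a caslib */
proc copy in=sashelp out=casuser;
   select cars;
run;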

The Proc CASUTIL allows you to save your tables (named "classsi" data in David's examples) for future use through the SAVE statement:

proc casutil;
 save casdata="classsi" casout="classsi";
run;

And reload like this in a future session, using the LOAD statement:

proc casutil;
 load casdata="classsi" casout="classsi";
run;

When accessing your CAS libraries, remember that there are multiple levels of scope that can apply. "Session" refers to data from just the current session, whereas "Global" allows you to reach data from all CAS sessions.

Programming in CAS

Showing how to put CAS into action, David shared this diagram of a typical load/save/share flow:

Existing SAS 9 programs and CAS code can both be run in SAS Viya. The calculations and data memory occurs through CAS, the Cloud Analytics Service. Before beginning, it's important to understand a general overview of CAS, to be able to access CAS libraries and your data. For more about CAS architecture, read this paper from CAS developer Jerry Pendergrass.

The performance case for SAS Viya

To close out his paper, David outlined a small experiment he ran to demonstrate performance advantages that can be seen by using SAS Viya v3.3 over a standard, stand-alone SAS v9.4 environment. The test was basic, but performed reads, writes, and analytics on a 5GB table. The tests revealed about a 50 percent increase in performance between CAS and SAS 9 (see the paper for a detailed table of comparison metrics). SAS Viya is engineered for distributive computing (which works especially well in cloud deployments), so more extensive tests could certainly reveal even further increases in performance in many use cases.


A quick introduction to CAS in SAS Viya was published on SAS Users.