
May 17, 2019
 
Did you know that you can run Lua code within Base SAS? This functionality has been available since the SAS® 9.4M3 (TS1M3) release. With the LUA procedure, you can submit Lua statements from an external Lua script or just submit the Lua statements using SAS code. In this blog, I will discuss what PROC LUA can do as well as show some examples. I will also talk about a package that provides a Lua interface to SAS® Cloud Analytic Services (CAS).

What Is Lua?

Lua is a lightweight, embeddable scripting language. You can use it in many different applications from gaming to web applications. You might already have written Lua code that you would like to run within SAS, and PROC LUA enables you to do so.
With PROC LUA, you can perform these tasks:

  • run Lua code within a SAS session
  • call most SAS functions within Lua statements (see the sketch after this list)
  • call functions that are created using the FCMP procedure within Lua statements
  • submit SAS code from Lua
  • call CAS actions
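For instance, SAS functions are exposed to Lua through the sas table. Here is a minimal sketch (exact function availability can vary by release):

   proc lua;
   submit;
      -- call SAS functions from Lua through the sas table
      print(sas.upcase('hello'))   -- HELLO
      print(sas.sqrt(16))          -- 4
   endsubmit;
   run;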

PROC LUA Examples

Here is a look at the basic syntax for PROC LUA:

proc lua <infile='file-name'> <restart> <terminate>;

Suppose you have a file called my_lua.lua or my_lua.luc that contains Lua statements, and it is in a directory called /local/lua_scripts. You would like to run those Lua statements within a SAS session. You can use PROC LUA along with the INFILE= option, specifying the name that identifies the Lua source file (in this case, my_lua). The Lua file in your directory must have the .lua or .luc extension, but do not include the extension in the value of the INFILE= option. A FILENAME statement must be specified with a LUAPATH fileref that points to the location of the Lua file. Then include the Lua file name in the INFILE= option, as shown here:

filename luapath '/local/lua_scripts';
proc lua infile='my_lua';

This example executes the Lua statements contained within the file my_lua.lua or my_lua.luc from the /local/lua_scripts directory.

If there are multiple directories that contain Lua scripts, you can list them all in one FILENAME statement:

filename luapath ('directory1', 'directory2', 'directory3');

The RESTART option resets the state of Lua code submissions for a SAS session. The TERMINATE option stops maintaining the Lua code state in memory and terminates the Lua state when PROC LUA completes.
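Because the Lua state is kept in memory for the session, variables persist across PROC LUA steps unless you reset it. Here is a short sketch of how this behaves (assuming the default, persistent state):

   proc lua;
   submit;
      counter = 42            -- stored in the session's Lua state
   endsubmit;
   run;

   proc lua;
   submit;
      print(counter)          -- prints 42: the state persisted
   endsubmit;
   run;

   proc lua restart;          /* discard the old state and start fresh */
   submit;
      print(counter)          -- prints nil after the reset
   endsubmit;
   run;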

The syntax above discusses how to run an external Lua script, but you can also run Lua statements directly in SAS code.

Here are a couple of examples that show how to use Lua statements directly inside PROC LUA:

Example 1

   proc lua; 
   submit; 
      local names = {'Mickey', 'Donald', 'Goofy', 'Minnie'}
      for i, v in ipairs(names) do
         print(v)
      end
   endsubmit; 
   run;

Here is the log output from Example 1:

NOTE: Lua initialized.
Mickey
Donald
Goofy
Minnie
NOTE: PROCEDURE LUA used (Total process time):
      real time           0.38 seconds
      cpu time            0.10 seconds

Example 2

   proc lua;
   submit;
      dirpath=sas.io.assign("c:\\test")
      dir=dirpath:opendir()
      if dir:has("script.txt") then print ("exists")
      else print("doesn't exist")
      end
   endsubmit;
   run;

Example 2 checks whether an external file called script.txt exists in the c:\test directory. Notice that two backslashes are needed to represent a backslash in the directory path, because a single backslash acts as an escape character in Lua strings.

All Lua code must be contained between the SUBMIT and ENDSUBMIT statements.

You can also submit SAS code within PROC LUA by calling the SAS.SUBMIT function. The SAS code must be contained within [[ and ]] brackets. Here is an example:

   proc lua; 
   submit;
      sas.submit [[proc print data=sashelp.cars; run; ]]
   endsubmit;
   run;
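The SAS code submitted this way can also be parameterized. The sketch below assumes the @name@ substitution form described in the PROC LUA documentation, where values are supplied in a Lua table:

   proc lua;
   submit;
      local ds = 'sashelp.cars'
      -- @ds@ is replaced with the value passed in the table
      sas.submit([[proc means data=@ds@; run;]], {ds=ds})
   endsubmit;
   run;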

Using a Lua Interface with CAS

Available to download is a package called SWAT, which stands for SAS Scripting Wrapper for Analytics Transfer. This is a Lua interface for CAS. After you download this package, you can load data into memory and apply CAS actions to transform, summarize, model, and score your data.

The package can be downloaded from this Downloads page: SAS Lua Client Interface for Viya. After you download the SWAT package, there are some requirements for the client machine to use Lua with CAS:

  1. You must use a 64-bit version of either Lua 5.2 or Lua 5.3 on Linux.

    Note: If your deployment requires newer Lua binaries, visit http://luabinaries.sourceforge.net/.
    Note: Some Linux distributions do not include the required shared library libnuma.so.1. It can be installed with the numactl package supplied by your distribution's package manager.

  2. You must install the third-party package dependencies middleclass (4.0+), csv, and ee5_base64, which are all included with a SAS® Viya® installation.

For more information about configuration, see the Readme file that is included with the SWAT download.

I hope this blog post has helped you understand the possible ways of using Lua with SAS. If you have other SAS issues that you would like me to cover in future blog posts, please comment below.


Using the Lua programming language within Base SAS® was published on SAS Users.

May 17, 2019
 

In the article Serverless functions and SAS Viya - a good match I discussed using serverless functions to deliver SAS Viya applications. Ignoring all the buzz words, a serverless function boils down to a set of REST APIs. So, if you tried the example you are now a REST API developer 🙂 .

The serverless function allowed the application developer to do the following:

  1. Define what the end user must supply to the function. A good application developer will try to make the request simple and easy to understand.
  2. Return to the end user a response easily consumed by the client's program. Again, a good application developer would make sure the response satisfies most common usage scenarios.
  3. Hide all the details of what it took to satisfy the user's request.

This blog discusses using GraphQL to achieve the same goals. First, I will briefly discuss GraphQL, where it fits in with SAS Viya application integration, and how to create GraphQL-based applications. I also provide a series of examples based on real-world scenarios.

The images below display a high level comparison of the approaches between serverless and GraphQL.

serverless and GraphQL process flow


Steps in the GraphQL flow

  1. A GraphQL server replaces the AWS API Gateway.
  2. The code that runs in the GraphQL server is referred to as "resolvers" - as the name implies, resolvers are used by the GraphQL server to execute user requests.
  3. The resolvers make the necessary REST API calls to the SAS Viya Server.

All of the code in this article resides in the restaf-graphql-demo GitHub repository. If you are not familiar with GraphQL please review the links at the end of this article before proceeding.

Why GraphQL?

Some smart folks at Facebook created GraphQL to solve problems they encountered using standard REST APIs. Companies like Github, Netflix, PayPal, The New York Times and many others are adopting GraphQL.

Some of the key motivators are:

  1. Users define and request what they need, following exact specifications
  2. A convenient way to front existing systems (REST-based or not) and databases with a Developer Experience friendly API
  3. Returning only the requested information reduces the data transferred - important for reducing network traffic
  4. GraphQL is less "chatty" - where a REST API requires multiple trips to the server, GraphQL can accomplish the same task in one round trip

Why GraphQL for SAS Viya application developers?

While the general GraphQL characteristics listed above are important, GraphQL is also a useful technology for developers creating applications integrated with SAS Viya.

  1. GraphQL is a ready-made vehicle for SAS users to deliver their applications as the next generation "stored process" developed with the data step+procedures, CAS Language (CASL) statements, custom CASL actions and SAS REST APIs.
  2. GraphQL is a great way for front-end and back-end developers to communicate.
  3. Developers can code to an agreed contract as specified by the GraphQL schema.
  4. Front-end developers can be confident what they get is exactly what they asked for.

Writing the GraphQL-based applications

The GraphQL queries used in this article are examples for demonstration purposes only and not "standards or strict guidelines" to follow. The code in the GitHub repository and the examples outlined below will help you jump-start your excellent adventure in GraphQL and SAS Viya applications.

The high-level steps for writing an application using GraphQL query are:

SAS Viya Side

  1. SAS programmers, data analysts and data scientists develop their intellectual property with SAS programs written with SAS procedures, CAS actions, DATA step and the CASL language.

Server Side

  1. Build the GraphQL schema and define the queries (see this for examples). In relation to SAS Viya, the schema describes the input and output of the SAS programs.
    • Make sure you have discussed this with the UI developers and the SAS programmers
  2. Write the resolvers - GraphQL server will call this code to resolve the requests by the user (see this for examples).
  3. Register both of these with the GraphQL server.

Client Side

  1. You can build the web apps in the normal way with these characteristics:
    • These apps will call a single end point (/graphql) with a POST method.
    • The payload is the GraphQL query
    • The response will match the query and is easily accessible

The image below shows the flow of a GraphQL-based application. User queries are sent to the GraphQL server. The server parses the queries and calls the appropriate resolver (your code) to obtain the values for the requested fields. In this project the resolvers use restaf to make REST API calls to SAS Viya.

GraphQL-based application process flow


The rest of the blog discusses a few examples. All these examples are available in the repository. I chose to write the examples using JavaScript since it is one of the languages I am familiar with and can write reasonably decent code in. You can develop GraphQL-based SAS Viya applications in all the popular languages of today.

Example 1: Scoring a loan from client app
In this example, a data scientist working for a bank has created a model to score a loan applicant's eligibility. The scientist outlines the following requirements:

  1. The user can only enter the desired loan amount and their current assets. All the other parameters needed for scoring have set values. All the values must be passed to the SAS code as a dictionary named _args_.
  2. Since the scientist wants to run A/B experiments, the location and name of the scoring model's astore must be passed in as a dictionary named _appEnv_.
  3. The code developed by the data scientist is below. The score is returned as a dictionary:

    {score= <value>}

SAS Code

I wrote the SAS program in this example in CASL.

loadactionset "astore";

/* convert arguments to a cas table */
/* _args_ and _appEnv_ are generated by caslBase - see caslBase for details */

/* CASL function to convert a dictionary to a cas table - see lib/argsToTable.js for details */
argsToTable(_args_, 'casuser', 'INPUTDATA');

/* score */
action astore.score /
    table  = { caslib = 'casuser', name = 'INPUTDATA' }
    rstore = { caslib = _appEnv_.astore.caslib, name = _appEnv_.astore.name }
    casout = { caslib = 'casuser', name = 'OUTPUTDATA', replace = TRUE };

/* fetch results - note the fetch is against the scored output table */
action table.fetch r = result /
    table = { caslib = 'casuser', name = 'OUTPUTDATA' };

/* extract the score and send it as a dictionary */
score = result.Fetch[1].P_BAD;
scoreo= {score= score};
send_response(scoreo);

Key points to note:

  1. The resolver creates and prepends two CASL dictionaries _args_ and _appEnv_.
  2. The CASL program returns the result using the send_response function.
    • One of the cool things is that CASL allows the programmer to customize the returned value. In this example the score is extracted into a dictionary.

Schema

Based on the requirement the schema is as shown below:

type Query {
   scoreLoan(amount: Int assets: Int) : Float
}

Key Point:

  1. The two values the user specifies are defined as the filter parameters to the query.

Application


Key point:

  1. The user enters the two values the data scientist requires.

Client code

async function runScore(amount, assets){
    let payload = {
        query: `query {
            scoreLoan(amount: ${amount} assets: ${assets} )
        }`
    }

    let config = {
        url            : host + '/graphql',
        withCredentials: true,
        method         : 'POST',
        data           : payload
    }

    let r = await axios(config);
    return r.data.data.scoreLoan;
}

Key points:

  1. The payload is the GraphQL query.
  2. I use the POST method.
  3. The endpoint is /graphql - this is the only endpoint the application will use.
  4. The response is available as r.data.data.scoreLoan
  5. Note the simplicity of the client code to access the GraphQL server and obtain the results.

Resolver

let caslBase = require('../lib/caslBase');

module.exports = async function scoreLoan (_, args, context) {
    let { store } = context;
    let input = {
        JOB    : 'J1',
        CLAGE  : 100, 
        CLNO   : 20, 
        DEBTINC: 20, 
        DELINQ : 2, 
        DEROG  : 0, 
        MORTDUE: 4000, 
        NINQ   : 1,
        YOJ    : 10
    };

    input.LOAN  = args.amount;
    input.VALUE = args.assets;

    let env = {
        astore: {
            caslib: 'Public',
            name  : 'GRADIENT_BOOSTING___BAD_2'
        }
    }
    let result = await caslBase(store,['argsToTable.casl', 'score.casl'], input, env);
    let score = result.items('results', 'score');
    
    return score;

}

Key points:

  1. As required, the default values for the other parameters are added to the user input.
  2. The resolver contains the location and name of the model.
  3. The names of the SAS code files are passed to caslBase - this allows the code to be read from a repository.
  4. The caslBase function calls jsonToDict to convert the JSON parameters to CASL dictionaries and passes them on to CAS along with the code.
  5. The user receives the resulting score.
Example 2: Reporting wine production to management
The TwoBit Winery's management wants a simple report to view the production of different wines by year. They want to be able to pick the year range and the wines in which they are interested. The goal is to query for selected wines and filter on years.

The data for the winery is listed below.

 
Obs   year   cabernet   merlot   pinot   chardonnay   twobit
  1   2000         10       20      30           40       50
  2   2001          5       10      15            5        0
  3   2002          6        7      11           12       13
  4   2003          5        8       0            0       50
  5   2004         11        5       7            8      100
  6   2005          1        1       0            0     1000
  7   2006          0        0       0            0     3000

 

SAS Code

The SAS experts at the company created the following SAS code to meet management's request. Note that for demo purposes the wine data is created inline.

data wineList;  
 input year cabernet merlot pinot chardonnay twobit ;  
 cards;  
 2000 10 20 30 40 50   
 2001 5 10 15 5 0  
 2002 6 7 11 12 13  
 2003 5 8 0 0 50 
 2004 11 5 7 8 100  
 2005 1  1 0 0 1000  
 2006 0 0 0 0 3000  
;;;; 
run;  
/* the _selections_ macro variable is generated in the src/lib/getSelections function */
data wine;
    set winelist(where=(year GE &from and year LE &to));
    keep &_selections_;
    run;
ods html style=barrettsblue;
    proc print data=wine; run;
ods html close; run;

Key points to note:

  1. The code requires macro variables &from, &to and &_selections_ be set before this code executes.
  2. The name of the returned table is wine.

Schema

type Query {
    wineProduction(from: Int, to: Int): WineProduction
}

type WineProduction {
    """
    An array containing wine production
    """
    wines: [WineList]

    """
    ODS output and Log output
    """
    report: SASResults
}

type WineList {
    year      : Int
    cabernet  : Int
    merlot    : Int
    pinot     : Int
    chardonnay: Int
    twobit    : Int
}

type WineProductionCas {
    wines: [WineList]
}

type SASResults {
    """
    ODS output from the server
    """
    ods: String
    """
    Log output from the server
    """
    log: String
}

Key points:

  1. As required, the year range is specified as filters for the query.
  2. As required, the user can pick the wines in which they are interested.

Application

The application is shown below.

Client code

The relevant client code is shown below (see this in the repository for the full program).

 let gqString = `query userQuery($from: Int, $to: Int) {
                           results: wineProduction(from: $from to: $to) {
                              wines { 
                                  ${wineList} 
                                } 
                                ${reportList}
                             } 
                            }`;
        let payload = {
            url   : host + '/graphql',
            method: 'POST',
            data: { 
                query: gqString,
                variables: {
                    from: fromYear.value,
                    to  : toYear.value
                }
            }
        }
        setReportValues(null);
        setResultValues(null);
        axios(payload)
         .then ( r => {
            let res = r.data.data.results;
           // Simple to extract the results
            setResultValues(res.wines);
            if (res.report != null ) {
                setReportValues(res.report);
            }
        
         })
         .catch( e => alert(e))
    }
})

Key points:

  1. The GraphQL query string is sent as the payload (wineList and reportList are strings computed earlier in the program based on user selection).
  2. The endpoint is again /graphql with a POST method.
  3. This snippet also shows the preferred way to send the filter values.

Resolver

The root resolver is shown below.

let getProgram    = require('../lib/getProgram');
let getSelections = require('../lib/getSelections');
let spBase        = require('../lib/spBase');

module.exports = async function wineProduction (_, args, context, info){
    let {store} = context;

   // read source - reads in the sas program
    let src = await getProgram(store, ['wines.sas']); 

    // update args with the wine list specified by the user
    let selections = getSelections(info, 'wines', args);

   // execute the sas code with compute server and get results
    let resultSummary = await spBase(store, selections.args, src);
    
    // resultSummary is now passed to the resolvers for wines and results fields.
    return resultSummary;
}

Key points:

  1. Code from the GitHub repo uses winelist.js to resolve the list of wines.
  2. Code from sasresults.js, sasOds.js and sasLog.js returns ODS output and the SAS log.
  3. The SAS code is read in from a repository using the getProgram function.
Example 3: List SAS Visual Analytics reports
Another common use case is retrieving information about reports developed with SAS Visual Analytics. The GraphQL query below gets the list of reports, along with who last modified each report and when. This example uses the reports REST API.

Query

{
    reports {
        name
        modifiedBy
        modifiedOn
   }
}

Creating a UI for this is left as an exercise for the reader (meaning I did not get around to writing it 🙂 ). The returned results look something like this:

{
    "data": {
    "reports": [
        {
            "name": "Application Activity",
            "modifiedBy": "SAS Supplied",
            "modifiedOn": "2018-04-20T14:24:05.258Z"
       },
      {
           "name": "CAS Activity",
           "modifiedBy": "SAS Supplied",
          "modifiedOn": "2018-06-08T20:21:14.727Z"
        }
...

Resolver

module.exports = async function reports (_, args, context) {
    let {store} = context;
    let reports = store.getService('reports');
    let list = await getList(store, reports);
    return list;
}

async function getList(store, reports) {
    let reportsList = await store.apiCall(reports.links('reports'));
    if (reportsList.itemsList().size === 0) {
        return [];
    }
    let r = reportsList.itemsList().map(name => {
        let t = {
            name      : name,
            modifiedBy: reportsList.items(name, 'data', 'modifiedBy'),
            modifiedOn: reportsList.items(name, 'data', 'modifiedTimeStamp')
        };
        return t;
    });
    return r;
}

Example 4: Getting the URL and image of a specific report
The query below can be used to obtain the URL to display the interactive report and the SVG image of a specific report.

Query

{
      report(name:"Application Activity"){
           url
          image
      }
}

The returned value will be along these lines:

{
  "data": {
    "report": {
      "url": "http://superuser.com/?reportUri=/reports/reports/ecec39ad-994f-4055-8e40-4360f410bc6e...",
      "image": "{the svg of the image}"
    }
  }
}

Resolver

There are three resolvers associated with this query: the root resolver and the resolvers for image and url. For the sake of brevity, I will not review them here; please visit the code in the repository.

In conclusion

The examples above cover some basic scenarios for SAS Viya applications.

  1. Using CAS actions
  2. Using traditional data step and procs
  3. Obtaining ODS output
  4. Working with SAS Visual Analytics

The simplicity of the client code and the resolvers is what makes GraphQL attractive for writing SAS Viya applications. You can also exploit other features in SAS Viya using the same pattern. Further, you can use the examples in this repository to easily customize your own use cases. The resolvers and helper functions are written to be reusable with minimal effort. The instructions are in the README file in the repository. If you create interesting schemas and resolvers for SAS Viya, please share them with the SAS user community.

Opinion

Like all new technologies, GraphQL has its proponents and detractors. Many people also get caught up in low-value arguments about GraphQL being better or worse than REST. I personally do not follow these discussions, since you should use the best tool for the job.

I find GraphQL most attractive when developing a back-end for SAS Viya applications. Both front-end and back-end developers will benefit from the clear definition of the schema. Having well-supported GraphQL servers from Apollo and Facebook makes it easier to adopt GraphQL.

Useful links

There are a growing number of resources from which to learn and model. Below is a small starter list.

  1. graphql.org
  2. Apollo
  3. Relay
  4. GraphQL Concepts Visualized by Dhaivat Pandya
  5. GraphQL tutorial from TutorialsPoint
  6. How to GraphQL

GraphQL and SAS Viya applications - a good match was published on SAS Users.

April 5, 2019
 

Recently, you may have heard about the release of the new SAS Analytics Cloud. The platform allows fast access to data-science applications in the cloud! Running on the SAS Cloud and using the latest container technology, Analytics Cloud eliminates the need to install, update, or maintain software or related infrastructure.

SAS Machine Learning on SAS Analytics Cloud is designed for SAS and open source data scientists to gain on-demand programmatic access to SAS Viya. All the algorithms provided by SAS Visual Data Mining and Machine Learning (VDMML), SAS Visual Statistics and SAS Visual Analytics are available through the offering. Developers and data scientists access SAS through a programming interface using either the SAS or Python programming languages.

A free trial for Analytics Cloud is available, and registration is simple. The trial environment allows users to manage and collaborate with others, share data, and create runtime models to analyze their data. The system is pre-loaded with sample data for learning, and allows users to upload their own data. My colleague Joe Furbee explains how to register for the trial and takes you on a tour of the system in his article, Zero to SAS in 60 Seconds- SAS Machine Learning on SAS Analytics Cloud.

Luckily, I had the privilege of being the technical writer for the documentation for SAS Analytics Cloud, and through this met two of my now close friends at SAS.

Alyssa Andrews (pictured left) and Mariah Bragg (pictured right) are both Software Developers at SAS, but worked on the UI for SAS Analytics Cloud. Mariah works in the Research and Development (R&D) division of SAS while Alyssa works in the Information Technology (IT) division. As you can see this project ended up being an interesting mix of SAS teams!

As Mariah told me the history, I learned that SAS Analytics Cloud “was a collaborative project between IT and R&D. The IT team presented the container technology idea to Dr. Goodnight but went to R&D because they wanted this idea run like an R&D project.”

As we prepared for the release of SAS Analytics Cloud to the public, I asked Mariah and Alyssa about their experience working on the UI for SAS Analytics Cloud, and about all the work that they had completed to bring this powerful platform to life!


What is SAS Analytics Cloud for you? How do you believe it will help SAS users?

Alyssa: For me, it is SAS getting to do Software as a Service. So now you can click on our SAS Software and it can magically run without having to add the complexity of shipping a technical support agent to the customer's site to install a bunch of complex software.

Mariah: I agree. This will be a great opportunity for SAS to unify and have all our SAS products on cloud.

Alyssa: Now, you can trial and then pay for SAS products on the fly without having to go through any complexities.

What did you do on the project as UI Developers?

Alyssa: I was lent out to the SAS Analytics Cloud team from another team and given a tour-of-duty because I had a background in Django (a high-level Python Web design tool) which is another type of API framework you can build a UI on top of. Then I met Mariah, who came from an Angular background, and we decided to build the project on Angular. So, I would say Mariah was the lead developer and I was learning from her. She did more of the connecting to the API backend and building the store part out, and I did more of the tweaks and the overlays.

What is something you are proud of creating for SAS Analytics Cloud?

Mariah: I’m really proud to be a part of something that uses Angular. I think I was one of the first people to start using Angular at SAS and I am so excited that we have something out there that is using this new technology. I am also really proud of how our team works together, and I’m really proud of how we architectured the application. We went through multiple redesigns, but they were very manageable, and we really built and designed such that we could pull out components and modify parts without much stress.

Alyssa: That we implemented good design practices. It is a lot more work on the front-end, but it helps so much not to have just snowflake code (a term used by developers to describe code that isn’t reusable or extremely unique to where it becomes a problem later on and adds weight to the program) floating. Each piece of code is there for a reason, it’s very modular.

What are your hopes for the future of SAS Analytics Cloud?

Alyssa: I hope that it continues to grow and that we add even more applications to this new container technology, so that SAS can move even more into the cloud arena. I hope it brings success. It is a really cool platform, so I can’t wait to hear about users and their success with it.

Mariah:
I agree with Alyssa. I also hope it is successful so that we keep moving into the Cloud with SAS.

Learning more

As a Developmental Editor with SAS Press, it was a new and engaging experience to get to work with such an innovative technology like SAS Analytics Cloud. I was happy I got to work with such an exciting team and I also look forward to what is next for SAS Analytics Cloud.

And as a SAS Press team member, I hope you check out the new way to trial SAS Machine Learning with SAS Analytics Cloud. And while you are learning SAS, check out some of our great books that can help you get started with SAS Studio, like Ron Cody’s Biostatistics by Example Using SAS® Studio and also explore Geoff Der and Brian Everitt’s Essential Statistics Using SAS® University Edition.

Already experienced but want to know more about how to integrate R and Python into SAS? Check out Kevin D. Smith’s blogs on R and Python with SAS Viya. Also take a moment to investigate our new books on using open source R and Python with SAS Viya: SAS Viya: The R Perspective by Yue Qi, Kevin D. Smith, and XingXing Meng and SAS Viya: The Python Perspective by Kevin D. Smith and XingXing Meng.

These great books can set you on the right path to learning SAS before you begin your jump into SAS Analytics Cloud, the new way to experience SAS.

SAS® Analytics Cloud—an interview with the women involved was published on SAS Users.

March 27, 2019
 

PAYG financial services: coming to a bank near you

You walk into your neighborhood bank to see about a mortgage. You and your spouse have your eye on the perfect 3BR, 2BA brick ranch near your child's school, and it won't be on the market long. An hour later, you burst through the front door with a bottle of champagne: "We're qualified!"

Also celebrating is your bank's branch manager. She was skeptical when headquarters analysts equipped branches for "Cloud-based application using SAS", saying it would speed up loan applications. But your quick, frictionless transaction proved them right. The bank's accountants are happy too. The new pay-as-you-go mode of using SAS software in the cloud means big savings.

The above scenario is possible now through serverless functions, which enable your SAS Viya applications to take input from end users, score the loan application, and return results.

The rest of this post gets into the nitty gritty of serverless functions and SAS Viya, detailing what happens in a bank's computers after a customer applies for a loan. The qualification process starts by running a previously built scoring model to generate a score. You will see how the combination of REST APIs in SAS Viya, analytic models and the restaf library make the task of building the serverless function relatively simple.

The blog titled "SAS REST APIs: a sample application" demonstrated building a SAS Viya application using REST APIs, SAS Visual Analytics and SAS Operational Research. That was a typical web application, with an application server and SAS Viya running on premises.

If you are one of many users using (or considering) a cloud provider, serverless functions are a useful alternative way to deliver your applications to your users. This eliminates the need to manage the application server associated with your application. Additionally, you get zero administration and auto-scaling, among other benefits. Many SAS applications that respond quickly to user requests are ideal candidates to be deployed as serverless functions.

The example in this article is available on SAS software’s GitHub site in the viya-apps-serverless-score repository.  If you want to see the end application for frame of reference, see the Using the serverless functions section at the bottom of this article.

Let’s begin with a bit of background on serverless computing and then dig into the details of the application and functions.

Serverless computing explained

The benefits of serverless functions as touted by AWS serverless, Azure and serverless.com:

AWS Lambda

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

What is serverless computing?

According to Azure, serverless computing is the abstraction of servers, infrastructure, and operating systems. When you build serverless apps you don’t need to provision and manage any servers, so you can take your mind off infrastructure concerns. Serverless computing is driven by the reaction to events and triggers happening in near-real-time – in the cloud. As a fully managed service, server management and capacity planning are invisible to the developer and billing is based just on resources consumed or the actual time your code is running.

Four core benefits of serverless computing from serverless.com:

  1. Zero administration – Deploy code without provisioning anything beforehand or managing anything afterward. There is no concept of a fleet, an instance, or even an operating system. No more bothering the Ops department.
  2. Auto-scaling – Let your service providers manage the scaling challenges. No need to fire alerts or write scripts to scale up and down. Handle quick bursts of traffic and weekend lulls the same way — with peace of mind.
  3. Pay-per-use – Function-as-a-service compute and managed services charged based on usage rather than pre-provisioned capacity. You can have complete resource utilization without paying a cent for idle time. The results? 90% cost-savings over a cloud VM, and the satisfaction of knowing that you never pay for resources you don’t use.
  4. Increased velocity – Shorten the loop between having an idea and deploying to production. Because there’s less to provision up front and less to manage after deployment, smaller teams can ship more features. It’s easier than ever to make your idea live.

OK, so there is a server involved in serverless computing. The beauty in this technology is that once you deploy your code, you don't have to worry about the underlying infrastructure. You just know that the app should work and you only incur costs when the app is running.

Basic flow

Serverless functions are loaded and executed based on the occurrence of one of the triggers/events supported by the cloud vendor. In this example the API Gateway triggers the serverless functions when an http call invokes the function. The API Gateway calls the handler for the function and passes in the user data. On return from the handler the response is sent to the client. This article focuses on the code inside the Serverless Function box in the picture below.

Figure 1: Request Workflow

This example utilizes two key functions:

  1. app – This function serves up an HTML application for the user to enter the data. This is an example of a web application as a serverless function.
  2. score – This function takes user input from the web app, executes scoring on a SAS Viya server and returns the results.

Serverless.yml

The serverless.yml file defines the serverless functions, their handlers, and other system-related information. We will focus only on the application-specific information.

The code snippet below shows the definition of the path and handler for the two functions in the serverless.yml file.

functions:
  app: 
    handler: src/app.app
    events:
      - http:
          path: app
          method: get
          cors: 
            origin: '*'
          request:
            parameters:
              paths:
                id: true  
 
  score:
    handler: src/score.score
    events:
      - http:
          path: score
          method: post
          cors: 
            origin: '*'

The functions (app and score) in the yaml define:

  1. event - an http event will trigger this function
  2. path - the path to the function - similar to what you define in Express or hapijs
  3. method - the standard HTTP method (GET, PUT, etc.)
  4. others - refer to the cloud vendor's documentation for other available options.

The serverless.yml file also sets application-related information using environment variables. In this particular use case we define how to access SAS Viya and which scoring model to use.

environment:
#
# Information for logging into SAS Viya
#
  VIYA_SERVER: http://example.viya.server.com
  CLIENTID: raf
  CLIENTSECRET: raf
  USER: rafuser
  PASSWORD: rafpass
 
#
# astore to be used for scoring
#
  ASTORE_CASLIB: casuser
  ASTORE_NAME: GRADIENT_BOOSTING___BAD_2

A note on securing your password

In this example we store the userid and password in environment variables. This is to keep the focus on the internals of serverless functions for SAS Viya. Locally, you can use "serverless variables" to secure the information during development, as sketched below. However, for production deployment, refer to your provider's recommendations and the user community for best practices.
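For example, the Serverless Framework can resolve values from a provider's parameter store at deploy time. A minimal sketch, assuming AWS SSM and hypothetical parameter names:

# serverless.yml - resolve secrets at deploy time instead of hardcoding them
# (the SSM parameter names below are hypothetical)
environment:
  USER: ${ssm:/viya/demo/user}
  PASSWORD: ${ssm:/viya/demo/password}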

Sounds like a followup blog in the future 🙂

Anatomy of the serverless function

Figure 2 shows the flow inside the serverless function for this example. This pattern will repeat itself in your serverless functions.

Figure 2: Serverless Function Flow

Serverless function score

The code below is the handler for the score function. The rest of this section will discuss each of the key features of the handler.

//
// See src/score.js for the full code
//
let restaf = require('restaf');

module.exports.score = async function (event, context) {

   let store   = restaf.initStore();  /* initialize restaf      */
   let inParms = parseEvent(event);   /* get user input         */
   let payload = getLogonPayload();   /* get logon information  */

   return store.logon(payload)                     /* logon to SAS Viya  */
        .then(()     => scoreMain(store, inParms)) /* score              */
        .then(result => setPayload(result))        /* return results     */
        .catch(err   => setError(err));            /* else return errors */
}

Step 1: Parse the input

The event parameter contains the input from the caller (web application, another serverless function, etc).
The content of the event parameter is whatever the designer of the serverless function desires. In this particular case, sample event data is shown below.

{
    "input": {
        "JOB"    : "J1",
        "CLAGE"  : 100,
        "CLNO"   : 20,
        "DEBTINC": 20,
        "DELINQ" : 2,
        "DEROG"  : 0,
        "MORTDUE": 4000,
        "NINQ"   : 1,
        "YOJ"    : 10,
        "LOAN"   : 10000,
        "VALUE"  : 1000000
    }
}

The parseEvent function validates the incoming information.

module.exports = function parseEvent(event) {
    let input = null;
    let body = {};
    let rstore = {
        caslib: process.env.ASTORE_CASLIB,
        name  : process.env.ASTORE_NAME
    };
    if (event.body != null) {
        body = (typeof event.body === 'string') ? JSON.parse(event.body) : Object.assign({}, event.body);
        if (body.hasOwnProperty('input') === true) {
            input = body.input;
        }
    }
    return { rstore: rstore, input: input };
}

Step 2: Logon to SAS Viya

The serverless.yml file defines the SAS Viya logon information. Note there are other, more secure ways to manage sensitive information like passwords; refer to your provider’s documentation.

module.exports = function getLogonPayload() {
    let p = {
        authType    : 'password',
        host        : `${process.env.VIYA_SERVER}`,
        user        : process.env['USER'],
        password    : process.env['PASSWORD'],
        clientID    : process.env['CLIENTID'],
        clientSecret: (process.env.hasOwnProperty('CLIENTSECRET')) ? process.env[ 'CLIENTSECRET' ] : ''
        };
    return p;
 }

The line store.logon(payload) in the handler code logs on to the SAS Viya server using this information.

Step 3 and Step 4: Create Payload and make REST API calls

On successful logon, the server is called to do the scoring. This particular example uses the sccasl.runcasl action to run CAS Language (CASL) statements and return the scores. Creating the score has two steps:

  1. upload user input: The user input is converted to a csv and uploaded to a CAS table
  2. Submit CASL statements to SAS Viya (CAS) to do the scoring

The code in src/scoreMain in the repository accomplishes both these steps.

Each of these steps use a CAS action:

    • table.upload – to upload the user data into a CAS table. The input data is converted into a comma-delimited file (csv) and then uploaded. The REST call using restaf looks like this:
    let csv = makecsv(input); /* create a csv */
    let JSON_Parameters = {
        casout: {
            caslib : 'casuser', /* a valid caslib */
            name   : 'INPUTDATA', /* name of output file on cas server */
            replace: true
        },
 
        importOptions: {
            fileType: 'csv' /* type of the file being uploaded */
        }
    };
 
    let payload = {
        headers: { 'JSON-Parameters': JSON_Parameters },
        data   : csv,
        action : 'table.upload'
    };
 
    let result = await store.runAction(session, payload);
    • sccasl.runcasl – execute CASL statements to do the scoring
 // Setup casl statements
 let caslStatements = `
 loadactionset "astore";
 action table.loadTable /
    caslib = "${rstore.caslib}"
    path   = "${rstore.name}.sashdat"
    casout = { caslib = "${rstore.caslib}", name = "${rstore.name}", replace = TRUE };

 action astore.score /
    table  = { caslib = 'casuser', name = 'INPUTDATA' }
    rstore = { caslib = "${rstore.caslib}", name = '${rstore.name}' }
    out    = { caslib = 'casuser', name = 'OUTPUTDATA', replace = TRUE };

 action table.fetch r = result /
    format = TRUE
    table  = { caslib = 'casuser', name = 'OUTPUTDATA' };

 send_response(result);
 `;
 // execute cas actions
 payload = {
    action: 'sccasl.runcasl',
    data  : { code: caslStatements }
 };
 result = await store.runAction(session, payload);

Step 5: Create response

AWS serverless functions must return data and error(s) in a certain form. The two functions setPayload.js and setError.js accomplish this.

module.exports = function setPayload (body) {
    return {
        "statusCode": 200,
        "headers"   : {
            'Access-Control-Allow-Origin'     : '*',
            'Access-Control-Allow-Credentials': true
          },
        "isBase64Encoded": false,
        "body"           : JSON.stringify(body)
    }
  }

Using the serverless functions

When the serverless functions are deployed, you will get a link for each of the functions. In our case we received the links shown below (with xxxx replaced with appropriate information).

GET - https://xxxx.amazonaws.com/demo/app

The first link serves up the web application. The user enters some values and the app calls the score serverless function to get the results.
Alternatively, you can write your own application and make an http POST call to the score function using a link such as:

POST - https://xxxx.amazonaws.com/demo/score

To invoke the web application, you will visit the link

https://xxxx.amazonaws.com/demo/app

with your browser. You should see a display shown in Figure 3:

Figure 3: Application Input Screen

Entering values into the two fields and pressing Submit calls the second serverless function, score, and results in a pie chart as seen in Figure 4:

Figure 4: Score Report Screen

Please see the loan.html file in the GitHub repository for details on the application. Displayed below is the relevant part of the JavaScript in loan.html. The score-function-url placeholder is the URL for the score function. The payload was described earlier in this article. The HTTP call is made using axios.

async function runScore(inputValues ){
 
    let payload = {
        astore: {
            caslib: 'Public',
            name: 'GRADIENT_BOOSTING___BAD_2'
        },
        input: inputValues
    }
    let config = {
        url   : '{score-function-url}',  // placeholder - the deployed score function URL
        method: 'POST',
        data  : payload
    }
    let r = await axios(config);
    return r.data.score;
 
}

Porting to other cloud providers

The cloud-provider-dependent information is handled in the following functions: score.js, parseEvent.js, setPayload.js and setError.js. The rest of the code is host-agnostic. When writing your own functions, the recommendation is to follow the same pattern as much as possible. The generic code is then available in its own repository for reuse with other providers and applications.

Go try it yourself

I have shown you how to deliver your SAS Viya applications as serverless functions. To access more examples please see the GitHub restaf-demos repository.

Supporting Resources

Serverless functions and SAS Viya - a good match was published on SAS Users.

February 12, 2019
 

Multi-tenancy is one of the exciting new capabilities of SAS Viya. Because it is so new, there is quite a lot of misinformation going around about it. I would like to offer you five key things to know about multi-tenancy before implementing a project using this new paradigm.

All tenants share one SAS Viya deployment

Just as apartment units exist within a larger, common building, all tenants, including the provider, exist within one, single SAS Viya deployment. Tenants share some SAS Viya resources such as the physical machines, most microservices, and possibly the SAS Infrastructure Data Server. Other SAS Viya resources are duplicated per tenant such as the CAS server and compute launcher. Regardless, the key point here is that because there is one SAS Viya deployment, there is one, and only one, SAS license that applies to all tenants. Adding a new tenant to a multi-tenant deployment could have licensing ramifications depending upon how the CAS server resources are allocated.

Decision to use multi-tenancy must be made at deployment time

Many people, myself included, are not very comfortable with commitment. Making a decision that cannot be changed is something we avoid. Deciding whether your SAS Viya deployment supports multi-tenancy cannot be put off for later.

This decision must be made at the time the software is deployed. There is currently no way to convert a multi-tenant deployment to a single-tenant deployment or vice versa short of redeployment, so choose wisely. As with marriage, the decision to go single-tenant or multi-tenant should not be taken lightly and there are benefits to each configuration that should be considered.

Each tenant is accessed by separate login

Let’s return to our apartment analogy. Just as each apartment owner has a separate key that opens only the apartment unit they lease, SAS Viya requires users to log on (authenticate) to a specific tenant space before allowing them access.

SAS Viya facilitates this by accessing each tenant by way of a separate sub-domain address. As shown in the diagram below, a user wishing to use the Acme tenant must access the deployment with a URL of acme.viya.sas.com while a GELCorp user would use a URL of gelcorp.viya.sas.com.

This helps create total separation of tenant access and allows administrators to define and restrict user access for each tenant. It does, however, mean that each tenant space is authenticated individually and there is no notion of single sign-on between tenants.

No content is visible between tenants

You will notice in both images above that there are brick walls between each of the tenants. This is to illustrate how tenants are completely separated from one another. One tenant cannot see any other tenant’s content, data, users, groups or even that other tenants exist in the system.

One common scenario for multi-tenancy is to keep business units within a single corporation separated. For example, we could set up Sales as a tenant, Finance as a tenant, and Human Resources as a tenant. This works very well if we want to truly segregate the departments' work. But what happens when Sales wants to share a report with Finance or Finance wants to publish a report for the entire company to view?

There are two options for this situation:
• We could export content from one tenant and import it into the other tenant(s). For example, we would export a report from the Sales tenant and import it into the Finance tenant, assuming that data the report needs is available to both. But now we have the report (and data) in two places and if Sales updates the report we must repeat the export/import process.
• We could set up a separate tenant at the company level for shared content. Because identities are not shared between tenants, this would require users to log off the departmental tenant and log on to the corporate tenant to see shared reports.

There are pros and cons to using multi-tenancy for departmental separation and the user experience must be considered.

Higher administrative burden

Managing and maintaining a multi-tenancy deployment is more complex than taking care of a single-tenant deployment. Multi-tenancy requires additional CAS servers, additional micro-services, possibly additional machines, and multiple administrative personas. The additional resources can complicate backup strategies, authorization models, operating system security, and resource management of shared resources.

There are also more levels of administration, which requires an administrator persona for the provider of the environment and separate administrator personas for each tenant. Each of these administration personas has a different scope regarding which aspects of the entire deployment they can interact with. For example, the provider administrator can see all system resources, all system activity, logs and tenants, but cannot see any tenant content.

Tenant administrators can only see and interact with dedicated tenant resources such as their CAS server and can also manage all tenant content. They cannot, however, see system resources, other tenants, or logs.

Therefore, coordinating management of a complete multi-tenant deployment will require multiple administration personas, careful design of operating system group membership to protect and maintain sufficient access to files and processes, and possibly multiple logins to accomplish administrative tasks.

Now what?

I have pointed out a handful of key concepts that differ between the usual single-tenant deployments and what you can expect with a multi-tenant deployment of SAS Viya. I am obviously just scratching the surface on these topics. Here are a couple of other resources to check out if you want to dig in further.

Documentation: Multi-tenancy: Concepts
Article: Get ready! SAS Viya 3.4 highlights for the Technical Architect

5 things to know about multi-tenancy was published on SAS Users.

November 14, 2018
 

Prior to SAS Viya

With the creation of SAS Viya, the ability to run DATA Step code in a distributed manner became a reality. Before distributed DATA Step, DATA Step programmers never had to think about achieving repeatable results when SAS7BDAT datasets were the source for DATA Step code containing a BY statement. This is because prior to SAS Cloud Analytic Services (CAS), DATA Step ran single-threaded and the source SAS7BDAT dataset was stored on disk. Every time we ran the code we obtained repeatable results because the sequence of rows within each BY group was preserved between runs. To illustrate this, review Figures 1, 2, and 3.

Figure 1 is the source SAS7BDAT dataset WORK.TEST1. Notice the sequence of VAR2, especially on row 1 and 4 (i.e., _N_ =1 and 4).

_n_   VAR1   VAR2
  1      1    N
  2      1    Y
  3      1    Y
  4      2    Y
  5      2    Y
  6      2    N


Figure 1. WORK.TEST1 the original SAS7BDAT dataset

In figure 2, we see a BY statement with variable VAR1. This will ensure VAR1 is in ascending order. We are also using FIRST. processing to identify the first occurrence of the BY group. Because this data is stored on disk and because the DATA Step is executed using a single thread, the result table will be repeatable no matter how many times we run the DATA Step code.

Figure 2. Focus on the IF statement, especially VAR2
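The code in Figure 2 appears as an image in the original post. A minimal sketch consistent with Figures 1 and 3 might look like this (the exact IF condition is an assumption inferred from the caption and the output):

data work.test2;
   set work.test1;
   by var1;
   /* keep the first row of each BY group only when VAR2='N' */
   /* (assumed condition - it reproduces Figure 3 from Figure 1) */
   if first.var1 and var2='N';
run;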

In figure 3, we see the output SAS7BDAT dataset WORK.TEST2.

_n_   VAR1   VAR2
  1      1    N

Figure 3. WORK.TEST2 result dataset from running the code in Figure 2

In figure 4, we are running the same DATA Step but this time our source and target tables are CAS tables. The source table CASLIB.TEST1 was created by lifting the original SAS7BDAT dataset WORK.TEST1 (review figure 1) into CAS.

Figure 4. DATA Step executing in CAS
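Again, the code appears as an image in the original post. A sketch of the same step pointed at CAS tables, assuming a CAS session and a CAS engine libref named CASLIB:

data caslib.test2;
   set caslib.test1;
   by var1;
   if first.var1 and var2='N';   /* assumed condition, as in Figure 2 */
run;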

In figure 5, we see that the DATA Step logic is being respected in runs 1, 2 and 3; but we are not achieving repeatable results. This is due to CAS running on multiple threads. Note that the BY statement – which will group the data correctly for each BY group – is done on the fly. Also, the BY statement will not preserve the sequence of rows within the BY group between runs.

For some processes, this is not a concern but for others it could be. If you need to obtain repeatable results in DATA Step code that runs distributed in CAS as well as match your SAS 9 single-threaded DATA Step results, I suggest the following workaround be used.

Figure 5. DATA Step logic is respected but yields different results with each run

With SAS Viya

The workaround is simple to understand and implement. For each SAS7BDAT dataset being lifted into a CAS table (see Figure 6), we need to add a new variable, ROW_ID.

_n_   VAR1   VAR2
  1      1    N
  2      1    Y
  3      1    Y
  4      2    Y
  5      2    Y
  6      2    N

Figure 6. Original SAS7BDAT dataset source WORK.TEST1

To accomplish this, we will leverage the automatic variable _N_ that is available to all DATA Step programmers. _N_ is initially set to 1. Each time the DATA step loops past the DATA statement, the variable _N_ increments by 1. The value of _N_ represents the number of times the DATA step has iterated. In our case, the value for each row is the row sequence in the original SAS7BDAT dataset. Figure 7 contains the SAS code we ran on the SAS 9.4M5 workspace server or the SAS Viya compute server to add the new variable ROW_ID.

 

Figure 7. Creating the new variable ROW_ID
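The Figure 7 code is an image in the original post; a minimal sketch that uses _N_ to record the row sequence might be:

data work.test1;
   set work.test1;
   row_id = _n_;   /* row sequence in the original dataset */
run;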

By reviewing figure 8 we can see the new variable ROW_ID in the SAS7BDAT dataset WORK.TEST1. Now that we have the new variable, we are ready to lift this dataset into CAS.

_N_   VAR1   VAR2   ROW_ID
  1      1    N          1
  2      1    Y          2
  3      1    Y          3
  4      2    Y          4
  5      2    Y          5
  6      2    N          6

Figure 8. WORK.TEST1 with the new variable ROW_ID

There are many ways to lift a SAS7BDAT dataset into CAS. One way is to use a DATA Step like we did in figure 9.

Figure 9. DATA Step code to create distributed CAS table CASLIB.TEST1 
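The Figure 9 code is an image in the original post; a sketch, again assuming a CAS engine libref named CASLIB:

data caslib.test1;
   set work.test1;
run;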

To obtain the repeatable results, we need to control the sequence of rows within each BY group. We accomplish this by adding the new variable ROW_ID as the last variable to the BY statement in our DATA Step code, see figure 10.

Figure 10. Add ROW_ID as last variable of the BY group
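The Figure 10 code is an image in the original post; a sketch consistent with Figure 11 (the IF condition is the same assumption as before):

data caslib.test2;
   set caslib.test1;
   by var1 row_id;               /* ROW_ID is the last BY variable */
   if first.var1 and var2='N';
run;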

Figure 11 shows us the output CAS table created by the code in figure 10. By adding the new variable ROW_ID and using that variable as the last variable of the BY statement, we are controlling the sequencing of rows within the BY groups for all 3 runs.

VAR1   VAR2   ROW_ID
   1    N          1

Figure 11. Distributed CAS table CASLIB.TEST2

Conclusion

With distributed DATA Step comes great opportunities to improve runtimes. It also means we need to understand differences between single-threaded processing of SAS7BDAT datasets that are stored on disk and distributed processing of CAS tables store in-memory. To help you with that journey I suggest you read the SAS Global Forum paper, Parallel Programming with the DATA Step: Next Steps.

How to achieve repeatable results with distributed DATA Step BY Groups was published on SAS Users.

November 13, 2018
 

In my previous blog post I demonstrated how to create your own CAS actions and action sets.  In this post, we will explore how to create your own CAS functions using the CAS Language (CASL).  A function is a component of the CASL programming language that can accept arguments, perform a computation or other operation, and return a value.  The value that is returned can be used in an assignment statement or elsewhere in expressions.

About SAS functions

SAS provides two types of supplied functions: built-in functions and common functions.  Built-in functions contain functionality that is unique to CASL.  These allow you to perform operations on your result tables, arrays, and dictionaries, and provide run-time support for your CASL programs.  Built-in functions cannot be replaced with user-defined functions.

Conversely, common functions provide functionality that is common to other SAS functions.  When used in a CASL program, SAS functions take a CASL value and a CASL value is returned.  Unlike built-in functions, you can replace these functions with user-defined functions.

Since the capabilities of built-in functions are unique to CASL, let’s look at these in-depth and demonstrate with an example.  Save your FedSQL code in an external file called hmeqsql.sas.  This code will be read into CAS and stored as a variable.

The execDirect action executes FedSQL code in CAS.  The READPATH built-in function reads the FedSQL code saved in hmeqsql.sas and stores it in the CASL variable sqlcode which is used as input to the query parameter.

The fetch action displays the first 20 rows from the output table hmeq.out.
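The code images are not reproduced here; the following is a sketch of the CASL program described, assuming the file lives at /mydir/hmeqsql.sas and that hmeq.out refers to table OUT in caslib HMEQ:

   proc cas;
      /* READPATH reads the external file into the CASL variable sqlcode */
      sqlcode = readpath('/mydir/hmeqsql.sas');
      /* execDirect runs the FedSQL code in CAS */
      fedSql.execDirect / query=sqlcode;
      /* fetch displays the first 20 rows of the output table */
      table.fetch / table={name='out', caslib='hmeq'}, to=20;
   quit;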

If you don’t feel like looking through the documentation for a built-in or common function, a list of each can be generated programmatically.  Run the following code to see a list of built-in functions.

Partial list of CASL built-in functions

Run the following code to see a list of common functions.

Partial list of common functions

User-defined CASL functions

In addition to the customizable capabilities of built-in functions supplied by SAS, you can also create your own functions using the FUNCTION statement.  User-defined functions can be called in expressions using CASL and they provide a large amount of flexibility.  The following example creates four different functions for temperature conversion.
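The original code is not shown here; below is a minimal sketch of what four such FUNCTION definitions could look like (the specific conversions are assumptions based on the description):

   proc cas;
      function c2f(c);      /* Celsius to Fahrenheit */
         return c * 9/5 + 32;
      end;
      function f2c(f);      /* Fahrenheit to Celsius */
         return (f - 32) * 5/9;
      end;
      function c2k(c);      /* Celsius to Kelvin */
         return c + 273.15;
      end;
      function k2c(k);      /* Kelvin to Celsius */
         return k - 273.15;
      end;
      print c2f(100);       /* prints 212 */
      print f2c(212);       /* prints 100 */
   quit;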

After creating these functions, they can be called immediately, or you can store them in an external file and call them via a %include statement.  In this example, the user-defined functions have been stored in an external file called FunctionStore.sas.  You can call one, all, or any number of your user-defined functions.
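Assuming the definitions above were saved to FunctionStore.sas, the calls might look like this (the path is hypothetical):

   proc cas;
      %include '/mydir/FunctionStore.sas';
      print c2k(25);
      print k2c(300);
   quit;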

The output from each function call is displayed in the log.

Lastly, if you want to see all user-defined functions, run the FUNCTIONLIST statement.  A list will be printed to the log.

More about CASL programming and using functions in CASL

Check out these resources for further information on programming in the CASL language and using functions in CASL.

Customize your CASL code with built-in and user-defined functions was published on SAS Users.

11月 062018
 

This post was also written by SAS' Xiangxiang Meng.

With CAS you can communicate with various clients (SAS, Python, Lua, Java, and REST) in the same place, and if you are familiar with the Pandas Data Analysis Library, CAS actions should come naturally. CAS enables you to subset tables using Python expressions: you create conditions that are based on the data in the table rather than specifying values yourself, and SAS® uses those conditions to determine which rows to select.

For example, rather than using fixed values of rows and columns to select data, SAS can create conditions based on the data in the table to determine which rows to select. This is done using the same syntax as DataFrames. CASColumn objects support Python’s various comparison operators and build a filter that subsets the rows in the table. You can then use the result of that comparison to index into a CASTable. It sounds more complicated than it is, so let’s look at an example.

The examples below are from the Iris flower data set, which is available in the SASHELP library, in all distributions of SAS. The listed code and output are produced using the IPython interface but can be employed with Jupyter Notebook just as easily.

If we want to get a CASTable that only contains values where petal_length is greater than 7, we can use the following expression to create our filter.


Behind the scenes, this expression creates a computed column that is used in a WHERE expression on the CASTable. This expression can then be used as an index value for a CASTable. Indexing this way essentially creates a boolean mask. Wherever the expression values are true, the rows of the table are returned. Wherever the expression is false, the rows are filtered out.

These two steps are more commonly done in one line.


We can further filter rows out by indexing another comparison.

Comparisons can be joined using the bitwise comparison operators & (and) and | (or). You do have to be careful with these though due to the operator precedence. Bitwise comparison has a higher precedence than comparisons such as greater-than and less-than, so you need to wrap your comparisons in parentheses.


In all cases, we are not changing anything about the underlying data in CAS. We are simply constructing a query that is executed with the CASTable when it is used as the parameter in a CAS action. You can see what is happening behind the scenes by displaying the resulting CASTable objects.
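The object displays from the original post are not reproduced here, but the key point is that the filter travels with the table as a WHERE clause. As a rough illustration in CASL terms of the request the Python client builds (this is not the post's Python code; table and column names follow the example and are assumptions):

   proc cas;
      /* the equivalent of indexing the CASTable with petal_length > 7 */
      table.fetch / table={name='iris', where='petal_length > 7'}, to=5;
   quit;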


You can also do mathematical operations on columns with constants or other columns within your comparisons.

The list of supported operations is shown in the table below.

The supported comparison operators are shown in the following table.

As you can see in the tables above, it is possible to do comparisons on character columns as well. This includes using many of Python’s string methods on the column values. These are accessed using the str attribute of the column, just like in DataFrames.

This easy syntax allows the Python client to manipulate data much more easily when working in SAS Viya.

Another great tip? The Python client allows you to manipulate data on the fly, without moving or copying the data to another location. Creating computed columns speeds up the wrangling of data, while giving you options for how you want to get there.

Want to learn more great tips about integrating Python with SAS Viya? Check out Kevin Smith and Xiangxiang Meng’s SAS Viya: The Python Perspective to learn how Python can be integrated into SAS® Viya® and help you manipulate data with ease.

Great tip for dynamic data selection using SAS Viya and Python was published on SAS Users.

11月 032018
 

When you begin to work within the SAS Viya ecosystem, you learn that the central piece is SAS Cloud Analytic Services (CAS). CAS allows all clients in the SAS Viya ecosystem to communicate and run analytic methods. The great part about SAS Viya is that the R client can drive CAS directly using familiar objects and constructs for R programmers.

The SAS Scripting Wrapper for Analytics Transfer (SWAT) package is an R interface to CAS. With this package, you can load data into memory and apply CAS actions to transform, summarize, model and score the data. You can still retain the ease-of-use of R on the client side to further post process CAS result tables.

But before you can do any analysis in CAS, you need some data to work with and a way to get to it. There are two data access components in CAS:

  1. Caslibs, definitions that give access to a resource that contains data.
  2. CASTables, for analyzing data from a caslib resource. You load the data into a CASTable, which contains information about the data in the columns.

Other references you may find of interest include this GitHub repository where you can find more information on installing and configuring CAS and SWAT. Also available is this article on using RStudio with SAS Viya.

The following excerpt from SAS® Viya®: The R Perspective, the book I co-authored with my SAS colleague Xiangxiang Meng, demonstrates the way the R client in SAS Viya allows you to select data with precision. The examples come from the iris flower data set, which is available in the SASHELP library, in all distributions of SAS. The CASTable object sorttbl is sorted by the Sepal.Width column.

Rather than using fixed values of rows and columns to select data, we can create conditions that are based on the data in the table to determine which rows to select. The specification of conditions is done using the same syntax as that used by data.frame objects. CASTable objects support R’s various comparison operators and build a filter that subsets the rows in the table. You can then use the result of that comparison to index into a CASTable. It sounds more complicated than it is, so let’s look at an example.

The filter expression (per the one-line form shown below, it is sorttbl$Petal.Length > 6.5) creates a computed column that is used in a where expression on the CASTable. This expression can then be used as an index value for a CASTable. Indexing this way essentially creates a Boolean mask. Wherever the expression values are true, the rows of the table are returned. Wherever the expression is false, the rows are filtered out.

> expr <- sorttbl$Petal.Length > 6.5
> newtbl <- sorttbl[expr,] 
> head(newtbl) 
 
  Sepal.Length Sepal.Width Petal.Length Petal.Width   Species 
1          7.7         2.6          6.9         2.3 virginica 
2          7.7         2.8          6.7         2.0 virginica 
3          7.6         3.0          6.6         2.1 virginica 
4          7.7         3.8          6.7         2.2 virginica

These two steps are commonly entered on one line.

> newtbl <- sorttbl[sorttbl$Petal.Length > 6.5,]
> head(newtbl) 
 
  Sepal.Length Sepal.Width Petal.Length Petal.Width   Species 
1          7.7         2.6          6.9         2.3 virginica 
2          7.7         2.8          6.7         2.0 virginica 
3          7.6         3.0          6.6         2.1 virginica 
4          7.7         3.8          6.7         2.2 virginica

We can further filter rows out by indexing another comparison expression.

> newtbl2 <- newtbl[newtbl$Petal.Width < 2.2,] 
> head(newtbl2) 
 
  Sepal.Length Sepal.Width Petal.Length Petal.Width   Species 
1          7.7         2.8          6.7         2.0 virginica 
2          7.6         3.0          6.6         2.1 virginica

Comparisons can be joined using the bitwise comparison operators & (and) and | (or). You must be careful with these operators, though, due to operator precedence. In R, bitwise comparison has a lower precedence than comparisons such as greater-than and less-than, but it is still safer to enclose your comparisons in parentheses.

> newtbl3 <- sorttbl[(sorttbl$Petal.Length > 6.5) & (sorttbl$Petal.Width < 2.2),] 
> head(newtbl3) 
 
  Sepal.Length Sepal.Width Petal.Length Petal.Width   Species 
1          7.7         2.8          6.7         2.0 virginica 
2          7.6         3.0          6.6         2.1 virginica

In all cases, we are not changing anything about the underlying data in CAS. We are simply constructing a query that is executed with the CASTable when it is used as the parameter in a CAS action. You can see what is happening behind the scenes by displaying the attributes of the resulting CASTable objects.

> attributes(newtbl3) 
 
$conn 
CAS(hostname=server-name.mycompany.com, port=8777, username=username, session=11ed56e2-f9dd-9346-8d01-44a496e68880, protocol=http) 
 
$tname
[1] "iris" 
 
$caslib 
[1] "" 
 
$where 
[1] "((\"Petal.Length\"n > 6.5) AND (\"Petal.Width\"n < 2.2))" 
 
$orderby 
[1] "Sepal.Width" 
 
$groupby 
[1] "" 
 
$gbmode 
[1] "" 
 
$computedOnDemand 
[1] FALSE 
 
$computedVars 
[1] "" 
 
$computedVarsProgram 
[1] "" 
 
$XcomputedVarsProgram 
[1] "" 
 
$XcomputedVars 
[1] "" 
 
$names 
[1] "Sepal.Length" "Sepal.Width"  "Petal.Length" "Petal.Width"  
[5] "Species"      
 
$class 
[1] "CASTable" 
attr(,"package") 
[1] "swat"

You can also do mathematical operations on columns with constants or on other columns within your comparisons.

> iris[(iris$Petal.Length + iris$Petal.Width) * 2 > 17.5,] 
 
    Sepal.Length Sepal.Width Petal.Length Petal.Width   Species 
118          7.7         3.8          6.7         2.2 virginica 
119          7.7         2.6          6.9         2.3 virginica

The list of supported operators is shown in the following table:

Operator                 Numeric Data   Character Data
+ (add)                  ✔
- (subtract)             ✔
* (multiply)             ✔
/ (divide)               ✔
%% (modulo)              ✔
%/% (integer division)   ✔
^ (power)                ✔

The supported comparison operators are shown in the following table.

Operator                       Numeric Data   Character Data
== (equality)                  ✔              ✔
!= (inequality)                ✔              ✔
< (less than)                  ✔              ✔
> (greater than)               ✔              ✔
<= (less than or equal to)     ✔              ✔
>= (greater than or equal to)  ✔              ✔

 

As you can see in the preceding tables, you can do comparisons on character columns as well. In the following example, all of the rows in which Species is equal to "virginica" are selected and saved to a new CASTable object virginica. Note that in this case, data is still not duplicated.

> tbl <- defCasTable(conn, 'iris') 
> virginica <- tbl[tbl$Species == 'virginica',] 
> dim(virginica) 
 
[1] 50  5 
 
> head(virginica) 
 
  Sepal.Length Sepal.Width Petal.Length Petal.Width   Species 
1          7.7         3.0          6.1         2.3 virginica 
2          6.3         3.4          5.6         2.4 virginica 
3          6.4         3.1          5.5         1.8 virginica 
4          6.0         3.0          4.8         1.8 virginica 
5          6.9         3.1          5.4         2.1 virginica 
6          6.7         3.1          5.6         2.4 virginica

It’s easy to create powerful filters that are executed in CAS while still using the R syntax. However, the similarities to data.frames don’t end there. CASTable objects can also create computed columns and BY groups using similar techniques.

Want to learn more? Get your copy of SAS Viya: The R Perspective

How to use SAS® Viya® and R for dynamic data selection was published on SAS Users.

10月 292018
 

CASL is a language specification that can be used by the SAS client to interact with and provide easy access to Cloud Analytic Services (CAS).  CASL is a statement-based scripting language with many uses and strengths including:

  • Specifying CAS actions to submit requests to the CAS server to perform work and return results.
  • Evaluating and manipulating the results returned by an action.
  • Creating user-defined actions and functions and creating the arguments to an action.
  • Developing analytic pipelines.

You run CASL through PROC CAS, which enables you to program and execute CAS actions from the SAS client and use the results of one action to prepare the parameters for a subsequent action.  A single PROC CAS statement can contain several CASL programs.  With the CAS procedure you can run any CAS action supported by the server, load new action sets into the server, use multiple sessions to perform asynchronous execution, and operate on parameters and results as variables using the function expression parser.

CASL and the CAS actions provide the most control, flexibility, and options when interacting with CAS.  You can combine the DATA Step, CAS-enabled PROCs, and CASL for optimal flexibility and control, and CASL works well with traditional SAS interfaces and the Base SAS language.

Each CAS action belongs to an action set, and each action set is further categorized by product (for example, VA, VS, and VDMML).  In addition to the many CAS actions supplied by SAS, as of SAS® Viya™ 3.4 you can create your own actions using CASL.  Developing and using your own CAS actions lets you further customize your code and work with CAS in the manner that best suits you and your organization.

About user-defined action sets

A user-defined action set is a CASL program that is stored on the CAS server for processing.  Because the action set is stored on the CAS server, the CASL statements can be written once and executed by many users, which can reduce the need to exchange files of common code.  Note that you cannot add, remove, or modify a single user-defined action; you must redefine the entire action set.

Before creating any user-defined actions, test your routines and functions first to ensure they execute successfully in CAS when submitted from the programming client.  To create user-defined actions, use the defineActionSet action in the builtins action set and add your code.  You also need to modify your code to use CASL functions such as SEND_RESPONSE, so the resulting objects on the server are returned to the client.

Developing new actions by combining SAS-provided CAS actions

One method for creating user-defined CAS actions is to combine one or more SAS-provided CAS actions into a user-defined CAS action.  This allows you to execute just one PROC CAS statement and call all of your user-defined CAS actions.  This is beneficial if you repeatedly run many of the same actions against a CAS table.  An example of this is shown below. If you would like a copy of the actual code, feel free to leave a reply below.

In this example, four user-defined CAS actions named listTableInfo, simplefreq, detailfreq, and corr have been created by using the corresponding SAS-provided CAS actions tableInfo, freq, freqTab, and correlation.  These four actions return information about a CAS table, simple frequency information, detailed frequency and tabulate information, and Pearson correlation coefficients respectively.  These four actions are now part of the newly created user-defined action set myActionSet.  When this code is executed, the log will display a note that the new action set has been added.
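The actual code is available from the author on request; purely as a hedged sketch, registering one such action with builtins.defineActionSet might look like this, with the action wrapping a SAS-provided action and returning its result with SEND_RESPONSE (all names and parameters below are illustrative):

   proc cas;
      builtins.defineActionSet /
         name='myActionSet'
         actions={
            {
               name='listTableInfo',
               desc='Show basic information about a CAS table',
               parms={{name='tbl', type='string'}},
               definition='
                  tableInfo result=r / name=tbl;
                  send_response(r);
               '
            }
         };
   quit;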

Once the new action set and actions have been created, you can call all four or any combination of them via a PROC CAS statement.  Specify the user-defined action set, user-defined action(s), and parameters for each.
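For example, calling one of the registered actions then looks like calling any other CAS action (the table name here is hypothetical):

   proc cas;
      myActionSet.listTableInfo / tbl='hmeq';   /* call one, or list several actions */
   quit;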

Developing new actions by writing your own code

Another way to create user-defined CAS actions is to apply user-defined code, functions, and statements instead of SAS-provided CAS actions.

In this example, two user-defined CAS actions have been created, bdayPct and sos.  These actions belong to the new user-defined action set myFunctionSet.

To call one or both actions, specify the user-defined action set, user-defined action(s), and parameters for each.

The results for each action are shown in the log.

Save and load custom actions across CAS sessions

User-defined action sets only exist in the current CAS session.  If the current CAS session is terminated, the program to create the user-defined action set must be executed again unless an in-memory table is created from the action set and the in-memory table is subsequently persisted to a SASHDAT file.  Note: SASHDAT files can only be saved to path-based caslibs such as Path, DNFS, HDFS, etc.  To create an in-memory table and persist it to a SASHDAT file, use the actionSetToTable and save CAS actions.
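A sketch of that sequence, assuming a path-based caslib named MYCAS (all names are assumptions):

   proc cas;
      /* turn the in-memory action set definition into a CAS table */
      builtins.actionSetToTable / actionSet='myActionSet'
                                  casOut={name='myActionSet', replace=true};
      /* persist that table to a SASHDAT file in the path-based caslib */
      table.save / table={name='myActionSet'} name='myActionSet.sashdat'
                   caslib='mycas' replace=true;
   quit;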

To use the user-defined action set, it needs to be restored from the saved SASHDAT file.  This is done with the actionSetFromTable action.
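Continuing the sketch, the saved file is loaded back into memory and the action set is re-created from the table:

   proc cas;
      table.loadTable / path='myActionSet.sashdat' caslib='mycas'
                        casOut={name='myActionSet', replace=true};
      builtins.actionSetFromTable / table={name='myActionSet'}
                                    name='myActionSet';
   quit;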

More about CASL programming and CAS actions

Check out these resources for further information on programming in the CASL language and running actions with CASL.

How to use CASL to develop and work with user-defined CAS actions was published on SAS Users.