
November 20, 2020
 

If you’re like me and the rest of the conference team, you’ve probably attended more virtual events this year than you ever thought possible. You can see the general evolution of virtual events by watching the early ones from April or May and comparing them to the recent ones. We at SAS Global Forum are studying the virtual event world, and we’re learning what works and what needs to be tweaked. We’re using that knowledge to plan the best possible virtual SAS Global Forum 2021.

Everything is virtual these days, so what do we mean by virtual?

Planning a good virtual event takes time, and we’re working through the process now. One thing is certain -- we know the importance of providing quality content and an engaging experience for our attendees. We want to provide attendees with the opportunity, as always but now virtually, to continue to learn from other SAS users, hear about new and exciting developments from SAS, and connect and network with experts, peers, partners and SAS. Yes, I said network. We realize it won’t be the same as a live event, but we are hopeful we can provide attendees with an incredible experience where you connect, learn and share with others.

Call for content is open

One of the differences between SAS Global Forum and other conferences is that SAS users are front and center, and the soul of the conference. We can’t have an event without user content. And that’s where you come in! The call for content opened November 17 and lasts through December 21, 2020. Selected presenters will be notified in January 2021. Presentations will be different in 2021; they will be 30 minutes in length, including time for Q&A when possible. And since everything is virtual, video is a key component of your content submission. We ask for a 3-minute video along with your title and abstract.

The Student Symposium is back

Calling all postsecondary students -- there’s still time to build a team for the Student Symposium. If you are interested in data science and want to showcase your skills, grab a teammate or two and a faculty advisor and put your thinking caps on. Applications are due by December 21, 2020.

Learn more

I encourage you to visit the SAS Global Forum website for up-to-date information, follow #SASGF on social channels and join the SAS communities group to engage with the conference team and other attendees.

Connect, learn and share during virtual SAS Global Forum 2021 was published on SAS Users.

November 10, 2020
 

The code and data that drive analytics projects are important assets to the organizations that sponsor them. As such, there is a growing trend to manage these items in the source management systems of record. For most companies these days, that means Git. The specific system might be GitHub Enterprise, GitLab, or Bitbucket -- all platforms that are based on Git.

Many SAS products support direct integration with Git. This includes SAS Studio, SAS Enterprise Guide, and the SAS programming language. (That last one checks a lot of boxes for ways to use Git and SAS together.) While we have good documentation and videos to help you learn about Git and SAS, we often get questions around "best practices" -- what is the best/correct way to organize your SAS projects in Git?

In this article I'll dodge that question, but I'll still try to provide some helpful advice in the process.

Ask the Expert resource: Using SAS® With Git: Bring a DevOps Mindset to Your SAS® Code

Guidelines for managing SAS projects in Git

It’s difficult for us to prescribe exactly how to organize project repositories in source control. Your best approach will depend so much on the type of work, the company organization, and the culture of collaboration. But I can provide some guidance -- mainly things to do and things to avoid -- based on experience.

Do not create one huge repository

DO NOT build one huge repository that contains everything you currently maintain. Your work only grows over time and you'll come to regret/revisit the internal organization of a huge project. Once established, it can be tricky to change the folder structure and organization. If you later try to break a large project into smaller pieces, it can be difficult or impossible to maintain the integrity of source management benefits like file histories and differences.

Design with collaboration in mind

DO NOT organize projects based only on the teams that maintain them. And of course, don't organize projects based on individual team members.

  • Good repo names: risk-adjustment-model, engagement-campaigns
  • Bad repo names: joes-code, claims-dept

All teams reorganize over time, and you don't want to have to reorganize all of your code each time that happens. And code projects change hands, so keep the structure personnel-agnostic if you can. Major refactoring of code can introduce errors, and you don't want to risk that just because you got a new VP or someone changed departments.

Instead, DO organize projects based on function/work that the code accomplishes. Think modular...but don't make projects too granular (or you'll have a million projects). I personally maintain several SAS code projects. The one thing they have in common is that I'm the main contributor -- but I organize them into functional repos that theoretically (oh please oh please) someone else could step in to take over.

The Git view of my YouTube API project in SAS Enterprise Guide

Up with reuse, down with ownership

This might seem a bit communist, but collaboration works best when we don't regard code that we write as "our turf." DO NOT cling to notions of code "ownership." It makes sense for teams/subject-matter experts to have primary responsibility for a project, but systems like Git are designed to help with transparency and collaboration. Be open to another team member suggesting and merging (with review and approval) a change that improves things. GitHub, GitLab, and Bitbucket all support mechanisms for issue tracking and merge requests. These allow changes to be suggested, submitted, revised, and approved in an efficient, transparent way.

DO use source control to enable code reuse. Many teams have foundational "shared code" for standard operations, coded in SAS macros or shared statements. Consider placing these into their own project that other projects and teams can import. You can even use Git functions within SAS to fetch and include this code directly from your Git repository:

/* create a temp folder to hold the shared code */
options dlcreatedir;
%let repoPath = %sysfunc(getoption(WORK))/shared-code;
libname repo "&repoPath.";
libname repo clear;
 
/* Fetch latest code from Git */
data _null_;
 rc = git_clone( 
   "https://gitlab.mycompany.com/sas-projects/shared-code/",
   "&repoPath.");
run;
 
options source2;
/* run the code in this session */
%include "&repoPath./bootstrap-macros.sas";

If you rely on a repository for shared code and components, make sure that tests are in place so changes can be validated and will not break downstream systems. You can even automate tests with continuous integration tools like Jenkins.
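For instance, a lightweight test program can live alongside the shared code. Below is a minimal sketch, assuming a hypothetical %tidy_name() macro in the shared repository; a CI job (Jenkins or similar) can run it in batch and treat a nonzero return code as a failed build.

%include "&repoPath./bootstrap-macros.sas";

%macro run_tests;
  %local result;
  /* hypothetical macro under test: upper-cases and underscore-delimits a name */
  %let result = %tidy_name(Mixed Case Name);
  %if %superq(result) ne MIXED_CASE_NAME %then %do;
    %put ERROR: tidy_name returned [&result] but expected [MIXED_CASE_NAME];
    data _null_;
      abort abend 8;   /* nonzero exit code tells the CI job the build failed */
    run;
  %end;
  %else %put NOTE: All shared-code tests passed.;
%mend run_tests;
%run_tests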

DO document how projects relate to each other and their dependencies, and prepare guidance so new team members can get started quickly. Most of us feel more accountable when we know that our code will be placed in central repositories visible to our peers. It may inspire cleaner code, more complete documentation, and a robust on-boarding process for new team members. Use the Markdown files (README.md and others) in a repository to keep your documentation close to the code.

My SAS code to check Pagespeed Insights, with documentation

Work with Git features (and not against them)

Once your project files are in a Git repository, you might need to change your way of working so that you aren't going against the grain of Git benefits.

DO NOT work on code changes in a shared directory with multiple team members -- you'll step on each other. The advantage of Git is its distributed workflow: each developer works with their own copy of the repository and merges/accepts changes from others at their own pace.

DO use Git branching to organize and isolate changes until you are ready to merge them with the main branch. It takes a little bit of learning and practice, but when you adopt a branching approach you'll find it much easier to manage -- it beats keeping multiple copies of your code with slightly different file and folder names to mark "works in progress."

DO consider learning and using Git tools such as Git Bash (command line), Git GUI, and a code IDE like VS Code. These don't replace the SAS-provided coding tools with their Git integration, but they can supplement your workflow and make it easier to manage content among several projects.

Learning more

When you're ready to learn more about working with Git and SAS, we have many webinars, videos, and documentation resources:

The post How to organize your SAS projects in Git appeared first on The SAS Dummy.

September 26, 2020
 

Welcome back to my series on securely integrating custom applications into your SAS Viya platform. My goal today is to lay out some examples of the concepts I introduced in the previous posts. As a quick recap:

  1. In the first installment of this series I shared my experiences on a customer project, where we helped achieve value-add on top of their SAS Viya platform. We built a custom application for their buyers to use at auto auctions and embedded repeatable analytics from a SAS Viya decisioning workflow. That customization now helps their buyers make data-driven decisions based on both business rules and analytical models trained by historical data from past purchases and sales. The models are re-trained by the workflow as buyers "commit" new purchases.
  2. In part two, I outlined the security frameworks (OAuth 2.0 and OpenID Connect) and main integration points for bringing custom apps and SAS Viya together. I also introduced the main decision points before setting up this integration.
  3. The most recent post detailed the OAuth flow choices for access tokens from SAS Logon used for custom application and SAS Viya integration. I outlined some suggestions on when to use each flow in order to retrieve the appropriate scoping for your app.

I would encourage you to check out the previous parts in this series before reading on so we all start on a level playing field. In this post, I will assume you have, and thus that you're familiar with the concepts of user- and general-scoping. Before we jump into some examples for both of those scenarios, two things to note:

  • Recall for both user-scoped and general-scoped scenarios, the Password authentication flow is technically valid, but the industry is suggesting phasing out the use of this flow (source one) (source two). Accordingly, I won't include examples of it here.
  • This post assumes that you are familiar with the SAS Administration concepts of Custom Groups, granting access to SAS Content, and creating authorization rules. If those topics are unfamiliar to you, and you will be implementing this custom application registration, please take a look at the resources linked inline.

General-scoped access tokens

This refers to scenarios where we don't want end users to have to log in (directly to SAS Viya, at least). You don't want the tokens SAS Logon returns to your custom application to drive authorization decisions based on the individual using your application. In these cases, however, you do still want control over the endpoints/resources your app calls and which CRUD operations (which map to HTTP methods) it performs on them. The general steps are:

  • Create a Custom Group for the use of the application.
  • Grant necessary access to that Custom Group for the appropriate endpoints and resources.
  • Register a new, unique client to your SAS Viya environment, using the client_credentials grant type, and specifying the created Custom Group as the value for the authorities property.

Detailed steps

  1. As a member of your SAS Viya environment's SAS Administrators group, create a new Custom Group for your application. The naming convention of App.<your_app> is recommended. Using prefixes in this manner logically organizes Custom Groups used for similar purposes together. Consider including a description like the one pictured below.

    (Screenshot: the new Custom Group for the application, including a description)

  2. Determine the necessary resource(s), content, and/or endpoint(s) application users need. Next, create the authorization rules to provide appropriate access using the Custom Group you created in the previous step. I've often found this to be an iterative process which is perfectly fine. You (a SAS Admin) can adjust the authorization rules granted to this Custom Group, and thus your app, at any time.

  3. Review the general documentation provided for registering clients to your SAS Viya environment. The following is an adaptation of those instructions; please take note of the differences and be sure to include them as you see below.

    (Screenshot: registering the client, adapted from the documented steps)

    For readability, the required data for the last step above was:

    {
        "client_id": "{{YOUR_APP_NAME}}",
        "client_secret": "{{YOUR_SECRET}}",
        "authorized_grant_types": "client_credentials",
        "authorities": "{{CUSTOM_GROUP_FROM_PREVIOUS_STEPS}}",
        "scope": "openid"
    }

    Where scope should include openid for authentication purposes.

    And where authorities will include your Custom Group linking this client with the authorization rules you created. This effectively grants those same permissions to any downstream API requests made by your application when it requests a token in the manner below (next step).

  4. Once registered, here is an example (using Postman) of how your application requests a token using the Client Credentials flow:

    (Screenshot: requesting a token with the Client Credentials flow in Postman)


  5. And here is an example of how your application would make an API call using that token. In this example, an authorization rule was created for this endpoint with Read (GET) permissions for the Custom Group linked to this client:

    (Screenshot: the Postman API call passing the access token. A PROC HTTP sketch of steps 4 and 5 follows below.)
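For readers who prefer SAS code over Postman, here is a minimal PROC HTTP sketch of steps 4 and 5, purely for illustration. The hostname, client ID, client secret, and target endpoint are hypothetical placeholders; in a real application these values come from secured configuration rather than hard-coded %let statements.

%let viyahost   = https://viya.mycompany.com;  /* assumption: your SAS Viya base URL */
%let client_id  = myCustomApp;                 /* assumption: the client registered above */
%let client_sec = mySecret;                    /* never hard code secrets in real projects */

filename tokresp temp;

/* Step 4: request a general-scoped token with the Client Credentials flow */
proc http
  url="&viyahost./SASLogon/oauth/token"
  method="POST"
  in="grant_type=client_credentials"
  webusername="&client_id."
  webpassword="&client_sec."
  auth_basic
  out=tokresp;
  headers "Accept"="application/json"
          "Content-Type"="application/x-www-form-urlencoded";
run;

/* parse the access_token out of the JSON response */
libname tok json fileref=tokresp;
data _null_;
  set tok.root;
  call symputx('access_token', access_token);
run;

/* Step 5: call an endpoint the Custom Group was granted Read (GET) access to */
filename resp temp;
proc http
  url="&viyahost./folders/folders"
  method="GET"
  out=resp;
  headers "Accept"="application/json"
          "Authorization"="Bearer &access_token.";
run;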

Some important things to note

  • The hard-coded values in the Postman calls above are for illustration only; you'll want to use dynamic variables instead.
  • If you make a GET request against the client you just registered for your application, you will not get back the client_secret value you specified upon creation. Note that as we saw in Step 4 above, your client will need this value to request its tokens, so plan accordingly.
  • In addition to any values that could change depending on where your application is deployed (including hostnames, etc.), ensure your client_id and client_secret values are secured properly. If you're using Docker, feed an environment variables file to docker-compose.yml. Make sure to include the variables file in your gitignore file. Moreover, if you're using k8s you can always use secrets. Just have a plan and never hard code these into your application.
  • When making requests to SAS Logon Manager for tokens within the context of your registered application as shown in the first screenshot above, the default timeout for the tokens returned is just shy of 12 hours. You can augment that by adding the access_token_validity property with a numeric value in seconds when you register your client. Any subsequent tokens obtained using your client will use that timeout value.
  • OAuth's Refresh Token flow isn't available in combination with the client_credentials grant type. Keep this in mind when deciding on the value to use for the access_token_validity property. You may want to consider adding logic to your application to request a new token before the current one expires. Parsing out the value of the expires_in property dynamically from the original token request response will be helpful in accomplishing that. Keep in mind you will want enough of a time buffer if your application makes a string of endpoint requests or any long-running requests, so your process does not get interrupted by an expired token.
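As a small illustration of that last point, the sketch below (continuing the PROC HTTP example shown earlier, and assuming the token response is still in the tokresp fileref) captures expires_in and computes a renewal time with an arbitrary five-minute buffer.

libname tokjson json fileref=tokresp;

data _null_;
  set tokjson.root;
  /* wall-clock time after which a new token should be requested */
  call symputx('renew_at', datetime() + expires_in - 300);
run;

%macro check_token;
  %if %sysevalf(%sysfunc(datetime()) >= &renew_at.) %then
    %put NOTE: Access token is close to expiry - request a new one before the next API call.;
%mend check_token;

/* call %check_token before each request, or before a long-running series of requests */
%check_token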



User-scoped access tokens

This refers to scenarios where it does make sense for your application's end users to log in to SAS. All subsequent API calls into SAS Viya resources your application makes should be within the context of what they, as an individual user, are allowed to do/see/access in SAS Viya. The authorization decisions for those requests will be based on the union of anything the logged-in user could do while logged into a native SAS Viya UI (see autoapprove below) and any authorization rules that were explicitly granted to the Custom Group(s) selected while registering this client. A SAS Administrator can place all the users of your application in that Custom Group, ensuring a similar level of functionality for those users, while still obeying any more-granular rules that might apply, such as on specific data or content. The general steps are:

  • Create a Custom Group for the use of the application
  • Grant necessary access to that Custom Group for the appropriate endpoints and resources.
  • Register a new, unique client to your SAS Viya environment, using the authorization_code grant type, specifying the created Custom Group as the value in the scope property, and some additional specific properties shown below
  • Include logic in your application to handle the redirection to SAS Logon Manager when needed and to temporarily store authorization codes and OAuth tokens per session/user.

Detailed steps: prepping and registering the client

  1. Perform steps 1 and 2 from the General-scoped tokens section above.

  2. Review the general documentation provided for registering clients to your SAS Viya environment. The following is an adaptation of those instructions; please take note of the differences and be sure to include them as you see below.

    (Screenshot: registering the client for the Authorization Code flow)

    For readability, the required data for the last step above was:

     {
        "client_id": "{{YOUR_APP_NAME}}",
        "client_secret": "{{YOUR_SECRET}}",
        "authorized_grant_types": ["authorization_code", "refresh_token"],
        "scope": ["openid", "{{CUSTOM_GROUP_FROM_PREVIOUS_STEPS"],
        "redirect_uri": ["{{URL_SASLOGON_SHOULD_REDIRECT_BACK_TO}}/callback"],
        "autoapprove": "{{TRUE_OR_FALSE_IN_LOWCASE}}" 
    }

    Where scope should include openid for authentication purposes and, for this authorization flow, must also include the Custom Group you created. This is what links your client to any authorization rules you created, effectively granting those same permissions to any downstream API requests made by your application.

    And where redirect_uri includes the URL SAS Logon Manager sends your users back to after they authenticate. It's formatted in a list above in case you want to include both HTTP and HTTPS protocols for development purposes. This is helpful in avoiding edits to this client down the road. Take note of the inclusion of the /callback postfix in the screenshot above. We'll see more on this later.

    And where autoapprove is either true or false, depending on whether or not you want to present the user with a list of Custom Groups they need to approve or deny before proceeding. My personal opinion is to stick with true, not only because it more closely mimics the behavior already present with native SAS Viya applications, but also because the user likely won't know the implications of selecting (or not selecting) certain Custom Groups, which could lead to downstream authorization issues for your application.

Detailed steps: partial example for your app's logic

Your app has several requirements pertaining to authentication and authorization. First, it needs to know when to redirect users to SAS Logon Manager (when they're not already logged in or their session has expired). The app also must handle the authorization code returned to your user. Further, it needs to request an OAuth token for your user and use that token to make downstream requests to other APIs. Finally, your app needs to revoke tokens when necessary.

This is not a complete example, and exactly how this is implemented depends heavily on the language your application was developed in. I don't want to distract from the topic at hand, so I'm not going to include complete code blocks. The application used for this example is a JavaScript React web UI, and I'll focus on the main order of operations involved in getting this to work. Therefore, I've intentionally removed some syntax to illustrate this in a more agnostic way.

  1. User navigates to our application's home page. Logic determines that a boolean state variable we're setting called isAuthenticated is false (default), so it redirects to the SAS Logon page using the URL below. Note that all items in {{ }} are substituted with environment variables rather than hardcoded. Recall, in the previous step when we registered the client, I mentioned you might want to include /callback after the URL to your application. When you're developing the routes in your code logic, it helps to differentiate between someone who has navigated directly to your / or /home page versus a user that's been redirected there from SAS Logon Manager. You'll see why in the next step.
    https://{{VIYA_HOSTNAME}}/SASLogon/oauth/authorize?response_type=code&client_id={{CLIENT_ID}}&redirect_uri={{URL_POSTFIXED_WITH_/callback}}
  2. This automagically presents the user with the same SAS Logon screen they see when accessing a native SAS Viya application. After they successfully authenticate, the URL redirects them back to our application (we included the /callback postfix in the previous step, so we see that in the redirected URL below) with an extra URL parameter like this (the code will of course be dynamic):
    https://{{YOUR_APPS_HOSTNAME}}/callback?code=pe3vyay7LJ
  3. So now, we parse the code out of the user's current URL path and store it temporarily in a variable, say code. Next, the conditional logic will pick up the fact the user does have a code but isAuthenticated is false, so we kick off another function, feeding the code as an argument.
  4. This function makes the second call to SAS Logon Manager; this time to exchange the user's code for an OAuth token by making a request to:
    https://{{VIYA_HOSTNAME}}/SASLogon/oauth/token/

    With headers and form parameters similar to the snippet below, again where anything in {{ }} is dynamically set:

    -H "Accept: application/json"
    -H "Content-Type: application/x-www-form-urlencoded"
    -u "{{CLIENT_ID}}:{{CLIENT_SECRET}}" -d "grant_type=authorization_code&code={{CODE}}"
  5. If the request is successful, we then call another function, which uses the js-cookie package in our case, to set a new piece of information in the user's cookie called access_token. This value gets pulled from the result of the previous request. At the same time, we update our isAuthenticated state variable to true.
  6. Now that the state has changed (isAuthenticated is true), the page gets re-rendered in typical React fashion. If our URL included /callback then we know the user came from SAS Logon, and our application logic redirects the user to the main /home page of our application. Now we have the user's OAuth token stored in a cookie for them.

At this point, I think we've seen enough examples to know how to make downstream requests with the token by adding an Authorization header and setting its value to Bearer {{TOKEN}}, so I'll leave those pieces out from here. Don't forget each of those requests will be made for your application's individual logged-in users, so their unique identities will need to be able to access the endpoints/resources in SAS Viya through appropriate authorization rules.

On a similar note, and because this is getting really long 🙂 , it feels a little out-of-scope to include details on the refresh token logic. Just know, it follows a very similar pattern as above. If you've included refresh_token in the authorized_grant_types list when registering your client, then subsequent token requests made within the context of your client automatically include a refresh_token property in the response you can parse out and use to get a new bearer token. The logout logic includes removing the user's cookie.

Just one more thing: if you have a multi-machine SAS Viya environment then using {{VIYA_HOSTNAME}} might have been confusing. Check your Ansible inventory.ini file and look for the machine specification for the hostgroup [CoreServices].

Thank you!

Whew. That was a doozy. Thanks for sticking with me through this post and the series! I hope this content gives you a better understanding of how to integrate your custom applications with SAS Viya and how to do it securely. Remember to evaluate each application's needs and intended use separately to choose the most appropriate type of scoping. Then use the guidance presented here to choose an OAuth flow so your application makes HTTP REST API calls into the SAS Viya endpoints and resources it needs. Feel free to reach out if you have any questions.

September 16, 2020
 

There are three types of visualization APIs defined in the SAS Viya REST API reference documentation: Reports, Report Images and Report Transforms. You may have seen the posts on how to use Reports and Report Images. In this post, I'm going to show you how to use the Report Transforms API. The scenario I am using changes the data source of a SAS Visual Analytics report and saves the transformed report.

Overview of the Report Transforms API

The Report Transforms API provides simple alterations to SAS Visual Analytics reports, and it uses the 'application/vnd.sas.report.transform' media type for the transformation (passed in the REST API call header). When part of a request, the transform performs editing or modifications to a report. If part of a response, the transform describes the operation performed on the report. Some typical transformations include:

  • Replace a data source of a report.
  • Change the theme of a report.
  • Translate labels in a report.
  • Generate an automatic visualization report of a specified data source and columns.

To use the Transforms API, we need to properly set the request body and some attributes for the transform. After the transform, the response contains the transformed report or a reference to the report.

Prepare the source report

This step is very straightforward. In SAS Visual Analytics, create a report and save it to a folder (instructions on creating a report are found in this video). For this example, I'll use the 'HPS.CARS' table as the data source and create a bar chart. I save the report with the name 'Report 1' in 'My Folder'. I'll use this report as the original report in the transform.

Generate the request body

I will use PROC HTTP to call the Transforms API with the 'POST' method, appending '/reportTransforms/dataMappedReports' to the URL. The call needs to set the request body.

  1. Get the ReportURI: In an earlier post I outlined how to get the reportURI via REST API, so I won't go into details here. If you'd like an easy way, try this: in SAS Visual Analytics, choose the 'Copy Link…' item from the menu. In the pop-up dialog, expand 'Options' and choose 'Embedded Web Component'; you will see a string of the form reportUri='/reports/reports/…' -- that's it. In the request body, we set this string as the 'inputReportUri' to specify the original report, 'Report 1'.

     Report URI from SAS Visual Analytics

  2. Decide on changes to the data source: Here I’d like to change the data source from ‘HPS.CARS’ to ‘CASUSER.CARS_NEW’. The new table uses three columns from ‘HPS.CARS’ as mapped below.

     Select columns to include in new table

  3. Specify the data sources in the request body: The request requires two data sources, 'original' and 'replacement', representing the data sources in the original report and the transformed report, respectively. Note that the 'namePattern' value enumerates the way of identifying the data source. If it is set to 'uniqueName', the data source is identified by its unique data item name in the XML file of the report. If it is set to 'serverLibraryTable', the data source is identified by the CAS server, CAS library and table names together. The snippets below show the data source sections in the request body. I like to use 'serverLibraryTable' to specify the data source for both the original and transformed reports, which is clear and easy.

    /* data source identification for original report */
      {
        "namePattern": "serverLibraryTable",
        "purpose": "original",
        "server": "cas-shared-default",
        "library": "HPS",
        "table": "CARS"
      }
     
    /* data source identification for transformed report */
      {
        "namePattern": "serverLibraryTable",
        "purpose": "replacement",
        "server": "cas-shared-default",
        "library": "CASUSER",
        "table": "CARS_NEW",
        "replacementLabel": "NEW CARS",
        "dataItemReplacements": [
          {
            "originalColumn": "dte",
            "replacementColumn": "date"
          },
          {
            "originalColumn": "wght",
            "replacementColumn": "weight"
          },	
          {
            "originalColumn": "dest",
            "replacementColumn": "region"
          }
        ]
      }

Set more attributes for the transform

Besides the request body, we need to set some other attributes for the Transforms API when changing the data source. These include 'useSavedReport', 'saveResult', 'failOnDataSourceError' and 'validate'.

  • useSavedReport specifies whether to find the input (original) report as a permanent resource. Since I am using a saved report in the repository, I set it to true.
  • saveResult specifies whether to save the transformed report permanently in the repository. I am going to save the transformed report in the repository, so I set it to true.
  • failOnDataSourceError specifies whether the transform continues if there is a data source failure. The default value is false, and I leave it as such.
  • validate decides whether the transform performs XML schema validation. The default value is false, and I leave it as such.

Decide on a target report and folder

I'll save the transformed report with the name 'Transformed Report 1' in the same folder as the original 'Report 1'. I set 'resultReportName' to 'Transformed Report 1', and set the 'resultReport' object with 'name' and 'description' attributes. I also need the folderURI of the 'My Folder' directory. You may refer to my previous post to see how to get the folderURI using REST APIs.

Below is the section of the settings for the target report and folder:

"resultReportName": "Transformed Report 1",
"resultParentFolderUri": "/folders/folders/cf981702-fb8f-4c6f-bef3-742dd898a69c",
"resultReport": {
    "name": "Transformed Report 1",
    "description": "TEST report transform"
}

Perform the transform

Now we have set all the necessary parameters for the transform and are ready to run it. I put my entire set of code on GitHub. Running the code creates the 'Transformed Report 1' report in 'My Folder', with the data source changed to 'CASUSER.CARS_NEW' and containing the three mapped columns.
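To give a feel for what that code does, here is a condensed PROC HTTP sketch of the transform call. It assumes an OAuth access token in &access_token, the Viya base URL in &viyahost, and the full JSON request body (inputReportUri, the two data source sections, and the target report settings shown above) written to the reqbody fileref; passing the transform attributes as query parameters is also an assumption of this sketch, so check the linked code and the API documentation for the exact form.

filename reqbody temp;
filename tranfile temp;

/* ...write the assembled JSON request body to reqbody here... */

%let qs = %nrstr(useSavedReport=true&saveResult=true&failOnDataSourceError=false&validate=false);

proc http
  url="&viyahost./reportTransforms/dataMappedReports?&qs."
  method="POST"
  in=reqbody
  out=tranfile;
  headers "Authorization"="Bearer &access_token."
          "Content-Type"="application/vnd.sas.report.transform+json"
          "Accept"="application/vnd.sas.report.transform+json";
run;

/* a 201 Created status means the transformed report was saved */
%put NOTE: &SYS_PROCHTTP_STATUS_CODE. &SYS_PROCHTTP_STATUS_PHRASE.;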

Check the result

If the API fails to create the transformed report, the PROC SQL statements display an error code and error message. For example, if the replacement data source is not valid, it returns errors similar to the following.

Invalid data source error message

If the API successfully creates the transformed report, “201 Created” is displayed in the log. You can find more information about the transformed report in the response body written to the tranFile fileref by PROC HTTP. You can also log into the SAS Visual Analytics user interface to check that the transformed report opens successfully and that the data sources are changed as expected. Below is a screenshot of the original report and the transformed report. You may have already noticed they use different data sources, based on the data labels.

Finally

There is a wealth of other transformations available through the Report Transforms API. Check out the SAS Developer site for more information.

Using REST API to transform a Visual Analytics Report was published on SAS Users.

September 15, 2020
 

This is the third installment of a series focused on discussing the secure integration of custom applications into your SAS Viya platform. I began the series with an example of how a recent customer achieved value-add on top of their existing investments by building a custom app for their employees and embedding repeatable analytics from SAS Viya into it. Then in the second part, I outlined the security frameworks and main integration points used to accomplish that.

Key takeaways from the previous post

As a quick recap, in the last post I covered how to integrate custom-written applications with the SAS Viya platform using scoped access tokens from SAS Logon Manager. I outlined how the returned access tokens need to be included with the REST calls those custom apps make to SAS Viya endpoints and resources. The access tokens contain the necessary information SAS Viya will use to make authorization decisions for every call your app makes. Administrators of your SAS Viya environment have complete control over the rules that determine the outcome of those decisions. I also introduced the primary decision you need to make before you begin integrating your custom app into your SAS Viya environment: whether those access tokens will be scoped in the context of your application (general-scoped) or in the context of each logged-in user of your application (user-scoped).

Now let's go a bit deeper into the authorization flows for both scenarios. TL;DR at the bottom.

Quick disclaimer: The rest of this post assumes you are familiar with the SAS Administration concepts of Custom Groups, granting access to SAS Content, and creating authorization rules. If those topics are unfamiliar to you and you will be implementing this custom application registration, please take a look at the resources linked inline.

General-scoped access tokens

First, when I say "general-scoped access" here, I mean cases where you don't want end users to have to log in (directly to SAS Viya, at least). You don't want the tokens returned by SAS Logon to your custom applications generating authorization decisions based on the individual using your application. In these cases, however, you do still want control over the endpoints/resources your app calls and which CRUD operations it performs on them. Any situation where you'd like to expose some content you've created in SAS Viya to a large scale of users, particularly those that wouldn't have valid login credentials to your SAS Viya environment, falls into this category. There are two main “flows” available to make this happen. Here, “flow” is a term used to describe the process for accessing a protected resource within the OAuth framework.

Password flow

I won't go into much detail on this one because the folks behind OAuth 2.0 are suggesting the phase-out of the use of this flow (source one) (source two). Though it technically works for this use case, it does so by essentially using a "service account" so all REST calls your application makes are within the scope of a single UID. You could create specific authorization grants for that user (or a group it's a member of) to make this functional, but it requires that you include the username and password as clear-text URL parameters when you request your token. It's just not a great idea and not secure to send this over the wire even if you are using HTTPS. It also creates another sensitive environment variable to manage and, worse, I have even seen it hard coded into custom application logic.

Client Credentials flow

Interestingly enough, the combination of the security concerns with the Password flow, while still needing to provide specific access levels for a custom application our team was building, is how I found myself wanting to write this series. If you're reading this, you've likely checked out the page on how to register clients to SAS Logon Manager on developer.sas.com. These instructions do not cover the Client Credentials flow, though Mike Roda's SAS Global Forum paper, OpenID Connect Opens the Door to SAS Viya APIs, does provide guidance on the client credentials grant type. Remember, these types of scenarios don't warrant concern for which individual users are making the calls into SAS Viya; however, you still will want to restrict which calls your app makes, so follow least-privilege access best practices.

The secret sauce to making this work is registering your application to SAS Logon Manager using a specific, optional key called authorities and setting its value to be a list of custom group(s) giving your application the access to the proper resources and endpoints. Here's where some prior knowledge of creating authorization rules for specific URIs using endpoint patterns and specific principals comes in handy. Check out the documentation linked above if those are unfamiliar terms. We'll see examples of how to implement this flow in the next post in this series.

The diagram below illustrates this flow.

Figure 1: Visual representation of the Client Credentials OAuth flow

User-scoped access tokens

As a quick recap, here I'm talking about the cases where it does make sense for your application's end users to log in to SAS. All subsequent API calls into SAS Viya resources your application makes should be within the context of what they, as an individual user, are allowed to do/see/access in SAS Viya. For example, if a user is not in the SAS Administrators group, then you'll want to ensure any REST calls your custom app makes for that user that would require membership in that group will indeed fail.

Admittedly, this takes more work on the application development side to implement; however, as far as creating and registering the client for your application, this isn't any more complicated than what I've already outlined in this post. By the nature of the flow I'll outline in a minute, the application developer is going to need to add additional logic to handle sessions and/or cookies to obtain, store, check, update, and eventually revoke individual user's authorization codes. When using SAS Viya applications this all occurs behind the scenes. For example, when you first navigate to SAS Visual Analytics (VA) you are re-directed to SAS Logon Manager to log in and get your user-scoped access token. Then you are redirected back to VA with your token. In turn, VA and the supporting services use this token for any subsequent REST calls to other SAS Viya endpoints and resources. When creating custom-developed apps you integrate with SAS Viya for user-scoped access, we need to recreate that baked-in behavior. This is the process I outline below.

Authorization Code flow

A shortened-for-readability version of how this works is:

    1. The user navigates to your application. If they don't already have a valid session with SAS Logon Manager, a redirect sends them to the same screen they would see when accessing native SAS Viya applications.
    2.  They enter credentials and log in. Then depending on how the client was registered, either:
      • the user manually approves their group memberships which SAS Logon Manager presents to them (thus all the authorization grants for those group(s)). An example of this is seen when logging into SAS Viya and being presented with the option to accept or deny your SAS Administrators group privileges, or
      • their group memberships will be automatically approved for them. This option is more SAS Viya-esque in that, when you log in as a regular user you're not asked to accept your role in any Custom Groups. It is implicitly allowed behind the scenes, so you needn't know what each Custom Group allows. Rather, it is assumed that a SAS Administrator has provided you the right memberships.
      • Note that we will see an example of how to toggle these two options in the next post in this series.
    3. If the user credentials are valid, SAS Logon Manager responds with a user-scoped authorization code. We'll see how to use redirects to get it back to your application in the next part of this series. Your application needs to retrieve and temporarily store the authorization code which will be in a URL parameter on the redirect.
    4. Your application will then need to make another request to SAS Logon Manager on behalf of your user, this time trading the code for an OAuth access token. This is where your app's ability to handle sessions and/or cookies, regardless of programming and code language, starts to come in handy.
    5. Finally, when your user interacts with some part of your application that needs to make an API call to a SAS Viya endpoint or resource, your application needs to pass their user-scoped OAuth token in with the request. Remember, it's a good idea to include some logic to check for expired tokens and be able to refresh and revoke them when necessary as well.

Joe Furbee covers the authorization code process in detail in Authentication to SAS Viya: a couple of approaches. The whole point of using this flow in this manner is to ensure that SAS Logon Manager is responsible for handling individual user credentials rather than your application, which by contrast would then require submitting them over the wire. The diagram below represents the flow.

Figure 2: Visual representation of the Authorization Code OAuth flow

Password flow

In fairness, this technically works here as well, but we already know the drill.


Summing it all up (or TL;DR)...

My goal in this post (and in part 2) was to introduce the necessary planning steps prior to registering a custom-developed application to SAS Logon Manager, so it can make REST API calls to SAS Viya endpoints and resources. To recap, the steps are:

1.  Decide whether users should have to log into SAS Viya or not.

    • Is your application serving up generic or "public" content that anyone using your app should have the same level of access to? Note that in this case you could still have a layer of custom login logic to restrict user access to your application, but I'm referencing scenarios where it doesn't make sense for users to log into SAS Viya, especially if they aren't in the configured user base with valid credentials. If you find yourself in this scenario, then proceed to Step 2A.
    • Or, do you want any application API calls to SAS Viya to be within the context of what that individual logged-in user should be able to do? Then proceed to Step 2B.

2A.  Consider using the Client Credentials flow. Set up the security principal(s) and authorization rule(s) to give your application the appropriate permissions to the needed resources and then register your client with the relevant settings.

2B.  In this case, you should consider using the Authorization Code flow. Enhance your custom application's ability to request and handle user- and session-specific authorization codes and tokens, substituting them into subsequent API calls when needed, and then register your client with the relevant settings.

In either case, I'll cover examples in the next post in this series.

Building custom apps on top of SAS Viya, Part Three: Choosing an OAuth flow was published on SAS Users.

September 9, 2020
 

Welcome back! Today, I continue with part 2 of my series on building custom applications on SAS Viya. The goal of this series is to discuss securely integrating custom-written applications into your SAS Viya platform.

In the first installment of this series, I outlined my experiences on a recent project. In that example, our team worked with a pre-owned auto mall to create a custom application for their buyers to use at auction, facilitating data-driven decisions and maximizing potential profits. Buyers use the app's UI to select/enter the characteristics of the specific car on which they'd like to bid. Those details are then sent via an HTTP REST call to a specific API endpoint in the customer's SAS Viya environment which exposes a packaged version of a detailed decision flow. Business rules and analytic models evaluate the car details as part of that flow. The endpoint returns a maximum bid recommendation and other helpful information via the custom application's UI.

If you haven't already, I encourage you to read part 1 for more details.

If the concept of exposing endpoints to custom applications elicits some alarming thoughts for you, fear not. The goal of this post is to introduce the security-related considerations when integrating custom applications into your SAS Viya environment. In fact, that is the topic of the next two sections. Today we'll take a look at the security underpinnings of SAS Viya so we can start to flesh out how that integration works. In part 3, we'll dive into more specific decision points. To round things out, the series finishes with some examples of the concepts discussed in parts 1-3.

Looking a little deeper

So how does SAS Viya know it can trust your custom app? How does it know your app should have the authority to call specific API endpoints in your SAS Viya environment and what HTTP methods it should allow? Moreover, who should be able to make those REST calls, the app itself or only individual logged-in users of that app? To answer these questions, we need to understand the foundation of the security frameworks in SAS Viya.

Security in SAS Viya is built around OAuth 2.0 (authorization) and OpenID Connect (authentication), both of which are industry-standard frameworks. The details of how they work, separately and together, are out of the scope of this post. We will, however, delve into a few key things to be aware of within their implementation context for SAS Viya.

First, SAS Logon Manager is the centerpiece. When a client (application) requests access to endpoints and resources, they must first communicate with SAS Logon Manager which returns a scoped access token if it was a valid request. This is true both within SAS Viya (like when internal services hosting APIs communicate with each other) and for external applications like we’re talking about here. The client/application subsequently hands that token in with their request to the desired endpoint.

Workflow: applications requesting access to SAS Viya endpoints

The long and short of it is that we don’t want users of our custom applications to see or manipulate anything they shouldn’t, so these tokens need to be scoped. The authorization systems in SAS Viya need to be able to understand what the bearer of that specific token should be able to do. For a detailed overview of how SAS Viya services leverage OAuth and OpenID Connect tokens for authentication, I'd recommend reading this SAS Global Forum paper from SAS' Mike Roda.

Later in this series, we'll see examples of obtaining a scoped token and then using it to make REST calls to SAS Viya API endpoints, like the auto mall's custom app does to return recommendations for its users. For now, suffice it to say the scope of the token is the critical component to answering our questions regarding how SAS Viya returns specific responses to our custom application.

The glue that binds the two

We've established that native SAS Viya applications and services register themselves as clients to SAS Logon Manager. When a client requests access to an endpoint and/or resource, they must first communicate with SAS Logon Manager, which returns a scoped access token if it was a valid request. If the request was unauthorized, it receives a 403 Forbidden (or sometimes 401 Unauthorized) response. Your SAS Viya environment's administrator has complete control over the rules that determine the authorization decision. We can register additional custom applications, like the auto mall's auction bidding app, to SAS Logon Manager so they too can request scoped tokens. This provides our custom applications with fine-grained access to endpoints and content within SAS Viya. But we’ve already hit a fork in the road. We need to understand the purpose of the application we’re looking to build, so we can decide if the access should be user-scoped or general-scoped.

In the pre-owned auto mall's custom application, a decision and modeling flow is exposed by way of a SAS Micro Analytic Service (MAS) module at a specific HTTP REST API endpoint. All the users of the application have the same level of access to the same endpoint by design; there was no need for differentiated access levels. This is considered an instance of general-scoped access. Abiding by a least-privilege approach, the app should only access certain data and endpoints in SAS Viya, that is, only what it needs to rather than having free rein. In this case, the appropriate level of access can be granted to the registered client (custom application) itself. The result is an access token from SAS Logon Manager scoped within the context of the application.

In other applications you might find it necessary to make more sensitive, or user-specific resources available. In this instance, you'll want to make all authorization decisions within the context of the individual logged-in user. For these situations, it is more prudent to redirect the user directly to SAS Logon Manager to log in and then redirect them back to your application. This results in SAS Logon Manager returning an access token scoped within the context of the logged-in user.

What's next?

Once you've decided which of these two scenarios most closely fits your custom application's needs, the next step is to decide on a flow (or grant) to provide the associated type of access token, general- or user-scoped. Here “flow” is a term used to describe the process for accessing a protected resource within the OAuth framework and that's what I'll cover in the next post in this series.

Building custom apps on top of SAS Viya, Part Two: Understanding the integration points was published on SAS Users.

August 31, 2020
 

In this multi-part series we're going to explore a real-life example of operationalizing your analytics and then quickly dive into the technical details to make it happen. The phrase Operationalize your Analytics itself encompasses a framework for realizing timely and relevant business value from models, business rules, and algorithms. It also refers to the development-to-deployment life cycle commonly used in the ModelOps space. This series is geared towards walking you through a piece of the puzzle, exposing your analytics as REST APIs, but to get a better feel for the whole picture I would encourage taking a look at these resources:

Fair warning: I get detailed here. This is a journey. I'm not trying to pitch the story for the next Netflix original, but I want to outline business challenges companies face and how they can utilize technology to solve these problems. The more background you have, the more the solution resonates. I also want to have a little fun and give props to my champion salesman.

Framing the use case

Our goal in this first part of the series is to better understand the value of building custom applications on top of SAS Viya. To illustrate my point, I'm going to use Tara's Auto Mall, a used car buying and selling business I hypothetically own. We incorporated the business back in 1980 when neon and hairspray were king and MTV played music videos. Kurt, pictured at our lot, was my best salesman.

This man knew how to spot a great buy. Every auction we'd send him to looking for cars, he'd find the gems that just needed a little polishing, a little clean up, and knew exactly how much to pay for them. Sure, we still bust on him today for a couple of heaps he bought that we never made a dime off, but for the most part he knew how not to get caught up in the heat of the auction. It's easy, while just trying to outbid the others, to end up paying too much for a car. After all, we've got to turn a profit to keep the lights on and we do that by bringing used cars from auction to our lot for resale. Over the years, Kurt learned which makes and models we could get the best profit margins out of, while also learning how to spot cars that might end up racking up too many unexpected costs. "Salt and sun damage will take a huge bite out of your commission" he'd always tell the new guys.

But these days Kurt, like most of the other great salesmen I've had over the years, spends his days fishing in the middle of nowhere, happily retired. Sure, I've got a few folks still on the team that really know a good buy when they see it, but more and more often we're losing money on deals. As hard as these new kids work, they just don't have the years of experience to know what they're getting into at auction.

Sometimes a car will come up that looks like it's in great shape. Clean and well-kept, low miles even, and everyone starts spouting off bids, one right after the other. That'll make just about anyone think "they must know something I don't know, I've got to win this one!" But unless you really know what to look for, like Kurt did, that good-looking deal might end up being a dud. And while Kurt had experience under his belt to help his eye for that, keeping track of the good buys and the bad ones, learning from all those lessons - that's the best yard stick for comparisons when you're trying to make a quick call at auction. How much have we been able to sell cars with similar characteristics for before? Have they ended up costing us more money to fix up than they're worth?

Needing to modernize our business

Okay, you get it. It's a challenge many are actively facing: a retiring workforce taking their years of experience with them, combined with agile competitors and market disrupters. Not only to better compete but also to cut down on poor "gut" decisions, Tara's Auto Mall decides to build an app. This way all their employees can use the app at auction to help make the best possible decisions for which cars to bid on and how much is too much before overpaying.  Over the years they've bought a lot of cars, thousands, and no one can really keep track of all that data in their head to know how to truly evaluate a good buy.

The buying team should be able to pull up this custom app on their phone, put in the details for the car they're thinking about making a bid on, and almost instantly get back a suggestion on whether or not they should bid at all and, if so, a maximum bid amount and some other helpful information. Those results should be based on both the historical data from previous sales and business logic. And they should be able to record how much each car up at auction went for and its characteristics, whether they won it or not, so we can feed that back into our database to make the models even more accurate over time.

How can Tara's Auto Mall improve?

Now that we've got the main requirements for the application that Tara's Auto Mall needs to build so the buying team can make data-driven decisions at auction, we can start to hash out how those recommendations will be generated. The solution they decided on is to use SAS Viya to create a decision flow to evaluate used cars up for bidding at auction by using that specific car's details as inputs and first checking them against some business rules that they've come to trust over time. This adds a layer of intelligence into the decision-making process that models alone cannot provide.

We heard a bit about the lessons learned over time, like looking for salt and sun damage, so we can layer this into the recommendation logic by checking for certain states where that type of damage is typical. This allows us to add more "human knowledge" into the computer-driven process and automatically alert the buyer that they should be aware of this potential damage. That decision flow then uses both supervised and unsupervised models to evaluate the car's unique characteristics against previous sales, the profits from them, and market data, to generate a maximum bid suggestion.

Here's the fun part

During an auction we don't have time to wait for some batch process to give recommendations and we don't want our buyers to have to log in and enter values into a written program either. Instead, we can publish the entire decision flow as an API endpoint to the SAS Micro Analytic Service (MAS), a “compile-once, execute-many-times” service in SAS Viya. Then we can have our custom application make REST calls to that endpoint with the specific car details as inputs, where it will execute against a packaged version of that entire flow we outlined. The result is a recommendation from both the business rules and models.
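To make that concrete, here is a hedged sketch of the kind of REST call the custom app would make, shown with PROC HTTP for illustration (the real app can use any HTTP client). The module name maxBidDecision, the step id execute, the input names, and the &viyahost/&access_token macro variables are all hypothetical placeholders; a published decision flow defines its own module and inputs.

filename carin temp;
filename carout temp;

/* the car details entered by the buyer, as MAS step inputs */
data _null_;
  file carin;
  put '{"inputs":[';
  put '  {"name":"make",      "value":"Toyota"},';
  put '  {"name":"model",     "value":"Camry"},';
  put '  {"name":"modelYear", "value":2014},';
  put '  {"name":"mileage",   "value":78500},';
  put '  {"name":"state",     "value":"FL"}';
  put ']}';
run;

/* score the car against the published decision flow */
proc http
  url="&viyahost./microanalyticScore/modules/maxBidDecision/steps/execute"
  method="POST"
  in=carin
  out=carout;
  headers "Authorization"="Bearer &access_token."
          "Content-Type"="application/json"
          "Accept"="application/json";
run;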

The SAS decisioning process

The app is part of the SAS decisioning process, and evolves over time. As more sales are added to the database through the app, the solution automatically retrains the recommendation model. It yields more-accurate suggestions as the data changes and then republishes to MAS so our custom app can automatically take advantage of the updated model.

I want to pause here and note that, while this example is for an application making REST calls to MAS, that is really the tip of the iceberg. One of the greatest advantages to the modernization of the SAS platform with SAS Viya is the sheer number of underlying services that are now exposed as API endpoints for you to incorporate into your larger IT landscape and business processes. Check out developer.sas.com to take a look.

What's next?

Tara's Auto Mall has now settled on a custom application to help their buyers at auction and on a solution for building repeatable analytics that delivers near-real-time recommendations. Next, we need to work through integrating the custom app with their SAS Viya environment so they can embed those recommendations.

Questions from the author

While you might not have gotten to work with Kurt, Tara's Auto Mall's best and most fashionably-dressed salesman, is your business experiencing similar challenges? Do you have homegrown applications that could benefit from injecting repeatable analytics and intelligent decision making into them?

 

Building custom apps on top of SAS Viya, Part One: Where's the value? was published on SAS Users.

8月 212020
 

SAS Viya is an open analytics platform accessible from interfaces or various coding languages, and REST APIs are among the most widely used interfaces. Multiple resources exist on how to access SAS Visual Analytics reports using the SAS Viya REST APIs. For example, Programmatically listing data sources in SAS Visual Analytics, by my colleague Michael Drutar, shows how to list the data sources of VA reports. Also, in Using SAS Viya REST APIs to access images from SAS Visual Analytics, Joe Furbee demonstrates how to retrieve report images. In this post, I am going to show you how to get the path for SAS Visual Analytics reports using REST APIs.

Full API reference documentation for SAS REST APIs is on developer.sas.com. You can exercise REST APIs in several ways, such as curl, browsers, browser plugins, or any other REST client. Here I am going to access the SAS Viya Visualization and Core Services REST APIs with SAS code. The Visualization service APIs provide access to reports, report images, and report transforms. The Core Services APIs provide operations for resources like folders, files, authorization, and so on.
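
Just to illustrate the "any other REST client" point, the same kind of request can be made outside of SAS. Below is a rough Python sketch using the requests package; the host name and access token are placeholders, and obtaining an OAuth token is not shown:

import requests

VIYA_HOST = "https://viya.example.com"   # placeholder
ACCESS_TOKEN = "<OAuth access token>"    # placeholder

# List reports named 'Report 2' -- the same call made with Proc HTTP later on.
resp = requests.get(
    f"{VIYA_HOST}/reports/reports",
    params={"filter": "eq(name,'Report 2')"},
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/vnd.sas.collection+json",
    },
)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["id"], item["name"])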

Composition of a report object

The chart below describes the object composition of VA reports from an API perspective. The report object itself has metadata storing report properties such as id, name, creator, modified date, and links. Each VA report object is identified uniquely by its ID in SAS Viya. The report content object, stored separately in either XML or JSON format, enumerates the data and image resources that generate visual elements such as graphs, tables, and images.

Reports API definition

Get a list of reports

Let's begin with a scenario of getting a list of reports. These reports may be returned from a search or a filter in Viya, or they may be a list you already have at hand. (The SAS Viya support filter link has more information on using filters.) Here I'm using a filter to get a list of reports named 'Report 2'. I use Proc HTTP to access the Reports API in the Visualization service with a 'GET' request and '/reports/reports?filter=eq(name,'Report 2')' in the URL. Note that the HEADERS statement of Proc HTTP needs to be set properly to get the expected results. Below is a snippet for this.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME rptFile TEMP ENCODING='UTF-8';
PROC HTTP METHOD = "GET" oauth_bearer=sas_services OUT = rptFile
      /* get a list of reports, say report name is 'Report 2' */
      URL = "&BASE_URI/reports/reports?filter=eq(name,'Report 2')";
      HEADERS "Accept" = "application/vnd.sas.collection+json"
               "Accept-Item" = "application/vnd.sas.summary+json";
RUN;
LIBNAME rptFile json;

Running the code above returns a list in the ITEMS table of the rptFile json library: about 10 reports share the name 'Report 2', each with a unique id.

ITEMS table report for 'Report2' query

Get the report content object of a VA report

Using the Reports API of the Visualization service, we can get the report content object of a VA report. As shown in the snippet below, the report content object is retrieved by making a 'GET' request to the SAS Viya server with '/reports/reports/<report id>/content' in the URL.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME rptFile TEMP ENCODING='UTF-8';
PROC HTTP METHOD="GET" oauth_bearer=sas_services OUT=rptFile
           URL = "&BASE_URI/reports/reports/<report id>/content";
           HEADERS "Accept" = "application/vnd.sas.report.content+json";
RUN;
LIBNAME rptFile json;

In the output, we see the rptFile json library enumerates the data and image resources in the report content object. The screenshot below shows what I retrieved from a report content object.

Contents of rptFile json library

Notice the DATASOURCES_CASRESOURCE table, which Michael uses in Programmatically listing data sources in SAS Visual Analytics. You may explore more information in these tables if interested, such as report states, visual elements, etc. In this post, I won't dig further into the report content object.

Get the metadata of a report object

Next, I am going to get the metadata of a report object, using its unique report id, with the Reports API in the Visualization service. I use a 'GET' request and '/reports/reports/<report id>' in the URL. By running the code snippet below, I get the metadata of the report object in the rptFile json library.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME rptFile TEMP ENCODING='UTF-8';
PROC HTTP METHOD="GET" oauth_bearer=sas_services OUT=rptFile
        URL = "&BASE_URI/reports/reports/cecac7d7-b957-412e-9709-a3fe504f00b1";
        HEADERS "Accept" = "application/vnd.sas.report+json";
RUN;
LIBNAME rptFile json;

Below is part of the ALLDATA table from the rptFile library. The table contains metadata of the report object, including its unique id, name, creator, creationTimeStamp, modifiedTimeStamp, links, and so on. But in the table, I can't find the folder location of the report object.

ALLDATA table from the rptFile library

Get the report object folder location

So far, I've retrieved most of the metadata info we are looking for, but not the report object folder location. All VA reports are put under the /SAS Content/ folder or its subfolders in SAS Viya. Yet, no such information exists in the report object or the report content object. How can I get the path of a VA report under the /SAS Content/ folder?

The answer is to use the Folders API in Core Services. Folders provide an organizational structure for SAS content as well as external content in Viya. A folder object itself is a virtual container for other resources or folders, and it persists only the URIs of resources managed by other services.

A folder object has two types of members: child and reference. Whereas resources can have references in multiple folders, they are restricted to being a child in a single folder. Resources like VA reports are added as child members of a folder, and the folder persists the URI of this VA report. Thus, we can work backward from the child report to its folder by looking up the ancestors of the report object.

Using the Folders API in Core Services with a 'GET' request and '/folders/ancestors?childUri=<report URI>' in the URL, the Proc HTTP code below gets the ancestors of the VA report, from which we can build the full path.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME fldFile TEMP ENCODING='UTF-8';
PROC HTTP METHOD="GET" oauth_bearer=sas_services OUT=fldFile
          URL = "&BASE_URI/folders/ancestors?childUri=/reports/reports/cecac7d7-b957-412e-9709-a3fe504f00b1";
          HEADERS "Accept" = "application/vnd.sas.content.folder.ancestor+json";
RUN;
LIBNAME fldFile json;

The fldFile.ANCESTORS table shows the metadata of the ancestor folders, including folder id, folder name, creator, type, parentFolderURI, and so on. The screenshot below is part of the ANCESTORS table. Concatenating these ancestor folders gives the report's full path: /SAS Content/NLS/Cindy/.

Folder path detailed in the ANCESTORS table
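
If you would rather assemble the path outside of SAS, here is a minimal Python sketch of the same ancestor lookup using the requests package. The host and token are placeholders, the report id is the one used above, and the field names follow the ANCESTORS table shown here; depending on the order in which the service returns the ancestors, you may need to reverse the list before joining.

import requests

VIYA_HOST = "https://viya.example.com"   # placeholder
ACCESS_TOKEN = "<OAuth access token>"    # placeholder
REPORT_URI = "/reports/reports/cecac7d7-b957-412e-9709-a3fe504f00b1"

resp = requests.get(
    f"{VIYA_HOST}/folders/ancestors",
    params={"childUri": REPORT_URI},
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/vnd.sas.content.folder.ancestor+json",
    },
)
resp.raise_for_status()

# Join the ancestor folder names root-first into a /SAS Content/... style path.
names = [a["name"] for a in resp.json().get("ancestors", [])]
path = "/" + "/".join(reversed(names)) + "/"
print(path)   # e.g. /SAS Content/NLS/Cindy/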

Get the path for VA reports

Now that I have several reports, I need to go through the steps above for each one. So, I wrote SAS code to handle this:

  1. Filter those reports named 'Report 2', using the reports API in Visualization service. Save the list of reports in the ds_rpts dataset. The results include metadata for report id, name, createdBy, CreatedAt, and LastModified.
  2. For each report in the ds_rpts data set, call the macro named 'save_VA_Report_Path(reportURI)'. The macro accesses the Folders API in Core Services, and saves the path for a given report back in the rptPath column of the ds_rpts data set.
  3. Print the list of reports with path and other metadata.

The code yields the following output:

List of reports with paths and metadata

You can access my code samples on GitHub and give them a try in your environment. I ran the code with SAS Studio 5.2 and VA on SAS Viya 3.5. You may want to modify the filter condition as needed (such as createdBy, contains, or more from the SAS Viya support filter documentation).

Finally

The Reports API is one of many SAS Viya REST APIs. In this post, I've provided multiple discovery paths to follow. You can find more information about this and other APIs on the SAS Viya REST APIs page on the developers portal.

Discover Visual Analytics Report Paths with REST APIs was published on SAS Users.

7月 072020
 

Are you looking for a specific CAS action to use in your project? Maybe you need to create a linear or logistic regression and can't seem to find the CAS action? In this post in the Getting Started with Python Integration to SAS® Viya® series, we are going to look at exploring and loading CAS action sets.

In this example, I have already made my connection to CAS using SAS Viya for Learners, and my connection object is named conn. Visit Part 1 of the series to learn more about making a connection to CAS.

Exploring CAS Action Sets

To explore CAS action sets you can use the actionSetInfo CAS action. This should look familiar from Part 2 - Working with CAS Actions and CASResults Objects.

conn.actionsetinfo()['setinfo']

Your results might differ depending on your SAS Viya installation.

The results show the default CAS action sets loaded with SAS Viya. However, many more are available. To see all available CAS action sets use the parameter all=True. The results from running the command with the all=True parameter includes both loaded and available action sets.

conn.actionsetinfo(all=True)['setinfo']

Your results might differ depending on your SAS Viya installation.

As a result of the all=True parameter, we can see over a hundred available CAS action sets. The next question is, how do I load an action set?

Loading a CAS Action Set

I think of loading an action set like importing a package in Python.  To load an action set you need to complete two tasks:

  1. Find the action set
  2. Load the action set

Find the action set

First, let's find the action set. Let's assume we need to complete a logistic regression but are unsure of where the necessary CAS action is located. Instead of manually scrolling through the CASResults object from the actionSetInfo CAS action, we can search the SASDataFrame (summarized data from CAS) for keywords.

First,  I'll set the SASDataFrame from the CASResults object equal to the variable df by calling the setinfo key.

df = conn.actionsetinfo(all=True)['setinfo']

Next, I'll use the loc method on the SASDataFrame to search for any action set that contains the string regression. There are a variety of ways to do this; I'll use the string contains method. I'll make the search case insensitive by using case=False.

df.loc[df['actionset'].str.contains('regression', case=False), :]

Great! In the results I see a regression CAS action set. That looks exactly like what I need. Next, it's time to load the action set.

Load the action set

To load the action use the loadActionSet action with the actionSet='regression' parameter.

conn.loadActionSet(actionSet='regression')


That's it! You have now loaded a CAS action set! Finally, let's explore the actions inside the regression CAS action set.

Exploring CAS Actions in a CAS Action Set

Once a CAS action set is loaded, you can explore the available CAS actions by using the help action with the parameter actionSet='regression'.

conn.help(actionSet='regression')

The results display a list of all the CAS actions in the regression action set. I see the logistic CAS action that I need!
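
To recap, here is the whole flow from this post in one short block. It assumes the connection object conn from Part 1 and a SAS Viya deployment where the regression action set is available:

# All available action sets, loaded or not
df = conn.actionsetinfo(all=True)['setinfo']

# Find any action set whose name mentions "regression"
print(df.loc[df['actionset'].str.contains('regression', case=False), ['actionset', 'label']])

# Load the regression action set and list the actions it contains
conn.loadActionSet(actionSet='regression')
conn.help(actionSet='regression')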

Summary

In conclusion, exploring and loading CAS action sets is important when working on your projects in SAS Viya. A couple of key points to remember:

  • The actionSetInfo CAS action returns all loaded CAS action sets.
  • The actionSetInfo parameter all=True returns all available CAS action sets.
  • The loadActionSet CAS action loads a CAS action set.

In the next post we will talk about exploring data stored in the CAS environment in Exploring Caslibs.

If you enjoyed this blog post, share it with a friend!

Additional Information and Resources

SAS® Viya® 3.5 Actions and Action Sets by Name and Product

Getting Started with Python Integration to SAS® Viya® Index

Getting Started with Python Integration to SAS® Viya® - Part 3 - Loading a CAS Action Set was published on SAS Users.

6月 192020
 

In the second post of the Getting Started with Python Integration to SAS® Viya® series we will learn about Working with CAS Actions and CASResults Objects. CAS actions are commands sent to the CAS server to run a task, and CASResults objects contain information returned from the CAS server. This post will cover a few basic CAS actions, and how to easily work with the information returned.

CAS Actions Overview

First, you need to understand CAS actions. From performing data preparation, modeling, imputing missing values, or even retrieving information about your CAS session, CAS actions perform a single task on the CAS server. CAS actions are organized with other CAS actions in a CAS action set. CAS action sets contain actions that are based on common functionality.

In the end, I like to think of a CAS action set as a package, and the CAS actions inside it as its methods.

Getting Started with CAS Actions

We will start with a basic CAS action to view all available loaded action sets in the current CAS session. Before you use any CAS action you must connect to the CAS server. For more information on connecting to the CAS server, visit Part 1 of the series.

I have already made my connection to CAS using SAS Viya for Learners, and my connection object is named conn.

display(conn)
CAS('svflhost.demo.sas.com', 5570, 'peter.test@sas.com', protocol='cas', name='py-session-1', session='efff4323-a862-bd6e-beea737b4249')

Next, to view the action sets loaded in your CAS session, use the actionSetInfo CAS action from the Builtins action set on the CAS connection object. CAS actions and CAS action sets are case insensitive, and you do not need to qualify the CAS action with the action set name (although it is best practice to include both). In the example below I qualify the CAS action.

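conn.builtins.actionsetinfo()
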
The actionSetInfo CAS action returns the following output:

Your output information might differ depending on your SAS Viya installation.

The output shows a list of the loaded action sets on the CAS server with additional information for each. With a quick scan, it almost looks like a Pandas DataFrame. However, let's go back and run the actionSetInfo CAS action, but this time use the Python type function to see the data type of the output.

type(conn.builtins.actionsetinfo())
swat.cas.results.CASResults

The actionSetInfo CAS action returns a CASResults object. The million dollar question is, what exactly is a CASResults object?

CASResults Object

CAS actions return a CASResults object. A CASResults object is an ordered Python dictionary with additional methods and attributes. There are no specific rules about how many keys are contained in the CASResults object, or what values the keys return. That information depends on the CAS action used.

One thing to note is that CASResults objects are local on your client machine. That is, the data is not in CAS anymore; it has been processed by CAS and returned locally. This is something to keep in mind when you are working with big data in CAS and try to return more data than your local computer can handle.

Let's continue by examining the output from our previous example. The CASResults object contains one key named setinfo with one value. You can quickly tell by looking at the output.

However, another way to view all the keys in the CASResults object is to use the Python dictionary keys method. In this example, it will return one key named setinfo.

conn.actionsetinfo().keys()
odict_keys(['setinfo'])

Notice how I specified the CAS action in the example above: this time I did not qualify it with the Builtins CAS action set. You can specify it with or without the action set name, but I recommend being consistent. I typically leave the action set name off, and I'll stick with that convention moving forward.

To view a specific value of a CASResults object you can call the key like a typical Python dictionary. In this example you can call the setinfo key to return the key's value.

conn.actionsetinfo()['setinfo']

As mentioned earlier, the output seems to resemble a Pandas DataFrame. To confirm our suspicion, we can use the type function to see the data type of the value returned by the setinfo key:

type(conn.actionsetinfo()['setinfo'])
swat.dataframe.SASDataFrame

Interesting! The CASResults object contains a SASDataFrame for the setinfo key. What exactly is a SASDataFrame?

Understanding a SASDataFrame

A SASDataFrame is a subclass of a Pandas DataFrame, so you can work with it as you normally would a Pandas DataFrame! Another thing to note is that a SASDataFrame is local data. Again, CAS can handle far more data than your local computer can, so when bringing data back locally make sure it will fit on your machine. We will discuss this in future posts.
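
If you want to verify that relationship yourself, a quick check works; the module path below matches the type output shown earlier:

import pandas as pd
from swat.dataframe import SASDataFrame

# SASDataFrame inherits from pandas.DataFrame, so this prints True.
print(issubclass(SASDataFrame, pd.DataFrame))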

Let's work with the SASDataFrame. First, I will create a new variable named df that holds the SASDataFrame from the actionSetInfo action.

df = conn.actionsetinfo()['setinfo']

Next, I'll use some Pandas methods on the df object. Let's start with the head method to view the first five rows.

df.head()

Or filter the data using the loc method. In this scenario I want to find all action sets with the name simple, and return the actionset and label columns.

df.loc[df['actionset'] == 'simple', ['actionset', 'label']]

Or count the unique values of the product_name column by using the value_counts method.

df['product_name'].value_counts()
tkcas        8
crsstat      3
crssearch    1

Or...

No that's it. I think you get my point! You can work with a SASDataFrame object like you would a Pandas DataFrame!

CASResults Object With Multiple Keys

Lastly, let's use another CAS action, but this time the CASResults object contains multiple keys, with multiple value types.

In this example I use the serverStatus CAS action from the Builtins CAS action set. The CAS action returns the status of the server.

conn.serverstatus()

From the looks of the output, I see there are three keys: About, server, and nodestatus. However, let's check our assumption by using the keys method on the CASResults object.

conn.serverstatus().keys()
odict_keys(['About', 'server', 'nodestatus'])

In the output there are three keys as expected. Let's look at the data type of each key's value by writing a quick Python loop that prints each key name and the type of the value it returns.

for key,value in conn.serverstatus().items():
      print('Key : {}, Value Type : {}'.format(key,type(value)))
Key : About, Value Type : <class 'dict'>
Key : server, Value Type : <class 'swat.dataframe.SASDataFrame'>
Key : nodestatus, Value Type : <class 'swat.dataframe.SASDataFrame'>

The results of the serverStatus CAS action contain three keys: one holds a dictionary object, and the other two hold SASDataFrames.

Summary

In conclusion, understanding CAS actions and the CASResults objects are essential when working with CAS. A couple of key points to remember:

  • CAS actions reside in CAS action sets and perform a single task on the CAS server.
  • CAS actions return a CASResults object, which is simply a Python dictionary.
  • A CASResults object contains a single or multiple keys, corresponding to a value or values of any data type.
  • A SASDataFrame resides in a CASResults object, and is a subclass of a Pandas DataFrame.

That was a quick overview of a few basic CAS actions, and how to work with the CASResults objects they return.  You can check out the SAS documentation below for all available CAS Actions.

Additional Resources

SAS® Viya® 3.5 Actions and Action Sets by Name and Product

Getting Started with Python Integration to SAS® Viya® Series Index

Getting Started with Python Integration to SAS® Viya® - Part 2 - Working with CAS Actions and CASResults Objects was published on SAS Users.