SAS Viya

March 30, 2018
 

As a follow-on from my previous blog post, where we looked at the different use cases for using Kerberos in SAS Viya 3.3, in this post I want to delve into more detail on configuring Kerberos delegation with SAS Viya 3.3. SAS Viya 3.3 supports the use of Kerberos delegation to authenticate to SAS Logon Manager and then use the delegated credentials to access SAS Cloud Analytic Services. This was the first use case we illustrated in the previous blog post.

As a reminder this is the scenario we are discussing in this blog post:

Kerberos Delegation

In this post we’ll examine:

  • The implications of using Kerberos delegation.
  • The prerequisites.
  • How authentication is processed.
  • How to configure Kerberos delegation.

Why would we want to configure Kerberos delegation for SAS Viya 3.3? Kerberos will provide us with a strong authentication mechanism for the Visual interfaces, SAS Cloud Analytic Services, and Hadoop in SAS Viya 3.3. With Kerberos enabled, no end-user credentials will be sent from the browser to the SAS Viya 3.3 environment. Instead Kerberos relies on a number of encrypted tickets and a trusted third party to provide authentication. Equally, leveraging Kerberos Delegation means that both the SAS Cloud Analytic Services session and the connection to Hadoop will all be running as the end-user. This better allows you to trace operations to a specific end-user and to more thoroughly apply access controls to the end-user.

Implications

Configuring Kerberos delegation will involve configuring Kerberos authentication for both the Visual interfaces and SAS Cloud Analytic Services. First, we’ll look at the implications for the Visual interfaces.

Once we configure Kerberos for authentication of SAS Logon Manager it replaces the default LDAP provider for end-users. This means that the only way for end-users to authenticate to SAS Logon Manager will be with Kerberos. In SAS Viya 3.3 there is no concept of fallback authentication.

Kerberos will be our only option for end-user authentication and we will be unable to use the sasboot account to access the environment. Configuring Kerberos authentication for SAS Logon Manager will be an all-or-nothing approach.

While the web clients will be using Kerberos for authentication, any client using the OAuth API directly will still use the LDAP provider. This means when we connect to SAS Cloud Analytic Services from SAS Studio (which does not integrate with SAS Logon) we will still be obtaining an OAuth token using the username and password of the user accessing SAS Studio.

If we make any mistakes when we configure Kerberos, or if we have not managed to complete the prerequisites correctly, the SAS Logon Manager will not start correctly. The SAS Logon Manager bootstrap process will error and SAS Logon Manager will fail to start. If SAS Logon Manager fails to start then there is no way to gain access to the SAS Viya 3.3 visual interfaces. In such a case the SAS Boot Strap configuration tool must be used to repair or change the configuration settings.

Finally, remember using Kerberos for SAS Logon Manager does not change the requirement for the identities microservice to connect to an LDAP provider. Since the identities microservice is retrieving information from LDAP about users and groups, we need to ensure the username part of the Kerberos principal for the end-users matches the username returned from LDAP. SAS Logon Manager will strip the realm from the user principal name and use this value in the comparison.

Then considering SAS Cloud Analytic Services, we will be adding Kerberos to the other supported mechanisms for authentication. We will not replace the other mechanisms the way we do for SAS Logon Manager. This means we will not prevent users from connecting with a username and password from the Programming interfaces. As with the configuration of SAS Logon Manager, issues with the configuration can cause SAS Cloud Analytic Services to fail to start. Therefore, it is recommended to complete the configuration of SAS Cloud Analytic Services after the deployment has completed and you are certain things are working correctly.

Prerequisites

To be able to use Kerberos delegation with SAS Viya 3.3 a number of prerequisites need to be completed.

Service Principal Name

First, a Kerberos Service Principal Name (SPN) needs to be registered for both the HTTP service class and the sascas service class. This will take the form <service class>/<HOSTNAME>, where <HOSTNAME> is the value that clients will use to request a Kerberos Service Ticket. In most cases for HTTP the <HOSTNAME> will just be the fully qualified hostname of the machine where the Apache HTTP Server is running. If you are using aliases or alternative DNS registrations, then finding the correct name to use might not be so straightforward. For SAS Cloud Analytic Services, the <HOSTNAME> will be the CAS Controller hostname.

By registering we mean that each Service Principal Name must be provided to the Kerberos Key Distribution Center (KDC). If we are using Microsoft Active Directory, each SPN must be registered against an object in the Active Directory database. Objects that can have an SPN registered against them are users or computers. We recommend using a user object in Active Directory to register each SPN against, and we recommend using different users for HTTP and CAS.

So, we have two service accounts in Active Directory and we register the SPN against each service account. There are different ways the SPN can be registered in Active Directory. The administrator could perform these tasks manually using the GUI, with an LDAP script or PowerShell script, with the setspn command, or with the ktpass command. Using these tools, multiple SPNs can be registered against the service account, which is useful if there are different hostnames the end-users might use to access the service. In most cases using these tools will only register the SPN; however, using the ktpass command will also change the User Principal Name for the service account. More on this shortly.
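For example, on Windows the setspn command can register each SPN against its service account. A minimal sketch, assuming service accounts named svc-viya-http and svc-viya-cas in a COMPANY domain (all names here are placeholders):

rem Register the HTTP SPN against the HTTP service account
setspn -S HTTP/viya.company.com COMPANY\svc-viya-http

rem Register the CAS SPN against the CAS service account
setspn -S sascas/cascontroller.company.com COMPANY\svc-viya-cas

rem Verify the registrations against each account
setspn -L COMPANY\svc-viya-http
setspn -L COMPANY\svc-viya-cas

The -S option checks for duplicate SPNs before adding, which helps avoid the hard-to-diagnose failures that duplicate registrations can cause.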

As an alternative to Microsoft Active Directory, customers could be using a different Kerberos KDC, such as MIT Kerberos or Heimdal Kerberos. For these implementations of Kerberos there is no difference between a user and a service. The database used by these KDCs just stores information on principals and does not distinguish between a User Principal Name and a Service Principal Name.

Trusted for Delegation

For the Kerberos authentication to be delegated from SAS Logon Manager to SAS Cloud Analytic Services, and then from SAS Cloud Analytic Services to Secured Hadoop, the two service accounts that have the SPNs registered against them must be trusted for delegation. Without this, the scenario will not work. You can only specify that an account is trusted for delegation after the Service Principal Name has been registered; the option is not available until you have completed that step. The picture below shows an example of the delegation settings in Active Directory.

If the Secured Hadoop environment is configured using a different Kerberos Key Distribution Center (KDC) than the rest of the environment, this will not prevent the end-to-end scenario from working, but it will add further complexity. You will need to ensure there is a cross-realm trust configured to the Hadoop KDC for the end-to-end scenario to work.

Kerberos Keytab

Once you have registered each of the SPNs you’ll need to create a Kerberos keytab for each service account. Again, there are multiple tools available to create the Kerberos keytab. We recommend using the ktutil command on Linux, since this is independent of the KDC and makes no changes to the Kerberos database when creating the keytab. Some tools like ktpass will make changes when generating the keytab.
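For example, an interactive ktutil session along the following lines creates a keytab for the HTTP principal; the principal name, key version number, encryption type, and output path are all placeholders, and the same steps would be repeated for the sascas principal:

ktutil
ktutil:  addent -password -p HTTP/viya.company.com@COMPANY.COM -k 1 -e aes256-cts-hmac-sha1-96
Password for HTTP/viya.company.com@COMPANY.COM:
ktutil:  wkt /etc/viya_http.keytab
ktutil:  quit

The encryption types added must match those supported by your KDC, and the key version number (-k) must match the current version held by the KDC.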

In the Kerberos keytab we need to have the User Principal Name (UPN) and the associated Kerberos keys for that principal. The Kerberos keys are essentially encrypted versions of the password for the principal. As discussed above for the SPN, the UPN in the Kerberos keytab can take different forms depending on the tool used to register the SPN.

When using ktpass to register the SPN and create the keytab in a single step, the UPN of the account in Active Directory will be set to the same value as the SPN, whilst using the setspn command or performing the task manually will leave the UPN unchanged. Equally, for MIT Kerberos or Heimdal Kerberos, since there is no differentiation between principals, the UPN for the keytab will be the SPN registered with the KDC.

Once the Kerberos keytabs have been created they will need to be made available to any hosts with the corresponding service deployed.

Kerberos Configuration File

Finally, as far as prerequisites are concerned we might need to provide a Kerberos configuration file for the host where SAS Logon Manager is deployed. This configuration should identify the default realm and other standard Kerberos settings. The Kerberos implementation in Java should be able to use network queries to find the default realm and Kerberos Key Distribution Center. However, if there are issues with the network discovery, then providing a Kerberos configuration file will allow us to specify these options.

The Kerberos configuration file should be placed in the standard location for the operating system. So on Linux this would be /etc/krb5.conf. If we want to specify a different location we can also specify a JVM option to point to a different location. This would be the java.security.krb5.conf option. Equally, if we cannot create a Kerberos configuration file we could set the java.security.krb5.realm and java.security.krb5.kdc options to identify the Kerberos Realm and Kerberos Key Distribution Center. We’ll show how to set JVM options below.
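A minimal krb5.conf along the following lines covers the default realm and KDC; the realm and hostnames are placeholders:

[libdefaults]
    default_realm = COMPANY.COM

[realms]
    COMPANY.COM = {
        kdc = kdc.company.com
        admin_server = kdc.company.com
    }

[domain_realm]
    .company.com = COMPANY.COM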

Authentication Process

The process of authenticating an end-user is shown in the figure below:

Where the steps are:

A.  Kerberos used to authenticate to SAS Logon Manager. SAS Logon Manager uses the Kerberos Keytab for HTTP/<HOSTNAME> to validate the Service Ticket. Delegated credentials are stored in the Credentials microservice.
B.  Standard internal OAuth connection to SAS Cloud Analytic Services, where the origin field in the OAuth token includes Kerberos and the claims include the custom group ID "CASHostAccountRequired".
C.  The presence of the additional Kerberos origin causes SAS Cloud Analytic Services to get the CAS client to make a second connection attempt using Kerberos. The Kerberos credentials for the end-user are obtained from the Credentials microservice. SAS Cloud Analytic Services Controller uses the Kerberos Keytab for sascas/<HOSTNAME> to validate the Service Ticket and authenticate the end-user. Delegated credentials are placed in the end-user ticket cache.
D.  SAS Cloud Analytic Services uses the credentials in the end-user ticket cache to authenticate as the end-user to the Secured Hadoop environment.

Configuration

Kerberos authentication must be configured for both SAS Logon Manager and SAS Cloud Analytic Services. Also, any end-user must be added to a new custom group.

SAS Logon Manager Configuration

SAS Logon Manager is configured in SAS Environment Manager.

Note: Before attempting any configuration, ensure at least one valid LDAP user is a member of the SAS Administrators custom group.

The configuration settings are within the Definitions section of SAS Environment Manager. For the sas.logon.kerberos definition you need to set the properties that identify the HTTP service principal and the location of the Kerberos keytab file. For more information, see the SAS Viya administration documentation.

SAS Logon Manager will need to be restarted for any new JVM options to be picked up. The same method of adding name/value pairs can be used to set the JVM options for identifying the Kerberos Realm and KDC, where we would add the following:

  • Name = java_option_krb5realm
  • Value = -Djava.security.krb5.realm=<REALM>
  • Name = java_option_krb5kdc
  • Value = -Djava.security.krb5.kdc=<KDC HOSTNAME>

Or for setting the location of the Kerberos configuration file where we would add:

  • Name = java_option_krb5conf
  • Value = -Djava.security.krb5.conf=/etc/krb5.conf

SAS Cloud Analytic Services Configuration

The configuration for SAS Cloud Analytic Services is not performed in SAS Environment Manager; it is completed by changing files on the file system. The danger of changing files on the file system is that re-running the deployment Ansible playbook might overwrite any changes you make. The choices you have are to either remake any changes to the file system, make the changes to both the file system and the playbook files, or make the changes in the playbook files and re-run the playbook to change the file system. Here I will list the changes in both the configuration files and the playbook files.

There is only one required change and then two optional changes. The required change is to define the authentication methods that SAS Cloud Analytic Services will use. In the file casconfig_usermods.lua located in:

/opt/sas/viya/config/etc/cas/default

Add the following line:

cas.provlist = 'oauth.ext.kerb'

Note: Unlike the SAS Logon Manager option above, this is separated with full-stops!

In the same file we can make two optional changes. These optional changes enable you to override default values. The first is the default Service Principal Name that SAS Cloud Analytic Services will use. If you cannot use sascas/<HOSTNAME> you can add the following to the casconfig_usermods.lua:

-- Add Env Variable for SPN
env.CAS_SERVER_PRINCIPAL = 'CAS/HOSTNAME.COMPANY.COM'

This sets an environment variable with the new value of the Service Principal Name. The second optional change is to set another environment variable. This will allow you to put the Kerberos Keytab in any location and call it anything. The default name and location is:

/etc/sascas.keytab

If you want to put the keytab somewhere else or call it something else, add the following to the casconfig_usermods.lua:

-- Add Env Variable for keytab location
env.KRB5_KTNAME = '/opt/sas/cas.keytab'

These changes can then be reflected in the vars.yml within the playbook by adding the following to the CAS_CONFIGURATION section:

CAS_CONFIGURATION:
   env:
     CAS_SERVER_PRINCIPAL: 'CAS/HOSTNAME.COMPANY.COM'
     KRB5_KTNAME: '/opt/sas/cas.keytab'
   cfg:
     provlist: 'oauth.ext.kerb'

With this in place we can restart the SAS Cloud Analytic Services Controller to pick up the changes.

Custom Group

If you attempted to test accessing SAS Cloud Analytic Services at this point from the Visual interfaces as an end-user, you would see that credentials were not being delegated and the CAS session was not running as the end-user. The final step is to create a custom group in SAS Environment Manager. This custom group can be called anything, perhaps "Delegated Users", but the ID for the group must be "CASHostAccountRequired". Without this, the CAS session will not be run as the end-user and delegated Kerberos credentials will not be used to launch the session.

Summary

What we have outlined in this article is the new feature of SAS Viya 3.3 that enables Kerberos delegation throughout the environment. It allows you to have end-user sessions in SAS Cloud Analytic Services that are able to use Kerberos to connect to Secured Hadoop. I hope you found this helpful.

SAS Viya 3.3 Kerberos Delegation was published on SAS Users.

March 30, 2018
 

Multi Node Data Transfer

With SAS Viya 3.3, a new data transfer mechanism, "MultiNode Data Transfer", has been introduced to transfer data between the data source and SAS Cloud Analytic Services (CAS), in addition to the Serial and Parallel data transfer modes. The new mechanism is an extension of the Serial Data Transfer mode. In MultiNode Data Transfer mode, each CAS Worker makes a concurrent connection to read and write data from the source DBMS or distributed data system.

In CAS, SAS Data Connectors are used for Serial mode and SAS Data Connect Accelerators are used for Parallel mode data transfer between CAS and a DBMS. The SAS Data Connector can also be used for the MultiNode data transfer mechanism. In a multi-node CAS environment, when the Data Connector is installed on all nodes, it can take advantage of the multi-node environment and make concurrent data access connections from each CAS Worker to read and write data from the data source environment.

The CAS Controller controls the MultiNode Data Transfer. It directs each CAS Worker node on how to query the source data and obtain the needed data. The CAS Controller checks the source data table for the first numeric column and uses its values to divide the table into slices using a MOD function with the number of CAS nodes specified. The higher the cardinality of the selected numeric column, the more evenly the data can be divided into slices. If CAS chooses a low-cardinality column, you could end up with poor data distribution on the CAS Worker nodes. The CAS Controller directs each CAS Worker to submit a query to obtain its slice of data. During this process, each CAS Worker makes an independent, concurrent request to the data source environment.

Data is transferred from the source environment to the CAS worker nodes directly using a single thread connection, bypassing the CAS Controller.

The following diagram describes the data access from CAS to the data source environment using MultiNode Data Transfer mode. CAS is hosted on a multi-node environment with the SAS Data Connector installed on each node (CAS Controller and Workers). A CASLIB is defined with NUMREADNODES= and NUMWRITENODES= values other than 1. With each data table access request, the CAS Controller scans through the source data table for the first numeric column and uses its values to prepare a query for each CAS Worker to run. Each CAS Worker node then submits an individual query to get its slice of the data, something like:

Select * from SourceTable where mod(NumericField, NUMREADNODES) = WorkerNodeNumber

The data moves from the DBMS gateway server to each CAS Worker node directly using a single-thread connection, bypassing the CAS Controller. It's a kind of parallel load using the serial mechanism, but it's not a massively parallel data load. Notice the bottleneck at the DBMS gateway server: the data transfer always passes through the DBMS gateway server to the CAS Worker nodes.
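For example, with three read nodes each CAS Worker would issue a query of this shape, with the table and column names purely illustrative:

select * from SourceTable where mod(NumericField, 3) = 0   /* worker 0 */
select * from SourceTable where mod(NumericField, 3) = 1   /* worker 1 */
select * from SourceTable where mod(NumericField, 3) = 2   /* worker 2 */

Each query returns a disjoint slice of the table, so together the workers read the complete table exactly once.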

Multi Node Data Transfer

Prerequisites to enable MultiNode Data Transfer include:

  • The CAS environment is a multi-node environment (multiple CAS Worker Nodes).
  • The SAS Data Connector for the data source is installed on each CAS Worker, and Controller Node.
  • The data source client connection components are installed on each CAS Worker, and Controller Node.

By default, the SAS Data Connector uses serial data transfer mode. To enable MultiNode Data Transfer mode you must use the NUMREADNODES= and NUMWRITENODES= parameters in the CASLIB statement and specify a value other than 1. If the value is specified as 0, CAS will use all available CAS Worker nodes. MultiNode Data Transfer mode can use only the number of available nodes; if you specify more than the available nodes, the log prints a warning message.

The following code example describes the data load using "MultiNode" data transfer mode. It assigns a CASLIB using serial mode with NUMREADNODES=10 and NUMWRITENODES=10 and loads data from a Hive table to CAS. As the NUMREADNODES= value is other than 1, it follows the MultiNode mechanism. You can see in the log a warning message stating that the number of read nodes exceeds the available Worker nodes. This is one way to verify whether CAS is using MultiNode data transfer mode: specify a number higher than the available CAS Worker nodes. If you specify NUMREADNODES=0, it will use all available nodes, but there is no note or warning message in the SAS log about multi-node usage.

CAS mySession SESSOPTS=( CASLIB=casuser TIMEOUT=99 LOCALE="en_US" metrics=true);
caslib HiveSrl datasource=(srctype="hadoop",
server="xxxxxxx.xxx",
username="hadoop",
dataTransferMode="SERIAL",
NUMREADNODES=10, 
NUMWRITENODES=10,
hadoopconfigdir="/opt/MyHadoop/CDH/Config",
hadoopjarpath="/opt/MyHadoop/CDH/Jars",
schema="default");
proc casutil;
load casdata="prdsal2_1G" casout="prdsal2_1G"
outcaslib="HiveSrl" incaslib="HiveSrl" ;
quit;

SAS Log extract:

….
77 proc casutil;
78 ! load casdata="prdsal2_1G" casout="prdsal2_1G"
79 outcaslib="HiveSrl" incaslib="HiveSrl" ;
NOTE: Executing action 'table.loadTable'.
NOTE: Performing serial LoadTable action using SAS Data Connector to Hadoop.
WARNING: The value of numReadNodes(10) exceeds the number of available worker nodes(7). The load will proceed with numReadNodes=7. 
…
..

On the database side, in this case Hive, note the queries submitted by the CAS Worker nodes. Each includes the MOD function WHERE clause as described above.

On Hadoop Resource Manager User Interface you can notice the corresponding job execution for each query submitted by CAS worker nodes.

When using MultiNode mode to load data to CAS, data distribution depends on the cardinality of the numeric column selected by CAS for the MOD function operation. You can see that the CAS data distribution for the above loaded table is not ideal, since CAS selected a column ('year') that is poorly suited (in this case) for data distribution across the CAS Worker nodes. There is no option with the MultiNode mechanism to specify a column name to be used for query preparation and, eventually, for data distribution.

If CAS cannot find suitable columns for MultiNode data transfer mode, it will use standard Serial mode to transfer data as shown in the following log:

……..
74
74 ! load casdata="prdsal2_char" casout="prdsal2_char"
75 outcaslib="HiveSrl" incaslib="HiveSrl" ;
NOTE: Executing action 'table.loadTable'.
NOTE: Performing serial LoadTable action using SAS Data Connector to Hadoop.
WARNING: The value of numReadNodes(10) exceeds the number of available worker nodes(7). The load will proceed with numReadNodes=7.
WARNING: Unable to find an acceptable column for multi-node reads. Load will proceed with numReadNodes = 1. 
NOTE: Cloud Analytic Services made the external data from prdsal2_char available as table PRDSAL2_CHAR in caslib HiveSrl.
…….

The following data platforms are supported with MultiNode Data Transfer using the Data Connector:

  • Hadoop
  • Impala
  • Oracle
  • PostgreSQL
  • Teradata
  • Amazon Redshift
  • DB2
  • MS SQL Server
  • SAP HANA

The order of data types that SAS uses to divide data into slices for a MultiNode Data Read is:

  • INT (includes BIGINT, INTEGER, SMALLINT, TINYINT)
  • DECIMAL
  • NUMERIC
  • DOUBLE

Multi-Node Write:

While this post focused on loading data from a data source into CAS, multi-node data transfer also works when saving from CAS back to the data source. The important parameter when saving is NUMWRITENODES instead of NUMREADNODES. The behavior of multi-node saving is similar to that of multi-node loading.
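For instance, a save along the following lines would exercise the multi-node write path; this is a sketch reusing the caslib and table from the earlier example, with an illustrative output table name:

proc casutil;
   save casdata="prdsal2_1G" casout="prdsal2_1G_out"
   outcaslib="HiveSrl" incaslib="HiveSrl";
quit;

Because the caslib was defined with NUMWRITENODES=10, each available CAS Worker writes its own portion of the table back to Hive.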

Summary:

The SAS Data Connector can be used for MultiNode data transfer by installing the Data Connector and DBMS client components on all CAS Worker nodes without additional license fees. The source data is transferred directly from the DBMS gateway server to the CAS Worker nodes, divided up by a simple MOD function. With this mechanism, optimal data distribution across the CAS nodes is not guaranteed. It's suggested to use all CAS Worker nodes by specifying NUMREADNODES=0 when loading data to CAS using MultiNode mode.

Multi Node Data Transfer to CAS was published on SAS Users.

March 29, 2018
 

With the release of SAS Viya 3.3, you now have the ability to pass implicit SQL queries to a variety of SQL data sources, including Hive. Under an implicit pass-through, users can write SAS compliant SQL code, and SAS will:

  1. Convert as much code as possible into database native SQL.
  2. Execute the resulting query in-database.
  3. Bring the result back into SAS Viya.

My SAS Viya environment is co-located within a Hortonworks Hadoop environment. Within this environment, I have set up multiple tables within Hive, which provides structure and a query-like environment for Hadoop data. Using the SAS Data Explorer in SAS Viya, I can easily see the different tables in the Hive environment, and visually inspect them without having to load the data into SAS. The screenshot below shows the Hive table va_service_detail, which contains anonymous data related to recent hospital stays.

SQL Pass-through to Hive in SAS Viya

In my Hive environment, I have a second table called va_member_detail, which contains information about the individuals who were hospitalized in the above table, va_service_detail. A summary of this Hive table can be found in the screenshot below.

Using this data, I would like to perform an analysis to determine why patients are readmitted to the hospital, and understand how we can preventatively keep patients healthy. I will need to join these two tables to allow me to have visit-level and patient-level information in one table. Since medical data is large and messy, I would like to only import the needed information into SAS for my analysis.  The simplest way to do this is through an implicit SQL pass-through to Hive, as shown below:
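The original post showed this query as a screenshot; a minimal sketch of such a FedSQL query follows, in which the output table name, the selected columns, and the join key (member_id) are assumptions for illustration:

proc fedsql;
   create table work.va_hospital_stays as
   select s.*, m.member_age, m.member_gender
   from hadoop.va_service_detail s
   inner join hadoop.va_member_detail m
      on s.member_id = m.member_id;
quit;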

With an implicit pass-through, I write normal SAS FedSQL code on top of a SAS Library called “Hadoop” pointing to my Hive Server. Once the code is submitted, the SAS System performs the following steps:

  1. Translates the SAS FedSQL code into HiveQL.
  2. Executes the HiveQL script in Hive.
  3. Loads the resulting data in parallel into SAS.

Looking at the log, we can see that the SQL statement was "Fully offloaded to the underlying data source via full pass-through", meaning that SAS successfully executed the query, in its entirety, in Hive. With the SAS Embedded Process for Hadoop, the resulting table is then lifted in parallel from Hive into SAS Viya, making it available for analysis.

As we can see in the log, it took 42 seconds to execute the query in Hive, and bring the result into SAS. To compare efficiency, I redid the analysis, loading va_service_detail and va_member_detail into the memory of the SAS server and performed the join there. The execution took 58 seconds, but required three in-memory tables to do so, along with much more data passing through the network. The implicit pass-through has the benefits of increased speed and decreased latency in data transfer by pushing the query to its source, in this case Hive.

Conclusion

The Implicit SQL Pass-through to Hive in SAS Viya is a must have tool for any analyst working with Hadoop data. With normal SQL syntax in a familiar SAS interface, analysts can push down powerful queries into Hive, speeding up their analysis while limiting data transfer. Analysts can effectively work with large ever-growing data sizes, and speed up the time to value on solving key business challenges.

Implicit SQL Pass-through to Hive in SAS Viya was published on SAS Users.

March 17, 2018
 

This is a continuation of my previous blog post on SAS Data Studio and the Code transform. In this post, I will review some additional examples of using the Code transform in a SAS Data Studio data plan to help you prepare your data for analytic reports and/or models.

Create a Unique Identifier Example

The DATA step code below combines the _THREADID_ and the _N_ variables to create a UniqueID for each record.

SAS Data Studio Code Transform
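The original code appeared as a screenshot; a minimal sketch of the idea follows, where the input and output table names are placeholders for the tables selected in the data plan:

data out_table;
   set in_table;
   length UniqueID $ 40;
   /* _THREADID_ identifies the thread and _N_ counts rows within it,
      so together they are unique across the whole table. */
   UniqueID = catx('_', _THREADID_, _N_);
run;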

The variable _THREADID_ returns the number that is associated with the thread that the DATA step is running in a server session. The variable _N_ is an internal system variable that counts the iterations of the DATA step as it automatically loops through the rows of an input data set. The _N_ variable is initially set to 1 and increases by 1 each time the DATA step loops past the DATA statement. The DATA step loops past the DATA statement for every row that it encounters in the input data. Because the DATA step is a built-in loop that iterates through each row in a table, the _N_ variable can be used as a counter variable in this case.

_THREADID_ and _N_ are variables that are created automatically by the SAS DATA step and saved in memory. For more information, refer to the SAS documentation on automatic DATA step variables.

Cluster Records Example

The DATA step code below combines the _THREADID_ and the counter variables to create a unique ClusterNum for each BY group.
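A sketch of the approach, with a placeholder BY variable name:

data out_table;
   set in_table;
   by cust_group;
   retain counter 0;
   /* Increment the counter at the start of each BY group, then
      combine it with the thread number for a unique cluster number. */
   if first.cust_group then counter = counter + 1;
   ClusterNum = catx('_', _THREADID_, counter);
run;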

This code uses the concept of FIRST.variable to increase the counter if it is the beginning of a new grouping. FIRST.variable and LAST.variable are variables that CAS creates for each BY variable. CAS sets FIRST.variable when it is processing the first observation in a BY group, and sets LAST.variable when it is processing the last observation in a BY group. These assignments enable you to take different actions, based on whether processing is starting for a new BY group or ending for a BY group. For more information, refer to the SAS documentation on BY-group processing.

De-duplication Example

The DATA step code below outputs the last record of each BY group, thereby de-duplicating the data set by writing out only one record per grouping.
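A sketch of the de-duplication step, again with a placeholder BY variable:

data out_table;
   set in_table;
   by cust_group;
   /* Keep only the last record of each BY group. */
   if last.cust_group then output;
run;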

Below are the de-duplication results on the data set used in the previous Cluster Records Example section.

For more information about the DATA step, refer to the SAS DATA step documentation.

Below is the resulting customers2.xlsx file in the Public CAS library.

For more information on the available action sets, refer to the SAS® Cloud Analytic Services 3.3: CASL Reference guide.

For more information on SAS Data Studio and the Code transform, please refer to the SAS documentation.

SAS Data Studio Code Transform (Part 2) was published on SAS Users.

March 13, 2018
 

SAS Visual Analytics 8.2 introduces the Hidden Data Role. This role can accept one or more category or date data items which will be included in the query results but will not be displayed with the object. You can use this Hidden Data Role in:

  • Mapping Data Sources.
  • Color-Mapped Display Rules.
  • External Links.

Note that this Hidden Data Role is not available for all objects, and a data item cannot be used as both a Hidden Data Role and a Data tip value; it can only be assigned to one of these roles.

In this example, we will look at how to use the Hidden Data Role for an External Link.

Here are a few applications of this example:

  • You want to show an index of available assets, and you have a URL to point directly to that asset.
  • Your company sells products, you want to show a table summary of product profit but have a URL that points to each Product’s development page.
  • As the travel department, you want to see individual travel reports rolled up to owner, but have a URL that can link out to each individual report.

The applications are endless when applied to our customer needs.

In my blog example, I have NFL data for Super Bowl wins. I have attached two columns of URLs for demonstration purposes:

  • One URL is for each Super Bowl event, so I have 52 URLs, one for each row of data.
  • The second URL is for each winning team. There have been 20 unique Super Bowl winning teams, so I have 20 unique URLs.

Hidden Data Role in SAS Visual Analytics

In previous versions of SAS Visual Analytics, if you wanted to link out to one of these URLs, you would have to include it in the visualization like in the List Table shown above. But now, using SAS Visual Analytics 8.2, you can assign a column containing these URLs to the Hidden Data Role and it will be available as an External URL.

Here is our target report. We want to be able to link to the Winning Team’s website.

In Visual Analytics 8.2, for the List Table, assign the Winning Team URL column to the Hidden Data Role.

Then, for the List Table, create a new URL Link Action. Give the Action a name and leave the URL section blank. This is because my data column contains a fully qualified URL. If you were linking to a destination and only needed to append a name value pair, then you could put in the partial URL and pass the parameter value, but that’s a different example.

That example uses the column with the 20 URLs that match the winning teams in the Hidden Data Role. Now, what if we use the column that has the 52 URLs that link out to the individual Super Bowl events?

That’s right, the cardinality of the Hidden Data Role item does impact the object. Even though the Hidden data item is not visible on the Object, remember it is included in the results query; and therefore, the cardinality of the Hidden data item impacts the aggregation of the data.

Notice that some objects will just present an information warning that a duplicate classification of the data has caused a conflict.

In conclusion, the Hidden Data Role is an exciting addition to the SAS Visual Analytics 8.2 release. I know you'll enjoy and benefit from it.

The power behind a Hidden Data Role in SAS Visual Analytics was published on SAS Users.

March 9, 2018
 

SAS Viya 3.3 introduces a set of command-line interfaces that SAS Viya administrators will find extremely useful. The command-line interfaces (CLIs) allow administrators to perform numerous administrative tasks in batch as an alternative to using the SAS Environment Manager interface. In addition, calls to the CLIs can be chained together in scripts to automate more complex administration tasks. In this post I will introduce the administration CLIs and look at a few useful examples.

The sas-admin CLI is the main interface; it acts as a wrapper for the other CLIs. The individual CLIs operate as interfaces to functionality from within sas-admin. The CLIs provide a simplified interface to the SAS Viya REST services. They abstract the functionality of the REST services, allowing an administrator to enter commands on a command line and receive a response back from the system. If the CLIs do not surface all the functionality you need, calls to the REST API can be made to fill in the gaps.

In SAS Viya 3.3 the available interfaces (plug-ins) within sas-admin are:

Plugin          Purpose
audit           Gets SAS audit information.
authorization   Gets general authorization information; creates and manages rules and permissions on folders.
backup          Manages backups.
restore         Manages restore operations.
cas             Manages CAS administration and authorization.
configuration   Manages the operations of the configuration service.
compute         Manages the operations of the compute service.
folders         Gets and manages SAS folders.
fonts           Manages VA fonts.
devices         Manages mobile device blacklist and whitelist actions and information.
identities      Gets identity information; manages custom groups and group membership.
licenses        Manages SAS product license status and information.
job             Manages the operations of the job flow scheduling service.
reports         Manages SAS Visual Analytics 8.2 reports.
tenant          Manages tenants in a multi-tenant deployment.
transfer        Promotes SAS content.

 

The command-line interfaces are located on a SAS Viya machine (any machine in the commandline host group in your ansible inventory file) in the directory /opt/sas/viya/home/bin.

There are two preliminary steps required to use the command-line interface: you need to create a profile and authenticate.

To create a default profile (you can also create named profiles):

sas-admin profile set-endpoint "http://myserver.demo.myco.com"
sas-admin profile set-output text

You can also simply enter the following and respond to the prompts.

sas-admin profile init

The default profile will be stored in the user’s home directory in a file <homedir>/.sas/config.json

The output options range from text, which provides a simplified text output of the result, to full JSON, which provides the complete JSON output returned by the REST call that the CLI submits. The full JSON output is useful if you're piping the output from one command into a tool that expects JSON.

To authenticate:

sas-admin auth login --user sasadm --password ********

The authentication step creates a token in a file stored in the user’s home directory which is valid for, by default, 12 hours.  The file location is <homedir>/.sas/credentials.json.

The syntax of a call to the sas-admin CLI is shown below. The CLI requires an interface (plugin) and a command.

The example shows a call to the identities interface. This command will list all the users who are members of the SAS Administrators custom group.

SAS Viya 3.3 command-line interfaces

In this execution of sas-admin:

  • the interface is identities.
  • there is a global option --output set so that the result is returned in basic text.
  • the command is list-members.
  • the command option --group-id specifies the group whose members you wish to list.
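Putting those pieces together, the full call looks like this (SASAdministrators is the ID of the SAS Administrators custom group):

./sas-admin --output text identities list-members --group-id SASAdministrators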

The built-in help of the CLI’s is a very useful feature.

./sas-admin --help

This command provides help on the commands and interfaces(plugins) available, and the global options that may be used.

You can also display help on a specific interface by adding the interface name and then specifying --help.

./sas-admin authorization --help

Let’s look at an example of using the command-line interface to perform some common administrative tasks. In this example I will:

  • create a new folder that is a sub-folder of an existing folder.
  • create a rule to set authorization on a folder.
  • create and secure a caslib.

Many of the folders commands require the ID of a folder as an argument. The id of the folder is displayed when you create the folder, when you list folders using the CLI and in SAS Environment Manager.

To return a folder id based on its path you can use a rest call to the /folders/folders endpoint. The json that is returned can be parsed to retrieve the id. The folders id can then be used in subsequent calls to the CLI. The rest api call below requests the id of the /gelcontent folder.

curl -X GET "http://myserver.demo.myco.com/folders/folders/@item?path=/gelcontent" -H "Authorization: bearer $TOKEN" | python -mjson.tool

It returns the following json (partial)

{
    "creationTimeStamp": "2017-11-17T15:20:28.563Z",
    "modifiedTimeStamp": "2017-11-20T23:03:19.939Z",
    "createdBy": "sasadm",
    "modifiedBy": "sasadm",
    "id": "e928249c-7a5e-4556-8e2b-7be8b1950b88",
    "name": "gelcontent",
    "type": "folder",
    "memberCount": 2,
    "iconUri": "/folders/static/icon",
    "links": [
        {
            "method": "GET",
            "rel": "self",
NOTE: The authentication token ($TOKEN) in the REST call is read from the credentials.json file created when the user authenticated via sas-admin auth login. To see how this is done, check out the script at the end of this post.

The next step is to create a folder that is a sub-folder of the /gelcontent folder. The id of the parent folder and the name of the new folder are passed to the create command of the folders interface.

./sas-admin --output json folders create --description "Orion Star" --name "Orion" --parent-id e928249c-7a5e-4556-8e2b-7be8b1950b88

Next, using the folder id from the previous step, set authorization on the folder. In this call to the authorization interface I will grant full control to the group gelcorpadmins on the new folder and its content.

./sas-admin authorization create-rule grant --permissions read,create,update,delete,add,remove,secure --group gelcorpadmins --object-uri /folders/folders/49b7ba6a-0b2d-4e32-b9b9-2536d84cfdbe/** --container-uri /folders/folders/49b7ba6a-0b2d-4e32-b9b9-2536d84cfdbe

Now in Environment Manager, check that the folder has been created and check the authorization settings. The authorization setting on the folder shows that a new rule has been created and applied providing explicit full access to gelcorpadmins (whose user-friendly name is “GELCorp Admins”).

The next task we might perform is to add a caslib and set authorization on it. We can do that with the following calls to the cas interface.

./sas-admin cas caslibs create path --name ordata --path /tmp/orion --server cas-shared-default
./sas-admin cas caslibs add-control --server cas-shared-default --caslib ordata --group gelcorpadmins --grant ReadInfo
./sas-admin cas caslibs add-control --server cas-shared-default --caslib ordata --group gelcorpadmins --grant Select
./sas-admin cas caslibs add-control --server cas-shared-default --caslib ordata --group gelcorpadmins --grant LimitedPromote
The script referenced above, which chains these steps together, looks like this:

#!/bin/bash
clidir=/opt/sas/viya/home/bin/
endpoint=http://sasserver.demo.sas.com
export TOKEN=
export TOKEN=`grep access-token ~/.sas/credentials.json | cut -d':' -f2 | sed s/[{}\",]//g`
# Get gelcontent folder id
curl -X GET "$endpoint/folders/folders/@item?path=/gelcontent" -H "Authorization: bearer $TOKEN" | python -mjson.tool > /tmp/newfolder.txt
id=$(grep '"id":' /tmp/newfolder.txt | cut -d':' -f2 | sed s/[{}\",]//g)
echo "The folder ID is" $id
# Create Orion folder
$clidir/sas-admin --output text folders create --name Orion --parent-id $id > /tmp/folderid.txt
orionid=$(grep "Id " /tmp/folderid.txt | tr -s ' ' | cut -f2 -d " ")
echo "The orion folderid is" $orionid
# Set permissions
$clidir/sas-admin authorization create-rule grant --permissions read,create,update,delete,add,remove,secure --group gelcorpadmins --object-uri /folders/folders/$orionid/** --container-uri /folders/folders/$orionid
$clidir/sas-admin authorization create-rule grant --permissions read --group gelcorp --object-uri /folders/folders/$orionid
The SAS Viya command-line interfaces are a very valuable addition to the administrator's toolbox. There is obviously much more which can be done with the CLIs than we can cover in this article. For more information and details of the available interfaces, please check out the SAS Viya administration documentation.

SAS Viya 3.3 command-line interfaces for Administration was published on SAS Users.

March 7, 2018
 

The R SWAT package (SAS Wrapper for Analytics Transfer) enables you to upload big data into an in-memory distributed environment to manage data and create predictive models using familiar R syntax. In the SAS Viya Integration with Open Source Languages: R course, you learn the syntax and methodology required to [...]

The post Use R to interface with SAS Cloud Analytics Services appeared first on SAS Learning Post.

February 15, 2018
 

In this article, I will set out clear principles for how SAS Viya 3.3 will interoperate with Kerberos. My aim is to present some overview concepts for how we can use Kerberos authentication with SAS Viya 3.3. We will look at both SAS Viya 3.3 clients and SAS 9.4M5 clients. In future blog posts, we'll examine some of these use cases in more detail.

With SAS Viya 3.3 clients we have different use cases for how we can use Kerberos with the environment. In the first case, we use Kerberos delegation throughout the environment.

Use Case 1 – SAS Viya 3.3

The diagram below illustrates the use case where Kerberos delegation is used into, within, and out from the environment.

How SAS Viya 3.3 will interoperate with Kerberos

In this diagram, we show the end-user relying on Kerberos or Integrated Windows Authentication to log onto the SAS Logon Manager as part of their access to the visual interfaces. SAS Logon Manager is provided with a Kerberos keytab and HTTP principal to enable the Kerberos connection. In addition, the HTTP principal is flagged as “trusted for delegation” so that the credentials sent by the client include the delegated or forwardable Ticket-Granting Ticket (TGT). The configuration of SAS Logon Manager with SAS Viya 3.3 includes a new option to store this delegated credential. The delegated credential is stored in the credentials microservice, and secured so that only the end-user to which the credential belongs can access it.

When the end-user accesses SAS CAS from the visual interfaces the initial authentication takes place with the standard internal OAuth token. However, since the end-user stored a delegated credential when accessing the SAS Logon Manager an additional Origin attribute is set on the token of “Kerberos.” The internal OAuth token also contains the groups the end-user is a member of within the Claims. Since we want this end-user to run the SAS CAS session as themselves they must have been added to a custom group with the ID=CASHostAccountRequired. When the SAS CAS Controller receives the OAuth token with the additional Kerberos Origin, it requests the visual interface to make a second Kerberized connection. So, the visual interface retrieves the delegated credential from the credentials microservice and uses this to request a Service Ticket to connect to SAS CAS.

SAS CAS has been provided with a Kerberos keytab and a sascas principal to enable the Kerberos connection. Since the sascas principal is flagged as “trusted for delegation,” the credentials sent by the visual interfaces include a delegated or forwardable Ticket-Granting Ticket (TGT). SAS CAS validates the Service Ticket, which in turn authenticates the end-user. The SAS CAS Controller then launches the session as the end-user and constructs a Kerberos ticket cache containing the delegated TGT. Now, within their SAS CAS session the end-user can connect to the Secured Hadoop environment as themselves since the SAS CAS session has access to a TGT for the end-user.

This means in this first use case all access to, within, and out from the SAS Viya 3.3 environment leverages strong Kerberos authentication. This is our “gold-standard” for authenticating the end-user to each part of the environment.

But, it is strictly dependent on the end-user being a member of the custom group with ID=CASHostAccountRequired, and the two principals (HTTP and sascas) being trusted for delegation. Without both the Kerberos delegation will not take place.

Use Case 1a – SAS Viya 3.3

The diagram below illustrates a slight deviation on the first use case.

Here, either through choice or by omission, the end-user is not a member of the custom group with the ID=CASHostAccountRequired. Now, even though the end-user connects with Kerberos, and irrespective of whether SAS Logon Manager is configured to store delegated credentials, the second connection using Kerberos is not made to SAS CAS. The SAS CAS session now runs as the account that launched the SAS CAS controller, cas by default. Since the session is not running as the end-user and SAS CAS did not receive a Kerberos connection, the Kerberos ticket cache that is generated for the session does not contain the credentials of the end-user. Instead, the Kerberos keytab and principal supplied to SAS CAS are used to establish the credentials in the Kerberos ticket cache.

This means that even though Kerberos was used to connect to SAS Logon Manager the connection to the Secured Hadoop environment is as the sascas principal and not the end-user.

The same situation could be arrived at if the HTTP principal for SAS Logon Manager is not trusted for delegation.

Use Case 1b – SAS Viya 3.3

A final deviation to the initial use case is shown in the following diagram.

In this case the end-user connects to SAS Logon Manager with any other form of authentication. This could be the default LDAP authentication, external OAuth, or external SAML authentication. Just as in use case 1a, this means that the connection to SAS CAS from the visual interfaces only uses the internal OAuth token. Again, since no delegated credentials are used to connect to SAS CAS the session is run as the account that launched the SAS CAS controller. Also, the ticket cache created by the SAS Cloud Analytic Service Controller contains the credentials from the Kerberos keytab, i.e. the sascas principal. This means that access to the Secured Hadoop environment is as the sascas principal and not the end-user.

Use Case 2 – SAS Viya 3.3

Our second use case covers those users entering the environment via the programming interfaces, for example SAS Studio. In this case, the end-users have entered a username and password, a credential set, into SAS Studio. This credential set is used to start their individual SAS Workspace Session and to connect to SAS CAS from the SAS Workspace Server. This is illustrated in the following figure.

Since the end-users are providing their username and password to SAS CAS it behaves differently. SAS CAS uses its own Pluggable Authentication Modules (PAM) configuration to validate the end-user’s credentials and hence launch the SAS CAS session process running as the end-user. However, in addition the SAS CAS Controller also uses the username and password to obtain an OAuth token from SAS Logon Manager and then can obtain any access control information from the SAS Viya 3.3 microservices. Obtaining the OAuth token from the SAS Logon Manager ensures any restrictions or global caslibs defined in the visual interfaces are observed in the programming interfaces.

With the SAS CAS session running as the end-user and any access controls validated, the SAS CAS session can access the Secured Hadoop cluster. Since the SAS CAS session was launched through the PAM configuration, the Kerberos credentials used to access Hadoop will be those of the end-user. This means the PAM configuration on the machines hosting SAS CAS must be linked to Kerberos, so that the Kerberos Ticket-Granting Ticket is available to the SAS CAS session as it is launched.
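As an illustration only, a PAM stack linked to Kerberos might contain entries like the following minimal sketch. The service name cas, the module options, and the file location are assumptions; the real configuration depends on the platform and on how the hosts are integrated with the Kerberos realm.

# /etc/pam.d/cas -- hypothetical fragment linking authentication to Kerberos
auth     sufficient   pam_krb5.so
account  sufficient   pam_krb5.so
# The session entry initializes and caches the Ticket-Granting Ticket
# so that it is available to the launched SAS CAS session process.
session  optional     pam_krb5.so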

Next, we consider three further use cases where the client is SAS 9.4 maintenance 5. Remember that SAS 9.4 maintenance 5 can make a direct connection to SAS CAS without requiring SAS/CONNECT. The use cases we will discuss will illustrate the example with a SAS 9.4 maintenance 5 web application, such as SAS Studio. However, the statements and basic flows remain the same if the SAS 9.4 maintenance 5 client is a desktop application like SAS Enterprise Guide.

Use Case 3 – SAS 9.4 maintenance 5

First, let’s consider the case where our SAS 9.4 maintenance 5 end-user enters their username and password to access the SAS 9.4 environment. This is illustrated in the following diagram.

In this case, since the SAS 9.4 Workspace Server is launched using a username and password, these credentials are cached when the process launches. This enables the SAS 9.4 Workspace Server to use the cached credentials when connecting to SAS CAS. The same flow applies if the credentials are supplied by another mechanism rather than by the launching process: they could come from the SAS 9.4 Metadata Server or from an authinfo file in the user's home directory on the SAS 9.4 environment, as sketched below. In any case, the process on the SAS Cloud Analytic Services controller is the same.
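For reference, a single-line authinfo entry for a CAS connection could look like the following sketch; the host name, port, user, and password are placeholders, and the exact keywords accepted should be checked against the authinfo documentation for your release.

host viya.example.com port 5570 user myuser password MyS3cret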

The username and password used to connect are validated through the PAM stack on the SAS CAS Controller, as well as being used to generate an internal OAuth token from the SAS Viya 3.3 Logon Manager. The PAM stack, just as in the SAS Viya 3.3 programming interface use case 2 above, is responsible for initializing the Kerberos credentials for the end-user. These Kerberos credentials are placed into a Kerberos Ticket cache which makes them available to the SAS CAS session for the connection to the Secured Hadoop environment. Therefore, all the different sessions within SAS 9.4, SAS Viya 3.3, and the Secured Hadoop environment run as the end-user.

Use Case 4 – SAS 9.4 maintenance 5

Now what about the case where the SAS 9.4 maintenance 5 environment is configured for Kerberos authentication throughout? The case where Kerberos delegation is configured in SAS 9.4 is shown here.

Here the SAS 9.4 Workspace Server is launched with Kerberos credentials; the Service Principal for the SAS 9.4 Object Spawner will need to be trusted for delegation. This means that a Kerberos credential for the end-user is available to the SAS 9.4 Workspace Server, which can use it to request a Service Ticket for the connection to SAS CAS. SAS CAS is provided with a Kerberos keytab and principal it can use to validate this Service Ticket, and validating the Service Ticket authenticates the SAS 9.4 end-user to SAS CAS. The principal for SAS CAS must also be trusted for delegation, because we need the SAS CAS session to have access to the Kerberos credentials of the SAS 9.4 end-user.

The Kerberos credentials made available to SAS CAS are used for two purposes. First, they are used to make a Kerberized connection to the SAS Viya Logon Manager to obtain the SAS Viya internal OAuth token; therefore, the SAS Viya Logon Manager must be configured to accept Kerberos connections. Second, the Kerberos credentials of the SAS 9.4 end-user are used to connect to the Secured Hadoop environment.

In this case, since all the various principals are trusted for delegation, our SAS 9.4 end-user can perform multiple authentication hops using Kerberos with each component. This means that, through the use of Kerberos authentication alone, the SAS 9.4 end-user is authenticated into SAS CAS and out to the Secured Hadoop environment.
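As a sketch only, the connection from the SAS 9.4 maintenance 5 session can then be very simple. The host name and port below are placeholders, and this assumes the CAS client negotiates Kerberos when no explicit credentials are supplied.

/* Hypothetical sketch: the SAS 9.4 M5 session already holds the
   end-user's delegated Kerberos credentials, so no username or
   password appears anywhere in the connection. */
options cashost="viya.example.com" casport=5570;
cas mysess;   /* Kerberos is negotiated with the SAS CAS controller */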

Use Case 5 – SAS 9.4 maintenance 5

Finally, what about cases where the SAS 9.4 maintenance 5 session is not running as the end-user but as a launch credential? This is illustrated here.

The SAS 9.4 session in this case could be a SAS Stored Process Server, a SAS Pooled Workspace Server, or a SAS Workspace Server leveraging a launch credential such as sassrv. The key point is that the SAS 9.4 session is not running as the end-user and has no access to the end-user's credentials. In this case we can still connect to SAS CAS and from there out to the Secured Hadoop environment, but this requires some additional configuration. This setup leverages One-Time-Passwords generated by the SAS 9.4 Metadata Server, so the SAS 9.4 Metadata Server must be made aware of SAS CAS; this is done by adding a SAS 9.4 metadata definition for SAS CAS. Our connection from SAS 9.4 must then be “metadata aware,” which is achieved by using authdomain=_sasmeta_ on the connection, as sketched below.
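A minimal sketch of such a connection follows; the host name and port are placeholders, and it assumes the SAS CAS definition has already been registered in SAS 9.4 metadata.

/* Hypothetical sketch from a SAS 9.4 M5 session running under a
   launch credential such as sassrv. AUTHDOMAIN=_sasmeta_ makes the
   connection metadata aware, so a One-Time-Password is used. */
options cashost="viya.example.com" casport=5570;
cas otpsess authdomain="_sasmeta_";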

Equally, the SAS Viya 3.3 side of the environment must be able to validate the One-Time-Password used to connect to SAS CAS. When SAS CAS receives the One-Time-Password on the connection, it sends it to the SAS Viya Logon Manager for validation and to obtain a SAS Viya internal OAuth token. To enable this, the SAS Viya Logon Manager is configured with the details of where the SAS 9.4 Web Infrastructure Platform is running, and it passes the One-Time-Password to the SAS 9.4 Web Infrastructure Platform for validation. Once the One-Time-Password is validated, a SAS Viya internal OAuth token is generated and passed back to SAS CAS.

Since SAS CAS does not have access to the end-user credentials, the session that is created runs as the account used to launch the controller process, cas by default. Because the end-user credentials are not available, the Kerberos credentials initialized for the session come from the Kerberos keytab provided to SAS CAS. The connection to the Secured Hadoop environment is therefore made using the Kerberos credentials of the principal assigned to SAS CAS.

Summary

We have presented several use cases above. The table below summarizes and differentiates them based on key factors.
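In brief, based on the discussion above:

Use Case  Entry Point                   Authentication             CAS Session Runs As   Hadoop Access As
1         Visual interfaces             Kerberos with delegation   end-user              end-user
1a        Visual interfaces             Kerberos, no delegation    cas (launch account)  sascas principal
1b        Visual interfaces             LDAP / OAuth / SAML        cas (launch account)  sascas principal
2         Programming interfaces        username/password (PAM)    end-user              end-user
3         SAS 9.4 M5                    username/password (PAM)    end-user              end-user
4         SAS 9.4 M5                    Kerberos with delegation   end-user              end-user
5         SAS 9.4 M5 launch credential  One-Time-Password          cas (launch account)  sascas principal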

SAS Viya 3.3 - Some Kerberos principles was published on SAS Users.

1月 262018
 

If you have worked with the different types of score code generated by the high-performance modeling nodes in SAS® Enterprise Miner™ 14.1, you have probably come across the Analytic Store (or ASTORE) file type for scoring. The ASTORE file type works very well for scoring complex machine learning models like random forests, gradient boosting, support vector machines, and others. In this article, we will focus on ASTORE files generated by SAS® Viya® Visual Data Mining and Machine Learning (VDMML) procedures. An introduction to analytic stores on SAS Viya can be found here.

In this post, we will:

  1. Generate an ASTORE file for a gradient boosting model in SAS Visual Data Mining and Machine Learning.

  2. Override the scoring decision from that model using PROC ASTORE.

Generate an ASTORE file for a gradient boosting model

Our example dataset is a distributed in-memory CAS table that contains information about applicants who were granted credit for a certain home equity loan. The categorical binary-valued target variable ‘BAD’ identifies whether a client defaulted on or repaid their loan. The remaining variables, indicating the candidate’s credit history, debt-to-income ratio, occupation, and so on, are used as predictors for the model. In the code below, we are training a gradient boosting model on a randomly sampled 70% of the data and validating against the remaining 30%. The SAVESTATE statement creates an analytic store (ASTORE) for the model and saves it as a binary file named “astore_gb.”

proc gradboost data=PUBLIC.HMEQ;
   /* Hold out a random 30% of the rows for validation */
   partition fraction(validate=0.3);
   target BAD / level=nominal;
   input LOAN MORTDUE DEBTINC VALUE YOJ DEROG DELINQ CLAGE NINQ CLNO /
         level=interval;
   input REASON JOB / level=nominal;
   /* Write scored rows to a CAS table, keeping all original columns */
   score out=public.hmeq_scored copyvars=(_all_);
   /* Save the fitted model as an analytic store (ASTORE) */
   savestate rstore=public.astore_gb;
   id _all_;
run;
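The code above assumes that a CAS session exists and that the HMEQ table has already been loaded into the PUBLIC caslib. A minimal setup sketch, assuming the HMEQ sample data is available in the SAMPSIO library (otherwise substitute your own source table):

/* Hypothetical setup: start a CAS session and load HMEQ into PUBLIC */
cas mysess;
caslib _all_ assign;   /* assign librefs for the existing caslibs */
proc casutil;
   load data=sampsio.hmeq outcaslib="public" casout="HMEQ" promote;
quit;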

Shown below are a few observations from the scored dataset hmeq_scored where YOJ (years at present job) is greater than 10 years.

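A view like this can be reproduced with a simple query; the variable selection below is illustrative only.

/* Inspect a few scored observations with YOJ greater than 10 */
proc print data=public.hmeq_scored(obs=5);
   where YOJ > 10;
   var YOJ BAD P_BAD1 P_BAD0 I_BAD;
run;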

Override the scoring decision using PROC ASTORE

In this segment, we will use PROC ASTORE to override the scoring decision from the gradient boosting model. To that end, we will first make use of the DESCRIBE statement in PROC ASTORE to produce basic DS2 scoring code using the EPCODE option. We will then edit the score code in DS2 language syntax to override the scoring decision produced from the gradient boosting model.

proc astore;
   /* Describe the store and write its DS2 scoring code to a file */
   describe rstore=public.astore_gb
            epcode="/viyafiles/jukhar/gb_epcode.sas";
run;

A snapshot of the output from the above code statements is shown below. The analytic store is assigned a unique string identifier. We also get information about the analytic engine that produced the store (gradient boosting, in this case) and the time when the store was created. In addition, though not shown in the snapshot below, we get a list of the input and output variables used.

Let’s take a look at the DS2 score code (“gb_epcode.sas”) produced by the EPCODE option in the DESCRIBE statement within PROC ASTORE.

data sasep.out;
	 dcl package score sc();
	 dcl double "LOAN";
	 dcl double "MORTDUE";
	 dcl double "DEBTINC";
	 dcl double "VALUE";
	 dcl double "YOJ";
	 dcl double "DEROG";
	 dcl double "DELINQ";
	 dcl double "CLAGE";
	 dcl double "NINQ";
	 dcl double "CLNO";
	 dcl nchar(7) "REASON";
	 dcl nchar(7) "JOB";
	 dcl double "BAD";
	 dcl double "P_BAD1" having label n'Predicted: BAD=1';
	 dcl double "P_BAD0" having label n'Predicted: BAD=0';
	 dcl nchar(32) "I_BAD" having label n'Into: BAD';
	 dcl nchar(4) "_WARN_" having label n'Warnings';
	 Keep 
		 "P_BAD1" 
		 "P_BAD0" 
		 "I_BAD" 
		 "_WARN_" 
		 "BAD" 
		 "LOAN" 
		 "MORTDUE" 
		 "VALUE" 
		 "REASON" 
		 "JOB" 
		 "YOJ" 
		 "DEROG" 
		 "DELINQ" 
		 "CLAGE" 
		 "NINQ" 
		 "CLNO" 
		 "DEBTINC" 
		;
	 varlist allvars[_all_];
	 method init();
		 sc.setvars(allvars);
		 sc.setKey(n'F8E7B0B4B71C8F39D679ECDCC70F6C3533C21BD5');
	 end;
	 method preScoreRecord();
	 end;
	 method postScoreRecord();
	 end;
	 method term();
	 end;
	 method run();
		 set sasep.in;
		 preScoreRecord();
		 sc.scoreRecord();
		 postScoreRecord();
	 end;
 enddata;

The sc.setKey call in the init() method block contains a string identifier for the analytic store; this is the same ASTORE identifier that was previously output as part of PROC ASTORE. In order to override the scoring decision created by the original gradient boosting model, we will edit the gb_epcode.sas file (shown above) by inserting new statements into the postScoreRecord method block; the edited file must follow DS2 language syntax. For more information about the DS2 language, see the SAS DS2 Language Reference.

method postScoreRecord();
      /* Override: classify anyone with more than 10 years at their
         present job as a good credit risk (BAD = 0) */
      if YOJ>10 then do; I_BAD_NEW='0'; end; else do; I_BAD_NEW=I_BAD; end;
    end;

Because we are saving the outcome into a new variable called “I_BAD_NEW,” we will need to declare this variable upfront along with the rest of the variables in the score file, as sketched below.
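A minimal sketch of that declaration, matching the length of the original I_BAD variable in the generated code:

dcl nchar(32) "I_BAD_NEW";

Depending on how the output is consumed, the new variable may also need to be added to the Keep list so that it appears in the scored table.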

In order for this override to take effect, we will need to run the SCORE statement in PROC ASTORE and provide both the original ASTORE file (astore_gb), as well as the edited DS2 score code (gb_epcode.sas).

proc astore;
   /* Score with the original store, applying the edited DS2 code */
   score data=public.hmeq
         epcode="/viyafiles/jukhar/gb_epcode.sas"
         rstore=public.astore_gb
         out=public.hmeq_new;
run;

A comparison of “I_BAD” and “I_BAD_NEW” for selected observations in the output of the above code shows that the override rule has indeed taken effect.
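One quick way to make that comparison, as a sketch (this assumes I_BAD_NEW was kept in the output table):

/* Cross-tabulate the original and overridden decisions */
proc freq data=public.hmeq_new;
   tables I_BAD * I_BAD_NEW;
run;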

In this article we explored how to override the scoring decision produced by a machine learning model in SAS Viya. You will find more information about scoring in the SAS Viya documentation.

Using PROC ASTORE to override scoring decisions in SAS® Viya® was published on SAS Users.