SAS Viya

May 23, 2017
 

I do not like to stand still. I am a lifelong learner, I take guitar lessons during the week, and I am known for riding my Segway around SAS campus and at events. SAS works the same way; we never stand still. We are continuously innovating and moving forward. Last [...]

Developing with passion: 4 new technologies shaping the future of SAS was published on SAS Voices by Oliver Schabenberger

May 9, 2017
 

Microservices are a key component of the SAS Viya architecture. In this post, I’ll introduce and explain the benefits of microservices. In a future post we’ll dig deeper into the microservices architecture.

What are microservices?

When we look at SAS Viya architecture diagrams, we can find, among the new core components, microservices.

Microservices are self-contained, lightweight pieces of software that

  • Do one thing.
  • Depend on one another to the least extent possible.
  • Are deployed independently.
  • Provide a language-agnostic API.
  • Can run one or more instances of these processes at any given time.

Note that the prefix “micro” doesn’t mean small in CPU or memory consumption. Rather, it refers to the software performing a single function or being narrow in scope.

Let’s compare to SAS 9

The SAS 9 Web Infrastructure Platform services and the overall platform are tightly coupled to metadata structures and schemas. Every maintenance action takes a bit of effort: can you apply a fix to a single application without first stopping the whole infrastructure? Can you upgrade one component and leave all of the other ones at the previous release? Can you…?

To address these and other issues, SAS R&D decomposed the metadata server, the Web Infrastructure Platform, and many web applications. As a result, we got many functional units. Each one is a microservice.

Let’s have a look at the following examples.

In SAS 9.4 we can open the SAS Management Console to manage users and groups:

In SAS Viya, we can do the same using the SAS Environment Manager web application:

You may think we simply switched to a different, web-based client. Actually, the real difference lies in the backend implementation. With SAS 9, the metadata server was responsible for servicing that functionality in addition to a host of other features. With SAS Viya, we have a dedicated microservice for it: the Identities microservice.

Here’s another example. We want to edit an option in the configuration of an application, like the address of the Open Street Map server to use with Visual Analytics geo maps. With SAS 9, we use the SAS Management Console to interact, as usual, with the metadata server.

With SAS Viya, we set the property with Environment Manager. And, guess what? We are using the Configuration microservice.

If you are curious and want to see a list of all the microservices deployed in your SAS Viya environment, you can, again, use the Environment Manager.

Note that in all these examples, the Environment Manager is simply serving as the GUI to a particular microservice supporting the associated feature.

What are the benefits of microservices?

The move to a microservice-oriented architecture brings many benefits to all stakeholders, first and foremost to SAS users and SAS administrators.

Microservices are independently updatable

It is now easier for you to manage and maintain your environment. Hot fixes for a specific microservice are released just like normal updates, and installing them follows the standard, documented update process.

As with the previous point, there are a few exceptions: almost everything requires the SASLogon and Identities microservices, so if either is down, nothing works.

Scalability and High Availability

When microservices are spun up, they self-register, making themselves available for processing requests. This way, supporting failover is as easy as ensuring you have at least two instances of the associated microservice up and running. It is possible to scale further for increased capacity/performance, and you can do so at the microservice level, based on the specific demand for each function (e.g., you likely won’t need as many instances of the Import VA SPK microservice as you do for the Authorization microservice).
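The self-registration and failover behaviour described above can be sketched with a toy registry (purely illustrative; the service names and addresses are made up, and this is not SAS's actual discovery protocol):

```python
# Toy sketch of self-registration and failover, not SAS's real registry.
registry = {}  # service name -> list of live instance addresses

def register(service, address):
    """An instance announces itself when it spins up."""
    registry.setdefault(service, []).append(address)

def pick_instance(service):
    """Route a request to any live instance; with >=2 registered, one can fail."""
    instances = registry.get(service, [])
    if not instances:
        raise RuntimeError("no live instance of " + service)
    return instances[0]

register("authorization", "node1:8080")
register("authorization", "node2:8080")  # second instance gives failover headroom

registry["authorization"].remove("node1:8080")  # simulate node1 going down
print(pick_instance("authorization"))           # requests still get served
```

The point of the sketch: as long as at least one registered instance survives, requests keep flowing, and you add capacity per service simply by registering more instances of that one service.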

Microservices are “open”

Microservices can run in different environments – bare OS, Cloud Foundry, Docker. Also, they are accessible to non-SAS developers through REST APIs. As an example, let’s say I’d like to retrieve the same properties for the SAS Administrators group that were shown above in Environment Manager. It’s as easy as calling a REST endpoint: http://<myserver>/identities/groups/SASAdministrators
The result can be returned as either XML or JSON.
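As a hedged sketch (the host name and the response body below are placeholders, not output from a real deployment), calling such an endpoint from Python might look like this:

```python
import json

def group_endpoint(server, group_id):
    """Build the Identities endpoint URL for a group (host is a placeholder)."""
    return "http://{}/identities/groups/{}".format(server, group_id)

# A response body shaped like a JSON reply might be; the fields are illustrative.
sample_body = '{"id": "SASAdministrators", "name": "SAS Administrators"}'

url = group_endpoint("myserver", "SASAdministrators")
group = json.loads(sample_body)
print(url)            # the endpoint a real HTTP GET would hit
print(group["name"])  # fields parsed straight from the JSON payload
```

In a live environment you would issue an authenticated HTTP GET against that URL; any REST-capable client or language works the same way.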

In fact, even microservices communicate with one another using REST interfaces!

I hope this blog has been helpful.

Feel free to add comments or questions below.

Let’s talk about Microservices was published on SAS Users.


May 6, 2017
 

As SAS Viya has been gaining awareness over the past year among SAS users, there has been a lot of discussion about how SAS Cloud Analytic Services (CAS) handles memory versus previous SAS technologies such as LASR and HPA.  Recently, while I was involved in delivering several SAS Viya enablement sessions, I realised that many, including myself, held an incorrect understanding of how this works, mainly around one particular CAS option called maxTableMem.

The maxTableMem option determines the memory block size that is used per table, per CAS Worker, before converting data to memory-mapped memory.  It is not intended to directly control how much data is put into memory vs how much is put into CAS_DISK_CACHE, but rather it indirectly influences this.

Let’s unpack that a bit and try to understand what it really means.

The CAS Controller doesn’t care what the value of maxTableMem is.  In a serial load example, the CAS Controller distributes the data evenly across the CAS Workers[1], which then fill up maxTableMem-sized buckets (memory blocks), emptying them (converting them to memory-mapped memory) as they fill up, only leaving non-full buckets of table data.  You should almost never change the default setting of this option (16MB), except perhaps in cases of extremely large tables, in order to reduce the number of file handles (up to 256MB is probably sufficient in these cases).
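The bucket arithmetic above can be sketched numerically (an illustrative back-of-the-envelope calculation that assumes a perfectly even serial-load distribution and the 16MB default):

```python
import math

def full_buckets(table_bytes, workers, max_table_mem=16 * 2**20):
    """Count the maxTableMem-sized blocks per CAS worker that fill up and get
    converted to memory-mapped memory, assuming an even data distribution.
    Illustrative arithmetic only, not a CAS sizing tool."""
    per_worker = table_bytes / workers
    return math.floor(per_worker / max_table_mem)

# A 10 GB table spread over 8 workers: 1.25 GB each, i.e. 80 full 16 MB
# buckets per worker, with any remainder left in an ordinary (non-full) bucket.
print(full_buckets(10 * 2**30, 8))
```

Raising maxTableMem to 256MB in the same scenario would cut the per-worker block count (and hence file handles) by a factor of 16, which is the only reason given above to change the default.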

CAS takes advantage of standard memory mapping techniques for the CAS_DISK_CACHE, and leaves the optimisation of it up to the OS.  With SASHDAT files and LASR in SAS 9.4, the SASHDAT file essentially acts as a pre-paged file, written in a memory-mapped format, so the table data in memory doesn’t need to be written to disk when it is paged out.  Should a table need to be dropped from memory to make room for other data, and subsequently needed to be read back in to memory, it would be paged in from the SASHDAT file.

With CAS, the CAS_DISK_CACHE allows us to extend this pre-paged file approach to all data sources, not just SASHDAT.  Traditional OS swap files are written to each time memory is paged out.  With CAS, by contrast, most table memory will never need to be written to disk regardless of the data source (SASHDAT, database, client-uploaded file, etc.), because it already exists in the backing store (this could be CAS_DISK_CACHE, HDFS or NFS).  Although data will be continually paged in and out of memory, writes to disk, which are typically slower than reads, are minimised.

Another advantage of the CAS_DISK_CACHE is that when data does need to be written to disk it can happen upfront when the server is less busy, rather than at the last moment when the system detects it is out of memory (pre-paging rather than demand-paging).  Once it is written, it can be paged back into memory multiple times, by multiple concurrent processes.  The CAS_DISK_CACHE also spreads the I/O across multiple devices and servers as opposed to a typical OS swap file that may only write to a single file on a single server.
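The OS-level memory mapping that CAS leans on can be illustrated with Python's mmap module: a backing file written up front can be paged into memory on access and dropped again without any write-back (a generic sketch of the mechanism, not of CAS internals):

```python
import mmap
import os
import tempfile

# Write a small "backing store" file up front: the pre-paging idea is that the
# on-disk copy already exists before any memory pressure hits.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"table-block-0" * 4)

# Map it read-only: pages are brought into memory on access and can be
# discarded again with no write-back, because the file IS the backing store.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first = mm[:13]  # touching this range pages it in on demand
    mm.close()

os.remove(path)
print(first)
```

Multiple processes can map the same file simultaneously, which mirrors the point above about concurrent sessions paging the same cached table back in.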

While CAS supports exceeding memory capacity by using CAS_DISK_CACHE as a backing store, read/write disk operations do have a performance cost.  Therefore, for best performance, we recommend you have enough memory capacity to hold your most commonly used tables, meaning most of the time the entire table will be both in memory and the backing store.

If you expect to regularly exceed memory capacity, and therefore are frequently paging data in from CAS_DISK_CACHE, consider spreading the CAS_DISK_CACHE location across multiple devices and using newer solid state storage technologies in order to improve performance.[2]

Additionally, when you need CAS to peacefully co-exist with other applications that are sharing resources on the same nodes, standard Linux cgroup settings along with Hadoop YARN configuration can be utilised to control the resources that CAS sessions can exploit.

References

Paging

Notes

[1] There are exceptions to data being evenly distributed across the CAS Workers.  The main one is if the data is partitioned and the partitions are of different sizes – all the data of a partition must be on the same node therefore resulting in an uneven distribution.  Also, if a table is very small, it may end up on only a single node, and when CAS is co-located with Hadoop the data is loaded locally from each node, so CAS receives whatever the distribution of data is that Hadoop provides.

[2] A comprehensive analysis of all possible storage combinations and the impact on performance has not yet been completed by SAS.

Dr. StrangeRAM or: How I learned to stop worrying and love CAS was published on SAS Users.

April 26, 2017
 

SAS Data Connector to Oracle lets you easily load data from your Oracle DB into SAS Viya for advanced analytics. SAS/ACCESS Interface to Oracle (on SAS Viya) provides the required SAS Data Connector to Oracle, which must be deployed to your CAS environment. Once the configuration steps for SAS Data Connector to Oracle described below are completed, SAS Studio or Visual Data Builder will be able to access Oracle tables directly and load them into the CAS engine.

SAS Data Connector to Oracle requires Oracle client components (release 12c or later) to be installed and configured, with the configuration deployed to your CAS server. Here is a guide that walks you through the process of installing the Oracle client on a Linux server and configuring SAS Data Connector to Oracle on SAS Viya 3.1 (or 3.2):

Step 1: Get the Oracle Instant Client software (release 12c or later)

To find the Oracle client software package, open a web browser and navigate to Oracle support at:
http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html

Download the following two packages to your CAS controller server:

  • oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
  • oracle-instantclient12.1-sqlplus-12.1.0.2.0-1.x86_64.rpm (optional for testing only)

Step 2: Install and configure Oracle instant client

On your CAS controller server, execute the following commands to install the Oracle instant client and SQLPlus utilities.

rpm -ivh oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
rpm -ivh oracle-instantclient12.1-sqlplus-12.1.0.2.0-1.x86_64.rpm

The Oracle client should be installed to /usr/lib/oracle/12.1/client64.
To configure the Oracle client, create a file called tnsnames.ora, for example, in the /etc/ directory. Paste the following lines with the appropriate connection parameters of your Oracle DB into the tnsnames.ora file. (Replace "your_tnsname", "your_oracle_host", "your_oracle_port" and "your_oracle_db_service_name" with parameters according to your Oracle DB implementation)

your_tnsname =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = your_oracle_host)(PORT = your_oracle_port ))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = your_oracle_db_service_name)
    )
  )

Next you need to set environment variables for the Oracle client to work:

LD_LIBRARY_PATH: Specifies the directory of your Oracle instant client libs
PATH: Must include your Oracle instant client bin directory
TNS_ADMIN: Directory of your tnsnames.ora file
ORACLE_HOME: Location of your Oracle instant client installation

In a console window on your CAS controller Linux server, issue the following commands to set environment variables for the Oracle client: (replace the directory locations if needed)

export LD_LIBRARY_PATH=/usr/lib/oracle/12.1/client64/lib:$LD_LIBRARY_PATH
export PATH=/usr/lib/oracle/12.1/client64/bin:$PATH 
export TNS_ADMIN=/etc 
export ORACLE_HOME=/usr/lib/oracle/12.1/client64

If you installed the SQLPlus package from Oracle, you can test connectivity with the following command: (replace "your_oracle_user" and "your_tnsname" with a valid oracle user and the tnsname configured previously)

sqlplus your_oracle_user@your_tnsname

When prompted for a password, use your Oracle DB password to log on.
You should see a “SQL>” prompt, and you can issue SQL queries against the Oracle DB tables to verify your DB connection. A successful test indicates that the Oracle instant client is installed and configured correctly.

Step 3: Configure SAS Data Connector to Oracle on SAS Viya

Next you need to configure the CAS server to use the instant client.

The recommended way is to edit the vars.yml file in your Ansible playbook and deploy the required changes to your SAS Viya cluster.

Locate the vars.yml file on your cluster deployment and change the CAS_SETTINGS section to reflect the correct environment variables needed for CAS to use the Oracle instant client:
To do so, uncomment the lines for ORACLE_HOME and LD_LIBRARY_PATH and insert the respective path for your Oracle instant client installation as shown below.

#### CAS Specific ####
# Anything in this list will end up in the cas.settings file
CAS_SETTINGS:
   1: ORACLE_HOME=/usr/lib/oracle/12.1/client64
   #3: ODBCHOME=ODBC home directory
   #4: JAVA_HOME=/usr/lib/jvm/jre-1.8.0
   5: LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib

Run the ansible-playbook command to deploy the changes to your CAS server.
After Ansible finishes the update, your cas.settings file should contain the following lines:

export ORACLE_HOME=/usr/lib/oracle/12.1/client64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib

Now you are ready to use SAS/ACCESS Interface to Oracle in SAS Viya.

Step 4: Test SAS/ACCESS Interface to Oracle in SAS Studio

Log on to SAS Studio to load data from your Oracle DB into CAS.
Execute the following SAS Code example in SAS Studio to connect to your Oracle DB and load data into CAS. Change the parameters starting with "your_" in the SAS code below according to your Oracle DB implementation.

/************************************************************************/
/*  Start a CAS session   */
/************************************************************************/
cas;
/************************************************************************/
/*  Create a Caslib for the Oracle connection   */
/************************************************************************/
caslib ORACLE datasource=(                                           
    srctype="oracle",
    uid="your_oracle_user_ID",
    pwd="your_oracle_password",
    path="//your_db_hostname:your_db_port/your_oracle_service_name",
    schema="your_oracle_schema_name" );
 
/************************************************************************/
/*  Load a table from Oracle into CAS   */
/************************************************************************/
proc casutil;
   list files incaslib="ORACLE"; 
   load casdata="your_oracle_table_name" incaslib="ORACLE" outcaslib="casuser" casout="your_cas_table_name";                   
   list tables incaslib="casuser";
quit;
/************************************************************************/
/*  Assign caslib to SAS Studio   */
/************************************************************************/
 caslib _all_ assign;

The previous example is a simple SAS program to test access to Oracle and load data from an Oracle table into CAS memory. As a result, the program loads a table in your CAS library with data from your Oracle database.

How to configure Oracle client for successful access to Oracle databases from SAS Viya was published on SAS Users.

April 3, 2017
 

At Opening Session, SAS CEO Jim Goodnight and Alexa have a chat using the Amazon Echo and SAS Visual Analytics.

Unable to attend SAS Global Forum 2017 happening now in Orlando? We’ve got you covered! You can view live stream video from the conference, and check back here for important news from the conference, starting with the highlights from last night’s Opening Session.

While the location and record attendance made for a full house this year, CEO Jim Goodnight explained that there couldn’t be a more perfect setting to celebrate innovation than the world of Walt Disney. “Walt was a master innovator, combining art and science to create an entirely new way to make intelligent connections,” said Goodnight. “SAS is busy making another kind of intelligent connection – the kind made possible by data and analytics.”

It’s SAS’ mission to bring analytics everywhere and to make it ambient. That was exactly the motivation that drove SAS nearly four years ago when embarking on a massive undertaking known as SAS® Viya™. But SAS Viya – announced last year in Las Vegas – is more than just a fast, powerful, modernized analytics platform. Goodnight said it’s really the perfect marriage of science and art.

“Consider what would be possible if analytics could be brought into every moment and every place that data exists,” said Goodnight. “The opportunities are enormous, and like Walt Disney, it’s kind of fun to do the impossible.”

Driving an analytics economy

Executive Vice President and Chief Marketing Officer Randy Guard took the stage to update attendees on new releases available on SAS Viya and why SAS is so excited about it. And he explained the reason for SAS Viya comes from the changes being driven in the analytics marketplace. It’s what Guard referred to as an analytics economy – where the maturity of algorithms and techniques progress rapidly. “This is a place where disruption is normal, a place where you want to be the disruptor; you want to be the innovator,” said Guard. That’s exactly what you can achieve with SAS Viya.

As if SAS Viya didn’t leave enough of an impression, Guard took it one step further by inviting Goodnight back on stage to give users a preview into the newest innovation SAS has been cooking up. Using the Amazon Echo Dot – better known as Alexa – Goodnight put cognitive computing into action as he called up annual sales, forecasts and customer satisfaction reports in SAS® Visual Analytics.

Though still in its infant stages of development, the demo was just another reminder that when it comes to analytics, SAS never stops thinking of the next great thing.

AI: The illusion of intelligence

On his Segway, Executive Vice President and Chief Technology Officer Oliver Schabenberger talks AI at the SAS Global Forum Opening Session.

With his Segway Mini, Executive Vice President and Chief Technology Officer Oliver Schabenberger rolled on stage, fully trusting that his “smart legs” wouldn’t drive him off and into the audience. “I’ve accepted that algorithms and software have intelligence; I’ve accepted that they make decisions for us, but we still have choices,” said Schabenberger.

Diving into artificial intelligence, he explained that today’s algorithms operate with super-human abilities – they are reliable, repeatable and work around the clock without fatigue – yet they don’t behave like humans. And while the “AI” label is becoming trendy, true systems deserving of the AI title have two distinct things in common: they belong to the class of weak AI systems and they tend to be based on deep learning.

So, why are those distinctions important? Schabenberger explained that a weak AI system is trained to do one task only – the system driving an autonomous vehicle cannot operate the lighting in your home.

“SAS is very much engaged in weak AI, building cognitive systems into our software,” he said. “We are embedding learning and gamification into solutions and you can apply deep learning to text, images and time series.” Those cognitive systems are built into SAS Viya. And while they are powerful and great when they work, Schabenberger begged the question of whether or not they are truly intelligent.

Think about it. True intelligence requires some form of creativity, innovation and independent problem solving. The reality is, that today’s algorithms and software, no matter how smart, are being used as decision support systems to augment our own capabilities and make us better.

But it’s uncomfortable to think about fully trusting technology to make decisions on our behalf. “We make decisions based on reason, we use gut feeling and make split-second judgment calls based on incomplete information,” said Schabenberger. “How well do we expect machines to perform [in our place] when we let them loose, and how quickly do we expect them to learn on the job?”

It’s those kinds of questions that prove that all we can handle today is the illusion of intelligence. “We want to get tricked by the machine in a clever way,” said Schabenberger. “The rest is just hype.”

Creating tomorrow’s analytics leaders

With a room full of analytics leaders, Vice President of Sales Emily Baranello asked attendees to consider where the future leaders of analytics will come from. If you ask SAS, talent will be pulled from universities globally that have partnered with SAS to create 200 types of programs that teach today’s students how to work in SAS software. The commitment level to train up future leaders is evident and can be seen in SAS certifications, joint certificate programs and SAS’ track toward nearly 1 million downloads of SAS® Analytics U.

“SAS talent is continuing to build in the marketplace,” said Baranello. “Our goal is to bring analytics everywhere, and we will continue to partner with universities to ready those students to be your successful employees.”

Using data for good

More than just analytics and technology, SAS’ brand is a representation of people who make the world a better place. Knowing that, SAS announced the development of GatherIQ – a customized crowdsourcing app that will begin with two International Organization for Migration (IOM) projects. One project will specifically focus on global migration, using data to keep migrants safe as they search for a better life. With GatherIQ, changing the world might be as easy as opening an app.

There's much more to come, so stay tuned to SAS blogs this week for the latest updates from SAS Global Forum!

SAS Viya, AI star at SAS Global Forum Opening Session was published on SAS Users.

March 28, 2017
 

I have been using the SAS Viya environment for just over six months now and I absolutely love it.  As a long-time SAS coder and data scientist I’m thrilled with the speed and greater accuracy I’m getting out of a lot of the same statistical techniques I once used in SAS9.  So why would a data scientist want to switch over to the new SAS Viya platform? The simple response is “better, faster answers.”  There are some features that are endemic to the SAS Viya architecture that provide advantages, and there are also benefits specific to different products as well.  So, let me try to distinguish between these.

SAS Viya Platform Advantages

To begin, I want to talk about the SAS Viya platform advantages.  For data processing, SAS Viya uses something called the CAS (Cloud Analytic Services) server – which takes the place of the SAS9 workspace server.  You can still use your SAS9 installation, as SAS has made it easy to work between SAS9 and SAS Viya using SAS/CONNECT, a feature that will be automated later in 2017.

Parallel Data Loads

One thing I immediately noticed was the speed with which large data sets are loaded into SAS Viya memory.  Using Hadoop, we can stage input files in either HDFS or Hive, and CAS will lift that data in parallel into its pooled memory area.  The same data conversion is occurring, like what happened in SAS9, but now all available processors can be applied to load the input data simultaneously.  And speaking of RAM, not all of the data needs to fit exactly into memory as it did with the LASR and HPA procedures, so much larger data sets can be processed in SAS Viya than you might have been able to handle before.

Multi-threaded DATA step

After initially loading data into SAS Viya, I was pleased to learn that the SAS DATA step is multi-threaded.  Most of your SAS9 programs will run ‘as is’; however, the multi-processing really only kicks in when the system finds explicit BY statements or partition statements in the DATA step code.  Surprisingly, you no longer need to sort your data before using BY statements in Procs or DATA steps.  That’s because there is no Proc Sort anymore – sorting is a thing of the past, and it certainly takes some getting used to in SAS Viya.  So all of those cases where I had to sort data first and then execute one or more DATA steps now transform into a simpler code stream.  Steven Sober has some excellent code examples of the DATA step running in full-distributed mode in his recent article.
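The reason a pre-sort becomes unnecessary can be sketched in a few lines: partition rows by the BY key and process each group independently, in whatever order the groups arrive (a toy illustration of the idea, not the actual DATA step implementation):

```python
from collections import defaultdict

rows = [("b", 2), ("a", 1), ("b", 3), ("a", 4)]  # deliberately unsorted input

# Hash-partition by the BY variable instead of sorting first; each bucket can
# then be handed to its own thread or worker and processed independently.
groups = defaultdict(list)
for key, value in rows:
    groups[key].append(value)

# Per-group aggregation never needed the rows in global sorted order.
totals = {key: sum(vals) for key, vals in groups.items()}
print(totals)
```

Because each BY group is self-contained, the expensive global sort of the SAS9 workflow is replaced by a cheap partition step that also happens to parallelise naturally.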

Open APIs

While all of SAS Viya’s graphical user interfaces are designed with consistency of look and feel in mind, the R&D teams have designed the platform to allow almost any front end or REST service to submit commands and receive results from either CAS or its corresponding microservice architecture.  Something new I had to learn was the concept of a CAS action set.  CAS action sets are comprised of a number of separate actions which can be executed singly or with other actions belonging to the same set.  The cool thing about CAS actions is that there is one for almost any task you can think about doing (kind of like a blend between functions and Procs in SAS9).  In fact, all of the visual interfaces SAS deploys utilize CAS actions behind the scenes, and most GUIs will automatically generate code for you if you do not want to write it.

But the real beauty of CAS actions is that you can submit them through different coding interfaces using the open Application Programming Interfaces (APIs) that SAS has written to support external languages like Python, Java, Lua and R (check out GitHub on this topic).  The standardization aspect of using the same CAS action within any type of external interface looks like it will pay huge dividends to anyone investing in this approach.

Write it once, re-use it elsewhere

I think another feature that old and new users alike will adore is the “write it once, re-use it” paradigm that CAS actions support.  Here’s an example of code that was used in Proc CAS, then in a Jupyter notebook using Python, followed by an R example.

Proc CAS

proc cas;
dnnTrain / table={name = 'iris_with_folds',
                  where = '_fold_ ne 19'}
 modelWeights = {name='dl1_weights', replace=true}
 target = "species"
 hiddens = {10, 10} acts={'tanh', 'tanh'}
 sgdopts = {miniBatchSize=5, learningRate=0.1, 
                  maxEpochs=10};
run;

 

Python API

s.dnntrain(table = {'name': 'iris_with_folds',
                    'where': '_fold_ ne 19'},
           modelweights = {'name': 'dl1_weights', 'replace': True},
           target = "species",
           hiddens = [10, 10], acts = ['tanh', 'tanh'],
           sgdopts = {'miniBatchSize': 5, 'learningRate': 0.1,
                      'maxEpochs': 10})

 

R API

cas.deepNeural.dnnTrain(s,
  table = list(name = 'iris_with_folds',
               where = '_fold_ ne 19'),
  modelweights = list(name = 'dl1_weights', replace = TRUE),
  target = "species",
  hiddens = c(10, 10), acts = c('tanh', 'tanh'),
  sgdopts = list(miniBatchSize = 5, learningRate = 0.1,
                 maxEpochs = 10))

 

See how nearly identical these three are to one another?  That is the beauty of SAS Viya.  Using a coding approach like this means that I do not need to rely exclusively on finding SAS coding talent anymore.  Younger coders who usually know several open source languages can take one look at this, understand it, and easily incorporate it into what they are already doing.  In other words, they can stay in coding environments that are familiar to them, whilst learning a few new SAS Viya objects that dramatically extend and improve their work.

Analytics Procedure Advantages

Auto-tuning

Next, I want to address some of the advantages of the newer analytics procedures.  One really great new capability is the auto-tuning feature for some machine learning modeling techniques, specifically (extreme) gradient boosting, decision tree, random forest, support vector machine, factorization machine and neural network.  This capability is something that is hard to find in the open source community: the automatic tuning of the major option settings required by most iterative machine learning techniques.  Known as 'hyperparameters' among data scientists, these settings are handled by built-in optimizing routines that try different combinations and pick the best ones for you (in parallel!!!).  The process takes longer to run initially, but, wow, the increase in accuracy without going through the normal trial-and-error model building process makes this an amazing feature!
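
Conceptually, what auto-tuning automates looks like the plain-Python sketch below (this is not SAS Viya code; the scoring function and candidate grid are made up for illustration): try candidate settings, score each, keep the best.  Viya does this for you in parallel, with far smarter search strategies than this brute-force grid.

```python
from itertools import product

# Hypothetical scoring function standing in for "train a model with these
# settings and measure validation accuracy" (higher is better).
def validation_score(learning_rate, n_trees):
    return 1.0 - abs(learning_rate - 0.1) - abs(n_trees - 100) / 1000.0

# Candidate settings: the "hyperparameters" described above.
grid = {
    "learning_rate": [0.01, 0.1, 0.5],
    "n_trees": [50, 100, 200],
}

# Score every combination and keep the best one.
best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda params: validation_score(**params),
)
print(best)  # the settings with the highest validation score
```

The point of the sketch is only the shape of the loop; the real auto-tune feature searches the space far more intelligently than exhaustive enumeration.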

Extreme Gradient Boosting, plus other popular ML techniques

Admittedly, xgboost has been in the open source community for a couple of years already, but SAS Viya has its own extreme[1] gradient boosting CAS action (‘gbtreetrain’) and accompanying procedure (Gradboost).  Both are very close to what Chen and Guestrin (2015, 2016) originally developed, yet have some nice enhancements sprinkled throughout.  One huge bonus is the auto-tuning feature I mentioned above.  Other enhancements include: 1) a more flexible tree-splitting methodology that is not limited to CART (binary tree splitting), and 2) automatic handling of nominal input variables, versus the ‘one-hot encoding’ you need to perform yourself in most open source tools.  Plus, there are lots of other detailed option settings for fine-tuning and control.
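
For readers new to the term, here is a minimal plain-Python illustration of the one-hot encoding step that most open source boosting tools make you perform by hand, and which the SAS procedure handles automatically (the Origin values are just sample data): each level of a nominal variable becomes its own 0/1 indicator column.

```python
# One-hot encode a nominal column: one 0/1 indicator per distinct level.
origins = ["Asia", "Europe", "USA", "Asia"]  # a nominal input, e.g. Origin
levels = sorted(set(origins))                # the distinct levels, ordered

encoded = [[1 if value == level else 0 for level in levels]
           for value in origins]

print(levels)   # ['Asia', 'Europe', 'USA']
print(encoded)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
```

With dozens of nominal inputs and high-cardinality levels, doing this by hand gets tedious fast, which is why automatic handling is a genuine convenience.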

In SAS Viya, all of the popular machine learning techniques are there as well, and SAS makes it easy for you to explore your data, create your own model tournaments, and generate score code that is easy to deploy.  Model management is currently done through SAS9 (at least until the next SAS Viya release later this year), but good, solid links are provided between SAS Viya and SAS9 to make transferring tasks and output fairly seamless.  Check out the full list of SAS Viya analytics available as of March 2017.

In-memory forecasting

It is hard to beat SAS9 Forecast Server with its unique 12 patents for automatically diagnosing and generating forecasts, but now all of those industry-leading innovations are also available in SAS Viya’s distributed in-memory environment. And by leveraging SAS Viya’s optimized data-shuffling routines, time series data does not need to be sorted, yet it is quickly and efficiently distributed across the shared memory array. The new architecture has also given us a set of new object packages that make more efficient use of the data and run faster than anything witnessed before. For example, we have seen 1.5 million weekly time series with three years of history drop from 130 hours (running single-machine and single-threaded) down to 5 minutes on a 40-core networked array with 10 threads per core. Accurately forecasting 870 gigabytes of information in 5 minutes?!? That truly is amazing!

Conclusions

Though I first ventured into SAS Viya with some trepidation, it soon became clear that the new platform would fundamentally change how I build models and analytics.  In fact, the jumps in performance and the reductions in time spent on routine work have been so compelling for me, I am having a hard time thinking about going back to a pure SAS9 environment.  For me it’s all about getting “better, faster answers,” and SAS Viya allows me to do just that.   Multi-threaded processing is the way of the future and I want to be part of that, not only for my own personal development, but also because it will help me achieve things for my customers they may not have thought possible before.  If you have not done so already, I highly recommend you go out and sign up for a free trial and check out the benefits of SAS Viya for yourself.


[1] The definition of ‘extreme’ refers only to the distributed, multi-threaded aspect of any boosting technique.

References

Chen, Tianqi, and Carlos Guestrin. “XGBoost: Reliable Large-scale Tree Boosting System.” 2015.

Chen, Tianqi, and Carlos Guestrin. “XGBoost: A Scalable Tree Boosting System.” 2016.

Using the Robust Analytics Environment of SAS Viya was published on SAS Users.

3月 182017
 

Editor’s note: This is the third in a series of articles to help current SAS programmers add SAS Viya to their analytics skillset. In this post, Advisory Solutions Architect Steven Sober explores how to accomplish distributed data management using SAS Viya. Read additional posts in the series.

In my last article I explained how SAS programmers can execute distributed DATA Step, PROC DS2, PROC FEDSQL and PROC TRANSPOSE in SAS Viya’s Cloud Analytic Services (CAS) which speeds up the process of staging data for analytics, visualizations and reporting. In this article we will explore how open source programmers can leverage the same SAS coding techniques while remaining in their comfort zone.

For this post, I will utilize Jupyter Notebook to run the Python script that is leveraging the same code we used in part one of this series.

Importing Package and Starting CAS

First, we import the SAS Scripting Wrapper for Analytics Transfer (SWAT) package, which is the Python client to SAS Cloud Analytic Services (CAS). To download the SWAT package, use this URL: https://github.com/sassoftware/python-swat.

Let’s review the cell “In [16]”:

1.  Import SWAT

a.  Required statement, this loads the SWAT package into our Python client

2.  s = swat.CAS("viya.host.com", port#, "userid", "password")

a.  Required statement; for our example we will use “s” in our dot notation syntax to send our statements to CAS for processing. “s” is end-user definable (e.g., I could have used “steve =” instead of “s =”).

b.  viya.host.com is the host name of your SAS Viya platform

c.  Port#

i.  Port number used to communicate with CAS

d.  userid

i.  Your user id for the SAS Viya platform

e.  Password

i.  Your password for your userid

3.  indata_dir = "/opt/sasinside/DemoData"

a.  Creates a variable called “indata_dir”. This is a directory on the SAS Viya platform where the source data for our examples is located.

4.  indata     = "cars"

a.  Creates a variable called “indata”, which contains the name of the source table we will load into CAS

Reviewing cell “Out[16]” we see the information that CAS returns to our client when we connect successfully.
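
Gathered into one notebook cell, the statements described above look roughly like this sketch (the host name, port number, user id, and password are placeholders for your site's values; the import is guarded so the sketch reads even without SWAT installed or a live CAS server):

```python
try:
    import swat  # SAS Scripting Wrapper for Analytics Transfer
    # "s" becomes our handle for sending statements to CAS via dot notation
    s = swat.CAS("viya.host.com", 5570, "userid", "password")
except Exception:
    s = None  # no SWAT package or no reachable CAS server: sketch only

indata_dir = "/opt/sasinside/DemoData"  # directory holding the source data
indata = "cars"                         # source table to load into CAS
```

In a real notebook you would omit the try/except and let a failed connection surface immediately.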

Loading our Source Table and CAS Action Sets

In order to load data into CAS we first need to create a caslib that points at our source data. Reviewing cell “In[17]”, we call the table.addCaslib action:

s.table.addCaslib(name = "casuser",
                  dataSource = {"srcType": "PATH"},
                  path = indata_dir)

a.  To send statements to CAS we use dot notation syntax where:

a.  s

i.  The CAS session that we established in cell “In[16]”

b.  table

i.  CAS action set

c.  addCaslib

i.  Action set’s action

d.  name

i.  Specifies the name of the caslib to add.

e.  dataSource

i.  Specifies the data source type and type-specific parameters.

f.  path

i.  Specifies data source-specific information. For PATH and HDFS, this is a file system path. In our example we are referencing the path using the variable “indata_dir” that we established in cell “In[16]”.

s.table.loadTable(path = "cars.sas7bdat",
                  casOut = {"caslib": "casuser",
                            "name": "cars",
                            "replace": True})

a.  As we learned, “s.” is our connection to CAS and “table.” is the CAS action set, while “loadTable” is the action set’s action.

a.  path=

i.  Specifies the file, directory or table name. In our example this is the physical name of the SAS data set being loaded into CAS.

b. casOut=

i.  The CAS library we established in cell “In[17]” using the “addCaslib” action.

1.  caslib.casuser

a.  “caslib” - is a reserved word and is used to reference all CAS libraries
b.  “casuser” - is the CAS library we will use in our examples
c.  “name” - is the CAS table name
d.  “replace” - provides us an option to replace the CAS table if it already exists.

Reviewing cell “Out[17]” we see the information that CAS returns to our client when we successfully load a table into CAS.

Click here for information on the loadActionSet action.

DATA Step

We are now ready to continue by running DATA Step, PROC DS2, PROC FEDSQL and PROC TRANSPOSE via our python script.

Now that we understand the dot notation syntax used to send statements to CAS, it becomes extremely simple to leverage the same code our SAS programmers are using.

Reviewing cell “In[19]”, we notice we are using the CAS action set “dataStep” and its action “runCode”.  Notice that between the (“”” and “””) delimiters we have the same DATA Step code we reviewed in part one of this series. By reviewing cell “Out[19]” we can see the information CAS sent back about the source (casuser.cars) and target (casuser.cars_data_step) tables used in our DATA Step.
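
As a sketch of what such a cell can look like, assuming the active SWAT connection `s` from cell “In[16]” (the DATA Step body below is illustrative, not the code from part one of the series; MPG_City and MPG_Highway are columns from the familiar CARS demo table):

```python
# Submit DATA Step code to CAS via the dataStep action set's runCode action.
ds_code = """
data casuser.cars_data_step;
    set casuser.cars;
    mpg_avg = mean(mpg_city, mpg_highway);  /* runs on every CAS thread */
run;
"""

# With an active connection (s = swat.CAS(...)), the submission is one line:
#     s.dataStep.runCode(code=ds_code)
```

The same string, pasted into a SAS session inside PROC CAS or DATA Step, would produce the same table, which is the whole point of the shared action interface.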

With DS2 we utilize the CAS action set “ds2” with its action “runDS2.” In reviewing cell “In[23]” we do notice a slight difference in our code. There is no “PROC DS2” prior to the “thread mythread / overwrite = yes;” statement. With the DS2 action set we simply define our DS2 THREAD program and follow that with our DS2 DATA program. Notice in the DS2 DATA program we declare the DS2 THREAD that we just created.

Review the NOTE statements prior to “Out[23]”. These statements validate the DS2 THREAD and DATA programs executed in CAS.

With FedSQL we use the CAS action set “fedsql” with its action “execDirect”. The “query=” parameter is where we place our FedSQL statements. By reviewing the NOTE statements we can validate that our FedSQL ran successfully.
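
A sketch of the same pattern for FedSQL, again assuming the active SWAT connection `s` (the query itself is illustrative; Make, Model and MSRP are columns from the CARS demo table):

```python
# An illustrative FedSQL query submitted through the fedsql action set's
# execDirect action; the result lands in a new CAS table.
query = """
create table casuser.cars_fedsql as
select make, model, msrp
from casuser.cars
"""

# With an active connection s = swat.CAS(...), the submission is:
#     s.fedsql.execDirect(query=query)
```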

With TRANSPOSE we use the CAS action set “transpose” with its action “transpose.” The syntax differs from PROC TRANSPOSE, but it is very straightforward to map out the parameters needed to accomplish the transpose for your analytics, visualizations and reports.

Collaborative distributed data management with open source was published on SAS Users.

3月 172017
 

As a SAS instructor, I’m often on the road, but, in April, my work travel path is going to take me to a place I haven’t visited since I was 12 years old. The occasion?  SAS Global Forum 2017.  The location?  Walt Disney World® in Orlando. While the main conference [...]

The post Visiting Open Attractions and Open SAS appeared first on SAS Learning Post.

3月 172017
 

Editor’s note: This is the second in a series of articles to help current SAS programmers add SAS Viya to their analytics skillset. In this post, Advisory Solutions Architect Steven Sober explores how to accomplish distributed data management using SAS Viya. Read additional posts in the series.

This article in the SAS Viya series will explore how to accomplish distributed data management using SAS Viya. In my next article, we will discuss how SAS programmers can collaborate with their open source colleagues to leverage SAS Viya for distributed data management.

Distributed Data Management

SAS Viya provides a robust, scalable, cloud-ready distributed data management platform. This platform provides multiple techniques for data management that run distributed, i.e., using all cores on all compute nodes defined to the SAS Viya platform. The four techniques we will explore here are DATA Step, PROC DS2, PROC FEDSQL and PROC TRANSPOSE. With these four techniques, SAS programmers and open source programmers can quickly apply complex business rules that stage data for downstream consumption, i.e., analytics, visualizations and reporting.

The rule for getting your code to run distributed is to ensure all source and target tables reside in the In-Memory component of SAS Viya i.e., Cloud Analytic Services (CAS).

Starting CAS

The following statement is an example of starting a new CAS session. In the coding examples that follow we will reference this session using the keyword MYSESS. Also note, this CAS session is using one of the default CAS libraries, CASUSER.

Binding a LIBNAME to a CAS session

Now that we have started a CAS session we can bind a LIBNAME to that session using the following syntax:

Note: CASUSER is one of the default CAS libraries created when you start a CAS session. In the following coding examples we will utilize CASUSER for our source and target tables that reside in CAS.

To list all default and end-user CAS libraries, use the following statement:

Click here for more information on CAS libraries.

THREAD program

  • The PROC DS2 DATA program must declare a THREAD program
  • The source and target tables must reside in CAS
  • Unlike DATA Step, with PROC DS2 you use the SESSREF= parameter to identify which CAS environment the source and target tables reside in

For PROC TRANSPOSE to run in CAS the rules are:

1. All source and target tables must reside in CAS
   a. Like DATA Step, you use a two-level name to reference these tables

    Collaborative distributed data management using SAS Viya was published on SAS Users.