Cloud Computing

April 12, 2019
 

At the end of my SAS Users blog post explaining how to install SAS Viya on the Azure Cloud for a SAS Hackathon in the Nordics, I promised to provide some technical background. In the end, only one manual step remained: launching a shell script from a Linux machine, from which the whole process kicked off. In this post, I explain how we managed to automate this process as much as possible. Read on to discover the details of the script.

Prerequisite

The script makes heavy use of the Azure command-line interface (CLI), Microsoft's cross-platform command-line tool for managing Azure resources. Make sure the CLI is installed; otherwise, you cannot use the script.

The deployment process

The process contains three different steps:

  1. Test the availability of the SAS Viya installation repository.
  2. Launch a new Azure Virtual Machine. This action uses a previously created custom Azure image.
  3. Perform the actual installation.

Let’s examine the details of each step.

Test the availability of the SAS Viya installation repository

When deploying software in the cloud, Red Hat recommends using a mirror repository. Since the SAS Viya package allows for this installation method, we decided to use a mirror for the hackathon images. A mirror is optional, but it is the best choice if your deployment does not have access to the Internet, or if you must always deploy the same version of the software (for regulatory reasons, or for consistency between testing and production).

In our Azure Subscription we created an Azure Resource group with the name ‘Nordics Hackathon.’ Within that resource group, there is an Azure VM running a web server hosting the downloaded SAS Viya repository.

Azure VM running HTTPD Server and hosting a SAS Viya Mirror Repository
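For reference, a minimal way to serve a downloaded mirror over HTTP on a Red Hat Enterprise Linux VM looks something like the sketch below. The source and target paths are hypothetical; the post does not specify the exact layout.

# a minimal sketch (assumed paths): serve the downloaded SAS Viya mirror with Apache httpd
sudo yum install -y httpd

# copy the mirror file structure into the web server's document root
sudo cp -r /tmp/sas_repos /var/www/html/sas_repos

# start httpd now and on every boot
sudo systemctl enable --now httpd

# sanity check: the repository should now be reachable over HTTP
curl -s http://localhost/sas_repos/ | head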

Of course, we cannot start the SAS Viya installation before being sure this VM, which hosts all the RPM packages needed to install SAS Viya, is running.
To validate that the VM is running, we issue the start command from the CLI:

az vm start -g [Azure Resource Group] -n [AZ VM name]

Something like:

az vm start -g my_resourcegroup -n my_viyarepo34

If the server is already running, nothing happens. If not, the command starts the VM. We can also check the Azure console:

Azure Console with 'Running' VMs
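If you prefer scripting the check instead of opening the console, the CLI can report the VM's power state directly. A small sketch, using the same placeholder names as above:

# query the current power state of the repository VM
az vm show -d -g my_resourcegroup -n my_viyarepo34 --query powerState -o tsv
# prints, for example, "VM running" or "VM deallocated"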

Launching the VM

The second part of the script launches a new Azure VM. We use the custom Azure image we created earlier. The SAS Viya image creation is explained in the first blog post.

The Azure image used for the Nordics hackathon was the template for all other SAS Viya installations. On this Azure image we completed several valuable tasks:

  • Performed a SAS Viya pre-deployment assessment using the SAS Viya Administration Resource Kit (Viya ARK). The Viya ARK Pre-installation Playbook is a great tool that checks all prerequisites and performs many pre-deployment tasks before SAS Viya software is deployed.
  • Installed R Server and RStudio.
  • Installed Ansible.
  • Created a SAS Viya playbook using the SAS Orchestration CLI.
  • Customized the Ansible playbooks, created by SAS colleagues, that kick off the OpenLDAP and JupyterHub installations.

Every time we launch our script, a new Azure virtual machine starts as an exact copy of that image, fully customized according to our needs for the Hackathon.
Below is the Azure CLI command used in the script to create a new Azure VM.

az vm create --resource-group [Azure Resource Group] --name $NAME --image viya_Base \
--admin-username azureuser --admin-password [your_pw] --subnet [subnet_id] \
--nsg [optional existing network security group] --public-ip-address-allocation static \
--size [any Azure size] --tags name=$NAME
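As a concrete example, a filled-in invocation might look like the following sketch. Every value here is hypothetical except the viya_Base image name; the size shown is the one used for the hackathon VMs.

NAME=hackathon-viya01   # the instance name, the script's single parameter

az vm create --resource-group my_resourcegroup --name $NAME --image viya_Base \
--admin-username azureuser --admin-password 'S0me_Strong_Pw!' --subnet my_subnet_id \
--nsg my_nsg --public-ip-address-allocation static \
--size Standard_E32-16s_v3 --tags name=$NAME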

After the creation of the VM, we install SAS Viya in the third step of the process.

Installation

After running the script three times (using a different value for $NAME), we end up with the following high-level infrastructure:

SAS Viya on Azure Cloud deployment

After the Azure VM launches, the deployment continues with the viya-install.sh install script, staged in the /opt/sas/install/ location on the original image.
In this last step of the deployment process, the script installs OpenLDAP, SAS Viya and JupyterHub. The following command runs the script:

az vm run-command invoke -g [Azure Resource Group] -n $NAME --command-id RunShellScript --scripts "sudo /opt/sas/install/viya-install.sh &"

The steps in the script should be familiar to those with experience installing SAS Viya and/or Ansible playbooks. Below is the script in its entirety.

#!/bin/bash
touch /start
####################################################################
echo "Starting with the installation of OPENLDAP. Check the openldap.log in the playbook directory for more information" > /var/log/myScriptLog.txt
####################################################################
# install openldap
cd /opt/sas/install/OpenLDAP
ansible-playbook openldapsetup.yml
if [ $? -ne 0 ]; then { echo "Failed the openldap setup, aborting." ; exit 1; } fi
cp ./sitedefault.yml /opt/sas/install/sas_viya_playbook/roles/consul/files/sitedefault.yml
if [ $? -ne 0 ]; then { echo "Failed to copy file, aborting." ; exit 1; } fi
####################################################################
echo "Starting Viya installation" >> /var/log/myScriptLog.txt
####################################################################
# install viya
cd /opt/sas/install/sas_viya_playbook
ansible-playbook site.yml
if [ $? -ne 0 ]; then { echo "Failed to install sas viya, aborting." ; exit 1; } fi
####################################################################
echo "Starting jupyterhub installation" >> /var/log/myScriptLog.txt
####################################################################
# install jupyterhub
cd /opt/sas/install/jupy-azure
ansible-playbook deploy_jupyter.yml
if [ $? -ne 0 ]; then { echo "Failed to install jupyterhub, aborting." ; exit 1; } fi
####################################################################
touch /finish 
####################################################################
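Because the run-command launches viya-install.sh in the background (note the trailing &), it returns before the deployment finishes. The script drops /start and /finish marker files and logs to /var/log/myScriptLog.txt, so one way to check on progress is another run-command call. A sketch:

# check whether the install has started/finished and show the most recent log lines
az vm run-command invoke -g [Azure Resource Group] -n $NAME --command-id RunShellScript \
--scripts "ls -l /start /finish 2>/dev/null; tail -n 5 /var/log/myScriptLog.txt"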

Up next

In a future blog, I hope to show you how to get up and running with the SAS Viya Azure Quick Start. For now, the details I provided in this and the previous blog post are enough to get you started deploying your own SAS Viya environments in the cloud.

Script for a SAS Viya installation on Azure in just one click was published on SAS Users.

April 5, 2019
 

You are a data scientist, in your office, doing data scientist-y things, when your manager's manager's manager makes an impossible request. She wants you to take a raw data set from the stem cell research team, scrub the data, create and score models, and be ready to rescore when new data becomes available. And she wants it in a week. WHAT?! Your company doesn't own an analytics software license, and a spreadsheet is not going to work on data with millions of records. Even if you received funding, how could you ever create and maintain an environment under your tight deadline? Take a deep breath, summon your inner data scientist acumen, and realize SAS has the answer.

SAS Machine Learning on SAS Analytics Cloud provides on-demand programming access to machine learning algorithms in the cloud. No downloads, no installs, no infrastructure, no maintenance. This solution provides a multithreaded, multiuser environment for concurrent access to data in memory. The solution is designed for data scientists (and others) coding in SAS or Python and allows them on-demand programmatic access to SAS Viya. You can find more details on SAS Analytics Cloud in the fact sheet. You can even try it for free! The rest of this article walks you through the features of this new SAS offering and outlines how it can help you complete the task bestowed upon you.

Register and get started

To sign up for the trial, all you need are a SAS Profile, an email address, and a PC; you will be coding in SAS in less than a minute. From the SAS Analytics Cloud page, select the Get Free Trial button. This takes you to the SAS Profile login page (you can create your SAS Profile here if you do not have one).

SAS Profile log in or creation

Agree to the Terms and Conditions on the License Agreement page and select the Continue button:

Trial License Agreement

You will receive an email containing a URL much like the following:

email confirmation with trial URL

Logging in

Select the link or paste it into your browser (64-bit Google Chrome recommended) and you will see the login screen. Enter your SAS Profile credentials and click the Sign In button.

Sign In screen

The Home screen (Applications) appears.

Home Page

We'll discuss the Data and Team pages in further detail later in this article. You have two options for applications: SAS Studio (for SAS programming) and JupyterLab (for Python programming). This article focuses on SAS Studio; a follow-up article will cover the JupyterLab use case. Select the SAS Studio button and a new tab opens to SAS Studio, ready for coding.

SAS Studio

You are familiar with the SAS language, but you need to brush up a little. Have no fear, support documentation is easily accessible. Also, the SAS Data Mining and Machine Learning Community is a great place to discover additional resources and ask questions. Finally, embedded in SAS Studio are code snippets. You decide to explore the latter.

Code snippets

In SAS Studio, select the Snippets twisty in the left pane and navigate to the SAS Viya Machine Learning section. Here you find code samples you will use to prep and analyze your data. When you open a snippet, you see code and detailed comments describing what the code accomplishes. You will use these snippets as a guide when you load and prep your data and perform your analysis. Below is an image of the Prepare and Explore Data snippet. Notice each code step has accompanying comments.

You read through each snippet in the Machine Learning section. The commands and structure of the code come back to you quickly, and you're now ready to try it all out on your own data.

Uploading data

Now that you have an idea of what code you need to write, you need to load the data from the research department. You accomplish this by selecting the Server Files and Folders twisty and navigating to the Folder Shortcuts section. In this instance you want to upload your file into the shared/data directory (I'll explain why I chose this location in a moment). Use the Upload button to upload the research data file.

Upload file to the data directory

You're not alone

Files uploaded to shared/data are visible to others logged into the environment. Wait, did I forget to mention this is a multi-user environment?! Well, yes, it is. You can invite others to collaborate on the project. To add and manage users, return to the Home screen (leaving SAS Studio open) and select the Team section in the left pane. The Team page lists users and displays an Invite button, used to send others an invitation for system access.

Teams page

To invite others, click the button and enter the email address of the new user. This generates and sends an invitation email. The new user accepts the invite and now has access to the system. Using the URL provided in the email, the new user logs in with their own SAS Profile credentials. The default role for new users is ‘User.’ A user with admin privileges can change the role to ‘Admin.’ In the free trial, you are permitted a total of five users.

Shared data

You may have guessed by now that the Data section lists the directories and files located in the shared directory in SAS Studio.

Data page

You will also notice you have 5 GB of storage space, which includes both shared and non-shared files.

I love this. How do I get more?

Now you know your way around the system and are ready to start coding. Return to SAS Studio, open a new program, and commence your analysis of the stem cell data. When you successfully deliver the project and impress your management chain, you can mention how SAS Analytics Cloud made it all possible (and simple). You now have a case for departmental procurement of the solution, opening your organization up to more users, more storage, and more power to run advanced machine learning algorithms on your data.

Your turn

In this article I've outlined how to easily register for the SAS Machine Learning trial and start coding in a matter of minutes. Try it out yourself: register, load your data, get coding, and solve your problem.

Related Resources

For more details on the development of SAS Analytics Cloud, check out Missy Hannah's interview with two UI developers on the project.

Zero to SAS in 60 Seconds: SAS Machine Learning on SAS Analytics Cloud was published on SAS Users.

March 27, 2019
 

PAYG financial services: coming to a bank near you

You walk into your neighborhood bank to see about a mortgage. You and your spouse have your eye on the perfect 3BR, 2BA brick ranch near your child's school, and it won't be on the market long. An hour later, you burst through the front door with a bottle of champagne: "We're qualified!"

Also celebrating is your bank's branch manager. She was skeptical when headquarters analysts equipped branches with a cloud-based application using SAS, saying it would speed up loan applications. But your quick, frictionless transaction proved them right. The bank's accountants are happy too: the new pay-as-you-go model of using SAS software in the cloud means big savings.

The above scenario is possible now through serverless functions, which enable your SAS Viya applications to take input from end users, score the loan application, and return results.

The rest of this post gets into the nitty-gritty of serverless functions and SAS Viya, detailing what happens in a bank's computers after a customer applies for a loan. The qualification process starts by running a previously built scoring model to generate a score. You will see how the combination of REST APIs in SAS Viya, analytic models, and the restaf library makes the task of building the serverless function relatively simple.

The blog post titled "SAS REST APIs: a sample application" demonstrated building a SAS Viya application using REST APIs, SAS Visual Analytics and SAS Operational Research. That is a typical web application, with the application server and SAS Viya running on premises.

If you are one of the many users running on (or considering) a cloud provider, serverless functions are a useful alternative way to deliver your applications to your users. This approach eliminates the need to manage the application server associated with your application. Additionally, you get zero administration and auto-scaling, among other benefits. Many SAS applications that respond quickly to user requests are ideal candidates to be deployed as serverless functions.

The example in this article is available on SAS software’s GitHub site in the viya-apps-serverless-score repository. If you want to see the end application for frame of reference, see the Using the serverless functions section at the bottom of this article.

Let’s begin with a bit of background on serverless computing and then dig into the details of the application and functions.

Serverless computing explained

Here are the benefits of serverless functions as touted by AWS, Azure and serverless.com:

AWS Lambda

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

What is serverless computing?

According to Azure, serverless computing is the abstraction of servers, infrastructure, and operating systems. When you build serverless apps you don’t need to provision and manage any servers, so you can take your mind off infrastructure concerns. Serverless computing is driven by the reaction to events and triggers happening in near-real-time—in the cloud. As a fully managed service, server management and capacity planning are invisible to the developer, and billing is based just on resources consumed or the actual time your code is running.

Four core benefits of serverless computing from serverless.com:

  1. Zero administration – Deploy code without provisioning anything beforehand or managing anything afterward. There is no concept of a fleet, an instance, or even an operating system. No more bothering the Ops department.
  2. Auto-scaling – Let your service providers manage the scaling challenges. No need to fire alerts or write scripts to scale up and down. Handle quick bursts of traffic and weekend lulls the same way — with peace of mind.
  3. Pay-per-use – Function-as-a-service compute and managed services charged based on usage rather than pre-provisioned capacity. You can have complete resource utilization without paying a cent for idle time. The results? 90% cost-savings over a cloud VM, and the satisfaction of knowing that you never pay for resources you don’t use.
  4. Increased velocity – Shorten the loop between having an idea and deploying to production. Because there’s less to provision up front and less to manage after deployment, smaller teams can ship more features. It’s easier than ever to make your idea live.

OK, so there is still a server involved in serverless computing. The beauty of this technology is that once you deploy your code, you don't have to worry about the underlying infrastructure; you just know the app should work, and you only incur costs when it is running.

Basic flow

Serverless functions are loaded and executed based on the occurrence of one of the triggers/events supported by the cloud vendor. In this example, the API Gateway triggers the serverless functions when an HTTP call invokes the function. The API Gateway calls the function's handler and passes in the user data. On return from the handler, the response is sent to the client. This article focuses on the code inside the Serverless Function box in the picture below.

Figure 1: Request Workflow

This example utilizes two key functions:

  1. app – This function serves up an HTML application where the user enters data. This is an example of a web application delivered as a serverless function.
  2. score – This function takes user input from the web app, executes scoring on a SAS Viya server, and returns the results.

Serverless.yml

The serverless.yml file defines the serverless functions, the handlers used to execute them, and other system-related information. We will focus only on the application-specific information.

The code snippet below shows the definition of the path and handler for the two functions in the serverless.yml file.

functions:
  app: 
    handler: src/app.app
    events:
      - http:
          path: app
          method: get
          cors: 
            origin: '*'
          request:
            parameters:
              paths:
                id: true  
 
  score:
    handler: src/score.score
    events:
      - http:
          path: score
          method: post
          cors: 
            origin: '*'

The functions (app and score) in the yaml define:

  1. event – an http event will trigger this function
  2. path – the path to the function, similar to what you define in Express or hapijs
  3. method – a standard HTTP method (GET, POST, and so on)
  4. others – refer to the cloud vendor's documentation for other available options.
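With the serverless.yml in place, deploying both functions is typically a single command from the project root. Below is a minimal sketch, assuming the Serverless Framework CLI is installed and your AWS credentials are configured; it is not part of the repository's documented steps.

# install the Serverless Framework CLI once
npm install -g serverless

# package and deploy the app and score functions; prints the endpoint URLs on success
serverless deploy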

The serverless.yml file also sets application-related information using environment variables. In this particular use case we define how to access SAS Viya and which scoring model to use.

environment:
#
# Information for logging into SAS Viya
#
  VIYA_SERVER: http://example.viya.server.com
  CLIENTID: raf
  CLIENTSECRET: raf
  USER: rafuser
  PASSWORD: rafpass
 
#
# astore to be used for scoring
#
  caslib: casuser
  name: GRADIENT_BOOSTING___BAD_2

A note on securing your password

In this example we store the user ID and password in environment variables to keep the focus on the internals of serverless functions for SAS Viya. Locally, you can use "serverless variables" to secure the information during development. However, for production deployment, refer to your provider's recommendations and the user community for best practices.

Sounds like a follow-up blog post in the future 🙂

Anatomy of the serverless function

Figure 2 shows the flow inside the serverless function for this example. This pattern will repeat itself in your serverless functions.

Figure 2: Serverless Function Flow

Serverless function score

The code below is the handler for the score function. The rest of this section discusses each of the key steps in the handler.

//
// See src/score.js for the full code
//
module.exports.score = async function (event, context) {
 
   let store   = restaf.initStore(); /* initialize restaf     */
   let inParms = parseEvent(event);  /* get user input        */
   let payload = getLogonPayload();  /* get logon information */
 
   return store.logon(payload)                     /* logon to SAS Viya  */
        .then(()     => scoreMain(store, inParms)) /* score              */
        .then(result => setPayload(result))        /* return results     */
        .catch(err   => setError(err));            /* else return errors */
}

Step 1: Parse the input

The event parameter contains the input from the caller (a web application, another serverless function, and so on). The content of the event parameter is whatever the designer of the serverless function chooses. For this application, a sample event is shown below.

{
    "input": {
        "JOB"    : "J1",
        "CLAGE"  : 100,
        "CLNO"   : 20,
        "DEBTINC": 20,
        "DELINQ" : 2,
        "DEROG"  : 0,
        "MORTDUE": 4000,
        "NINQ"   : 1,
        "YOJ"    : 10,
        "LOAN"   : 10000,
        "VALUE"  : 1000000
    }
}

The parseEvent function validates the incoming information.

module.exports = function parseEvent(event) {
    let input = null;
    let body = {};
    let rstore = {
        caslib: process.env.ASTORE_CASLIB,
        name  : process.env.ASTORE_NAME
    };
    if (event.body != null) {
        body = (typeof event.body === 'string') ? JSON.parse(event.body) : Object.assign({}, event.body);
        if (body.hasOwnProperty('input') === true) {
            input = body.input;
        }
    }
    return { rstore: rstore, input: input };
}

Step 2: Logon to SAS Viya

The serverless.yml file defines the SAS Viya logon information. Note there are other, more secure ways to manage sensitive information like passwords; refer to your provider’s documentation.

module.exports = function getLogonPayload() {
    let p = {
        authType    : 'password',
        host        : `${process.env.VIYA_SERVER}`,
        user        : process.env['USER'],
        password    : process.env['PASSWORD'],
        clientID    : process.env['CLIENTID'],
        clientSecret: (process.env.hasOwnProperty('CLIENTSECRET')) ? process.env[ 'CLIENTSECRET' ] : ''
        };
    return p;
 }

The line store.logon(payload) in the handler code logs on to the SAS Viya server using this information.

Step 3 and Step 4: Create payload and make REST API calls

On successful logon, the server is called to do the scoring. This particular example uses the sccasl.runcasl method to run CAS language (CASL) statements and return the scores. Creating the score has two steps:

  1. Upload the user input: the input is converted to a CSV file and uploaded to a CAS table.
  2. Submit CASL statements to SAS Viya (CAS) to do the scoring.

The code in src/scoreMain in the repository accomplishes both these steps.

Each of these steps uses a CAS action:

    • table.upload – uploads the user data into a CAS table. The input data is converted into a comma-delimited (CSV) file and then uploaded. The REST call using restaf looks like this:
    let csv = makecsv(input); /* create a csv */
    let JSON_Parameters = {
        casout: {
            caslib : 'casuser', /* a valid caslib */
            name   : 'INPUTDATA', /* name of output file on cas server */
            replace: true
        },
 
        importOptions: {
            fileType: 'csv' /* type of the file being uploaded */
        }
    };
 
    let payload = {
        headers: { 'JSON-Parameters': JSON_Parameters },
        data   : csv,
        action : 'table.upload'
    };
 
    let result = await store.runAction(session, payload);
    • sccasl.runcasl – executes CASL statements to do the scoring:
    // Setup CASL statements
    let caslStatements = `
        loadactionset "astore";
        action table.loadTable /
            caslib = "${rstore.caslib}"
            path   = "${rstore.name}.sashdat"
            casout = { caslib = "${rstore.caslib}" name = "${rstore.name}" replace = TRUE };
 
        action astore.score /
            table  = { caslib = 'casuser' name = 'INPUTDATA' }
            rstore = { caslib = "${rstore.caslib}" name = '${rstore.name}' }
            out    = { caslib = 'casuser' name = 'OUTPUTDATA' replace = TRUE };
 
        action table.fetch r = result /
            format = TRUE
            table  = { caslib = 'casuser' name = 'OUTPUTDATA' };
 
        send_response(result);
    `;
 
    // Execute the CAS actions
    payload = {
        action: 'sccasl.runcasl',
        data  : { code: caslStatements }
    };
    result = await store.runAction(session, payload);

Step 5: Create response

AWS serverless functions must return data and errors in a specific form. The two functions setPayload.js and setError.js accomplish this.

module.exports = function setPayload (body) {
    return {
        "statusCode": 200,
        "headers"   : {
            'Access-Control-Allow-Origin'     : '*',
            'Access-Control-Allow-Credentials': true
          },
        "isBase64Encoded": false,
        "body"           : JSON.stringify(body)
    }
  }

Using the serverless functions

When the serverless functions are deployed, you get a link for each function. In our case we received the links shown below (with xxxx replaced with the appropriate information).

GET - https://xxxx.amazonaws.com/demo/app

The first link serves up the web application. The user enters some values, and the app calls the score serverless function to get the results.
Alternatively, you can write your own application and make an HTTP POST call to the score function using a link such as:

POST - https://xxxx.amazonaws.com/demo/score
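For example, you could score the sample input shown earlier with a plain curl call from the command line. This is a sketch only: the URL is the placeholder from above, and the exact response shape depends on setPayload.

# POST the sample loan application to the score function
curl -s -X POST "https://xxxx.amazonaws.com/demo/score" \
  -H "Content-Type: application/json" \
  -d '{"input":{"JOB":"J1","CLAGE":100,"CLNO":20,"DEBTINC":20,"DELINQ":2,"DEROG":0,"MORTDUE":4000,"NINQ":1,"YOJ":10,"LOAN":10000,"VALUE":1000000}}'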

To invoke the web application, visit the link

https://xxxx.amazonaws.com/demo/app

in your browser. You should see the display shown in Figure 3:

Figure 3: Application Input Screen

Entering values into the two fields and pressing Submit calls the second serverless function, score, and results in a pie chart as seen in Figure 4:

Figure 4: Score Report Screen

Please see the loan.html file in the GitHub repository for details on the application. Displayed below is the relevant part of the JavaScript in loan.html. The score-function-url placeholder is the URL of the score function. The payload was described earlier in this article. The HTTP call is made using axios.

async function runScore(inputValues) {
 
    let payload = {
        astore: {
            caslib: 'Public',
            name: 'GRADIENT_BOOSTING___BAD_2'
        },
        input: inputValues
    };
    let config = {
        url: '{score-function-url}', // replace with the URL of your score function
        method: 'POST',
        data: payload
    };
    let r = await axios(config);
    return r.data.score;
 
}

Porting to other cloud providers

The cloud-provider-dependent information is handled in the following functions: score.js, parseEvent.js, setPayload.js and setError.js. The rest of the code is host-agnostic. When writing your own functions, the recommendation is to follow the same pattern as much as possible. The generic code is then available in its own repository for reuse with other providers and applications.

Go try it yourself

I have shown you how to deliver your SAS Viya applications as serverless functions. For more examples, see the GitHub restaf-demos repository.

Supporting Resources

Serverless functions and SAS Viya - a good match was published on SAS Users.

February 2, 2019
 

SAS Visual Analytics

I don't know about you, but when I read challenges like:

  • Detecting hidden heart failure before it harms an individual
  • Can SAS Viya AI help to digitalize pension management?
  • How to recommend your next adventure based on travel data
  • How to use advanced analytics in building a relevant next best action
  • Can SAS help you find your future home?
  • When does a customer have their travel mood on, and to which destination will he travel?
  • How can SAS Viya, Machine Learning and Face Recognition help find missing people?

…I could continue with the list of ideas provided by the teams participating in the SAS Nordics User Group’s Hackathon. But one thing is for sure: I become enthusiastic, and I'm eager to discover the answers and see how analytics can help solve these questions.

When the Nordics team asked for support in providing SAS Viya infrastructure on the Azure Cloud platform, I didn't hesitate to agree and started planning the environment.

Environment needs

Colleagues from the Nordic countries informed us that their Hackathon included fourteen registered teams. Hence, they needed at least fourteen separate environments with the latest and greatest SAS Viya tools, like SAS Visual Analytics, SAS VDMML and SAS Text Analytics. In addition, participants wanted the chance to use open-source technologies with SAS, and asked us to install RStudio and Jupyter. This would allow data scientists to develop models in the programming language of their choice while providing access to SAS predictive modeling capabilities.

The challenge I faced was how to automate the installation process; we didn't want to repeat an identical installation fourteen times! Also, in case of a failure, we needed a way to quickly reinstall a fresh virtual machine. We wanted to create the virtual machines on the Azure Cloud platform, and the goal was to get SAS Viya instances up and running on Azure quickly, with little user interaction. We ended up with a single script expecting one parameter: the name of the instance. Next, I provide an overview of how we accomplished our task.

The setup

As we needed to deploy fourteen identical copies of the same SAS Viya software, we decided to make use of SAS Mirror Manager, a utility for synchronizing SAS software repositories. After downloading the mirror repository, we moved the complete file structure to a web server hosted on a separate Nordics Hackathon repository virtual machine, within the same private network where the SAS Viya instances run. This guarantees low latency when downloading the software.

Once the repository server is up and running, we have everything we need to create a SAS Viya base image. Within that image, we first make sure to meet the requirements described in the SAS Viya Deployment Guide. To complete this task, we turned to the Viya Infrastructure Resource Kit (VIRK), a collection of tools, created by Erwan Granger, that assist in infrastructure and readiness-verification tasks. The script is located in a repository on SAS software’s GitHub page. By running the VIRK script before creating the base image, we guarantee that all virtual machines based on the image meet the requirements.

Next, we create the SAS Viya playbook within the base image, as described in the SAS Viya Deployment Guide, which allows us to kick off a SAS Viya installation later. The Viya installation must occur during the initial launch of a new VM based on the image; we cannot install SAS Viya beforehand, because one of the requirements is a static IP address and a static hostname, which differ for each VM we launch. However, we can install RStudio Server on the base image. Another important file we make available on the base image is a script that initiates the Ansible installations of OpenLDAP, SAS Viya and Jupyter.

Deployment

After the common components are in place, we follow the instructions from Azure on how to create a custom image of an Azure VM (a capability available on other public cloud providers as well). Now all the prerequisites to create working Viya environments for the Hackathon are complete. Finally, we create a launch script that installs a full SAS Viya environment from the Azure CLI with a single command and one parameter, the hostname.

$ ./launchscript.sh viya01
$ ./launchscript.sh viya02
$ ./launchscript.sh viya03
...
$ ./launchscript.sh viya12
$ ./launchscript.sh viya13
$ ./launchscript.sh viya14

The script

The main parts of this launch script are:

  1. Test whether the Nordics Hackathon repository VM is running, because we must download software from our own locally created repository.
  2. Launch a new VM based on the SAS Viya image we created during preparation, assign a public static IP address, and choose a Standard_E32-16s_v3 Azure VM size.
  3. Launch our own Viya-install script (see the skeleton sketched after this list) to perform the following three sub-steps:
    • Install OpenLDAP as the identity provider.
    • Install SAS Viya just as you would by following the SAS Viya Deployment Guide.
    • Install Jupyter with a customized Ansible script made by my colleague Alexander Koller.
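Pieced together from the steps described here and in the companion post, a skeleton of the launch script could look like the sketch below. The resource group and VM names are illustrative, not the exact values we used.

#!/bin/bash
# usage: ./launchscript.sh <instance-name>   e.g. ./launchscript.sh viya01
NAME=$1

# 1. make sure the local repository VM is running (starts it if it is not)
az vm start -g nordics_hackathon -n viyarepo || exit 1

# 2. launch a new VM from the prepared SAS Viya base image
az vm create -g nordics_hackathon --name $NAME --image viya_Base \
  --admin-username azureuser --public-ip-address-allocation static \
  --size Standard_E32-16s_v3 --tags name=$NAME || exit 1

# 3. kick off the OpenLDAP + SAS Viya + Jupyter installation on the new VM
az vm run-command invoke -g nordics_hackathon -n $NAME \
  --command-id RunShellScript --scripts "sudo /opt/sas/install/viya-install.sh &"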

The result is that we have fourteen full SAS Viya installations ready in about one hour and 45 minutes. We recently posted a LinkedIn video describing the entire process.

Final thoughts

I am planning to write a post on SAS Communities to share more technical insight into how we created the script. I am honored to have been asked to be part of the jury for the Hackathon, and I am looking forward to the analytical insights the different teams will discover and how they will make use of SAS Viya running on the Azure Cloud platform.

Additional resources

Series of Webinars supporting the Nordic Hackathon

Installing SAS Viya Azure virtual machines with a single click was published on SAS Users.

September 7, 2018
 

PC Magazine defines the broad industry term Software as a Service (SaaS) as, “Software that is rented rather than purchased. Instead of buying applications and paying for periodic upgrades, SaaS is subscription based, and upgrades are automatic during the subscription period.” SaaS, according to the same source, is ideally suited [...]

Does software as a service work for analytics? was published on SAS Voices by David Annis

May 22, 2018
 

SAS Viya is the latest extension of the SAS Platform and is interoperable with SAS® 9.4. Designed to bring analytics to the enterprise, it seamlessly scales for data of any size, type, speed and complexity. It was also a star at this year’s SAS Global Forum 2018. In this series of articles, we will review several of the most interesting SAS Viya talks from the event. Our first installment reviews Hadley Christoffels’ talk, A Need For Speed: Loading Data via the Cloud.

You can read all the articles in this series or check out the individual interviews by clicking on the titles below:
Part 1: Technology that gets the most from the Cloud.


Technology that gets the most from the Cloud

Few would argue about the value the effective use of data can bring an organization. Advancements in analytics, particularly in areas like artificial intelligence and machine learning, allow organizations to analyze more complex data and deliver faster, more accurate results.

However, in his SAS Global Forum 2018 paper, A Need For Speed: Loading Data via the Cloud, Hadley Christoffels, CEO of Boemska, reminded the audience that 80% of an analyst’s time is still spent on the data. Getting insight from your data is where the magic happens, but the real value of powerful analytical methods like artificial intelligence and machine learning can only be realized when you shorten the load cycle: the quicker you load, the quicker you get to value.

Data Management is critical and still the most common area of investment in analytical software, making data management a primary responsibility of today’s data scientist. “Before you can get to any value the data has to be collected, has to be transformed, has to be enriched, has to be cleansed and has to be loaded before it can be consumed.”

Benefits of cloud adoption

The cloud can help, to a degree. According to Christoffels, “cloud adoption has become a strategic imperative for enterprises.” The advantages of moving to a cloud architecture are many, but the two greatest are elasticity and scalability.

Elasticity, as defined by Christoffels, allows you to dynamically provision or remove virtual machines (VMs), while scalability refers to increasing or decreasing capacity within existing infrastructure, either by scaling vertically (moving the workload to a bigger or smaller VM) or horizontally (provisioning additional VMs and distributing the application load between them).

“I can stand up VMs in a matter of seconds, I can add more servers when I need it, I can get a bigger one when I need it and a smaller one when I don’t, but, especially when it comes to horizontal scaling, you need technology that can make the most of it.” Cloud-readiness and multi-threaded processing make SAS® Viya® the perfect tool to take advantage of the benefits of “clouding up.”

SAS® Viya® can address complex analytical challenges and speed up data management processes. “If you have software that can only run on a single instance, then scaling horizontally means nothing to you because you can’t make use of that multi-threaded, parallel environment. SAS Viya is one of those technologies,” Christoffels said.

Challenges you need to consider

According to Christoffels, it’s important, when moving your processing to the cloud, that you understand and address existing performance challenges and whether it will meet your business needs in an agile manner. Inefficiencies on-premise are annoying; inefficiencies in the cloud are annoying and costly, since you pay for that resource.

It’s not the best use of the architecture to take what you have on premises and simply shift it to the cloud. “Finding and improving and eliminating inefficiencies is a massive part in cutting down the time data takes to load.”

Boemska, Christoffels’ company, has tools to help businesses find inefficiencies and understand the impact users have on the environment, including:

  1. Real-time diagnostics that look at CPU usage, memory usage, SAS workload, and more.
  2. Insight and comparison, which provides a historical view over a given timeframe, essential when trying to optimize and shave off costly time when working in the cloud.
  3. Utilization reports, to better understand how the platform is used.

Optimizing inefficiencies with SAS Viya

But scaling vertically and horizontally on cloud-based infrastructure to speed up loading and data management solves only part of the problem; Christoffels said SAS Viya capabilities complete the picture. SAS Viya offers a number of benefits in a cloud infrastructure, he said, and code amendments that make use of the new techniques and benefits available in SAS Viya, such as the multi-threaded DATA step or CAS action sets, can be extremely powerful.

One simple example of the benefits of SAS Viya, Christoffels said, is that with in-memory processing, PROC SORT is no longer needed; SAS Viya does “grouping on the fly,” meaning you can remove sort routines from existing programs, which by itself can cut down processing time significantly.

As a SAS programmer, the fact that SAS Viya runs multithreaded, that you no longer need these sorts, the way it handles grouping on the fly, and the fact that multithreaded capability is built into how you deal with tables are all “significant,” according to Christoffels.

Conclusion

Data preparation and load processes have a direct impact on how quickly applications can start and subsequently complete. Many organizations use the cloud platform to speed up the process, but to take full advantage of the infrastructure you have to apply the right software technology. SAS Viya enables the full realization of cloud benefits through performance improvements, such as transposing and transforming data using the DATA step or CAS action sets.

Additional Resources

SAS Global Forum Video: A Need For Speed: Loading Data via the Cloud
SAS Global Forum 2018 Paper: A Need For Speed: Loading Data via the Cloud
SAS Viya
SAS Viya Products


Read all the posts in this series.

Part 1: Technology that gets the most from the Cloud

Technology that gets the most from the Cloud was published on SAS Users.

September 9, 2016
 

If you’re doing data processing in the cloud or using container-enabled infrastructures to deploy your software, you’ll want to learn more about SAS Analytics for Containers. This new solution puts SAS into your existing container-enabled environment – think Docker or Kubernetes – giving data scientists and analysts the ability to perform sophisticated analyses using SAS, and all from the cloud.

The product’s coming-out party is the Analytics Experience 2016 conference, September 12-14, 2016 at the Bellagio in Las Vegas. In advance of that event, I sat down with Product Manager Donna DeCapite to learn more about SAS Analytics for Containers and find out why it’s such a big deal for organizations that use containers for their applications.

Larry LaRusso: Before we get into details around the solution, and with apologies for my ignorance, let me start out with a really basic question. What are containers?

Donna DeCapite: Cloud containers are all the rage in the IT world. They’re an alternative to virtual machines. They allow an application and any of its dependencies to be deployed and run in an isolated space. Organizations build and deploy in a container environment because it allows you to include only the system libraries and functions needed to run a given piece of software. IT teams prefer containers because they’re easy to replicate, and faster and easier to deploy.

LL: And SAS Analytics for Containers will allow organizations to run SAS’ analytics in this environment, the containers?

DD: In short, yes. SAS Analytics for Containers provides a powerful set of data access, analysis and graphical tools to organizations within a container-based infrastructure like Docker. This takes advantage of the build once, run anywhere flexibility of the container environment, making it easier and faster to use SAS Analytics in the cloud.

LL: Who, within an organization, will be the primary users?

DD: Really anyone working with containers in the cloud, or anyone working with DevOps. Data scientists will embrace SAS Analytics for Containers because it allows them to access data from nearly any source and easily perform sophisticated analyses using SAS Studio, our browser-based interface, or Jupyter Notebook, an open-source notebook-style interface. For SAS developers, the product allows them to quickly provision IT resources to sandbox development ideas. And as I mentioned earlier, IT will appreciate the ease with which applications can be deployed, distributed and managed.

LL: You mentioned data scientists, so now I’m curious; we’re talking complex analyses here, yes?

DD: Definitely. Regressions, decision trees, Bayesian analysis, spatial point pattern analysis, missing data analysis, and many additional statistical analysis techniques can be performed with SAS Analytics for Containers. And in addition to sophisticated statistical and predictive analytics, a ton of prebuilt SAS procedures are included to handle common tasks like data manipulation, information storage, and report writing, all available via SAS Studio and its assistive nature.

LL: What about organizations that have massive amounts of data? Can they use SAS Analytics for Containers as well?

DD: Yes, SAS Analytics for Containers lets you take advantage of the processing power of your Hadoop cluster by leveraging the SAS accelerators for Hadoop, such as the Code Accelerator and the Scoring Accelerator.

LL: Thank you so much for your time, Donna. You’ve educated me quite a bit! How about more information? Where can individuals learn more about SAS Analytics for Containers?

DD: If you’re going to Analytics Experience 2016, consider coming to Michael Ames’ table talk, Cloud Computing: How Does It Work? It’s scheduled for Monday, September 12 from 4:30-5:00 p.m. Vegas time. If you’re not going to the event, the SAS website is the best place for more information; probably the best place to start is the SAS Analytics for Containers home page.

tags: analytics conference, analytics experience, cloud computing, SAS Analytics for Containers, sas studio

Analytics in the Cloud gets a whole lot easier with SAS Analytics for Containers was published on SAS Users.

January 18, 2016
 
I love GitHub for version control and collaboration, though I'm no master of it. And the tools for integrating git and GitHub with RStudio are just amazing boons to productivity.

Unfortunately, my university-supplied computer does not play well with GitHub. Various directories are locked down, and I can't push to or pull from GitHub directly from RStudio. I can't even use install_github() from the devtools package, which is needed for loading Shiny applications up to Shinyapps.io. I lived with this for a bit, using git from the desktop and rsconnect from a home computer. But what a PIA.

Then I remembered I know how to put RStudio in the cloud-- why not install git there, and make that be my GitHub solution?

It works great. The steps are below. In setting it up, I discovered that Digital Ocean has changed their setup a little bit, so I updated the earlier post as well.

1. Go to Digital Ocean and sign up for an account. By using this link, you will get a $10 credit. (Full disclosure: I will also get a $25 credit once you spend $25 real dollars there.) The reason to use this provider is that they have a system ready to run with Docker already built in, which makes it easy. In addition, their prices are quite reasonable. You will need to use a credit card or PayPal to activate your account, but you can play for a long time with your $10 credit-- the cheapest machine is $.007 per hour, up to a $5 per month maximum.

2. On your Digital Ocean page, click "Create droplet". Click on "One-click Apps" and select "Docker (1.9.1 on 14.04)". (The numbers in the parentheses are the Docker and Ubuntu version, and might change over time.) Then a size (meaning cost/power) of machine and the region closest to you. You can ignore the settings. Give your new computer an arbitrary name. Then click "Create Droplet" at the bottom of the page.

3. It takes a few seconds for the droplet to spin up. Then you should see your droplet dashboard. If not, click "Droplets" from the top bar. Under "More", click "Access Console". This brings up a virtual terminal to your cloud computer. Log in (your username is root) using the password that digital ocean sent you when the droplet spun up.

4. Start your RStudio container by typing: docker run -d -p 8787:8787 -e ROOT=TRUE rocker/hadleyverse

You can replace hadleyverse with rstudio if you like, for a quicker first-time installation, but many R users will want enough of Hadley Wickham's packages that it makes sense to install this version. The -e ROOT=TRUE is crucial for our approach to installing git into the container, but see the comment below from Petr Simicek for another way to do the same thing.
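As an aside, if you want the container to have a friendly name and to come back up after the droplet reboots, standard Docker flags handle that; this variant is equivalent apart from the name and restart policy:

docker run -d --name rstudio --restart unless-stopped -p 8787:8787 -e ROOT=TRUE rocker/hadleyverse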

5. Log in to your Cloud-based RStudio. Find the IP address of your cloud computer on the droplet dashboard, and append :8787 to it, and just put it into your browser. For example: http://135.104.92.185:8787. Log in as user rstudio with password rstudio.

6. Install git inside the Docker container. Inside RStudio, click Tools -> Shell.... Note: you have to use this shell; it's not the same as using the droplet terminal. Type sudo apt-get update and then sudo apt-get install git-core to install git.

git likes to know who you are. To set git up, from the same shell prompt, type git config --global user.name "Your Handle" and git config --global user.email "an.email@somewhere.edu"
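Collected in one place, the whole in-container setup from step 6 is just these commands at the RStudio shell prompt (substitute your own name and email):

sudo apt-get update
sudo apt-get install git-core
git config --global user.name "Your Handle"
git config --global user.email "an.email@somewhere.edu"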

7. Close the shell, and in RStudio, set things up to work with GitHub: Go to Tools -> Global Options -> Git/SVN. Click on create RSA key. You don't need a name for it. Create it, close the window, then view it and copy it.

8. Open GitHub, go to your Profile, click "Edit Profile", "SSH keys". Click "Add key", and just paste in the stuff you copied from RStudio in the previous step.

You're done! To clone an existing repo from GitHub to your cloud machine, open a new project in RStudio, select Version Control, then Git, and paste in the URL that GitHub provides. Then work away!

An unrelated note about aggregators: We love aggregators! Aggregators collect blogs that have similar coverage for the convenience of readers, and for blog authors they offer a way to reach new audiences. SAS and R is aggregated by R-bloggers, PROC-X, and statsblogs with our permission, and by at least 2 other aggregating services which have never contacted us. If you read this on an aggregator that does not credit the blogs it incorporates, please come visit us at SAS and R. We answer comments there and offer direct subscriptions if you like our content. In addition, no one is allowed to profit by this work under our license; if you see advertisements on this page, other than as mentioned above, the aggregator is violating the terms by which we publish our work.