September 17, 2020
 

Before we delve into unquoting SAS character variables, let's briefly review existing SAS functionality related to quoting and unquoting character strings.

%QUOTE and %UNQUOTE macro functions

Don’t be fooled by these macro functions’ names. They have nothing to do with quoting or un-quoting character variables’ values; they do not even apply to macro variables’ values in that sense. According to the %QUOTE macro function documentation, it masks special characters and mnemonic operators in a resolved value at macro execution. The %UNQUOTE macro function unmasks all special characters and mnemonic operators so they are interpreted as macro language elements instead of as text. There are many other SAS “macro quoting functions” (%SUPERQ, %BQUOTE, %NRBQUOTE, and all macro functions whose names start with %Q: %QSCAN, %QSUBSTR, %QSYSFUNC, etc.) whose actions include masking.

Historically, however, SAS Macro Language uses terms “quote” and “unquote” to denote “mask” and “unmask”. Keep that in mind when reading SAS Macro documentation.

QUOTE function

Most SAS programmers are familiar with the QUOTE function that adds quotation marks around a character value. It can add double quotation marks (by default) or single quotation marks if you specify that in its second argument.

This function goes even further as it doubles any quotation mark that already existed within the value to make sure that an embedded quotation mark is escaped (not treated as an opening or closing quotation mark) during parsing.
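
Here is a minimal sketch of both behaviors (the optional second argument and the doubling of embedded quotation marks):

data _null_;
   s = 'Say "hi"';
   q1 = quote(s);      /* default: wraps the value in double quotation marks,
                          doubling the embedded ones: "Say ""hi""" */
   q2 = quote(s, "'"); /* second argument requests single quotation marks: 'Say "hi"' */
   put q1= / q2=;
run;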

DEQUOTE function

There is also a complementary DEQUOTE function that removes matching quotation marks from a character string that begins with a quotation mark. But be warned that it also deletes all characters to the right of the first matching quotation mark. In my view, deleting those characters is overkill because when writing a SAS program, we may not know what is going to be in the data and whether it’s okay to delete its part outside the first matching quotes. That is why you need to be extra careful if you decide to use this function. Here is an example of what I mean. If you run the following code:

data a;
   input x $ 1-50;
   datalines;
'This is what you get. Let's be careful.'
;
 
data _null_;
   set a;
   y = dequote(x);
   put x= / y=;
run;

you will get the following value of y in the SAS log:

y=This is what you get. Let

This is hardly what you really wanted as you have just lost valuable information – part of the y character value got deleted: 's be careful. I would rather not remove the quotation marks at all than remove them at the expense of losing meaningful information.

$QUOTE informat

The $QUOTE informat does exactly what the DEQUOTE() function does; that is, it removes matching quotation marks from a character string that begins with a quotation mark. You can use it in the example above by replacing

y = dequote(x);

with the INPUT() function

y = input(x, $quote50.);

Or you can use it directly in the INPUT statement when reading raw data from datalines or an external file:

input x $quote50.;

In addition to removing all characters to the right of the closing quotation mark, both the $QUOTE informat and the DEQUOTE() function do the following unconventional, peculiar things:

  • Remove a lone quotation mark (either double or single) when it’s the only character in the string; apparently, the lone quotation mark is matched to itself.
  • Match single quotation mark with double quotation mark as if they are the same.
  • Remove matching quotation marks from a character string that begins with a quotation mark; if your string has one or more leading blanks (that is, a quotation mark is not the first character), nothing gets removed (un-quoted).
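
To see these peculiarities in action, here is a minimal sketch based on the three behaviors just listed:

data _null_;
   a = dequote('"');       /* lone quotation mark: matched to itself, result is blank */
   b = dequote('''abc"');  /* leading single, trailing double: treated as matching, result abc */
   c = dequote(' "abc"');  /* leading blank: nothing gets removed */
   put a= b= c=;
run;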

If the described behavior matches your use case, you are welcome to use either $QUOTE informat or DEQUOTE() function. Otherwise, please read on.

UNQUOTE function definition

Up to this point such a function did not exist, but we are about to create one to justify the title. Let’s keep it simple and straightforward. Here is what I propose our new unquote() function to do:

  • If first and last non-blank characters of a character string value are matching quotation marks, we will remove them. We will not consider quotation marks matching if one of them is a single quotation mark and another is a double quotation mark.
  • We will remove those matching quotation marks whether they are both single quotation marks OR both double quotation marks.
  • We are not going to remove or change any other quotation marks that may be present within those matching quotation marks that we remove.
  • We will remove leading and trailing blanks outside the matching quotation marks that we delete.
  • However, we will not remove any leading or trailing blanks within the matching quotation marks that we delete. You may additionally apply the STRIP() function if you need to do that.

To summarize these specifications, our new UNQUOTE() function will extract a character substring within matching quotation marks if they are the first and the last non-blank characters in a character string. Otherwise, it returns the character argument unchanged.

UNQUOTE function implementation

Here is how such a function can be implemented using PROC FCMP:

libname funclib 'c:\projects\functions';
 
proc fcmp outlib=funclib.userfuncs.v1; /* outlib=libname.dataset.package */
   function unquote(x $) $32767;
      pos1 = notspace(x); *<- first non-blank character position;
      if pos1=0 then return (x); *<- empty string;
 
      char1 = char(x, pos1); *<- first non-blank character;
      if char1 not in ('"', "'") then return (x); *<- first non-blank character is not " or ' ;
 
      posL = notspace(x, -length(x)); *<- last non-blank character position;
 
      if pos1=posL then return (x); *<- single character string;
 
      charL = char(x, posL); *<- last non-blank character;
      if charL^=char1 then return (x); *<- last non-blank character does not match first;
 
      /* at this point we should have matching quotation marks */
      return (substrn(x, pos1 + 1, posL - pos1 - 1)); *<- remove first and last quotation character;
   endfunc; 
run;

Here are the highlights of this implementation:

We use multiple RETURN statements: we sequentially check for different special conditions and if one of them is met we return the argument value intact. The RETURN statement does not just return the value, but also stops any further function execution.

At the very end, after making sure that none of the special conditions is met, we strip the argument value from the matching quotation marks along with the leading and trailing blanks outside of them.

NOTE: SAS user-defined functions are stored in a SAS data set specified in the outlib= option of PROC FCMP. It requires a 3-level name (libref.datasetname.packagename) for the function definition location to allow for several versions of the same-name function to be stored there.

However, when a user-defined function is used in a SAS DATA Step, only a 2-level name can be specified (libref.datasetname). If that data set has several same-name functions stored in different packages the DATA Step uses the latest function definition (found in a package closest to the bottom of the data set).

UNQUOTE function results

Let’s use the following code to test our newly minted user-defined function UNQUOTE():

libname funclib 'c:\projects\functions';
options cmplib=funclib.userfuncs;
 
data A;
   infile datalines truncover;
   input @1 S $char100.;
   datalines;
'
"
How about this?
    How about this?
"How about this?"
'How about this?'
"How about this?'
'How about this?"
"   How about this?"
'      How about this?'
'      How "about" this?'
'      How 'about' this?'
   "     How about this?"
   "     How "about" this?"
   "     How 'about' this?"
   '     How about this?'
;
 
data B;
   set A;
   length NEW_S $100;
   label NEW_S = 'unquote(S)';
   NEW_S = unquote(S);
run;

This code produces the following output table:

Example of character string unquoting

As you can see, it does exactly what we wanted it to do: removing the matching first and last quotation marks as well as stripping out blanks outside the matching quotation marks.

DSD (Delimiter-Sensitive Data) option

This INFILE statement option is extremely useful when using LIST input to read and un-quote comma-delimited raw data. In addition to removing enclosing quotation marks from character values, the DSD option specifies that when data values are enclosed in quotation marks, delimiters within the value are masked, that is, treated as character data (not as delimiters). It also sets the default delimiter to a comma and treats two consecutive delimiters as a missing value.

In contrast with the UNQUOTE() function above, the DSD option will not remove enclosing quotation marks if additional quotation marks of the same kind are present inside the character value. When the DSD option does strip enclosing quotation marks, it also strips leading and trailing blanks both outside and within the removed quotation marks.
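
For example, here is a minimal sketch of the DSD option at work; note the masked comma inside the first quoted value and the two consecutive delimiters that produce a missing value:

data dsd_demo;
   infile datalines dsd truncover; /* DSD: comma is the default delimiter */
   input name :$20. city :$20.;
   datalines;
"Smith, John",Cary
"Jones",,
;
run;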

Your thoughts?

Have you found this blog post useful? Please share your use cases, thoughts and feedback in the comments below.

How to unquote SAS character variable values was published on SAS Users.

September 16, 2020
 

Many examples for performing analysis of covariance (ANCOVA) are available in statistical textbooks, conference papers, and blogs. One of the primary interests for researchers and statisticians is to obtain a graphical representation for the fitted ANCOVA model. This post, co-authored with my SAS Technical Support colleague Kathleen Kiernan, demonstrates how to obtain the statistical analysis for a quadratic ANCOVA model by using the GLM procedure and graphical representation by using the SGPLOT procedure.

For the purposes of this blog, the tests for equality of intercepts and slopes have been omitted. For more detailed information about performing analysis of covariance using SAS, see SAS Note 24177, Comparing parameters (slopes) from a model fit to two or more groups.

Fitting the model

Keep in mind that the statistical methods for ANCOVA are similar whether you have a one-way ANCOVA model or a repeated-measures ANCOVA model. The example data below consist of three treatments, the covariate X, and the response value Y. Predicted observations were generated by augmenting the training data set, used to fit the final model, with additional observations at which predictions are requested.

data ts132;
   input trt x y @@;
   datalines;
1 2.2 3.7 1 3.5 4.7 1 4.1 5.3 1 4.1 5.5 1 7.4 5.7 1 3.0 3.8
1 0.5 1.0 1 2.4 3.3 1 8.4 5.3 1 8.5 5.3 1 8.0 6.1 1 6.3 5.8
2 5.8 7.2 2 9.9 2.5 2 1.2 4.3 2 6.0 6.9 2 5.5 8.0 2 9.7 3.7
2 8.6 5.2 2 5.0 8.0 2 2.6 6.9 2 8.4 4.3 2 3.0 6.8 2 5.2 7.6
3 8.8 5.7 3 9.0 6.1 3 8.8 6.4 3 4.1 9.6 3 2.4 9.4 3 4.1 8.2
3 4.4 10.2 3 8.9 6.0 3 5.3 9.4 3 9.5 5.0 3 1.1 7.5 3 0.1 5.3
;
run;

data new;
   do trt=1 to 3;
      x=0; y=.;
      output;
      x=10;
      output;
   end;
run;

data combo;
   set ts132 new;
run;

The following statements fit the final model with a common slope in the direction of X and unequal quadratic slopes in the direction of X*X.

proc glm data=combo;
   class trt;
   model y=trt x x*x*trt / solution noint;
   output out=anew p=predy;
   ods output parameterestimates=pe;
quit;

The Type III SS table displays significance tests for the effects in the model.

The F statistic that corresponds to the TRT effect tests the intercepts (α1 = α2 = α3 = 0). The F statistic that corresponds to X tests the common slope (β = 0). The F statistic that corresponds to the X*X*TRT effect tests the quadratic slopes (β1 = β2 = β3 = 0). The significance levels are very small, indicating that there is sufficient evidence to reject each null hypothesis.
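
For reference, here is a sketch of the model behind these tests, with i indexing treatment and j indexing observations within treatment (notation assumed to match the parameters above):

   y_{ij} = \alpha_i + \beta\,x_{ij} + \beta_i\,x_{ij}^{2} + \varepsilon_{ij}, \qquad i = 1, 2, 3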

The parameter estimates are displayed next:

You can write the parameter estimates to a data set by using the PARAMETERESTIMATES output object in an ODS OUTPUT statement in order to display the model equations on a graph.

The default ODS graphics from PROC GLM provides the following graphic representation of the model:

However, researchers, managers and statisticians often want to modify the graph to include the model equation, to change thickness of the lines, symbols, titles and so on.

You can use PROC SGPLOT to plot the ANCOVA model and regression equations by using the OUTPUT OUT=ANEW data set from PROC GLM, as shown in this example:

proc sgplot data=anew;
   scatter y=y x=x / group=trt;
   pbspline y=predy x=x / group=trt smooth=0.5 nomarkers;
run;

This code produces the following graph:

You can see that the respective graphs from PROC GLM and PROC SGPLOT look similar.

Customizing your graphs

It is easier to enhance graphs by using PROC SGPLOT than to do the same by editing the ODS Graphics template associated with the GLM procedure.

To color coordinate the respective treatments with the regression model equations and display them in the graph, you can use SG annotation. The TEXT function is used to place the text at specific locations within the graph.

The POLYGON and POLYCONT functions are used to draw a border around the equations. Note that you might need to adjust the values for X1 and Y1 in the annotate data set for your specific output.

data eqn;
   set pe(keep=parameter estimate) end=last;
   retain trt1 trt2 trt3 x x_x1 x_x2 x_x3;
   length function $9 textcolor $30;
   x1space='datapercent'; y1space='wallpercent';
   function='text';
   x1=0;
   anchor='left';
   width=120;
   select (compbl(parameter));
      when ('trt 1') trt1=estimate;
      when ('trt 2') trt2=estimate;
      when ('trt 3') trt3=estimate;
      when ('x') do;
         if estimate < 0 then x=cats(estimate,'x');
         else x=cats(' + ',estimate,'x');
      end;
      when ('x*x*trt 1') do;
         if estimate < 0 then x_x1=cats(estimate,'x*x*trt1');
         else x_x1=cats(' + ',estimate,'x*x*trt1');
      end;
      when ('x*x*trt 2') do;
         if estimate < 0 then x_x2=cats(estimate,'x*x*trt2');
         else x_x2=cats(' + ',estimate,'x*x*trt2');
      end;
      when ('x*x*trt 3') do;
         if estimate < 0 then x_x3=cats(estimate,'x*x*trt3');
         else x_x3=cats(' + ',estimate,'x*x*trt3');
      end;
      otherwise;
   end;
   if last then do;
      label=cats("Y=",trt1,x,x_x1);
      y1=3;
      textcolor="graphdata1:contrastcolor";
      output;
      label=cats("Y=",trt2,x,x_x2);
      y1=10;
      textcolor="graphdata2:contrastcolor";
      output;
      label=cats("Y=",trt3,x,x_x3);
      y1=17;
      textcolor="graphdata3:contrastcolor";
      output;
      function='polygon';
      x1=0; y1=0;
      linecolor='grayaa';
      linethickness=1;
      output;
      function='polycont';
      y1=21; output;
      x1=60; output;
      y1=0; output;
   end;
run;

proc sgplot data=anew sganno=eqn;
   scatter y=y x=x / group=trt;
   pbspline y=predy x=x / group=trt smooth=0.5 nomarkers;
   yaxis offsetmin=0.3;
run;

This code results in the following graph with model equations:

You might also want to add a title; change the X or Y-axis label; or change the color, thickness and pattern for the lines as well as the symbol for the data points, as follows:

  • Use a TITLE statement to add a title to the graph.
  • Use the LABEL= option in the XAXIS and YAXIS statements in PROC SGLPLOT to specify a label for the axis, with the LABELATTRS= option specifying attributes for those labels.
  • Use the STYLEATTRS statement in PROC SGPLOT to specify the attributes for the lines and marker symbols.

The following PROC SGPLOT code produces a graph with the customizations described above:

ods graphics / attrpriority=none;
title 'Fitted Plot for Quadratic Covariate with Unequal Slopes';

proc sgplot data=anew sganno=eqn;
   styleattrs datacontrastcolors=(navy purple darkgreen)
              datalinepatterns=(1 2 44)
              datasymbols=(squarefilled circlefilled diamondfilled);
   scatter y=y x=x / group=trt;
   pbspline y=predy x=x / group=trt smooth=0.5 nomarkers;
   yaxis offsetmin=0.3 label='Y value'
         labelattrs=(size=10pt color=black weight=bold);
   xaxis label='X value'
         labelattrs=(size=10pt color=black weight=bold);
run;

Keep in mind that you also need to adjust the color for the annotated text by changing the value for the TEXTCOLOR= variable (in the DATA step) from textcolor="graphdata1:contrastcolor"  to reference the new colors, as shown in this example: textcolor="navy";

The resulting graph, including the modifications, is displayed below.

Conclusion

Using the output data set from your statistical procedure enables you to take advantage of the functionality of PROC SGPLOT to enhance your output.  This post describes just some of the customizations you can make to your output.

References

Milliken, George A., and Dallas E. Johnson. 2002. Analysis of Messy Data, Volume III: Analysis of Covariance. London: Chapman and Hall/CRC.

SAS Institute Inc. 2006. SAS Note 24529, "Modifying axes, colors, labels, or other elements of statistical graphs produced using ODS Graphics." Available at support.sas.com/kb/24/529.html.

Using SAS® to analyze and visualize a quadratic ANCOVA model was published on SAS Users.

September 16, 2020
 

There are three types of visualization APIs defined in the SAS Viya REST API reference documentation: Reports, Report Images and Report Transforms. You may have seen the posts on how to use Reports and Report Images. In this post, I'm going to show you how to use the Report Transforms API. The scenario I use changes the data source of a SAS Visual Analytics report and saves the transformed report.

Overview of the Report Transforms API

The Report Transforms API provides simple alterations to SAS Visual Analytics reports, and it uses the 'application/vnd.sas.report.transform' media type for the transformation (passed in the REST API call header). When part of a request, the transform performs editing or modifications to a report. If part of a response, the transform describes the operation performed on the report. Some typical transformations include:

  • Replace a data source of a report.
  • Change the theme of a report.
  • Translate labels in a report.
  • Generate an automatic visualization report of a specified data source and columns.

To use the Transforms API, we need to properly set the request body and some attributes for the transform. After the transform, the response contains the transformed report or a reference to the report.

Prepare the source report

This step is very straightforward. In SAS Visual Analytics, create a report and save it to a folder (instructions on creating a report are found in this video). For this example, I'll use the 'HPS.CARS' table as the data source and create a bar chart. I save the report with the name 'Report 1' in 'My Folder'. I'll use this report as the original report in the transform.

Generate the request body

I will use PROC HTTP to call the Transforms API using the 'POST' method and appending the URL with '/reportTransforms/dataMappedReports'. The call needs to set the request body.

  1. Get the ReportURI: In an earlier post I outlined how to get the reportURI via REST API, so I won't go into details. If you'd like an easy way, try this: in SAS Visual Analytics, choose the 'Copy Link…' item from the menu. In the pop-up dialog, expand 'Options' and choose 'Embedded Web Component'; you will see a string in the form reportUri='/reports/reports/…'. In the request body, we set this string as the 'inputReportUri' to specify the original report, 'Report 1'.

     Report URI from SAS Visual Analytics

  2. Decide on changes to the data source: Here I'd like to change the data source from 'HPS.CARS' to 'CASUSER.CARS_NEW'. The new table uses three columns from 'HPS.CARS' as mapped below.

     Select columns to include in new table

  3. Specify the data sources in the request body: The request requires two data sources, 'original' and 'replacement', respectively representing the data sources in the original report and the transformed report. Note that the 'namePattern' value enumerates the way of identifying the data source. If it is set to 'uniqueName', the data source is identified by its unique data item name in the XML file of the report. If it is set to 'serverLibraryTable', the data source is identified by the CAS server, CAS library, and table names together. The snippets below show the data source section in the request body. I like to use 'serverLibraryTable' to specify the data source for both the original and the transformed report, which is clear and easy.
    /* data source identification for original report */
      {
        "namePattern": "serverLibraryTable",
        "purpose": "original",
        "server": "cas-shared-default",
        "library": "HPS",
        "table": "CARS"
      }
     
    /* data source identification for transformed report */
      {
        "namePattern": "serverLibraryTable",
        "purpose": "replacement",
        "server": "cas-shared-default",
        "library": "CASUSER",
        "table": "CARS_NEW",
        "replacementLabel": "NEW CARS",
        "dataItemReplacements": [
          {
            "originalColumn": "dte",
            "replacementColumn": "date"
          },
          {
            "originalColumn": "wght",
            "replacementColumn": "weight"
          },	
          {
            "originalColumn": "dest",
            "replacementColumn": "region"
          }
        ]
      }

Set more attributes for transform

Besides the request body, we need to set some other attributes for the transform API when changing the data source. These include 'useSavedReport', 'saveResult', 'failOnDataSourceError' and 'validate'.

  • useSavedReport specifies whether to find the input (original) report as a permanent resource. Since I am using a saved report in the repository, I set it to true.
  • saveResult specifies whether to save the transformed report permanently in the repository. I am going to save the transformed report in the repository, so I set it to true.
  • failOnDataSourceError specifies whether the transform continues if there is a data source failure. The default value is false, and I leave it as such.
  • validate decides whether the transform performs XML schema validation. The default value is false, and I leave it as such.

Decide on a target report and folder

I'll save the transformed report with the name 'Transformed Report 1' in the same folder as the original 'Report 1'. I set the 'resultReportName' to 'Transformed Report 1', and set the 'resultReport' with 'name' and 'description' attributes. I also need to get the folderURI of the 'My Folder' directory. You may refer to my previous post to see how to get the folderURI using REST APIs.

Below is the section of the settings for the target report and folder:

   "resultReportName": "Transformed Report 1",
   "resultParentFolderUri": "/folders/folders/cf981702-fb8f-4c6f-bef3-742dd898a69c",
   "resultReport": {
      "name": "Transformed Report 1",
      "description": "TEST report transform"
   }

Perform the transform

Now we have set all the necessary parameters for the transform and are ready to run it. I put my entire set of code on GitHub. Running the code creates the 'Transformed Report 1' report in 'My Folder', with the data source changed to 'CASUSER.CARS_NEW' and containing the three mapped columns.
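
For orientation, here is a minimal sketch of what such a PROC HTTP call can look like. The host name is hypothetical, the JSON request body assembled above is assumed to sit in the reqbody fileref, the attribute settings are shown as query parameters, and OAUTH_BEARER=SAS_SERVICES assumes the code runs in an authenticated session within the Viya environment:

filename reqbody temp;  /* JSON request body built as described above */
filename tranfile temp; /* receives the response body */

proc http
   url='https://myviya.example.com/reportTransforms/dataMappedReports?useSavedReport=true&saveResult=true'
   method='POST'
   in=reqbody
   out=tranfile
   oauth_bearer=sas_services;
   headers 'Content-Type'='application/vnd.sas.report.transform+json'
           'Accept'='application/vnd.sas.report.transform+json';
run;

%put HTTP status: &SYS_PROCHTTP_STATUS_CODE.; /* expect 201 Created */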

Check the result

If the API fails to create the transformed report, the PROC SQL statements display an error code and error message. For example, if the replacement data source is not valid, it returns errors similar to the following.

Invalid data source error message

If the API successfully creates the transformed report, “201 Created” is displayed in the log. You may find more info about the transformed report in the response body that PROC HTTP writes to the tranFile fileref. You can also log into the SAS Visual Analytics user interface to check that the transformed report opens successfully and that the data sources are changed as expected. Below is a screenshot of the original report and the transformed report. You may have already noticed that they use different data sources, judging from the data labels.

Original and transformed reports in SAS Visual Analytics

Finally

There is a wealth of other transformations available through the Report Transforms API. Check out the SAS Developer site for more information.

Using REST API to transform a Visual Analytics Report was published on SAS Users.

September 15, 2020
 

This is the third installment of a series focused on discussing the secure integration of custom applications into your SAS Viya platform. I began the series with an example of how a recent customer achieved value-add on top of their existing investments by building a custom app for their employees and embedding repeatable analytics from SAS Viya into it. Then in the second part, I outlined the security frameworks and main integration points used to accomplish that.

Key takeaways from the previous post

As a quick recap, in the last post I covered how to integrate custom-written applications with the SAS Viya platform using scoped access tokens from SAS Logon Manager. I outlined how the returned access tokens need to be included with the REST calls those custom apps make to SAS Viya endpoints and resources. The access tokens contain the necessary information SAS Viya will use to make authorization decisions for every call your app makes. Administrators of your SAS Viya environment have complete control over the rules that determine the outcome of those decisions. I also introduced the primary decision you need to make before you begin integrating your custom app into your SAS Viya environment: whether those access tokens will be scoped in the context of your application (general-scoped) or in the context of each logged-in user of your application (user-scoped).

Now let's go a bit deeper into the authorization flows for both scenarios. TL;DR at the bottom.

Quick disclaimer: The rest of this post assumes you are familiar with the SAS Administration concepts of Custom Groups, granting access to SAS Content, and creating authorization rules. If those topics are unfamiliar to you and you will be implementing this custom application registration, please take a look at the resources linked inline.

General-scoped access tokens

First, when I say "general-scoped access" here, I mean cases where you don't want end users to have to log in (directly to SAS Viya, at least). You don't want the tokens returned by SAS Logon to your custom applications generating authorization decisions based on the individual using your application. In these cases, however, you do still want control over the endpoints/resources your app calls and which CRUD operations it performs on them. Any situation where you'd like to expose some content you've created in SAS Viya to a large scale of users, particularly those that wouldn't have valid login credentials to your SAS Viya environment, fall in this category. There are two main “flows” available to make this happen. Here, “flow” is a term used to describe the process for accessing a protected resource within the OAuth framework.

Password flow

I won't go into much detail on this one because the folks behind OAuth 2.0 are suggesting the phase-out of this flow (source one) (source two). Though it technically works for this use case, it does so by essentially using a "service account," so all REST calls your application makes are within the scope of a single UID. You could create specific authorization grants for that user (or a group it's a member of) to make this functional, but it requires that you include the username and password as clear-text URL parameters when you request your token. It's just not a great idea, and it is not secure to send this over the wire even if you are using HTTPS. It also creates another sensitive environment variable to manage and, worse, I have even seen it hard-coded into custom application logic.

Client Credentials flow

Interestingly enough, the combination of the security concerns with the Password flow and the need to provide specific access levels for a custom application our team was building is how I found myself wanting to write this series. If you're reading this, you've likely checked out the page on how to register clients to SAS Logon Manager on developer.sas.com. Those instructions do not cover the Client Credentials flow, though Mike Roda's SAS Global Forum paper, OpenID Connect Opens the Door to SAS Viya APIs, does provide guidance on the client credentials grant type. Remember, these types of scenarios don't warrant concern for which individual users are making the calls into SAS Viya; however, you still will want to restrict which calls your app makes, so follow least-privilege access best practices.

The secret sauce to making this work is registering your application to SAS Logon Manager using a specific, optional key called authorities and setting its value to be a list of custom group(s) giving your application the access to the proper resources and endpoints. Here's where some prior knowledge of creating authorization rules for specific URIs using endpoint patterns and specific principals comes in handy. Check out the documentation linked above if those are unfamiliar terms. We'll see examples of how to implement this flow in the next post in this series.
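
To make that concrete, here is a minimal sketch of such a registration request. It assumes you have already obtained a token authorized to register clients (stored in the ACCESS_TOKEN macro variable); the host, client ID, secret, and custom group name are hypothetical:

filename resp temp;

proc http
   url='https://myviya.example.com/SASLogon/oauth/clients'
   method='POST'
   in='{
         "client_id"              : "app.automall",
         "client_secret"          : "myAppSecret",
         "authorized_grant_types" : ["client_credentials"],
         "authorities"            : ["AutoMallAppGroup"]
       }'
   out=resp;
   headers 'Content-Type'='application/json'
           'Authorization'="Bearer &ACCESS_TOKEN";
run;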

The diagram below illustrates this flow. Viewing hint: click on the photo to focus on it, and click on the full-screen button to enlarge it.

Figure 1: Visual representation of the Client Credentials OAuth flow

User-scoped access tokens

As a quick recap, here I'm talking about the cases where it does make sense for your application's end users to log in to SAS. All subsequent API calls into SAS Viya resources your application makes should be within the context of what they, as an individual user, are allowed to do/see/access in SAS Viya. For example, if a user is not in the SAS Administrators group, then you'll want to ensure that any REST calls your custom app makes on that user's behalf that would require membership in the SAS Administrators group will indeed fail.

Admittedly, this takes more work on the application development side to implement; however, as far as creating and registering the client for your application, this isn't any more complicated than what I've already outlined in this post. By the nature of the flow I'll outline in a minute, the application developer is going to need to add additional logic to handle sessions and/or cookies to obtain, store, check, update, and eventually revoke individual user's authorization codes. When using SAS Viya applications this all occurs behind the scenes. For example, when you first navigate to SAS Visual Analytics (VA) you are re-directed to SAS Logon Manager to log in and get your user-scoped access token. Then you are redirected back to VA with your token. In turn, VA and the supporting services use this token for any subsequent REST calls to other SAS Viya endpoints and resources. When creating custom-developed apps you integrate with SAS Viya for user-scoped access, we need to recreate that baked-in behavior. This is the process I outline below.

Authorization Code flow

A shortened-for-readability version of how this works is:

  1. The user navigates to your application. If they don't already have a valid session with SAS Logon Manager, redirection sends them to the same screen they would see when accessing native SAS Viya applications.
  2. They enter credentials and log in. Then, depending on how the client was registered, either:
      • the user manually approves their group memberships, which SAS Logon Manager presents to them (and thus all the authorization grants for those groups). An example of this is seen when logging into SAS Viya and being presented with the option to accept or deny your SAS Administrators group privileges, or
      • their group memberships are automatically approved for them. This option is more SAS Viya-esque in that, when you log in as a regular user, you're not asked to accept your role in any Custom Groups. It is implicitly allowed behind the scenes, so you needn't know what each Custom Group allows; rather, it is assumed that a SAS Administrator has provided you the right memberships.
     We will see an example of how to toggle these two options in the next post in this series.
  3. If the user credentials are valid, SAS Logon Manager responds with a user-scoped authorization code. We'll see how to use redirects to get it back to your application in the next part of this series. Your application needs to retrieve and temporarily store the authorization code, which will be in a URL parameter on the redirect.
  4. Your application will then need to make another request to SAS Logon Manager on behalf of your user, this time trading the code for an OAuth access token. This is where your app's ability to handle sessions and/or cookies, regardless of programming language, starts to come in handy.
  5. Finally, when your user interacts with some part of your application that needs to make an API call to a SAS Viya endpoint or resource, your application needs to pass their user-scoped OAuth token in with the request. Remember, it's a good idea to include some logic to check for expired tokens and to refresh and revoke them when necessary as well.

Joe Furbee covers the authorization code process in detail in Authentication to SAS Viya: a couple of approaches. The whole point of using this flow in this manner is to ensure that SAS Logon Manager is responsible for handling individual user credentials rather than your application, which by contrast would then require submitting them over the wire. The diagram below represents the flow.

Figure 2: Visual representation of the Authorization Code OAuth flow
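
As a concrete illustration of step 4 above, here is a minimal sketch of trading the authorization code for a user-scoped token. The host, client credentials, and redirect URI are hypothetical, and AUTH_CODE is assumed to be a macro variable holding the code retrieved from the redirect:

filename reqbody temp;
filename tokresp temp;

/* build the x-www-form-urlencoded request body */
data _null_;
   file reqbody;
   body = 'grant_type=authorization_code&code=' || strip(symget('AUTH_CODE')) ||
          '&redirect_uri=https://myapp.example.com/callback';
   put body;
run;

proc http
   url='https://myviya.example.com/SASLogon/oauth/token'
   method='POST'
   webusername='app.automall' /* the registered client ID     */
   webpassword='myAppSecret'  /* the registered client secret */
   auth_basic
   in=reqbody
   out=tokresp;
   headers 'Content-Type'='application/x-www-form-urlencoded';
run;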

Password flow

In fairness, this technically works here as well, but we already know the drill.


Summing it all up (or TL;DR)...

My goal in this post (and in part 2) was to introduce the necessary planning steps prior to registering a custom-developed application to SAS Logon Manager, so it can make REST API calls to SAS Viya endpoints and resources. To recap, the steps are:

1. Decide whether users should have to log into SAS Viya or not.

    • Is your application serving up generic or "public" content that anyone using your app should have the same level of access to? Note that in this case you could still have a layer of custom login logic to restrict user access to your application, but I'm referencing scenarios where it doesn't make sense for users to log into SAS Viya, especially if they aren't in the configured user base with valid credentials. If you find yourself in this scenario, then proceed to Step 2A.
    • Or, do you want any application API calls to SAS Viya to be within the context of what that individual logged-in user should be able to do? Then proceed to Step 2B.

2A. Consider using the Client Credentials flow. Set up the security principal(s) and authorization rule(s) to give your application the appropriate permissions to the needed resources, and then register your client with the relevant settings.

2B. In this case, you should consider using the Authorization Code flow. Enhance your custom application's ability to request and handle user- and session-specific authorization codes and tokens, substituting them into subsequent API calls when needed, and then register your client with the relevant settings.

In either case, I'll cover examples in the next post in this series.

Building custom apps on top of SAS Viya, Part Three: Choosing an OAuth flow was published on SAS Users.

September 9, 2020
 

Welcome back! Today, I continue with part 2 of my series on building custom applications on SAS Viya. The goal of this series is to discuss securely integrating custom-written applications into your SAS Viya platform.

In the first installment of this series, I outlined my experiences on a recent project. In that example, our team worked with a pre-owned auto mall to create a custom application for their buyers to use at auction, facilitating data-driven decisions and maximizing potential profits. Buyers use the app's UI to select/enter the characteristics of the specific car on which they'd like to bid. Those details are then sent via an HTTP REST call to a specific API endpoint in the customer's SAS Viya environment which exposes a packaged version of a detailed decision flow. Business rules and analytic models evaluate the car details as part of that flow. The endpoint returns a maximum bid recommendation and other helpful information via the custom application's UI.

If you haven't already, I encourage you to read part 1 for more details.

If the concept of exposing endpoints to custom applications elicits some alarming thoughts for you, fear not. The goal of this post is to introduce the security-related considerations when integrating custom applications into your SAS Viya environment. In fact, that is the topic of the next two sections. Today we'll take a look at the security underpinnings of SAS Viya so we can start to flesh out how that integration works. In part 3, we'll dive into more specific decision points. To round things out, the series finishes with some examples of the concepts discussed in parts 1-3.

Looking a little deeper

So how does SAS Viya know it can trust your custom app? How does it know your app should have the authority to call specific API endpoints in your SAS Viya environment and what HTTP methods it should allow? Moreover, who should be able to make those REST calls, the app itself or only individual logged-in users of that app? To answer these questions, we need to understand the foundation of the security frameworks in SAS Viya.

Security in SAS Viya is built around OAuth 2.0 (authorization) and OpenID Connect (authentication), both of which are industry-standard frameworks. The details of how they work, separately and together, are outside the scope of this post. We will, however, delve into a few key things to be aware of within their implementation context for SAS Viya.

First, SAS Logon Manager is the centerpiece. When a client (application) requests access to endpoints and resources, they must first communicate with SAS Logon Manager which returns a scoped access token if it was a valid request. This is true both within SAS Viya (like when internal services hosting APIs communicate with each other) and for external applications like we’re talking about here. The client/application subsequently hands that token in with their request to the desired endpoint.

Workflow: applications requesting access to SAS Viya endpoints

The long and short of it is that we don’t want users of our custom applications to see or manipulate anything they shouldn’t, so these tokens need to be scoped. The authorization systems in SAS Viya need to be able to understand what the bearer of that specific token should be able to do. For a detailed overview of how SAS Viya services leverage OAuth and OpenID Connect tokens for authentication, I'd recommend reading this SAS Global Forum paper from SAS' Mike Roda.

Later in this series, we'll see examples of obtaining a scoped token and then using it to make REST calls to SAS Viya API endpoints, like the auto mall's custom app does to return recommendations for its users. For now, suffice it to say the scope of the token is the critical component to answering our questions regarding how SAS Viya returns specific responses to our custom application.
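
As a taste of what those examples will look like, here is a minimal sketch of a REST call that hands in a previously obtained token; the host and MAS module name are hypothetical, and ACCESS_TOKEN is assumed to hold the scoped token:

filename carjson temp; /* JSON payload describing the car's characteristics */
filename resp temp;    /* receives the recommendation returned by the module */

proc http
   url='https://myviya.example.com/microanalyticScore/modules/maxbid/steps/execute'
   method='POST'
   in=carjson
   out=resp;
   headers 'Content-Type'='application/json'
           'Authorization'="Bearer &ACCESS_TOKEN";
run;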

The glue that binds the two

We've established that native SAS Viya applications and services register themselves as clients to SAS Logon Manager. When a client requests access to an endpoint and/or resource, it must first communicate with SAS Logon Manager, which returns a scoped access token if the request is valid. If the request is unauthorized, it receives a 403 Forbidden (or sometimes 401 Unauthorized) response. Your SAS Viya environment's administrator has complete control over the rules that determine the authorization decision. We can register additional custom applications, like the auto mall's auction bidding app, to SAS Logon Manager so they too can request scoped tokens. This provides our custom applications with fine-grained access to endpoints and content within SAS Viya. But we've already hit a fork in the road: we need to understand the purpose of the application we're looking to build so we can decide whether the access should be user-scoped or general-scoped.

In the pre-owned auto mall's custom application, a decision and modeling flow is exposed by way of a SAS Micro Analytics Score (MAS) module at a specific HTTP REST API endpoint. All the users of the application have the same level of access to the same endpoint by design; there was no need for differentiated access levels. This is considered an instance of general-scoped access. Abiding by a least-privilege approach, the app should only access certain data and endpoints in SAS Viya, that is, only what it needs to rather than having free reign. In this case, the appropriate level of access can be granted to the registered client (custom application) itself. The result is an access token from SAS Logon Manager scoped within the context of the application.

In other applications you might find it necessary to make more sensitive, or user-specific resources available. In this instance, you'll want to make all authorization decisions within the context of the individual logged-in user. For these situations, it is more prudent to redirect the user directly to SAS Logon Manager to log in and then redirect them back to your application. This results in SAS Logon Manager returning an access token scoped within the context of the logged-in user.

What's next?

Once you've decided which of these two scenarios most closely fits your custom application's needs, the next step is to decide on a flow (or grant) to provide the associated type of access token, general- or user-scoped. Here “flow” is a term used to describe the process for accessing a protected resource within the OAuth framework and that's what I'll cover in the next post in this series.

Building custom apps on top of SAS Viya, Part Two: Understanding the integration points was published on SAS Users.

September 7, 2020
 

Locale-specific SAS® format catalogs make reporting in multiple languages more dynamic. It is easy to generate reports in different languages when you use both the LOCALE option in the FORMAT procedure and the LOCALE= system option to create these catalogs. If you are not familiar with the LOCALE= system option, see the "Resources" section below for more information.

This blog post, inspired by my work on this topic with a SAS customer, focuses on how to create and use locale-specific informats to read in numeric values from a Microsoft Excel file and then transform them into SAS character values. I incorporated this step into a macro that transforms ones and zeroes from the Excel file into meaningful information for multilingual readers.

Getting started: Creating the informats

The first step is to submit the LOCALE= system option with the value fr_FR. For the example in this article, I chose the values fr_FR and en_US for French and English from this table of LOCALE= values. (That is because I know how to say “yes” and “no” in both English and French — I need to travel more!)

   options locale=fr_fr;

The following code uses both the INVALUE statement and the LOCALE option in PROC FORMAT to create an informat that is named $PT_SURVEY:

   proc format locale library=work;
      invalue $pt_survey 1='oui' 0='non';
   run;

Now, toggle the LOCALE= system option and create a second informat using labels in a different language (in this example, English):

   options locale=en_us;

   proc format locale library=work;
      invalue $pt_survey 1='yes' 0='no';
   run;

In the screenshot below, which shows the output from the DATASETS procedure, you can see that PROC FORMAT created two format catalogs using the specified locale values, which are preceded by underscore characters. If the format catalogs already exist, PROC FORMAT simply adds the $PT_SURVEY informat entry type to them.

   proc datasets memtype=catalog; 
   quit;

Before you use these informats for a report, you must tell SAS where the informats are located. To do so, specify /LOCALE after the libref name within the FMTSEARCH= system option. If you do not add the /LOCALE specification, you see an error message stating either that the $PT_SURVEY informat does not exist or that it cannot be found. In the next two OPTIONS statements, SAS searches for the locale-specific informat in the FORMATS_FR_FR catalog, which PROC FORMAT created in the WORK library:

   options locale=fr_fr;
   options fmtsearch=(work/locale);

If you toggle the LOCALE= system option to have the en_US locale value, SAS then searches for the informat in the other catalog that was created, which is the FORMATS_EN_US catalog.

Creating the Excel file for this example

For this example, you can create an Excel file by using ODS EXCEL destination output from the REPORT procedure. Although you can create the Excel file in various ways, I chose the ODS EXCEL statement to show you some options that are helpful in this scenario and are also useful at other times.

Use the ODS EXCEL destination to create a file from PROC REPORT. I specify the TAGATTR= style attribute with "type:Number" for the Q_1 variable:

   %let  path=%sysfunc(getoption(WORK));
   filename temp "&path\surveys.xlsx"; 
   ods excel file=temp;
 
 
   data one;
      infile datalines truncover;
      input ptID Q_1;
      datalines;
   111 0
   112 1
   ;
   run;
 
   proc report data=one;
      define ptID / display style(column)={tagattr="type:String"};
      define Q_1 / style(column)={tagattr="type:Number"};
   run;
 
   ods excel close;

Now you have a file that looks like this screenshot when it is opened in Excel. Note that the data value for the Q_1 column is numeric:

The IMPORT procedure uses the DBSASTYPE= data set option to convert the numeric Excel data into SAS character values. Then I can apply the locale-specific character informat to a character variable.

As you will see below, in the macro, I use DBMS=EXCEL in PROC IMPORT to read the Excel file because my SAS and Microsoft Office versions are both 64-bit. (You might have to use the PCFILES LIBNAME Engine to connect to Excel through the SAS PC Files Server if you are not set up this way.)

Using the informats in a macro to create the multilingual reports

The final step is to run the macro with parameters to produce the two reports in French and English, using the locale-specific catalogs. When the macro is called, depending on the parameter value for the macro variable LOCALE, the LOCALE= system option changes, and the $PT_SURVEY informat from the locale-specific catalog is applied. These two tabular reports are produced:

Here is the full code for the example:

   %let  path=%sysfunc(getoption(WORK));
   filename temp "&path\surveys.xlsx";
   ods excel file=temp;
 
   data one;
      infile datalines truncover;
      input ptID Q_1;
      datalines;
   111 0
   112 1
   ;
   run;
 
   proc report data=one;
      define ptID / display style(column)={tagattr="type:String"};
      define Q_1 / style(column)={tagattr="type:Number"};
   run;
 
   ods excel close;
   options locale=fr_fr;
 
   proc format locale library=work;
      invalue $pt_survey 1='oui' 0='non';
   run;
 
   options locale=en_us;
 
   proc format locale library=work;
      invalue $pt_survey 1='yes' 0='no';
   run;
 
   /* Set the FMTSEARCH option */
   options fmtsearch=(work/locale);
 
   /* Compile the macro */
   %macro survey(locale,out);
      /* Set the LOCALE system option */
      options locale=&locale;
 
      /* Import the Excel file  */
      filename survey "&path\surveys.xlsx";
 
      proc import dbms=excel datafile=survey out=work.&out replace;
         getnames=yes;
         dbdsopts="dbsastype=(Q_1='char(8)')";
      run;
 
      data work.&out;
         set work.&out;
 
         /* Create a new variable for the report whose values are assigned by specifying the locale-specific informat in the INPUT function */
         newvar=input(Q_1, $pt_survey.);
         label newvar='Q_1';
      run;
 
      options missing='0';
 
      /*  Create the tabular report */
      proc tabulate data=&out;
         class ptID newvar;
 
         table ptID='Patient ID', newvar*n=' '/box="&locale";
      run;
 
   %mend survey;
 
   /* Call the macros */
   %survey(fr_fr,fr)
   %survey(en_us,en)

For a different example that does not involve an informat, you can create a format in a locale-specific catalog to print a data set in both English and Romanian. See Example 19: Creating a Locale-Specific Format Catalog in the Base SAS® 9.4 Procedures Guide.

Resources

For more information about the LOCALE option:

For more information about reading and writing Excel files:

For more information about creating macros and using the macro facility in SAS:

Using locale-specific format catalogs to create reports in multiple languages was published on SAS Users.

September 4, 2020
 

As SAS Viya adoption increases among customers, many discover that it fits perfectly alongside their existing SAS implementations, which can be integrated and kept running until major projects have been migrated over. Conversely, SAS Grid Manager has been deployed during the past years to countless production sites. Because SAS Viya provides distributed computing capabilities, customers wonder how it compares to SAS Grid Manager.

SAS® Grid Manager and SAS® Viya® implement distributed computing according to different computational patterns. They can complement each other in providing a highly available and scalable environment to process large volumes of data and produce rapid results. At a high level, the questions we get the most from SAS customers can be summarized in four categories:

  1. I have SAS Viya and SAS Grid Manager. How can I get the most value from using them together?
  2. I have SAS Viya. Can I get any additional benefits by also implementing SAS Grid Manager?
  3. I have SAS Grid Manager. Should I move to SAS Viya?
  4. I am starting a new project. Which platform should I use - SAS Viya or SAS Grid Manager?

To better understand how to get the most from both an architecture and an administration perspective, I answer these questions and more in my SGF 2020 paper SAS® Grid Manager and SAS® Viya®: A Strong Relationship, and its accompanying YouTube video:

I’ve also written a three-part series with more details:

SAS Grid Manager and SAS Viya: A Strong Relationship was published on SAS Users.

September 2, 2020
 

SAS offering free learning resources in celebration of programmers

For more than 40 years, SAS programmers have crafted software and solutions that transform the world. From statistics to data science, to analytics and artificial intelligence, people writing code have architected a new economy with incredible opportunities. SAS Programmer Week honors those people by offering free learning resources available for everyone, from students to early career professionals to SAS veterans.

Running from Sept. 7-11, SAS Programmer Week leads up to the international Day of the Programmer on Saturday, Sept. 12. Training resources will be available for free through a variety of YouTube and video tutorials, webinars, blogs and documentation.

There will be three different tracks for new, experienced and analytics-focused users, with new content released each day. The week culminates with SAS certification prep content that will have participants ready to pursue a valuable SAS credential.

For instance, Tech Republic named SAS certification one of 7 data science certifications to boost your resume and salary. CIO Magazine puts SAS among the top 11 big data and analytics certifications for 2020.

Since SAS programmers are busy and may not have all day to engage with the materials, SAS Programmer Week is flexible. Participants can access the material when they want to learn a specific skill related to the day’s topic or consume the material in snippets when they have time.

Interested participants can visit the SAS Programmer Week website to register today, preview the materials and schedule, and jump-start their career journeys.

 

All hail the SAS programmer! was published on SAS Users.