Getting Started

June 11, 2018
 

In SAS, the reserved keyword _NULL_ specifies a SAS data set that has no observations and no variables. When you specify _NULL_ as the name of an output data set, the output is not written. The _NULL_ data set is often used when you want to execute DATA step code that displays a result, defines a macro variable, writes a text file, or makes calls to the EXECUTE subroutine. In those cases, you are interested in the "side effect" of the DATA step and rarely want to write a data set to disk. This article presents six ways to use the _NULL_ data set. Because the _NULL_ keyword is used, no data set is created on disk.

#1. Use SAS as a giant calculator

You can compute a quantity in a DATA _NULL_ step and then use the PUT statement to output the answer to the SAS log. For example, the following DATA step evaluates the normal density function at x=0.5 when μ=1 and σ=2. The computation is performed twice: first by using the built-in PDF function and again by using the formula for the normal density function. The SAS log shows that the answer is 0.193 in both cases.

data _NULL_;
mu = 1; sigma = 2; x = 0.5; 
pdf = pdf("Normal", x, mu, sigma);
y = exp(-(x-mu)**2 / (2*sigma**2)) / sqrt(2*constant('pi')*sigma**2);
put (pdf y) (=5.3);
run;
pdf=0.193 y=0.193

#2. Display characteristics of a data set

You can use a null DATA step to display characteristics of a data set. For example, the following DATA step uses the PUT statement to display the number of numeric and character variables in the Sashelp.Class data set. No data set is created.

data _NULL_;
set Sashelp.Class;
array char[*] $ _CHARACTER_;
array num[*]    _NUMERIC_;
nCharVar  = dim(char);
nNumerVar = dim(num);
put "Sashelp.Class: " nCharVar= nNumerVar= ;
stop;   /* stop processing after first observation */
run;
Sashelp.Class: nCharVar=2 nNumerVar=3

You can also store these values in a macro variable, as shown in the next section.

#3. Create a macro variable from a value in a data set

You can use the SYMPUT or SYMPUTX subroutines to create a SAS macro variable from a value in a SAS data set. For example, suppose you run a SAS procedure that computes some statistic in a table. Sometimes the procedure supports an option to create an output data set that contains the statistic. Other times you might need to use the ODS OUTPUT statement to write the table to a SAS data set. Regardless of how the statistic gets into a data set, you can use a DATA _NULL_ step to read the data set and store the value as a macro variable.

The following statements illustrate this technique. PROC MEANS creates a table called Summary, which contains the means of all numerical variables in the Sashelp.Class data. The ODS OUTPUT statement writes the Summary table to a SAS data set called Means. The DATA _NULL_ step finds the row for the Height variable and creates a macro variable called MeanHeight that contains the statistic. You can use that macro variable in subsequent steps of your analysis.

proc means data=Sashelp.Class mean stackods;
   ods output Summary = Means;
run;
 
data _NULL_;
set Means;
/* use PROC CONTENTS to verify that the columns are named Variable and Mean */
if Variable="Height" then             
   call symputx("MeanHeight", Mean);
run;
 
%put &=MeanHeight;
MEANHEIGHT=62.336842105

For a second example, see the article "What is a factoid in SAS?", which shows how to perform the same technique with a factoid table.

#4. Create a macro variable from a computational result

Sometimes there is no procedure that computes the quantity that you want, or you prefer to compute the quantity yourself. The following DATA _NULL_ step counts the number of complete cases for the numerical variables in the Sashelp.Heart data. It then displays the number of complete cases and the percent of complete cases in the data. You can obtain the same results if you use PROC MI and look at the MissPattern table.

data _NULL_;
set Sashelp.Heart end=eof nobs=nobs;
NumCompleteCases + (nmiss(of _NUMERIC_) = 0); /* increment if all variables are nonmissing */
if eof then do;                               /* when all observations have been read ... */
   PctComplete = NumCompleteCases / nobs;     /* ... find the percentage */
   put NumCompleteCases= PctComplete= PERCENT7.1;
end;
run;
NumCompleteCases=864 PctComplete=16.6%

#5. Edit a text file or ODS template "on the fly"

This is a favorite technique of Warren Kuhfeld, who is a master of writing a DATA _NULL_ step that modifies an ODS template. In fact, this technique is at the heart of the %MODSTYLE macro and the SAS macros that modify the Kaplan-Meier survival plot.

Although I am not as proficient as Warren, I wrote a blog post that introduces this template modification technique. The DATA _NULL_ step is used to modify an ODS template. It then uses CALL EXECUTE to run PROC TEMPLATE to compile the modified template.
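
To make the technique concrete, here is a minimal sketch of the general pattern (it is not Warren's macro; the template name, the file name, and the edited attribute are assumptions chosen for illustration). PROC TEMPLATE writes the template source to a text file, and a DATA _NULL_ step reads the file line by line, modifies a line of interest, and uses CALL EXECUTE to submit each line back to PROC TEMPLATE for compilation:

proc template;                        /* write the template source to a text file */
   source Styles.HtmlBlue / file='template.tmp';
run;
 
data _NULL_;
   infile 'template.tmp' end=eof;
   input;
   if _N_ = 1 then call execute('proc template;');  /* begin recompiling the template */
   /* illustrative edit: the attribute and values are assumed, not taken from the real template */
   call execute( tranwrd(_infile_, 'font_size = 3', 'font_size = 2') );
   if eof then call execute('run;');                /* compile the modified template */
run;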

#6. A debugging tool

All the previous tips use _NULL_ as the name of a data set that is not written to disk. It is a curious fact that you can use the _NULL_ data set in almost every SAS statement that expects a data set name!

For example, you can read from the _NULL_ data set. Although reading zero observations is not always useful, one application is to check the syntax of your SAS code. Another application is to check whether a procedure is installed on your system. For example, you can run the statements PROC ARIMA data=_NULL_; quit; to check whether you have access to the ARIMA procedure.
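
For example, the following statements (from the text above) check for the ARIMA procedure. Because the _NULL_ data set has no observations, the procedure does essentially no work; if the procedure is not installed, the log reports that it cannot be found:

proc arima data=_NULL_;
quit;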

A third application is to use _NULL_ to suppress debugging output. During the development and debugging phase of a project, you might want to use PROC PRINT, PROC CONTENTS, and PROC MEANS to ensure that your program is working as intended. However, too much output can be a distraction, so sometimes I direct the debugging output to the _NULL_ data set where, of course, it magically vanishes! For example, the following DATA step subsets the Sashelp.Cars data. If I am unsure whether I created the subset correctly, I can use PROC CONTENTS and PROC MEANS to display information about the subset, as follows:

data Cars;
set Sashelp.Cars(keep=Type _NUMERIC_);
if Type in ('Sedan', 'Sports', 'SUV', 'Truck'); /* subsetting IF statement */
run;
 
/* FOR DEBUGGING ONLY */
%let DebugName = Cars;  /* use _NULL_ to turn off debugging output */
proc contents data=&DebugName short;
run;
proc means data=&DebugName N Min Max;
run;

If I don't want to see this output (but I want the option to see it again later), I can modify the DebugName macro variable (%let DebugName = _NULL_;) so that the CONTENTS and MEANS procedures do not produce any output. If I do that and rerun the program, the program does not create any debugging output. However, I can easily restore the debugging output whenever I want.
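
For example, the following statements are identical to the debugging statements above except for the value of the macro variable, so they produce no debugging output:

%let DebugName = _NULL_;  /* use _NULL_ to turn off debugging output */
proc contents data=&DebugName short;
run;
proc means data=&DebugName N Min Max;
run;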

Summary

In summary, the _NULL_ data set name is a valuable tool for SAS programmers. You can perform computations, create macro variables, and manipulate text files without creating a data set on disk. Although I didn't cover it in this article, you can use DATA _NULL_ in conjunction with ODS for creating customized tables and reports.

What is your favorite application of using the _NULL_ data set? Leave a comment.

The post 6 ways to use the _NULL_ data set in SAS appeared first on The DO Loop.

May 29, 2018
 

The SAS language provides syntax that enables you to quickly specify a list of variables. SAS statements that accept variable lists include the KEEP and DROP statements, the ARRAY statement, and the OF operator for comma-separated arguments to some functions. You can also use variable lists on the VAR statements and MODEL statements of analytic procedures.

This article describes six ways to specify a list of variables in SAS. There is a section in the SAS documentation that describes how to construct lists, but this blog post provides more context and a cut-and-paste example for every syntax. This article demonstrates the following:

  • Use the _NUMERIC_, _CHARACTER_, and _ALL_ keywords to specify variables of a certain type (numeric or character) or all types.
  • Use a single hyphen (-) to specify a range of variables that have a common prefix and a sequential set of numerical suffixes.
  • Use the colon operator (:) to specify a list of variables that begin with a common prefix.
  • Use a double-hyphen (--) to specify a consecutive set of variables, regardless of type. You can also use a variation of this syntax to specify a consecutive set of variables of a certain type (numeric or character).
  • Use the OF operator to specify variables in an array or in a function call.
  • Use macro variables to specify variables that satisfy certain characteristics.

Some companies might discourage the use of variable lists in production code because automated lists can be volatile. If the number and names of variables in your data sets occasionally change, it is safer to manually list the variables that you are analyzing. However, for developing code and constructing examples, lists can be a huge time saver.

Use the _NUMERIC_, _CHARACTER_, and _ALL_ keywords

You can specify all numeric variables in a data set by using the _NUMERIC_ keyword. You can specify all character variables by using the _CHARACTER_ keyword. Many SAS procedures use a VAR statement to specify the variables to be analyzed. When you want to analyze all variables of a certain type, you can use these keywords, as follows:

/* compute descriptive statistics of all numeric variables */
proc means data=Sashelp.Heart nolabels; 
   var _NUMERIC_;          /* _NUMERIC_ is the default */
run;
 
/* display the frequencies of all levels for all character variables */
proc freq data=Sashelp.Heart; 
   tables _CHARACTER_;    /* _ALL_ is the default */
run;
Use a keyword to specify a list of variables in SAS

One of my favorite SAS programming tricks is to use these keywords in a KEEP or DROP statement (or data set option). For example, the following statements create a new data set that contains all numeric variables and two character variables from the Sashelp.Heart data:

data HeartNumeric;
set Sashelp.Heart(keep=_NUMERIC_            /* all numeric variables */
                       Sex Smoking_Status); /* two character variables */
run;

An example of using the _ALL_ keyword is shown in the section that discusses the OF operator.

Use a hyphen to specify numerical suffixes

In many situations, variables are named with a common prefix and numerical suffix. For example, financial data might have variables that are named Sales2008, Sales2009, ..., Sales2017. In simulation studies, variables often have names such as X1, X2, ..., X50. The hyphen enables you to specify the first and last variable in a list. The first example can be specified as Sales2008-Sales2017. The second example is X1-X50.

The following DATA step creates 10 variables, including the variables x1-x6. Notice that the data set variables are not in alphanumeric order. That is okay. The syntax x1-x6 will select the six variables x1, x2, x3, x4, x5, and x6 regardless of their physical order in the data. The call to PROC REG uses the six variables in a linear regression:

data A;
   retain Y x1 x3 Z x6 x5 x2 W x4 R 0;  /* create 10 variables and one observation; initialize to 0 */
run;
proc reg data=A plots=none;
   model Y = x1-x6;
run;

The parameter estimates from PROC REG are displayed in the order that you specify in the MODEL statement. However, if you use the SET statement in a DATA step, the variables appear in the original order unless you intentionally reorder the variables:

data B;
   set A(keep=x1-x6);
run;

Use the colon operator to specify a prefix

If you want to use variables that have a common prefix but have a variety of suffixes, you can use the colon operator (:), which is a wildcard character that matches any name that begins with a specified prefix. For example, the following DATA step creates a data set that contains eight variables, including five variables that begin with the prefix 'Sales'. The subsequent DATA step drops the variables that begin with the prefix 'Sales':

data A;
retain Sales17 Y Sales16 Z SalesRegion Sales_new Sales1 R 0; /* 1 obs; initialize to 0 */
run;
 
data B;
   set A(drop= Sales: ); /* drop all variables that begin with 'Sales' */
run;

Use a double-hyphen to specify consecutive variables

The previous sections used wildcard characters to match variables that have a specified type or prefix. With those methods, you get the same set of variables regardless of how they are ordered in the data set. In contrast, you can use a double-hyphen (--) to specify a consecutive set of variables, and the variables you get depend on the order of the variables in the data set.

data A;
   retain Y 0   x3 2   C1 'A'   C2 'BC'
          Z 3   W  4   C4 'D'   C5 'EF'; /* Initialize eight variables */
run;
data B;
   set A(keep=x3--C4);
run;

In this example, the data set B contains the variables x3, C1, C2, Z, W, and C4. If you use the double-hyphen to specify a list, be sure that you know the order of the variables and that this order is never going to change. If the order of the variables changes, your program will behave differently.

You can also specify all variables of a certain type within a range of variables. The syntax Y-numeric-Z specifies all numeric variables between Y and Z in the data set. The syntax Y-character-Z specifies all character variables between Y and Z. For example, the following call to PROC CONTENTS displays the variables (in order) in the Sashelp.Heart data. The call to PROC LOGISTIC specifies all the numeric variables between (and including) the AgeCHDdiag variable and the Smoking variable:

proc contents data=Sashelp.Heart order=varnum ;
run;
 
proc logistic data=Sashelp.Heart;
   model status = AgeCHDdiag-numeric-Smoking;
   ods select ParameterEstimates;
run;
Use a double-hyphen to specify a contiguous list of variables in SAS

Arrays and the OF operator

You can use variable lists to assign an array in a SAS DATA step. For example, the following program creates a numerical array named X and a character array named C. The program finds the maximum value in each row and puts that value into the variable named rowMaxNum. The program also creates a variable named Str that contains the concatenation of the character values for each row:

data Arrays;
   set sashelp.Class;
   array X {*} _NUMERIC_;        /* X[1] is 1st var, X[2] is 2nd var, etc */
   array C {*} _CHARACTER_;      /* C[1] is 1st var, C[2] is 2nd var, etc */
   /* use the OF operator to pass values in array to functions */
   rowMaxNum = max(of x[*]);     /* find the max value in this array (row) */
   length Str $30;
   call catx(' ', Str, of C[*]); /* concatenate the strings in this array (row) */
   keep rowMaxNum Str;
run;
 
proc print data=Arrays(obs=4);
run;
Use a keyword to specify a list of variables to certain SAS functions

You can use the OF operator directly in functions without creating an array. For example, the following program uses the _ALL_ keyword to output the "complete cases" for the Sashelp.Heart data. The program drops any observation that has a missing value for any variable:

data CompleteCases;
  set Sashelp.Heart;
  if cmiss(of _ALL_)=0;  /* output only complete cases for all vars */
run;

Use macro variables to specify a list

The previous sections demonstrate how you can use syntax to specify a list of variables to SAS statements. In contrast, this section describes a technique rather than syntax. It is sometimes the case that the names of variables are in a column in a data set. There might be other columns in the data set that contain characteristics or statistics for the variables. For example, the following call to PROC MEANS creates an output data set (called MissingValues) that contains columns named Variable and NMiss.

proc means data=Sashelp.Heart nolabels NMISS stackodsoutput;
   var _NUMERIC_;
   ods output Summary = MissingValues;
run;
proc print; run;
Use a macro variable to specify a list of variables in SAS

Suppose you want to keep or drop those variables that have one or more missing values. The following PROC SQL call creates a macro variable (called MissingVarList) that contains a space-separated list of all variables that have at least one missing value. This technique has many applications and is very powerful.

/* Use PROC SQL to create a macro variable (MissingVarList) that contains
   the list of variables that have a property such as missing values */
proc sql noprint;                              
 select Variable into :MissingVarList separated by ' '
 from MissingValues
 where NMiss > 0;
quit;
%put &=MissingVarList;
MISSINGVARLIST=AgeCHDdiag Height Weight MRW Smoking AgeAtDeath Cholesterol

You can now use the macro variable in a KEEP, DROP, VAR, or MODEL statement, such as KEEP=&MissingVarList;
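
For example, the following sketch uses the macro variable in a DROP= data set option to exclude the variables that contain missing values (the data set name HeartSubset is arbitrary):

data HeartSubset;
   set Sashelp.Heart(drop=&MissingVarList);
run;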

Summary

This article shows six ways to specify a list of variables to SAS statements and functions. The SAS syntax provides keywords (_NUMERIC_, _CHARACTER_, and _ALL_) and operators (hyphen, colon, and double-hyphen) to make it easy to specify a list of variables. You can use the syntax in conjunction with the OF operator to pass a variable list to some SAS functions. Lastly, if the names of variables are stored in a column in a data set, you can use the full power of PROC SQL to create a macro variable that contains variables that satisfy certain criteria.

Do you use shorthand syntax to specify lists of variables? Why or why not? Leave a comment.

The post 6 easy ways to specify a list of variables in SAS appeared first on The DO Loop.

May 21, 2018
 

In a recent blog post, Chris Hemedinger used a scatter plot to show the result of 100 coin tosses. Chris arranged the 100 results in a 10 x 10 grid, where the first 10 results were shown on the first row, the second 10 were shown on the second row, and so on. Placing items along each row before going to the next row is called row-major order.

An implicit formula for arranging items in rows

If you process items sequentially, it is easy to position the items in a grid by using an inductive scheme:

  1. Place the first item at (1, 1).
  2. Assume the nth item is placed at position (r, c). Place the (n+1)st item at position (r, c+1) if there is room on the current row; otherwise place it at (r+1, 1), which is the first element of the next row.

The inductive scheme is also called an implicit or recursive formula because the position of the (n+1)st item is given in terms of the position of the nth item.

For example, suppose that you have 70 items and you want to place 11 items in each row. The inductive algorithm looks like the following:

%let Nx = 11;           /* number of items in row */
data Loc;
label r = "Row" c = "Column";
retain r 1  c 1 item 1;
output;                 /* base case */
do item = 2 to 70;      /* inductive step */
   c + 1;
   if c > &Nx then do;
      r + 1; c = 1;
   end;
   output;
end;
run;
 
title "Position of Items in Grid";
proc sgplot data=Loc;
   text x=c y=r text=item / textattrs=(size=12) position=center strip;
   xaxis integer offsetmin=0.05 offsetmax=0.05;
   yaxis reverse offsetmin=0.05 offsetmax=0.05;
run;
Items arranged in a grid in row-major order with 11 items in each row

The inductive algorithm is easy to implement and to understand. However, it does not enable you to easily determine the row and column of the 1,234,567th item if there are 11 items in each row. Nor does it enable you to compute the positions when the index increments by a value greater than 1. To answer these questions, you need to use an explicit or direct formula.

An explicit formula for arranging items in rows

The explicit formula uses the MOD function to compute the column position and integer division to compute the row position. SAS does not have an explicit "integer division operator," but you can emulate it by using the FLOOR function. The following macro definitions encapsulate the formulas:

/* (row, col) for item n if there are Nx items in each row (count from 1),
   assuming row-major order */
%macro ColPos(n, Nx);
   1 + mod(&n.-1, &Nx.)
%mend;
%macro RowPos(n, Nx);
   1 + floor((&n.-1) / &Nx.)
%mend;

The formulas might look strange because they subtract 1, do a calculation, and then add 1. The formulas assume that you want to count the items, rows, and columns beginning with 1. If you prefer to count from 0, then the formulas become MOD(n, Nx) and FLOOR(n / Nx).
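
As a quick check of the direct formulas, the following DATA _NULL_ step answers the earlier question about the position of the 1,234,567th item when there are 11 items in each row. The PUT statement writes the computed position to the log:

data _NULL_;
   n = 1234567;
   row = %RowPos(n, 11);
   col = %ColPos(n, 11);
   put row= col=;       /* row=112234 col=4 */
run;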

You can use the formulas to directly compute the positions of the odd integers in the range 1–70 when there are 11 items in each row:

%let Nx = 11;
data grid;
do item = 1 to 70 by 2;       /* only odd integers */
   row = %RowPos(item, &Nx);
   col = %ColPos(item, &Nx);
   output;
end;
run;
 
title "Position of Odd Integers in Grid";
proc sgplot data=grid;
   text x=col y=row text=item / textattrs=(size=12) position=center strip;
   xaxis integer offsetmin=0.05 offsetmax=0.05 label="Column" max=&Nx;
   yaxis reverse offsetmin=0.05 offsetmax=0.05 integer label="Row";
run;
Positions of odd integers in a grid in row-major order

Of course, you can also use the direct formula to process items incrementally. The following DATA step computes the positions for 19 observations in the Sashelp.Class data set, where five names are placed in each row:

data gridName;
set sashelp.class;
y = %RowPos(_N_, 5);  /* 5 columns in each row */
x = %ColPos(_N_, 5);
run;
 
title "Position Five Names in Each Row";
proc sgplot data=gridName;
   text x=x y=y text=Name / textattrs=(size=12) position=center strip;
   xaxis integer offsetmin=0.08 offsetmax=0.08;
   yaxis reverse offsetmin=0.05 offsetmax=0.05 integer;
run;
Positions of items in a grid with 5 items in each row

The explicit formula is used in the SAS/IML NDX2SUB function, which tells you the row and column information for the nth item in a matrix.

In summary, you can use an implicit formula or an explicit formula to arrange items in rows, where each row contains Nx items. The implicit formula is useful when you are arranging the items sequentially. The explicit formula is ideal when you are randomly accessing the items and you need a direct computation that provides the row and column position.

Finally, if you want to arrange items in column-major order (down the first column, then down the second,...), you can use similar formulas. The row position of the nth item is 1 + mod(n-1, Ny) and the column position is 1 + floor((n-1) / Ny), where Ny is the number of rows in the grid.
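
If you need the column-major formulas repeatedly, you can wrap them in macros that are analogous to the ones above. This is a small sketch; the macro names are my own:

/* (row, col) for item n when the grid has Ny rows (count from 1),
   assuming column-major order */
%macro RowPosCM(n, Ny);
   1 + mod(&n.-1, &Ny.)
%mend;
%macro ColPosCM(n, Ny);
   1 + floor((&n.-1) / &Ny.)
%mend;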

The post Position items in a grid appeared first on The DO Loop.

April 18, 2018
 

The sweep operator performs elementary row operations on a system of linear equations. The sweep operator enables you to build regression models by "sweeping in" or "sweeping out" particular rows of the X`X matrix. As you do so, the estimates for the regression coefficients, the error sum of squares, and the generalized inverse of the system are updated simultaneously. This is true not only for a single response variable but also for multiple responses.

You might remember elementary row operations from when you learned Gaussian elimination, which is a general technique for solving linear systems of equations. Sweeping a matrix is similar to Gaussian elimination, but the sweep operator is most often used to solve least squares problems for the normal equations X`X b = X`Y. Therefore, in practice, the sweep operator is usually applied to the rows of a symmetric positive definite system.

How to use the sweep operator to obtain least squares estimates

Before we discuss details of the sweep operator, let's show how it produces regression estimates. The following program uses the SWEEP function in SAS/IML on a six-observation data set from J. Goodnight (1979, p. 153). The first three columns of the matrix M contain the design matrix for the explanatory variables; the first column of M represents the intercept parameter. The last column of the augmented matrix is the response variable. The sweep operator is called on the uncorrected sum of squares and crossproducts matrix (the USSCP matrix), which is computed as M`*M. In block form, the USSCP matrix contains the submatrices X`X, X`Y, Y`X, and Y`Y. The following program sweeps the first three rows of the USSCP matrix:

proc iml;
/*Intercept X1  X2  Y */
M  = {  1   1   1   1,
        1   2   1   3,
        1   3   1   3,
        1   1  -1   2,
        1   2  -1   2,
        1   3  -1   1};
S = sweep(M`*M, 1:3);     /* sweep first three rows (Intercept, X1, X2) */
print S;
The sweep operator applied to all rows of a USSCP matrix to obtain regression coefficients, SSE, and generalized inverse

The resulting matrix (S) is divided into blocks by black lines. The leading 3x3 block is the (generalized) inverse of the X`X matrix. The first three elements of the last column (outlined in red) are the least squares estimates for the regression model. The lower right cell (outlined in blue) is the sum of squared errors (SSE) for the regression model.

You can compare the result with the results from PROC REG, as follows. The parameter estimates and error sum of squares are highlighted for easy comparison:

data Have;
input X1 X2 Y @@;
datalines;
1   1   1   2   1   3   3   1   3
1  -1   2   2  -1   2   3  -1   1
;
 
proc reg data=Have USSCP plots=none;
   model Y = X1 X2;
   ods select USSCP ANOVA ParameterEstimates;
run; quit;
Regression estimates and SSE for least squares analysis in SAS PROC REG

As claimed, the sweep operator produces the same parameter estimates and SSE statistic as PROC REG. The next sections discuss additional details of the sweep operator.

The basics of the sweep operator

Goodnight (1979) defines the sweep operator as the following sequence of row operations. Given a symmetric positive definite matrix A, SWEEP(A, k) modifies the matrix A by using the pivot element A[k,k] and the kth row, as follows:

  1. Let D = A[k,k] be the kth diagonal element.
  2. Divide the kth row by D.
  3. For every other row i ≠ k, let B = A[i,k] be the ith element of the kth column. Subtract B x (row k) from row i. Then set A[i,k] = –B/D.
  4. Set A[k,k] = 1/D.

You could program these steps in a matrix language such as SAS/IML, but as shown previously, the SAS/IML language supports the SWEEP function as a built-in function.
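
For readers who want to see the steps in code, the following module is a direct (and unoptimized) translation of the four steps into the SAS/IML language. The module name is my own. The last statements compare the module with the built-in SWEEP function on the USSCP matrix from the earlier example; the maximum difference should be essentially zero:

proc iml;
start MySweep(A, k);
   S = A;
   D = S[k,k];                      /* 1. pivot element */
   S[k, ] = S[k, ] / D;             /* 2. divide the kth row by D */
   idx = setdif(1:nrow(S), k);      /* 3. for every other row i ^= k ... */
   do j = 1 to ncol(idx);
      i = idx[j];
      B = S[i,k];
      S[i, ] = S[i, ] - B*S[k, ];   /*    subtract B x (row k) from row i */
      S[i,k] = -B / D;              /*    then set S[i,k] = -B/D */
   end;
   S[k,k] = 1/D;                    /* 4. set the pivot element to 1/D */
   return( S );
finish;
 
M = {1 1 1 1,  1 2 1 3,  1 3 1 3,  1 1 -1 2,  1 2 -1 2,  1 3 -1 1};
maxDiff = max(abs( MySweep(M`*M, 1) - sweep(M`*M, 1) ));
print maxDiff;
quit;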

Sweeping in effects

The following program uses the same data as the first section. The USSCP matrix (S0) represents the normal equations (plus a little extra information). If you sweep the first row of the S0 matrix, you solve the regression problem for an intercept-only model:

proc iml;
/*Intercept X1  X2  Y */
M  = {  1   1   1   1,    1   2   1   3,    1   3   1   3,
        1   1  -1   2,    1   2  -1   2,    1   3  -1   1};
S0 = M`*M;          print S0;   /* USSCP matrix */
 
/* sweep in 1st row (Intercept) */
S1 = sweep(S0, 1);   print S1[F=BEST6.];
The sweep operator applied to the first row of a USSCP matrix to obtain an intercept-only model

The first element of the fourth column (S1[1,4], outlined in red) is the parameter estimate for the intercept-only model. The last element (S1[4,4], outlined in blue) is the sum of squared errors (SSE) for the intercept-only model.

If you "sweep in" the second row, you solve the least squares problem for a model that includes the intercept and X1. The corresponding elements of the last column contain the parameter estimates for the model. The lower-right element again contains the SSE for the revised model. If you proceed to "sweep in" the third row, the last column again contains the parameter estimates and the SSE, this time for the model that includes the intercept, X1, and X2:

/* sweep in 2nd row (X1) and 3rd row (X2) */
S2 = sweep(S1, 2);   print S2[F=BEST6.]; 
S3 = sweep(S2, 3);   print S3[F=BEST6.];
The sweep operator applied to subsequent rows of a USSCP matrix to 'sweep in' additional effects

Sweeping out effects

One of the useful features of the sweep operator is that you can remove an effect from a model as easily as you can add it. The sweep operator has a reversibility property: if you sweep a row a second time you "undo" the first sweep. In symbols, the operator has the property that SWEEP( SWEEP(A,k), k) = A for any row k. For example, if you want to take the X1 variable out of the model, merely sweep on the second row again:

S4 = sweep(S3, 2);   print S4[F=BEST6.]; /* model with Intercept + X2 */
The sweep operator applied twice 'sweeps out' an effect from a regression model

Of course, as shown in an earlier section, you do not need to sweep in one row at a time. You can sweep in multiple rows with one call by specifying a vector of indices. In fact, the sweep operator also commutes with itself, which means that you can specify the rows in any order. The call SWEEP(S0, 1:3) is equivalent to SWEEP(S0, {3 1 2}) or SWEEP(S0, {2 3 1}) or any other permutation of {1 2 3}.
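
A quick numerical check of both properties follows; it assumes that the matrix S0 from the previous steps is still defined in the PROC IML session:

r = max(abs( sweep(sweep(S0,2), 2) - S0 ));          /* reversibility */
c = max(abs( sweep(S0, 1:3) - sweep(S0, {3 1 2}) )); /* commutativity */
print r c;   /* both values should be essentially zero */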

Summary

The SWEEP operator (which is implemented in the SWEEP function in SAS/IML) enables you to construct least squares model estimates from the USSCP matrix, which is a block matrix that contains the normal equations. You can "sweep in" rows to add effects to a model or "sweep out" rows to remove effects. After each operation, the parameter estimates appear in the portion of the adjusted USSCP matrix that corresponds to the block for X`Y. Furthermore, the residual sum of squares appears in the portion of the adjusted USSCP matrix that corresponds to Y`Y.

Further reading

The sweep operator is known by different names in different fields and has been rediscovered several times. In statistics, the operator was popularized in a TAS article by J. Goodnight (1979) who mentions Ralston (1960) and Beaton (1964) as early researchers. Outside of statistics, the operator is known as the Principal Pivot Transform (Tucker, 1960), gyration (Duffy, Hazony, and Morrison, 1966), or exchange (Stewart and Stewart, 1998). Tsatsomeros (2000) provides an excellent review of the literature and the history of the sweep operator.

The post The sweep operator: A fundamental operation in regression appeared first on The DO Loop.

April 2, 2018
 

As a general rule, when SAS programmers want to manipulate data row by row, they reach for the SAS DATA step. When the computation requires column statistics, the SQL procedure is also useful. When both row and column operations are required, the SAS/IML language is a powerful addition to a SAS programmer's toolbox.

I was reminded of this fact recently when a SAS programmer (possibly a student) asked how to "manually" perform the classic chi-square test for association in a two-way frequency table. The computation requires computing the means across rows and down columns, and the student was struggling with implementing the computations in the DATA step. This article illustrates how SAS/IML can simplify the rowwise and columnwise computations in the classic chi-square test.

The chi-square test for association in PROC FREQ

In SAS, the easy way to compute the chi-square test for association is to use PROC FREQ. The following data are from several examples in the PROC FREQ documentation. The data shows the hair color and eye color of 762 European children. The call to PROC FREQ computes the chi-square test and a cross-tabulation that displays the observed value, expected values (under the hypothesis that hair color and eye color are independent), and deviations, which are the "observed minus expected" values:

data Color;
   input Eyes $ Hair $ Count @@;
   label Eyes  ='Eye Color'
         Hair  ='Hair Color';
datalines;
blue  black   6  blue  dark   51  blue  fair   69
blue  medium 68  blue  red    28  brown black  16
brown dark   94  brown fair   90  brown medium 94
brown red    47  green dark   37  green fair   69
green medium 55  green red    38
;
 
title 'Eye and Hair Color of European Children';
proc freq data=Color;
   tables Eyes*Hair / nopercent norow nocol 
                      deviation expected chisq;
   weight Count;
run;

In the eye-by-hair table, each cell contains three values. The first value is the observed cell count, the second value is the expected cell count (assuming independence), and the third value is their difference, which is sometimes called the "deviation." The test statistic and p-value for the chi-square test are outlined in red. The test statistic is 20.92. The probability of observing that value from a random draw of a chi-square distribution with 8 degrees of freedom is 0.0073. Because that probability is so small, we reject the null hypothesis that hair color and eye color are independent.

Compute the chi-square test "manually" in SAS

The chi-square test on a 3 x 5 table is simple enough to compute by hand, but suppose you want to use SAS to validate or reproduce the numbers that PROC FREQ produces? This is a good programming exercise for students to make sure they understand the computations. The PROC FREQ documentation provides the formula for the test statistic by using the equation

χ² = Σi Σj (nij - eij)² / eij

where nij is the observed count in row i and column j and eij is the expected count, but there is nothing like programming a formula to ensure understanding.

The following steps indicate the (vectorized) computations that can be used to implement the chi-square test in SAS/IML.

  1. Use subscript reduction operators to compute the means for each row and column, and the grand mean for all cells.
  2. Use an outer product to form the table of expected values from the mean vectors.
  3. Compute the test statistic by using elementwise matrix operations.
  4. Use the CDF function to compute the p-value.
proc iml;
/* row means, column means, and overall mean */
cName = {"black" "dark" "fair" "medium" "red"};
rName = {"blue" "brown" "green"};
C = { 6  51  69  68  28,  
     16  94  90  94  47,   
      0  37  69  55  38};
colMarg = C[:, ];       /* mean of each column */
rowMarg = C[ ,:];       /* mean of each row */
grandMean = rowMarg[:]; /* grand mean (also C[:]) */
 
/* expected values under hypothesis of independence */
Expected  = rowMarg*colMarg / grandMean;   /* outer product */
Deviance = C - Expected;                   /* difference (observed-expected) for each cell */
/* chi square statistic: Sum(( Observed[i,j] - Expected[i,j])^2 / Expected[i,j] ) */
ChiSq = sum( Deviance##2 / Expected );     
DF = (nrow(C)-1) * (ncol(C)-1);
pvalue = 1 - cdf("ChiSq", ChiSq, DF);
 
print Expected[c=cName r=rName], Deviance[c=cName r=rName];
print ChiSq pvalue;
quit;

Notice that the program does not contain any loops, although the formulas contain double summations over the elements of the table. This is an example of "vectorizing" the computations, which means writing the computations as vector or matrix computations rather than scalar operations in a loop.

You can see that the 'Expected' matrix matches the PROC FREQ output for the expected values for each cell. Similarly, the 'Deviance' matrix matches the PROC FREQ output for the difference between observed and expected values. The test statistic is the sum of the ratios of the squared deviances and the expected values. A call to the CDF function computes the p-value.

In summary, you can use the high-level SAS/IML language to implement basic statistical tests such as the chi-square test for association in a two-way frequency table. Such an exercise enables students to understand the details of elementary statistical tests. For programmers who know the statistical details but who are new to the SAS/IML language, this short exercise provides a way to gain proficiency with vectorized programming techniques.

The post The chi-square test: An example of working with rows and columns in SAS appeared first on The DO Loop.

February 26, 2018
 

My article about the difference between CLASS variables and BY variables in SAS focused on SAS analytical procedures. However, the BY statement is also useful in the SAS DATA step where it is used to merge data sets and to analyze data at the group level. When you use the BY statement in the DATA step, the DATA step creates two temporary indicator variables for each variable in the BY statement. The names of these variables are FIRST.variable and LAST.variable, where variable is the name of a variable in the BY statement. For example, if you use the statement BY Sex, then the names of the indicator variables are FIRST.Sex and LAST.Sex.

This article gives several examples of using the FIRST.variable and LAST.variable indicator variables for BY-group analysis in the SAS DATA step. The first example shows how to compute counts and cumulative amounts for each BY group. The second example shows how to compute the time between the first and last visit of a patient to a clinic, as well as the change in a measured quantity between the first and last visit. BY-group processing in the DATA step is a fundamental operation that belongs in every SAS programmer's tool box.

Use FIRST. and LAST. variables to count the size of groups

The first example uses data from the Sashelp.Heart data set, which contains data for 5,209 patients in a medical study of heart disease. The data are distributed with SAS. The following DATA step extracts the Smoking_Status and Weight variables and sorts the data by the Smoking_Status variable:

proc sort data=Sashelp.Heart(keep=Smoking_Status Weight)
          out=Heart;
   by Smoking_Status;
run;

Because the data are sorted by the Smoking_Status variable, you can use the FIRST.Smoking_Status and LAST.Smoking_Status temporary variables to count the number of observations in each level of the Smoking_Status variable. (PROC FREQ computes the same information, but does not require sorted data.) When you use the BY Smoking_Status statement, the DATA step automatically creates the FIRST.Smoking_Status and LAST.Smoking_Status indicator variables. As its name implies, the FIRST.Smoking_Status variable has the value 1 for the first observation in each BY group and the value 0 otherwise. (More correctly, the value is 1 for the first record and for records for which the Smoking_Status variable is different than it was for the previous record.) Similarly, the LAST.Smoking_Status indicator variable has the value 1 for the last observation in each BY group and 0 otherwise.

The following DATA step defines a variable named Count and initializes Count=0 at the beginning of each BY group. For every observation in the BY group, the Count variable is incremented by 1. When the last record in each BY group is read, that record is written to the Count data set.

data Count;
   set Heart;                 /* data are sorted by Smoking_Status */
   BY Smoking_Status;         /* automatically creates indicator vars */
   if FIRST.Smoking_Status then
      Count = 0;              /* initialize Count at beginning of each BY group */
   Count + 1;                 /* increment Count for each record */
   if LAST.Smoking_Status;    /* output only the last record of each BY group */
run;
 
proc print data=Count noobs; 
   format Count comma10.;
   var Smoking_Status Count;
run;
Use FIRST.variable and LAST.variable to count the size of groups

The same technique enables you to accumulate values of a variable within a group. For example, you can accumulate the total weight of all patients in each smoking group by using the following statements:

if FIRST.Smoking_Status then
   cumWt = 0;
cumWt + Weight;
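
For completeness, the following sketch embeds those statements in a full DATA step; the data set name CumWeight is arbitrary:

data CumWeight;
   set Heart;                 /* sorted by Smoking_Status, as before */
   by Smoking_Status;
   if FIRST.Smoking_Status then
      cumWt = 0;              /* reset the accumulator for each group */
   cumWt + Weight;            /* accumulate the weight within the group */
   if LAST.Smoking_Status;    /* output only the group totals */
   keep Smoking_Status cumWt;
run;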

This same technique can be used to accumulate revenue from various sources, such as departments, stores, or regions.

Use FIRST. and LAST. variables to compute duration of treatment

Another common use of the FIRST.variable and LAST.variable indicator variables is to determine the length of time between a patient's first visit and his last visit. Consider the following DATA step, which defines the dates and weights for four male patients who visited a clinic as part of a weight-loss program:

data Patients;
informat Date date7.;
format Date date7. PatientID Z4.;
input PatientID Date Weight @@;
datalines;
1021 04Jan16  302  1042 06Jan16  285
1053 07Jan16  325  1063 11Jan16  291
1053 01Feb16  299  1021 01Feb16  288
1063 09Feb16  283  1042 16Feb16  279
1021 07Mar16  280  1063 09Mar16  272
1042 28Mar16  272  1021 04Apr16  273
1063 20Apr16  270  1053 28Apr16  289
1053 13May16  295  1063 31May16  269
;

For these data, you can sort by the patient ID and by the date of visit. After sorting, the first record for each patient contains the first visit to the clinic and the last record contains the last visit. You can subtract the patient's weight for these dates to determine how much the patient gained or lost during the trial. You can also use the INTCK function to compute the elapsed time between visits. If you want to measure time in days, you can simply subtract the dates, but the INTCK function enables you to compute duration in terms of years, months, weeks, and other time units.

proc sort data=Patients;
   by PatientID Date;
run;
 
data weightLoss;
   set Patients;
   BY PatientID;
   retain startDate startWeight;                 /* RETAIN the starting values */
   if FIRST.PatientID then do;
      startDate = Date; startWeight = Weight;    /* remember the initial values */
   end;
   if LAST.PatientID then do;
      endDate = Date; endWeight = Weight;
      elapsedDays = intck('day', startDate, endDate); /* elapsed time (in days) */
      weightLoss = startWeight - endWeight;           /* weight loss */
      AvgWeightLoss = weightLoss / elapsedDays;       /* average weight loss per day */
      output;                                         /* output only the last record in each group */
   end;
run;
 
proc print noobs; 
   var PatientID elapsedDays startWeight endWeight weightLoss AvgWeightLoss;
run;
Use FIRST.variable and LAST.variable to compute elapsed time and average quantities in a group

The output data set summarizes each patient's activities at the clinic, including his average weight loss and the duration of his treatment.

Some programmers think that the FIRST.variable and LAST.variable indicator variables require that the data be sorted, but that is not true. The temporary variables are created whenever you use a BY statement in a DATA step. You can use the NOTSORTED option on the BY statement to process records regardless of the sort order.
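
For example, the following sketch shows the syntax. With the NOTSORTED option, a new BY group begins whenever the value of Smoking_Status changes, so if the values are not grouped contiguously you get one count for each run of identical values rather than one count per level:

data CountNotSorted;
   set Sashelp.Heart(keep=Smoking_Status);
   by Smoking_Status NOTSORTED;   /* FIRST. and LAST. variables are created without sorting */
   if FIRST.Smoking_Status then Count = 0;
   Count + 1;
   if LAST.Smoking_Status;        /* one record per contiguous run of values */
run;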

Summary

In summary, the BY statement in the DATA step automatically creates two indicator variables. You can use the variables to determine the first and last record in each BY group. Typically the FIRST.variable indicator is used to initialize summary statistics and to remember the initial values of measurement. The LAST.variable indicator is used to output the result of the computations, which often includes simple descriptive statistics such as a sum, difference, maximum, minimum, or average values.

BY-group processing in the DATA step is a common topic that is presented at SAS conferences. Some authors use FIRST.BY and LAST.BY as the name of the indicator variables. For further reading, I recommend the paper "The Power of the BY Statement" (Choate and Dunn, 2007). SAS also provides several samples about BY-group processing in the SAS DATA step.

The post How to use FIRST.variable and LAST.variable in a BY-group analysis in SAS appeared first on The DO Loop.

February 14, 2018
 

When I first learned to program in SAS, I remember being confused about the difference between CLASS statements and BY statements. A novice SAS programmer recently asked when to use one instead of the other, so this article explains the difference between the CLASS statement and BY variables in SAS procedures.

The BY statement and the CLASS statement in SAS both enable you to specify one or more categorical variables whose levels define subgroups of the data. (For simplicity, we consider only a single categorical variable.) The primary difference is that the BY statement computes many analyses, each on a subset of the data, whereas the CLASS statement computes a single analysis of all the data. Specifically,

  • The BY statement repeats an analysis on every subgroup. The subgroups are treated as independent samples. If a BY variable defines k groups, the output will contain k copies of every table and graph, one copy for the first group, one copy for the second group, and so on.
  • The CLASS statement enables you to include a categorical variable as part of an analysis. Often the CLASS variable is used to compare the groups, such as in a t test or an ANOVA analysis. In regression models, the CLASS statement enables you to estimate parameters for the levels of a categorical variable, thereby estimating the effect of each level on the response. Another use of a CLASS variable is to define categories for a classification task, such as a discriminant analysis.

To illustrate the differences between an analysis that uses a BY statement and one that uses a CLASS statement, let's create a subset (called Cars) of the Sashelp.Cars data. The levels of the Origin variable indicate whether a vehicle is manufactured in "Asia", "Europe", or the "USA". For efficiency reasons, most classical SAS procedures require that you sort the data when you use a BY statement. Therefore, a call to PROC SORT creates a sorted version of the data called CarsSorted, which will be used for the BY-group analyses.

data Cars;
   set Sashelp.Cars;
   where cylinders in (4,6,8) and type ^= 'Hybrid'; 
run;
 
proc sort data=Cars out=CarsSorted; 
   by Origin; 
run;

Descriptive statistics for grouped data

When you generate descriptive statistics for groups of data, the univariate statistics are identical whether you use a CLASS statement or a BY statement. What changes is the way that the statistics are displayed. When you use the CLASS statement, you get one table that contains all statistics or one graph that shows the distribution of each subgroup. However, when you use the BY statement you get multiple tables and graphs.

The following statements use the CLASS statement to produce descriptive statistics. PROC UNIVARIATE displays one (paneled) graph that shows a comparative histogram for the vehicles that are made in Asia, Europe, and USA. PROC MEANS displays one table that contains descriptive statistics:

proc univariate data=Cars;
   class Origin;
   var Horsepower;
   histogram Horsepower / nrows=3; /* must use NROWS= to get panel */
   ods select histogram;
run;
 
proc means data=Cars N Mean Std;
   class Origin;
   var Horsepower Weight Mpg_Highway;
run;

In contrast, if you run a BY-group analysis on the levels of the Origin variable, you will see three times as many tables and graphs. Each analysis is preceded by a label that identifies each BY group. Notice that the BY-group analysis uses the sorted data.

proc means data=CarsSorted N Mean Std;
   by Origin;
   var Horsepower Weight Mpg_Highway;
run;

Always remember that the output from a BY statement is equivalent to the output from running the procedure multiple times on subsets of the data. For example, the previous statistics could also be generated by calling PROC MEANS three times, each call with a different WHERE clause, as follows:

proc means N Mean Std data=CarsSorted( where=(origin='Asia') );
   var Horsepower Weight Mpg_Highway;
run;
proc means N Mean Std data=CarsSorted( where=(origin='Europe') );
   var Horsepower Weight Mpg_Highway;
run;
proc means N Mean Std data=CarsSorted( where=(origin='USA') );
   var Horsepower Weight Mpg_Highway;
run;

In fact, if you ever find yourself repeating an analysis many times (perhaps by using a macro loop), you should consider whether you can rewrite your program to be more efficient by using a BY statement.

Comparing groups: Use the CLASS statement

As a general rule, you should use a CLASS statement when you want to compare or contrast groups. For example, the following call to PROC GLM performs an ANOVA analysis on the horsepower (response variable) for the three groups defined by the Origin variable. The procedure automatically creates a graph that displays three boxplots, one for each group. The procedure also computes parameter estimates for the levels of the CLASS variable (not shown).

proc glm data=Cars; /* by default, create graph with side-by-side boxplots */
   class Origin;
   model Horsepower = Origin / solution;
run;

You can specify multiple variables on the CLASS statement to include multiple categorical variables in a model. Any variables that are not listed on the CLASS statement are assumed to be continuous. Thus the following call to PROC GLM analyzes a model that has one continuous and one classification variable. The procedure automatically produces a graph that overlays the three regression curves on the data:

ods graphics /antialias=on;
title "CLASS Variable Regression: One Model with Multiple Parameters";
proc GLM data=Cars plots=FitPlot;
   class Origin;
   model Horsepower = Origin | Weight / solution;
   ods select ParameterEstimates ANCOVAPlot;
quit;

In contrast, if you use a BY statement, the Origin variable cannot be part of the model but is used only to subset the data. If you use a BY statement, you obtain three different models of the form Horsepower = Weight. You get three parameter estimates tables and three graphs, each showing one regression line overlaid on a subset of the data.
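
For comparison, the following sketch performs the BY-group version of the analysis, which fits three separate models of the form Horsepower = Weight. Recall that the BY statement requires the sorted data set CarsSorted:

proc glm data=CarsSorted;
   by Origin;
   model Horsepower = Weight / solution;
quit;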

Predicted values: CLASS variable versus BY variable

When you use a BY statement and fit three models of the form Horsepower = Weight, the procedure fits a total of six parameters. Notice that when you use the CLASS statement and fit the model Horsepower = Origin | Weight, you also fit six free parameters. It turns out that these two methods produce the same predicted values. In fact, you can combine the parameter estimates (for the GLM parameterization) for the CLASS model to obtain the parameter estimates from the BY-variable analysis: each parameter estimate for the BY-variable models is obtained as the sum of two estimates from the CLASS-variable analysis.

For many regression models, the predicted values for the BY-variable analyses are the same as for a particular model that uses a CLASS variable. As shown above, you can even see how the parameters are related when you use a GLM or reference parameterization. However, the CLASS variable formulation can fit models (such as the equal-slope model Horsepower = Origin Weight) that are not available when you use a BY variable to fit three separate models. Furthermore, the CLASS statement provides parameter estimates so that you can see the effect of the groups on the response variable. It is more difficult to compare the models that are produced by using the BY statement.
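
For reference, the equal-slope model that is mentioned above specifies the CLASS variable and the continuous variable as main effects (no interaction term), which fits a common slope and a separate intercept for each group:

proc glm data=Cars plots=none;
   class Origin;
   model Horsepower = Origin Weight / solution;
   ods select ParameterEstimates;
quit;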

Other CLASS-like statements in SAS

Some SAS procedures use other syntax to analyze groups. In particular, the SGPLOT procedure calls classification variables "group variables." If you want to overlay graphs for multiple groups, you can use the GROUP= option on many SGPLOT statements. (Some statements support the CATEGORY= option, which is similar.) For example, to replicate the two-variable regression analysis from PROC GLM, you can use the following statements in PROC SGPLOT:

proc sgplot data=Cars;
   reg y=Horsepower x=Weight / group=Origin; /* Horsepower = Origin | Weight */
run;

Summary

In summary, use the BY statement in SAS procedures when you want to repeat an analysis for every level of one or more categorical variables. The variables define the subsets but are not otherwise part of the analysis. In classical SAS procedures, the data must be sorted by the BY variables. A BY-group analysis can produce many tables and graphs, so you might want to suppress the ODS output and write the results to a SAS data set.

Use the CLASS statement when you want to include a categorical variable in a model. A CLASS statement often enables you to compare or contrast subgroups. For example, in regression models you can evaluate the relative effect of each level on the response variable.

In some cases, the BY statement and the CLASS statement produce identical statistics. However, the CLASS statement enables you to fit a wider variety of models.

The post The difference between CLASS statements and BY statements in SAS appeared first on The DO Loop.

August 14, 2017
 

When someone refers to the correlation between two variables, they are probably referring to the Pearson correlation, which is the standard statistic that is taught in elementary statistics courses. Elementary courses do not usually mention that there are other measures of correlation.

Why would anyone want a different estimate of correlation? Well, the Pearson correlation, which is also known as the product-moment correlation, uses empirical moments of the data (means and standard deviations) to estimate the linear association between two variables. However, means and standard deviations can be unduly influenced by outliers in the data, so the Pearson correlation is not a robust statistic.

A simple robust alternative to the Pearson correlation is called the Spearman rank correlation, which is defined as the Pearson correlation of the ranks of each variable. (If a variable contains tied values, replace those values by their average rank.) The Spearman rank correlation is simple to compute and conceptually easy to understand. Some advantages of the rank correlation are

  • The rank correlation is always in the interval [-1, 1]. For "tame" data, the Spearman and Pearson correlations are close to each other. In fact, if X and Y are bivariate normal random variables with Pearson correlation ρ, then the Spearman correlation is 6/π arcsin(ρ/2), which is very close to the identity function on [-1, 1].
  • The rank correlation is robust to outliers. For example, the data set X={1, 2, 2, 5} has the same ranks as the set Y={1, 2, 2, 500}. Therefore for any third variable Z, the rank correlation between X and Z is the same as the rank correlation between Y and Z.
  • The rank correlation is invariant under any monotonic increasing transformation of the data, such as LOG, EXP, and SQRT. In the previous example, the rank correlation between Z and X is the same as the rank correlation between Z and the log-transform of X, which is {log(1), log(2), log(2), log(5)}. This is in contrast to the Pearson correlation, which is only invariant under affine transformations with positive scaling factors (X → a*X + b, where a > 0).
  • The rank correlation can be used for any ordinal variable. For example, if the variable X has the ordinal values {"Very Unsatisfied", "Unsatisfied", "Satisfied", "Very Satisfied"}, and the variable Y has the ordinal values {"Low", "Medium", "High"}, then you can compute a rank correlation between X and Y.

Compute rank correlation in SAS

PROC CORR in SAS supports several measures of correlation, including the Pearson and Spearman correlations. For data without outliers, the two measures are often similar. For example, the following call to PROC CORR computes the Spearman rank correlation between three variables in the Sashelp.Class data set:

/* Compute PEARSON and SPEARMAN rank correlation by using PROC CORR in SAS */
proc corr data=sashelp.class noprob nosimple PEARSON SPEARMAN;
   var height weight age;
run;
Spearman rank correlation compared to the Pearson correlation in SAS

According to both statistics, these variables are very positively correlated, with correlations in the range [0.7, 0.88]. Notice that the rank correlations (the lower table) are similar to the Pearson correlations for these data. However, if the data contain outliers, the rank correlation estimate is less influenced by the magnitude of the outliers.

Compute rank correlation manually

As mentioned earlier, the Spearman rank correlation is conceptually easy to understand. It consists of two steps: compute the ranks of each variable and compute the Pearson correlation between the ranks. It is instructive to reproduce each step in the Spearman computation. You can use PROC RANK in SAS to compute the ranks of the variables, then use PROC CORR with the PEARSON option to compute the Pearson correlation of the ranks. If the data do not contain any missing values, then the following statements implement to two steps that compute the Spearman rank correlation:

/* Compute the Spearman rank correlation "manually" by explicitly computing ranks */
/* First compute ranks; use average rank for ties */
proc rank data=sashelp.class out=classRank ties=mean;
   var height weight age;
   ranks RankHeight RankWeight RankAge;
run;
 
/* Then compute Pearson correlation on the ranks */
proc corr data=classRank noprob nosimple PEARSON;
   var RankHeight RankWeight RankAge;
run;

The resulting table of correlations is the same as in the previous section and is not shown. Although PROC CORR can compute the rank correlation directly, it is comforting that these two steps produce the same answer. Furthermore, this two-step method can be useful if you decide to implement a rank-based statistic that is not produced by any SAS procedure. This two-step method is also the way to compute the Spearman correlation of character ordinal variables because PROC CORR does not analyze character variables. However, PROC RANK supports both character and numeric variables.

If you have missing values in your data, then make sure you delete the observations that contain missing values before you call PROC RANK. Equivalently, you can use a WHERE statement to omit the missing values. For example, you could insert the following statement into the PROC RANK statements:
   where height^=. & weight^=. & age^=.;

Compute rank correlation in SAS/IML software

In the SAS/IML language, the CORR function computes the Spearman rank correlation directly, as follows. The results are the same as the results from PROC CORR, and are not shown.

proc iml;
use sashelp.class;
   read all var {height weight age} into X;
close;
 
RankCorr = corr(X, "Spearman");  /* compute rank correlation */

If you ever need to compute a rank-based statistic manually, you can also use the RANKTIE function to compute the ranks of the elements in a numerical vector, such as
   ranktie(X[ ,1], "Mean");
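
For example, the following sketch (a continuation of the PROC IML session above) applies the RANKTIE function to each column of X and then computes the Pearson correlation of the ranks. The result should match corr(X, "Spearman"):

R = j(nrow(X), ncol(X));            /* allocate a matrix to hold the ranks */
do i = 1 to ncol(X);
   R[,i] = ranktie(X[,i], "Mean");  /* average ranks for tied values */
end;
ManualRankCorr = corr(R);           /* Pearson correlation of the ranks */
print ManualRankCorr;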

Summary

The Spearman rank correlation is a robust measure of the monotonic association between variables. It is related to the classical Pearson correlation because it is defined as the Pearson correlation between the ranks of the individual variables. It has several useful properties, including being robust to outliers and being invariant under monotonic increasing transformations of the data. For other measures of correlation that are supported in SAS, see the PROC CORR documentation.

The post What is rank correlation? appeared first on The DO Loop.

June 7, 2017
 

I have previously discussed how to define functions that safely evaluate their arguments and return a missing value if the argument is not in the domain of the function. The canonical example is the LOG function, which is defined only for positive arguments. For example, to evaluate the LOG function on a sequence of (possibly non-positive) values, you can use the following IF-THEN/ELSE logic:

data Log;
input x @@;
if x > 0 then 
   logX = log(x);
else 
   logX = .;
datalines;
-1 1 2 . 0 10  
;
 
proc print; run;

On SAS discussion forums, I sometimes see questions from people who try to use the IFN function to accomplish the same logic. That is, in place of the IF-THEN/ELSE logic, they try to use the following one-liner in the DATA step:

logX = ifn(x>0, log(x), .);

Although this looks like the same logic, there is a subtle difference. All three arguments to the IFN function are evaluated BEFORE the function is called, and the results of the evaluation are then passed to the function. For example, if x= -1, then the SAS DATA step does the following:

  1. Evaluate the Boolean expression x>0. When x= -1, the expression evaluates to 0.
  2. Evaluate the second argument log(x). This happens regardless of the result of evaluating the first expression. When x= -1, the expression log(x) is invalid and the SAS log will report
    NOTE: Invalid argument to function LOG
    NOTE: Mathematical operations could not be performed at the following places. The results of the operations have been set to missing values.
  3. Call the IFN function as logX = IFN(0, ., .), which results in assigning a missing value to logX.

In Step 2, SAS evaluates log(x) unconditionally for every value of x, which leads to out-of-domain errors when x is not positive. This is exactly the situation that the programmer was trying to avoid! In contrast, the IF-THEN/ELSE logic only evaluates log(x) when x is positive. Consequently, the SAS log is clean when you use the IF-THEN/ELSE statement.

There are plenty of situations in which the IFN function (and its cousin, the IFC function) is useful, but for testing out-of-domain conditions, use IF-THEN/ELSE logic instead.
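
For completeness, here is a hypothetical example of a situation in which IFN is a natural fit: both branch expressions are valid for every observation, so the eager evaluation of the arguments causes no problems. The data set Sales and the variable Revenue are made up for illustration:

data Bonus;
set Sales;                                  /* hypothetical input data */
/* both branches are valid for any value of Revenue */
bonus = ifn(Revenue > 10000, 0.05*Revenue, 100);
run;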

The post The IFN function versus the IF-THEN/ELSE statement in SAS appeared first on The DO Loop.

April 10, 2017
 

Many intervals in statistics have the form p ± δ, where p is a point estimate and δ is the radius (or half-width) of the interval. (For example, many two-sided confidence intervals have this form, where δ is proportional to the standard error.) Many years ago I wrote an article that mentioned that you can construct these intervals in the SAS/IML language by using a concatenation operator (|| or //). The concatenation creates a two-element vector, like this:

proc iml;
mu = 50; 
delta = 1.5;
CI = mu - delta  || mu + delta;   /* horizontal concatenation ==> 1x2 vector */

Last week it occurred to me that there is a simple trick that is even easier: use the fact that SAS/IML is a matrix-vector language to encode the "±" sign as a vector {-1, 1}. When SAS/IML sees a scalar multiplied by a vector, the result will be a vector:

CI = mu + {-1  1}*delta;         /* vector operation ==> 1x2 vector */
print CI;

You can extend this example to compute many intervals by using a single statement. For example, in elementary statistics we learn the "68-95-99.7 rule" for the normal distribution. The rule says that in a random sample drawn from a normal population, about 68% of the observations will be within 1 standard deviation of the mean, about 95% will be within 2 standard deviations, and about 99.7% will be within 3 standard deviations of the mean. You can construct those intervals by using a "multiplier matrix" whose first row is {-1, +1}, whose second row is {-2, +2}, and whose third row is {-3, +3}. The following SAS/IML statements construct the three intervals of the 68-95-99.7 rule for a normal population with mean 50 and standard deviation 8:

mu = 50;  sigma = 8;
m = {-1 1,
     -2 2,
     -3 3};
 
Intervals = mu + m*sigma;
ApproxPct = {"68%", "95%", "99.7"};
print Intervals[rowname=ApproxPct];

Just for fun, let's simulate a large sample from the normal population and empirically confirm the 68-95-99.7 rule. You can use the RANDFUN function to generate a random sample and use the BIN function to detect which observations are in each interval:

call randseed(12345);
n = 10000;                                /* sample size */
x = randfun(n, "Normal", mu, sigma);      /* simulate normal sample */
ObservedPct = j(3,1,0);
do i = 1 to 3;
   b = bin(x, Intervals[i,]);             /* b[i]=1 if x[i] in interval */
   ObservedPct[i] = sum(b) / n;           /* percentage of x in interval */
end;
 
results = Intervals || {0.68, 0.95, 0.997} || ObservedPct;
print results[colname={"Lower" "Upper" "Predicted" "Observed"}
              label="Probability of Normal Variate in Intervals: X ~ N(50, 8)"];

The simulation confirms the 68-95-99.7 rule. Remember that the rule is a mnemonic device. You can compute the exact probabilities by using the CDF function. In SAS/IML, the exact computation is p = cdf("Normal", m[,2]) - cdf("Normal", m[,1]);
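
For reference, the following sketch (a continuation of the same PROC IML session) prints the exact probabilities next to the approximate values from the mnemonic:

exactP = cdf("Normal", m[,2]) - cdf("Normal", m[,1]);   /* P(|Z| < k) for k=1,2,3 */
print exactP[rowname=ApproxPct format=8.4];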

In summary, the SAS/IML language provides an easy syntax to construct intervals that are symmetric about a central value. You can use a vector such as {-1, 1} to construct an interval of the form p ± δ, or you can use a k x 2 matrix to construct k symmetric intervals.

The post A simple trick to construct symmetric intervals appeared first on The DO Loop.