Statistical Programming

May 20, 2019
 

Recently I wrote about how to compute the Kolmogorov D statistic, which is used to determine whether a sample has a particular distribution. One of the beautiful facts about modern computational statistics is that if you can compute a statistic, you can use simulation to estimate the sampling distribution of that statistic. That means that instead of looking up critical values of the D statistic in a table, you can estimate the critical value by using empirical quantiles from the simulation.

This is a wonderfully liberating result! No longer are we statisticians constrained by the entries in a table in the appendix of a textbook. In fact, you could claim that modern computation has essentially killed the standard statistical table.

Obtain critical values by using simulation

Before we compute anything, let's recall a little statistical theory. If you get a headache thinking about null hypotheses and sampling distributions, you might want to skip the next two paragraphs!

When you run a hypothesis test, you compare a statistic (computed from data) to a hypothetical distribution (called the null distribution). If the observed statistic is way out in a tail of the null distribution, you reject the hypothesis that the statistic came from that distribution. In other words, the data does not seem to have the characteristic that you are testing for. Statistical tables use "critical values" to designate when a statistic is in the extreme tail. A critical value is a quantile of the null distribution; if the observed statistic is greater than the critical value, then the statistic is in the tail. (Technically, I've described a one-tailed test.)

One of the uses for simulation is to approximate the sampling distribution of a statistic when the true distribution is not known or is known only asymptotically. You can generate a large number of samples from the null hypothesis and compute the statistic on each sample. The union of the statistics approximates the true sampling distribution (under the null hypothesis) so you can use the quantiles to estimate the critical values of the null distribution.

Critical values of the Kolmogorov D distribution

You can use simulation to estimate the critical value for the Kolmogorov-Smirnov statistical test for normality. For the data in my previous article, the null hypothesis is that the sample data follow a N(59, 5) distribution. The alternative hypothesis is that they do not. The previous article computed a test statistic of D = 0.131 for the data (N = 30). If the null hypothesis is true, is that an unusual value to observe? Let's simulate 40,000 samples of size N = 30 from N(59,5) and compute the D statistic for each. Rather than use PROC UNIVARIATE, which computes dozens of statistics for each sample, you can use the SAS/IML computation from the previous article, which is very fast. The following simulation runs in a fraction of a second.

/* parameters of reference distribution: F = cdf("Normal", x, &mu, &sigma) */
%let mu    = 59;
%let sigma =  5;
%let N     = 30;
%let NumSamples = 40000;
 
proc iml;
call randseed(73);
N = &N;
i = T( 1:N );                           /* ranks */
u = i/N;                                /* ECDF height at right-hand endpoints */
um1 = (i-1)/N;                          /* ECDF height at left-hand endpoints  */
 
y = j(N, &NumSamples, .);               /* columns of Y are samples of size N */
call randgen(y, "Normal", &mu, &sigma); /* fill with random N(mu, sigma)      */
D = j(&NumSamples, 1, .);               /* allocate vector for results        */
 
do k = 1 to ncol(y);                    /* for each sample:                   */
   x = y[,k];                           /*    get sample x ~ N(mu, sigma)     */
   call sort(x);                        /*    sort sample                     */
   F = cdf("Normal", x, &mu, &sigma);   /*    CDF of reference distribution   */
   D[k] = max( F - um1, u - F );        /*    D = max( D_minus, D_plus )      */
end;
 
title "Monte Carlo Estimate of Sampling Distribution of Kolmogorov's D Statistic";
title2 "N = 30; N_MC = &NumSamples";
call histogram(D) other=
     "refline 0.131 / axis=x label='Sample D' labelloc=inside lineattrs=(color=red);";

The test statistic is right smack dab in the middle of the null distribution, so there is no reason to doubt that the sample is distributed as N(59, 5).

How big would the test statistic need to be to be considered extreme? To test the hypothesis at the α significance level, you can compute the 1 – α quantile of the null distribution. The following statements compute the critical value for α = 0.05 and N = 30:

/* estimate critical value as the 1 - alpha quantile */
alpha = 0.05;
call qntl(Dcrit_MC, D, 1-alpha);
print Dcrit_MC;

The estimated critical value for a sample of size 30 is 0.242. This compares favorably with the exact critical value from a statistical table, which gives Dcrit = 0.2417 for N = 30.

You can also use the null distribution to compute a p value for an observed statistic. The p value is estimated as the proportion of statistics in the simulation that exceed the observed value. For example, if you observe data that has a D statistic of 0.28, the estimated p value is obtained by the following statements:

Dobs = 0.28;                        /* hypothetical observed statistic */
pValue = sum(D >= Dobs) / nrow(D);  /* proportion of simulated statistics that exceed Dobs */
print Dobs pValue;

This same technique works for any sample size, N, although most tables provide critical values only for N ≤ 30. For N > 35, you can use an asymptotic formula, developed by Smirnov (1948), that depends only on α.
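A commonly cited large-sample approximation (quoted here from standard references, so treat the exact constants as an assumption to be checked against a table) is D_crit(α) ≈ sqrt( −ln(α/2) / (2N) ), which for α = 0.05 reduces to approximately 1.358/sqrt(N). The following DATA step evaluates the approximation for α = 0.05 and N = 30; it returns about 0.248, slightly larger than the tabulated 0.2417 because N = 30 is below the range where the approximation is intended to be used.

/* Large-sample approximation to the critical value (assumed form; verify against a table) */
data AsympCrit;
   alpha = 0.05;
   N     = 30;
   Dcrit = sqrt( -log(alpha/2) / (2*N) );   /* approximately 0.248 for alpha=0.05, N=30 */
run;
 
proc print data=AsympCrit noobs; run;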

The Kolmogorov D statistic does not depend on the reference distribution

It is reasonable to assume that the results of this article apply only to a normal reference distribution. However, Kolmogorov proved that the sampling distribution of the D statistic is actually independent of the reference distribution. In other words, the distribution (and critical values) are the same regardless of the continuous reference distribution: beta, exponential, gamma, lognormal, normal, and so forth. That is a surprising result, which explains why there is only one statistical table for the critical values of the Kolmogorov D statistic, as opposed to having different tables for different reference distributions.
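This distribution-free property is easy to check empirically. The following sketch is not part of the original article; it repeats the earlier simulation but draws each sample from an exponential distribution and measures D against the exponential CDF. Because the reference distribution is fully specified and equals the true distribution, the estimated 95th percentile should again be close to 0.242.

proc iml;
call randseed(54321);
N = 30;  NumSamples = 40000;
i = T(1:N);   u = i/N;   um1 = (i-1)/N;       /* ECDF heights, as before */
 
y = j(N, NumSamples, .);
call randgen(y, "Exponential");               /* samples from Exponential(1) */
D = j(NumSamples, 1, .);
 
do k = 1 to NumSamples;
   x = y[,k];
   call sort(x);
   F = cdf("Exponential", x);                 /* reference CDF is the true CDF */
   D[k] = max( F - um1, u - F );
end;
 
call qntl(Dcrit_Expo, D, 0.95);               /* expect a value near 0.242 */
print Dcrit_Expo;
quit;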

In summary, you can use simulation to estimate the critical values for the Kolmogorov D statistic. In a vectorized language such as SAS/IML, the entire simulation requires only about a dozen statements and runs extremely fast.

The post Critical values of the Kolmogorov-Smirnov test appeared first on The DO Loop.

May 15, 2019
 

Have you ever run a statistical test to determine whether data are normally distributed? If so, you have probably used Kolmogorov's D statistic. Kolmogorov's D statistic (also called the Kolmogorov-Smirnov statistic) enables you to test whether the empirical distribution of data is different than a reference distribution. The reference distribution can be a probability distribution or the empirical distribution of a second sample. Most statistical software packages provide built-in methods for computing the D statistic and using it in hypothesis tests. This article discusses the geometric meaning of the Kolmogorov D statistic, how you can compute it in SAS, and how you can compute it "by hand."

This article focuses on the case where the reference distribution is a continuous probability distribution, such as a normal, lognormal, or gamma distribution. The D statistic can help you determine whether a sample of data appears to be from the reference distribution. Throughout this article, the word "distribution" refers to the cumulative distribution.

What is the Kolmogorov D statistic?

The geometry of Kolmogorov's D statistic, which measures the maximum vertical distance between an empirical distribution and a reference distribution.

The letter "D" stands for "distance." Geometrically, D measures the maximum vertical distance between the empirical cumulative distribution function (ECDF) of the sample and the cumulative distribution function (CDF) of the reference distribution. As shown in the adjacent image, you can split the computation of D into two parts:

  • When the reference distribution is above the ECDF, compute the statistic D– as the maximal difference between the reference distribution and the ECDF. Because the reference distribution is monotonically increasing and because the empirical distribution is a piecewise-constant step function that changes at the data values, the maximal value of D– will always occur at a right-hand endpoint of the ECDF.
  • When the reference distribution is below the ECDF, compute the statistic D+ as the maximal difference between the ECDF and the reference distribution. The maximal value of D+ will always occur at a left-hand endpoint of the ECDF.

The D statistic is simply the maximum of D– and D+.

You can compare the statistic D to critical values of the D distribution, which appear in tables. If the statistic is greater than the critical value, you reject the null hypothesis that the sample came from the reference distribution.

How to compute Kolmogorov's D statistic in SAS

In SAS, you can use the HISTOGRAM statement in PROC UNIVARIATE to compute Kolmogorov's D statistic for many continuous reference distributions, such as the normal, beta, or gamma distributions. For example, the following SAS statements simulate 30 observations from a N(59, 5) distribution and compute the D statistic as the maximal distance between the ECDF of the data and the CDF of the N(59, 5) distribution:

/* parameters of reference distribution: F = cdf("Normal", x, &mu, &sigma) */
%let mu    = 59;
%let sigma =  5;
%let N     = 30;
 
/* simulate a random sample of size N from the reference distribution */
data Test;
call streaminit(73);
do i = 1 to &N;
   x = rand("Normal", &mu, &sigma);
   output;
end;
drop i;
run;
 
proc univariate data=Test;
   ods select Moments CDFPlot GoodnessOfFit;
   histogram x / normal(mu=&mu sigma=&sigma) vscale=proportion;  /* compute Kolmogorov D statistic (and others) */
   cdfplot x / normal(mu=&mu sigma=&sigma) vscale=proportion;    /* plot ECDF and reference distribution */
   ods output CDFPlot=ECDF(drop=VarName CDFPlot);                /* for later use, save values of ECDF */
run;

The computation shows that D = 0.131 for this simulated data. The plot at the top of this article shows that the maximum occurs for a datum that has the value 54.75. For that observation, the ECDF is farther away from the normal CDF than at any other location.

The critical value of the D distribution when N=30 and α=0.05 is 0.2417. Since D < 0.2417, you should not reject the null hypothesis. It is reasonable to conclude that the sample comes from the N(59, 5) distribution.

Although the computation is not discussed in this article, you can use PROC NPAR1WAY to compute the statistic when you have two samples and want to determine if they are from the same distribution. In this two-sample test, the geometry is the same, but the computation is slightly different because the reference distribution is itself an ECDF, which is a step function.
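For reference, a minimal sketch of the two-sample test is shown below. The data set Two and the variables Group and x are hypothetical; the EDF option requests the empirical-distribution-function tests, which include the two-sample Kolmogorov-Smirnov statistic.

/* Two-sample Kolmogorov-Smirnov test (hypothetical data set and variable names) */
proc npar1way data=Two edf;
   class Group;      /* two-level grouping variable */
   var x;            /* analysis variable */
run;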

Compute Kolmogorov's D statistic manually

Although PROC UNIVARIATE in SAS computes the D statistic (and other goodness-of-fit statistics) automatically, it is not difficult to compute the statistic from first principles in a vector language such as SAS/IML.

The key is to recall that the ECDF is a piecewise constant function that changes heights at the value of the observations. If you sort the data, the height at the i_th sorted observation is i / N, where N is the sample size. The height of the ECDF an infinitesimal distance before the i_th sorted observation is (i – 1)/ N. These facts enable you to compute D and D+ efficiently.

The following algorithm computes the D statistic:

  1. Sort the data in increasing order.
  2. Evaluate the reference distribution at the data values.
  3. Compute the pairwise differences between the reference distribution and the ECDF.
  4. Compute D as the maximum value of the pairwise differences.

The following SAS/IML program implements the algorithm:

/* Compute the Kolmogorov D statistic manually */
proc iml;
use Test;  read all var "x";  close;
N = nrow(x);                         /* sample size */
call sort(x);                        /* sort data */
F = cdf("Normal", x, &mu, &sigma);   /* CDF of reference distribution */
i = T( 1:N );                        /* ranks */
Dminus = F - (i-1)/N;
Dplus = i/N - F;
D = max(Dminus, Dplus);              /* Kolmogorov's D statistic */
print D;

The D statistic matches the computation from PROC UNIVARIATE.

The SAS/IML implementation is very compact because it is vectorized. By computing the statistic "by hand," you can perform additional computations. For example, you can find the observation for which the ECDF and the reference distribution are farthest away. The following statements find the index of the maximum for the Dminus and Dplus vectors. You can use that information to find the value of the observation at which the maximum occurs, as well as the heights of the ECDF and reference distribution. You can use these values to create the plot at the top of this article, which shows the geometry of the Kolmogorov D statistic:

/* compute locations of the maximum D+ and D-, then plot with a highlow plot */
i1 = Dminus[<:>];  i2 = Dplus [<:>];              /* indices of maxima */
/* Compute location of max, value of max, upper and lower curve values */
x1=x[i1];  v1=DMinus[i1];  Low1=(i[i1]-1)/N;  High1=F[i1]; 
x2=x[i2];  v2=Dplus [i2];  Low2=F[i2];        High2=i[i2]/N; 
Result = (x1 || v1 || Low1 || High1) // (x2 || v2 || Low2 || High2);
print Result[c={'x' 'Value' 'Low' 'High'} r={'D-' 'D+'} L='Kolmogorov D'];

The observations that maximize the D– and D+ statistics are x=54.75 and x=61.86, respectively. The value of D– is the larger value, so that is the value of Kolmogorov's D.

For completeness, the following statements show how to create the graph at the top of this article. The HIGHLOW statement in PROC SGPLOT is used to plot the vertical line segments that represent the D– and D+ statistics.

create KolmogorovD var {x x1 Low1 High1 x2 Low2 High2}; append; close;
quit;
 
data ALL;
   set KolmogorovD ECDF;   /* combine the ECDF, reference curve, and the D- and D+ line segments */
run;
 
title "Kolmogorov's D Statistic";
proc sgplot data=All;
   label CDFy = "Reference CDF" ECDFy="ECDF";
   xaxis grid label="x";
   yaxis grid label="Cumulative Probability" offsetmin=0.08;
   fringe x;
   series x=CDFx Y=CDFy / lineattrs=GraphReference(thickness=2) name="F";
   step x=ECDFx Y=ECDFy / lineattrs=GraphData1 name="ECDF";
   highlow x=x1 low=Low1 high=High1 / 
           lineattrs=GraphData2(thickness=4) name="Dm" legendlabel="D-";
   highlow x=x2 low=Low2 high=High2 / 
           lineattrs=GraphData3(thickness=4) name="Dp" legendlabel="D+";
   keylegend "F" "ECDF" "Dm" "Dp";
run;

Concluding thoughts

Why would anyone want to compute the D statistic by hand if PROC UNIVARIATE can compute it automatically? There are several reasons:

  • Understanding: When you compute a statistic manually, you are forced to understand what the statistic means and how it works.
  • Efficiency: When you run PROC UNIVARIATE, it computes many statistics (mean, standard deviation, skewness, kurtosis,...), many goodness-of-fit tests (Kolmogorov-Smirnov, Anderson-Darling, and Cramér–von Mises), a histogram, and more. That's a lot of work! If you are interested only in the D statistic, why needlessly compute the others?
  • Generalizability: You can compute quantities such as the location of the maximum value of D and D+. You can use the knowledge and experience you've gained to compute related statistics.

The post What is Kolmogorov's D statistic? appeared first on The DO Loop.

May 1, 2019
 

SAS regression procedures support several parameterizations of classification variables. When a categorical variable is used as an explanatory variable in a regression model, the procedure generates dummy variables that are used to construct a design matrix for the model. The process of forming columns in a design matrix is called a parameterization or encoding. In SAS, most regression procedures use either the GLM encoding, the EFFECT encoding, or the REFERENCE encoding. This article summarizes the default and optional encodings for each regression procedure in SAS/STAT. In many SAS procedures, you can use the PARAM= option to change the default encoding.

The documentation section "Parameterization of Model Effects" provides a complete list of the encodings in SAS and shows how the design matrices are constructed from the levels. (The levels are the values of a classification variable.) Pasta (2005) gives examples and further discussion.

Default and optional encodings for SAS regression procedures

The following SAS regression procedures support the CLASS statement or a similar syntax. The columns GLM, REFERENCE, and EFFECT indicate the three most common encodings. The word "Default" indicates the default encoding. For procedures that support the PARAM= option, the column indicates the supported encodings. The word All means that the procedure supports the complete list of SAS encodings. Most procedures default to using the GLM encoding; the exceptions are highlighted.

Procedure                  GLM       REFERENCE   EFFECT    PARAM=
ADAPTIVEREG                Default
ANOVA                      Default
BGLIMM                     Default   Yes         Yes       GLM | EFFECT | REF
CATMOD                                           Default
FMM                        Default
GAM                        Default
GAMPL                      Default   Yes                   GLM | REF
GEE                        Default
GENMOD                     Default   Yes         Yes       All
GLIMMIX                    Default
GLM                        Default
GLMSELECT                  Default   Yes         Yes       All
HP regression procedures   Default   Yes                   GLM | REF
HPMIXED                    Default
ICPHREG                    Default   Yes         Yes       All
LIFEREG                    Default
LOGISTIC                   Yes       Yes         Default   All
MIXED                      Default
ORTHOREG                   Default   Yes         Yes       All
PLS                        Default
PROBIT                     Default
PHREG                      Yes       Default     Yes       All
QUANTLIFE                  Default
QUANTREG                   Default
QUANTSELECT                Default   Yes         Yes       All
RMTSREG                    Default   Yes         Yes       All
ROBUSTREG                  Default
SURVEYLOGISTIC             Yes       Yes         Default   All
SURVEYPHREG                Default   Yes         Yes       All
SURVEYREG                  Default
TRANSREG                   Yes       Default     Yes

A few comments:

  • The REFERENCE encoding is the default for PHREG and TRANSREG.
  • The EFFECT encoding is the default for CATMOD, LOGISTIC, and SURVEYLOGISTIC.
  • The HP regression procedures all use the GLM encoding by default and support only PARAM=GLM or PARAM=REF. The HP regression procedures include HPFMM, HPGENSELECT, HPLMIXED, HPLOGISTIC, HPNLMOD, HPPLS, HPQUANTSELECT, and HPREG. In spite of its name, GAMPL is also an HP procedure. In spite of its name, HPMIXED is NOT an HP procedure!
  • PROC LOGISTIC and PROC HPLOGISTIC use different default encodings.
  • CATMOD does not have a CLASS statement because all variables are assumed to be categorical.
  • PROC TRANSREG does not support a CLASS statement. Instead, it uses a CLASS() transformation list. It uses different syntax to support parameter encodings.

How to interpret main effects for the SAS encodings

The GLM parameterization is a singular parameterization. The other encodings are nonsingular. The "Other Parameterizations" section of the documentation gives a simple one-sentence summary of how to interpret the parameter estimates for the main effects in each encoding:

  • The GLM encoding estimates the difference in the effect of each level compared to the reference level. You can use the REF= option to specify the reference level. By default, the reference level is the last ordered level. The design matrix for the GLM encoding is singular.
  • The REFERENCE encoding estimates the difference in the effect of each nonreference level compared to the effect of the reference level. You can use the REF= option to specify the reference level. By default, the reference level is the last ordered level. Notice that the REFERENCE encoding gives the same interpretation as the GLM encoding. The difference is that the design matrix for the REFERENCE encoding excludes the column for the reference level, so the design matrix for the REFERENCE encoding is (usually) nonsingular.
  • The EFFECT encoding estimates the difference in the effect of each nonreference level compared to the average effect over all levels.
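To request a particular encoding, most of the procedures in the preceding table accept the PARAM= option (and, for nonsingular encodings, the REF= option) on the CLASS statement. The following sketch uses a hypothetical data set Have with a response y and a classification variable Trt; it is intended only to show the syntax.

/* Request the REFERENCE encoding with the first level as the reference */
proc logistic data=Have;
   class Trt / param=ref ref=first;
   model y(event='1') = Trt;
run;
 
/* Request the EFFECT encoding in a procedure that defaults to GLM */
proc glmselect data=Have;
   class Trt / param=effect;
   model y = Trt / selection=none;
run;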

This article lists the various encodings that are supported for each SAS regression procedure. I hope you will find it to be a useful reference. If I've missed your favorite regression procedure, let me know in the comments.

The post Encodings of CLASS variables in SAS regression procedures: A cheat sheet appeared first on The DO Loop.

April 29, 2019
 

Did you know that SAS provides built-in support for working with probability distributions that are finite mixtures of normal distributions? This article shows examples of using the "NormalMix" distribution in SAS and describes a trick that enables you to easily work with distributions that have many components.

As with all probability distributions, there are four essential functions that you need to know: The PDF, CDF, QUANTILE, and RAND functions.

What is a normal mixture distribution?

A finite mixture distribution is a weighted sum of component distributions. When all of the components are normal, the distribution is called a mixture of normals. If the i_th component has parameters (μ_i, σ_i), then you can write the probability density function (PDF) of the normal mixture as
f(x) = Σ_i w_i φ(x; μ_i, σ_i)
where φ is the normal PDF and the positive constants w_i are the mixing weights. The mixing weights must sum to 1.

The adjacent graph shows the density function for a three-component mixture of normal distributions. The means of the components are -6, 3, and 8, respectively, and are indicated by vertical reference lines. The mixing weights are 0.1, 0.3, and 0.6. The SAS program to create the graph is in the next section.

The "NormalMix" distribution in SAS

The PDF and CDF functions in Base SAS support the "NormalMix" distribution. The syntax is a little unusual because the function needs to support an arbitrary number of components. If there are k components, the PDF and CDF functions require 3k + 3 parameters:

  1. The first parameter is the name of the distribution: "NormalMix". The second parameter is the value, x, at which to evaluate the density function.
  2. The third parameter is the number of component distributions, k > 1.
  3. The next k parameters specify the mixing weights, w_1, w_2, ..., w_k.
  4. The next k parameters specify the component means, μ_1, μ_2, ..., μ_k.
  5. The next k parameters specify the component standard deviations, σ_1, σ_2, ..., σ_k.

If you are using a model that has many components, it is tedious to explicitly list every parameter in every function call. Fortunately, there is a simple trick that prevents you from having to list the parameters. You can put the parameters into arrays and use the OF operator (sometimes called the OF keyword) to reference the parameter values in the array. This is shown in the next section.

The PDF and CDF of a normal mixture

The following example demonstrates how to compute the PDF and CDF for a three-component mixture-of-normals distribution. The DATA step shows two tricks:

  • The parameters (weights, means, and standard deviations) are stored in arrays. In the calls to the PDF and CDF functions, syntax such as OF w[*] enables you to specify the parameter values with minimal typing.
  • A normal density is extremely tiny outside of the interval [μ - 5*σ, μ + 5*σ]. You can use this fact to compute the effective domain for the PDF and CDF functions.
/* PDF and CDF of the normal mixture distribution. This example specifies three components. */
data NormalMix;
array w[3]     _temporary_ ( 0.1, 0.3, 0.6);  /* mixing weights */ 
array mu[3]    _temporary_ (-6,   3,   8);    /* mean for each component */
array sigma[3] _temporary_ (0.5,  0.6, 2.5);  /* standard deviation for each component */
 
/* For each component, the range [mu-5*sigma, mu+5*sigma] is the effective support. */
minX = 1e308; maxX = -1e308;                  /* initialize to extreme values */
do i = 1 to dim(mu);                          /* find largest interval where density > 1E-6 */
   minX = min(minX, mu[i] - 5*sigma[i]);
   maxX = max(maxX, mu[i] + 5*sigma[i]);
end;
 
/* Visualize the functions on the effective support. Use arrays and the OF operator to
   specify the parameters. An alternative syntax is to list the arguments, as follows:
   cdf = CDF('normalmix', x, 3, 
              0.1, 0.3, 0.6,    
             -6,   3,   8,    
              0.5, 0.6, 2.5);       */
dx = (maxX - minX)/200;
do x = minX to maxX by dx;
  pdf = pdf('normalmix', x, dim(mu), of w[*], of mu[*], of sigma[*]); 
  cdf = cdf('normalmix', x, dim(mu), of w[*], of mu[*], of sigma[*]); 
  output;
end;
keep x pdf cdf;
run;

As shown in the program, the OF operator greatly simplifies and clarifies the function calls. The alternative syntax, which is shown in the comments, is unwieldy.

The following statements create graphs of the PDF and CDF functions. The PDF function is shown at the top of this article. The CDF function, along with a few reference lines, is shown below.

title "PDF function for Normal Mixture Distribution";
title2 "Vertical Lines at Component Means";
proc sgplot data=NormalMix;
   refline -6 3 8 / axis=x;
   series x=x y=pdf;
run;
 
title "CDF function for Normal Mixture Distribution";
proc sgplot data=NormalMix;
   xaxis grid; yaxis min=0 grid; 
   refline 0.1 0.5 0.7 0.9;
   series x=x y=cdf;
run;

The quantiles of a normal mixture

The quantile function for a continuous distribution is the inverse of the CDF. The graph of the CDF function for a mixture of normals can have flat regions when the component means are far apart relative to their standard deviations. Technically, these regions are not completely flat because the normal distribution has infinite support, but computationally they can be very flat. Because finding a quantile is equivalent to finding the root of a shifted CDF, you might encounter computational problems if you try to compute the quantile that corresponds to an extremely flat region, such as the 0.1 quantile in the previous graph.

The following DATA step computes the 0.1, 0.5, 0.7, and 0.9 quantiles for the normal mixture distribution. Notice that you can use arrays and the OF operator for the QUANTILE function:

data Q;
array w[3]     _temporary_ ( 0.1, 0.3, 0.6);  /* mixing weights */ 
array mu[3]    _temporary_ (-6,   3,   8);    /* mean for each component */
array sigma[3] _temporary_ (0.5,  0.6, 2.5);  /* standard deviation for each component */
 
array p[4] (0.1, 0.5, 0.7, 0.9);  /* find quantiles for these probabilities */
do i = 1 to dim(p);
   prob = p[i];
   qntl = quantile('normalmix', prob, dim(mu), of w[*], of mu[*], of sigma[*]); 
   output;
end;
keep qntl prob;
run;
 
proc print; run;

The table tells you that 10% of the density of the normal mixture is less than x=-3.824. That is essentially the result of the first component, which has weight 0.1 and therefore is responsible for 10% of the total density. Half of the density is less than x=5.58. Fully 70% of the density lies to the left of x=8, which is the mean of the third component. That result makes sense when you look at the mixing weights.

Random values from a normal mixture

The RAND function does not explicitly support the "NormalMix" distribution. However, as I have shown in a previous article, you can simulate from an arbitrary mixture of distributions by using the "Table" distribution in conjunction with the component distributions. For the three-component mixture distribution, the following DATA step simulates a random sample:

/* random sample from a mixture distribution */
%let N = 1000;
data RandMix(drop=i);
call streaminit(12345);
array w[3]     _temporary_ ( 0.1, 0.3, 0.6);  /* mixing weights */ 
array mu[3]    _temporary_ (-6,   3,   8);    /* mean for each component */
array sigma[3] _temporary_ (0.5,  0.6, 2.5);  /* standard deviation for each component */
do obsNum = 1 to &N;
   i = rand("Table", of w[*]);                /* choose the component by using the mixing weights */
   x = rand("Normal", mu[i], sigma[i]);       /* sample from that component */
   output;
end;
run;
 
title "Random Sample for Normal Mixture Distribution";
proc sgplot data=RandMix;
   histogram x;
   refline -6 3 8 / axis=x;   /* means of component distributions */
run;

The histogram of a random sample looks similar to the graph of the PDF function, as it should.

In summary, SAS provides built-in support for working with the density (PDF), cumulative probability (CDF), and quantiles (QUANTILE) of a normal mixture distribution. You can use arrays and the OF operator to call these Base SAS functions without having to list every parameter. Although the RAND function does not natively support the "NormalMix" distribution, you can use the "Table" distribution to select a component according to the mixing weights and use the RAND("Normal") function to simulate from the selected normal component.

The post The normal mixture distribution in SAS appeared first on The DO Loop.

April 17, 2019
 

I think every course in exploratory data analysis should begin by studying Anscombe's quartet. Anscombe's quartet is a set of four data sets (N=11) that have nearly identical descriptive statistics but very different graphical properties. They are a great reminder of why you should graph your data. You can read about Anscombe's quartet on Wikipedia, or check out a quick visual summary by my colleague Robert Allison. Anscombe's first two examples are shown below:

The Wikipedia article states, "It is not known how Anscombe created his data sets." Really? Creating different data sets that have the same statistics sounds like a fun challenge! As a tribute to Anscombe, I decided to generate my own versions of the two data sets shown in the previous scatter plots. The first data set is linear with normal errors. The second is quadratic (without errors) and has the exact same linear fit and correlation coefficient as the first data.

Generating your own version of Anscombe's data

The Wikipedia article notes that there are "several methods to generate similar data sets with identical statistics and dissimilar graphics," but I did not look at the modern papers. I wanted to try it on my own. If you want to solve the problem on your own, stop reading now!

I used the following approach to construct the first two data sets:

  1. Use a simulation to create linear data with random normal errors: Y1 = 3 + 0.5 X + ε, where ε ~ N(0,1).
  2. Compute the regression estimates (b0 and b1) and the sample correlation for the linear data. These are the target statistics. They define three equations that the second data set must match.
  3. The response variable for the second data set is of the form Y2 = β_0 + β_1 X + β_2 X^2. There are three equations and three unknown parameters, so we can solve a system of nonlinear equations to find β.

From geometric reasoning, there are three different solutions for the β parameter: one with β_2 > 0 (a parabola that opens up), one with β_2 = 0 (a straight line), and one with β_2 < 0 (a parabola that opens down). Since Anscombe used a downward-pointing parabola, I will make the same choice.

Construct the first data set

You can construct the first data set in a number of ways, but I choose to construct it randomly. The following SAS/IML statements construct the data and define a helper function (LinearReg). The program then computes the target values, which are the parameter estimates for a linear regression and the sample correlation for the data:

proc iml;
call randseed(12345);
x = T( do(4, 14, 0.2) );                              /* evenly spaced X */
eps = round( randfun(nrow(x), "Normal", 0, 1), 0.01); /* normal error */
y = 3 + 0.5*x + eps;                                  /* linear Y + error */
 
/* Helper function. Return parameter estimates for linear regression. Args are col vectors */
start LinearReg(Y, tX);
   X = j(nrow(tX), 1, 1) || tX;
   b = solve(X`*X, X`*Y);       /* solve normal equation */
   return b;
finish;
 
targetB = LinearReg(y, x);          /* compute regression estimates */
targetCorr = corr(y||x)[2];         /* compute sample correlation */
print (targetB`||targetCorr)[c={'b0' 'b1' 'corr'} F=5.3 L="Target"];

You can use these values as the target values. The next step is to find a parameter vector β such that Y2 = β_0 + β_1 X + β_2 X^2 has the same regression line and corr(Y2, X) has the same sample correlation. For uniqueness, set β_2 < 0.

Construct the second data set

You can formulate the problem as a system of equations and use the NLPHQN subroutine in SAS/IML to solve it. (SAS supports multiple ways to solve a system of equations.) The following SAS/IML statements define two functions. Given any value for the β parameter, the first function returns the regression estimates and sample correlation between Y2 and X. The second function is the objective function for an optimization. It subtracts the target values from the estimates. The NLPHQN subroutine implements a hybrid quasi-Newton optimization routine that uses least squares techniques to find the β parameter that generates quadratic data that tries to match the target statistics.

/* Define system of simultaneous equations:
   https://blogs.sas.com/content/iml/2018/02/28/solve-system-nonlinear-equations-sas.html */
/* This function returns linear regression estimates (b0, b1) and correlation for a choice of beta */
start LinearFitCorr(beta) global(x);
   y2 = beta[1] + beta[2]*x + beta[3]*x##2;    /* construct quadratic Y */
   b = LinearReg(y2, x);      /* linear fit */
   corr = corr(y2||x)[2];     /* sample corr */
   return ( b` || corr);      /* return row vector */
finish;
 
/* This function returns the vector quantity (beta - target). 
   Find the value that minimizes Sum | F_i(beta) - Target_i |^2 */
start Func(beta) global(targetB, targetCorr);
   target = rowvec(targetB) || targetCorr;
   G = LinearFitCorr(beta) - target;
   return( G );              /* return row vector */
finish;
 
/* now let's solve for quadratic parameters so that same 
   linear fit and correlation occur */
beta0 = {-5 1 -0.1};         /* initial guess */
con = {.  .  .,              /* constraint matrix */
       0  .  0};             /* quadratic term is negative */
optn = ncol(beta0) || 0;     /* LS with 3 components || amount of printing */
/* minimize sum( beta[i] - target[i])**2 */
call nlphqn(rc, beta, "Func", beta0, optn) blc=con;  /* LS solution */
print beta[L="Optimal beta"];
 
/* How nearly does the solution solve the problem? Did we match the target values? */
Y2Stats = LinearFitCorr(beta);
print Y2Stats[c={'b0' 'b1' 'corr'} F=5.3];

The first output shows that the linear fit and correlation statistics for the linear and quadratic data are identical (to 3 decimal places). Anscombe would be proud! The second output shows the parameters for the quadratic response: Y2 = 4.955 + 2.566*X - 0.118*X^2. The following statements create scatter plots of the new Anscombe-like data:

y2 = beta[1] + beta[2]*x + beta[3]*x##2;
create Anscombe2 var {x y y2};  append;  close;
QUIT;
 
ods layout gridded columns=2 advance=table;
proc sgplot data=Anscombe2 noautolegend;
   scatter x=X y=y;    lineparm x=0 y=3.561 slope=0.447 / clip;
run;
proc sgplot data=Anscombe2 noautolegend;
   scatter x=x y=y2;   lineparm x=0 y=3.561 slope=0.447 / clip;
run;
ods layout end;

Notice that the construction of the second data set depends only on the statistics for the first data set. If you modify the first data set, the second will automatically adapt. For example, you could choose the errors manually instead of randomly, and the statistics for the second data set should still match.

What about the other data sets?

You can create the other data sets similarly. For example:

  • For the data set that consists of points along a line except for one outlier, there are three free parameters. Most of the points fall along the line Y3 = a + b*X and one point is at the height Y3 = c. Therefore, you can run an optimization to solve for the values of (a, b, c) that match the target statistics, as sketched in the example after this list.
  • I'm not sure how to formulate the requirements for the fourth data set. It looks like all but one point have coordinates (X, Y4), where X is a fixed value and the vertical mean of the cluster is c. The outlier has coordinate (a, b). I'm not sure whether the variance of the cluster is important.
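As a concrete illustration of the first bullet, the following self-contained sketch (not from the original article) solves for (a, b, c). The x values, the target statistics, and the initial guess are all assumed values chosen for illustration; with real targets you would plug in the statistics computed from the first data set.

/* Hypothetical sketch: 10 collinear points plus one outlier that match target statistics */
proc iml;
x = T( do(4, 13, 1) ) // 19;      /* 10 x values on a line; the 11th point is the outlier */
targetB    = {3, 0.5};            /* assumed target intercept and slope */
targetCorr = 0.82;                /* assumed target correlation */
 
start LinearReg(Y, tX);           /* same helper as before: regression estimates */
   X = j(nrow(tX), 1, 1) || tX;
   return( solve(X`*X, X`*Y) );
finish;
 
/* param = (a, b, c): y = a + b*x for the collinear points; the last point has height c */
start Func3(param) global(x, targetB, targetCorr);
   y3 = param[1] + param[2]*x;
   y3[nrow(x)] = param[3];                       /* replace the last point by the outlier */
   b = LinearReg(y3, x);
   corr = corr(y3||x)[2];
   return( (b` || corr) - (rowvec(targetB) || targetCorr) );  /* residuals to drive to 0 */
finish;
 
param0 = {2 0.4 10};              /* initial guess for (a, b, c) */
optn = 3 || 0;                    /* least squares with 3 residual components */
call nlphqn(rc, param, "Func3", param0, optn);
print param[c={'a' 'b' 'c'} L="Line-plus-outlier parameters"];
quit;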

Summary

In summary, you can solve a system of equations to construct data similar to Anscombe's quartet. By using this technique, you can create your own data sets that share descriptive statistics but look very different graphically.

To be fair, the technique I've presented does not enable you to reproduce Anscombe's quartet in its entirety. My data share a linear fit and sample correlation, whereas Anscombe's data share seven statistics!

Anscombe was a pioneer (along with Tukey) in using computation to assist statistical analysis. He was also enamored with the APL language. He introduced computers into the Yale statistics department in the mid-1960s. Since he published his quartet in 1973, it is possible that he used computers to assist his creation of the Anscombe quartet. However he did it, Anscombe's quartet is truly remarkable!

You can download the SAS program that creates the results in this article.

The post Create your own version of Anscombe's quartet: Dissimilar data that have similar statistics appeared first on The DO Loop.

April 8, 2019
 

Suppose you need to assign 100 patients equally among 3 treatment groups in a clinical study. Obviously, an equal allocation is impossible because the second number does not evenly divide the first, but you can get close by assigning 34 patients to one group and 33 to the others. Mathematically, this is a grade-school problem in integer division: simply assign floor(100/3) patients to each group, then deal with the remainders. The FLOOR function rounds numbers down.

The problem of allocating a discrete number of items to groups comes up often in computer programming. I call the solution the FLOOR-MOD trick because you can use the FLOOR function to compute the base number in each group and use the MOD function to compute the number of remainders. Although the problem is elementary, this article describes two programming tips that you can use in a vectorized computer language like SAS/IML:

  • You can compute all the group sizes in one statement. You do not need to use a loop to assign the remaining items.
  • If you enumerate the patients, you can directly assign consecutive numbers to each group. There is no need to deal with the remainders at the end of the process.

Assign items to groups, patients to treatments, or tasks to workers

Although I use patients and treatment groups for the example, I encountered this problem as part of a parallel programming problem where I wanted to assign B tasks evenly among k computational resources. In my example, there were B = 14 tasks and k = 3 resources.

Most computer languages support the FLOOR function for integer division and the MOD function for computing the remainder. The FLOOR-MOD trick works like this. If you want to divide B items into k groups, let A be the integer part of B / k and let C be the remainder, C = B - A*k. Then B = A*k + C, where C < k. In computer code, A = FLOOR(B/k) and C = MOD(B, k).

There are many ways to distribute the remaining items, but for simplicity let's give an extra item to each of the first C groups. For example, if B = 14 and k = 3, then A = floor(14/3) = 4 and C = 2. To allocate 14 items to 3 groups, give 4 items to all groups, and give an extra item to the first C=2 groups.

In a vector programming language, you can assign the items to each group without doing any looping, as shown in the following SAS/IML program:

/* Suppose you have B tasks that you want to divide as evenly as possible 
   among k Groups. How many tasks should you assign to the i_th group? */
proc iml;
/* B = total number of items or tasks (B >= 0, scalar)
   k = total number of groups or workers (k > 0, scalar)
   i = scalar or vector that specifies the group(s), max(i)<=k
   Return the number of items to allocate to the i_th group */
start AssignItems(B, k, i);
   n = floor(B / k) + (i <= mod(B, k));     /* the FLOOR-MOD trick */
   return(n);
finish;
 
/* Test the code. Assign 14 tasks to 3 Groups. */
nItems = 14;
nGroups = 3;
idx = T(1:nGroups);          /* get number of items for all groups */
Group = char(idx);           /* ID label for each group */
n = AssignItems(nItems, nGroups, idx);
print n[c={"Num Items"} r=Group L="Items per Group"];

The AssignItems function is a "one-liner." The interesting part of the AssignItems function is the binary expression i <= mod(B, k), which is valid even when i is a vector. In this example, the expression evaluates to the vector {1, 1, 0}, which assigns an extra item to each of the first two groups.

Which items are assigned to each group?

A related problem is figuring out which items get sent to which groups. In the example where B=14 and k=3, I want to put items 1-5 in the first group, items 6-10 in the second group, and items 11-14 in the last group. There is a cool programming trick, called the CUSUM-LAG trick, which enables you to find these indices. The following function is copied from my article on the CUSUM-LAG trick. After you find the number of items in each group, you can use the ByGroupIndices function to find the item numbers in each group:

/* Return kx2 matrix that contains the first and last elements for k groups that have sizes 
    s[1], s[2],...,s[k]. The i_th row contains the first and last index for the i_th group. */
start ByGroupIndices( s );
   size = colvec(s);              /* make sure you have a column vector */
   endIdx = cusum(size);          /* locations of ending index */
   beginIdx = 1 + lag(endIdx);    /* begin after each ending index ... */
   beginIdx[1] = 1;               /*    ...and at 1  */
   return ( beginIdx || endIdx );
finish;
 
/* apply the CUSUM-LAG trick to the item allocation problem */
GroupInfo = n || ByGroupIndices(n);
print GroupInfo[r=Group c={"NumItems" "FirstItem" "LastItem"}];

The table shows the number of items allocated to each group, as well as the item indices in each group. You can use the FLOOR-MOD trick to get the number of items and the CUSUM-LAG trick to get the indices. In a vector language, you can implement the entire computation without using any loops.

The post Use the FLOOR-MOD trick to allocate items to groups appeared first on The DO Loop.

April 1, 2019
 

Many SAS procedures support the BY statement, which enables you to perform an analysis for subgroups of the data set. Although the SAS/IML language does not have a built-in "BY statement," there are various techniques that enable you to perform a BY-group analysis. The two I use most often are the UNIQUE-LOC technique and the UNIQUEBY technique. The first is more intuitive, the second is more efficient. This article shows how to use SAS/IML to read and process BY-group data from a data set.

I previously showed that you can perform BY-group processing in SAS/IML by using the UNIQUEBY technique, so this article uses the UNIQUE-LOC technique. The statistical application is simulating clusters of data. If you have a SAS data set that contains the centers and covariance matrices for several groups of observations, you can then read that information into SAS/IML and simulate new observations for each group by using a multivariate normal distribution.

Matrix operations and BY groups

A BY-group analysis in SAS/IML usually starts with a SAS data set that contains a bunch of covariance or correlation matrices. A simple example is a correlation analysis of each species of flower in Fisher's iris data set. The BY-group variable is the species of iris: Setosa, Versicolor, or Virginica. The variables are measurements (in mm) of the sepals and petals of 150 flowers, 50 from each species. A panel of scatter plots for the iris data is shown to the right. You can see that the three species appear to be clustered. From the shapes of the clusters, you might decide to model each cluster by using a multivariate normal distribution.

You can use the OUTP= and COV options in PROC CORR to output mean and covariance statistics for each group, as follows:

proc corr data=sashelp.iris outp=CorrOut COV noprint;
   by Species;
   var Petal: Sepal:;
run;
 
proc print data=CorrOut(where=(_TYPE_ in ("N", "MEAN", "COV"))) noobs;
   where Species="Setosa";  /* just view information for one group */
   by Species  _Type_ notsorted;
   var _NAME_ Petal: Sepal:;
run;

The statistics for one of the groups (Species='Setosa') are shown. The number of observations in the group (N) is actually a scalar value, but it was replicated to fit into a rectangular data set.

Reading BY-group information into SAS/IML

This section reads the sample size, mean vector, and covariance matrix for all groups. A WHERE clause selects only the observations of interest:

/* Read in N, Mean, and Cov for each species. Use to create a parametric bootstrap 
   by simulating N[i] observations from a MVN(Mean[i], Cov[i]) distribution */
proc iml;
varNames = {'PetalLength' 'PetalWidth' 'SepalLength' 'SepalWidth'};
use CorrOut where (_TYPE_="N" & Species^=" ");       /* N */
read all var varNames into mN[rowname=Species];      /* read for all groups */
print mN[c=varNames];
 
use CorrOut where (_TYPE_="MEAN" & Species^=" ");    /* mean */
read all var varNames into mMean[rowname=Species];   /* read for all groups */
print mMean[c=varNames];
 
use CorrOut where (_TYPE_="COV" & Species^=" ");     /* covariance */
read all var varNames into mCov[rowname=Species];    /* read for all groups */
close;
print mCov[c=varNames];

The output (not shown) shows that the matrices mN, mMean, and mCov contain the vertical concatenation (for all groups) of the sample size, mean vectors, and covariance matrices, respectively.

The grouping variable is Species. You can use the UNIQUE function to get the unique (sorted) values of the groups. You can then iterate over the unique values and use the LOC function to extract only the rows of the matrices that correspond to the ith group. What you do with that information depends on your application. In the following program, the information for each group is used to create a random sample from a multivariate normal distribution that has the same size, mean, and covariance as the ith group:

/* Goal: Write random MVN values in X to data set */
X = j(1, ncol(varNames), .);         /* X variables will be numeric */
Spec = BlankStr(nleng(Species));     /* and Spec will be character */
create SimOut from X[rowname=Spec colname=varNames]; /* open for writing */
 
/* The BY-group variable is Species */
call randseed(12345);
u = unique(Species);                 /* get unique values for groups */
do i = 1 to ncol(u);                 /* iterate over unique values */
   idx = loc(Species=u[i]);          /* rows for the i_th group */
   N = mN[i, 1];                     /* extract scalar from i_th group */
   mu = mMean[i,];                   /* extract vector from i_th group */
   Sigma = mCov[idx,];               /* extract matrix from i_th group */
   /* The parameters for this group are now in N, mu, and Sigma.
      Do something with these values. */
   X = RandNormal(N, mu, Sigma);     /* simulate obs for the i_th group */
   X = round(X);                     /* measure to nearest mm */
   Spec = j(N, 1, u[i]);             /* ID for this sample */
   append from X[rowname=Spec];
end;
close SimOut;
quit;
 
ods graphics / attrpriority=none;
title "Parametric Bootstrap Simulation of Iris Data"; 
proc sgscatter data=SimOut(rename=(Spec=Species));
   matrix Petal: Sepal: / group=Species;
run;

The simulation has generated a new set of clustered data. If you compare the simulated data with the original, you will notice many statistical similarities.

Although the main purpose of this article is to discuss BY-group processing in SAS/IML, I want to point out that the simulation in this article is an example of the parametric bootstrap. Simulating Data with SAS (Wicklin, 2013) states that "the parametric bootstrap is nothing more than the process of fitting a model distribution to the data and simulating data from the fitted model." That is what happens in this program. The sample means and covariance matrices are used as parameters to generate new synthetic observations. Thus, the parametric bootstrap technique is really a form of simulation where the parameters for the simulation are obtained from the data.

In conclusion, sometimes you have many matrices in a SAS data set, each identified by a categorical variable. You can perform "BY-group processing" in SAS/IML by reading in all the matrices into a big matrix and then use the UNIQUE-LOC technique to iterate over each matrix.

The post Matrix operations and BY groups appeared first on The DO Loop.

February 13, 2019
 

When you use maximum likelihood estimation (MLE) to find the parameter estimates in a generalized linear regression model, the Hessian matrix at the optimal solution is very important. The Hessian matrix indicates the local shape of the log-likelihood surface near the optimal value. You can use the Hessian to estimate the covariance matrix of the parameters, which in turn is used to obtain estimates of the standard errors of the parameter estimates. Sometimes SAS programmers ask how they can obtain the Hessian matrix at the optimal solution. This article describes three ways:

  • For some SAS regression procedures, you can store the model and use the SHOW HESSIAN statement in PROC PLM to display the Hessian.
  • Some regression procedures support the COVB option ("covariance of the betas") on the MODEL statement. You can compute the Hessian as the inverse of that covariance matrix.
  • The NLMIXED procedure can solve general regression problems by using MLE. You can use the HESS option on the PROC NLMIXED statement to display the Hessian.

The next section discusses the relationship between the Hessian and the estimate of the covariance of the regression parameters. Briefly, they are inverses of each other. You can download the complete SAS program for this blog post.

Hessians, covariance matrices, and log-likelihood functions

The Hessian at the optimal MLE value is related to the covariance of the parameters. The literature that discusses this fact can be confusing because the objective function in MLE can be defined in two ways. You can maximize the log-likelihood function, or you can minimize the NEGATIVE log-likelihood.

In statistics, the inverse of the Hessian matrix at the MLE is related to the covariance matrix of the parameters. A full-rank covariance matrix is always positive definite. If you maximize the log-likelihood, then the Hessian and its inverse are both negative definite. Therefore, statistical software often minimizes the negative log-likelihood function. Then the Hessian at the minimum is positive definite and so is its inverse, which is an estimate of the covariance matrix of the parameters. Unfortunately, not every reference uses this convention.
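In symbols (this is the standard textbook relationship, not a quotation from this post): if ℓ(β) denotes the log-likelihood and β̂ the MLE, then

\[
\widehat{\operatorname{Cov}}(\hat{\beta}) \;\approx\; H^{-1},
\qquad
H \;=\; -\,\nabla^{2}\,\ell(\hat{\beta}),
\]

where H is the Hessian of the negative log-likelihood (the observed information matrix) evaluated at β̂. If you instead maximize ℓ, the Hessian at the maximum is −H, which is negative definite.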

For details about the MLE process and how the Hessian at the solution relates to the covariance of the parameters, see the PROC GENMOD documentation. For a more theoretical treatment and some MLE examples, see the Iowa State course notes for Statistics 580.

Use PROC PLM to obtain the Hessian

I previously discussed how to use the STORE statement to save a generalized linear model to an item store, and how to use PROC PLM to display information about the model. Some procedures, such as PROC LOGISTIC, save the Hessian in the item store. For these procedures, you can use the SHOW HESSIAN statement to display the Hessian. The following call to PROC PLM continues the PROC LOGISTIC example from the previous post. (Download the example.) The call displays the Hessian matrix at the optimal value of the log-likelihood. It also saves the "covariance of the betas" matrix in a SAS data set, which is used in the next section.

/* PROC PLM provides the Hessian matrix evaluated at the optimal MLE */
proc plm restore=PainModel;
   show Hessian CovB;
   ods output Cov=CovB;
run;

Not every SAS procedure stores the Hessian matrix when you use the STORE statement. If you request a statistic from PROC PLM that is not available, you will get a message such as the following: NOTE: The item store WORK.MYMODEL does not contain a Hessian matrix. The option in the SHOW statement is ignored.

Use the COVB option in a regression procedure

Many SAS regression procedures support the COVB option on the MODEL statement. As indicated in the previous section, you can use the SHOW COVB statement in PROC PLM to display the covariance matrix. A full-rank covariance matrix is positive definite, so the inverse matrix will also be positive definite. Therefore, the inverse matrix represents the Hessian at the minimum of the NEGATIVE log-likelihood function. The following SAS/IML program reads in the covariance matrix and uses the INV function to compute the Hessian matrix for the logistic regression model:

proc iml;
use CovB nobs p;                         /* open data; read number of obs (p) */
   cols = "Col1":("Col"+strip(char(p))); /* variable names are Col1 - Colp */
   read all var cols into Cov;           /* read COVB matrix */
   read all var "Parameter";             /* read names of parameters */
close;
 
Hessian = inv(Cov);                      /* Hessian and covariance matrices are inverses */
print Hessian[r=Parameter c=Cols F=BestD8.4];
quit;

You can see that the inverse of the COVB matrix is the same matrix that was displayed by using SHOW HESSIAN in PROC PLM. Be aware that the parameter estimates and the covariance matrix depend on the parameterization of the classification variables. The LOGISTIC procedure uses the EFFECT parameterization by default. However, if you instead use the REFERENCE parameterization, you will get different results. If you use a singular parameterization, such as the GLM parameterization, some rows and columns of the covariance matrix will contain missing values.
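For example, the following hedged sketch refits the model with the REFERENCE parameterization and requests the covariance matrix directly. The variable names follow the Neuralgia example that this post builds on; only the CLASS-statement option and the COVB option differ from the original call.

/* Refit with the REFERENCE encoding and request the covariance of the estimates */
proc logistic data=Neuralgia;
   class Sex Treatment / param=ref;
   model Pain(event='Yes') = Sex Age Duration Treatment / covb;
run;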

Define your own log-likelihood function

SAS provides procedures for solving common generalized linear regression models, but you might need to use MLE to solve a nonlinear regression model. You can use the NLMIXED procedure to define and solve general maximum likelihood problems. The PROC NLMIXED statement supports the HESS and COV options, which display the Hessian and covariance of the parameters, respectively.

To illustrate how you can get the covariance and Hessian matrices from PROC NLMIXED, let's define a logistic model and see if we get results that are similar to PROC LOGISTIC. We shouldn't expect to get exactly the same values unless we use exactly the same optimization method, convergence options, and initial guesses for the parameters. But if the model fits the data well, we expect that the NLMIXED solution will be close to the LOGISTIC solution.

The NLMIXED procedure does not support a CLASS statement, but you can use another SAS procedure to generate the design matrix for the desired parameterization. The following program uses the OUTDESIGN= option in PROC LOGISTIC to generate the design matrix. Because PROC NLMIXED requires a numerical response variable, a simple data step encodes the response variable into a binary numeric variable. The call to PROC NLMIXED then defines the logistic regression model in terms of a binary log-likelihood function:

/* output design matrix and EFFECT parameterization */
proc logistic data=Neuralgia outdesign=Design outdesignonly;
   class Pain Sex Treatment;
   model Pain(Event='Yes')= Sex Age Duration Treatment; /* use NOFIT option for design only */
run;
/* PROC NLMIXED requires a numeric response */
data Design;
   set Design;
   PainY = (Pain='Yes');  
run;
 
ods exclude IterHistory;
proc nlmixed data=Design COV HESS;
   parms b0 -18 bSexF bAge bDuration bTreatmentA bTreatmentB 0;
   eta    = b0 + bSexF*SexF + bAge*Age + bDuration*Duration +
                 bTreatmentA*TreatmentA + bTreatmentB*TreatmentB;
   p = logistic(eta);       /* or 1-p to predict the other category */
   model PainY ~ binary(p);
run;

Success! The parameter estimates and the Hessian matrix are very close to those that are computed by PROC LOGISTIC. The covariance matrix of the parameters, which requires taking an inverse of the Hessian matrix, is also close, although there are small differences from the LOGISTIC output.

Summary

In summary, this article shows three ways to obtain the Hessian matrix at the optimum for an MLE estimate of a regression model. For some SAS procedures, you can store the model and use PROC PLM to obtain the Hessian. For procedures that support the COVB option, you can use PROC IML to invert the covariance matrix. Finally, if you can define the log-likelihood equation, you can use PROC NLMIXED to solve for the regression estimates and output the Hessian at the MLE solution.

The post 3 ways to obtain the Hessian at the MLE solution for a regression model appeared first on The DO Loop.

February 11, 2019
 

Have you ever run a regression model in SAS, only to realize later that you forgot to specify an important option or to run some statistical test? Or maybe you intended to generate a graph that visualizes the model, but you forgot? Years ago, your only option was to modify the program and rerun it. Current versions of SAS support a less painful alternative: you can use the STORE statement in many SAS/STAT procedures to save the model to an item store. You can then use the PLM procedure to perform many post-modeling analyses, including performing hypothesis tests, displaying additional statistics, visualizing the model, and scoring the model on new data. This article shows four ways to use PROC PLM to obtain results from your regression model.

What is PROC PLM?

PROC PLM enables you to analyze a generalized linear model (or a generalized linear mixed model) long after you quit the SAS/STAT procedure that fits the model. PROC PLM was released with SAS 9.22 in 2010. This article emphasizes four features of PROC PLM:

  • You can use the SCORE statement to score the model on new data.
  • You can use the EFFECTPLOT statement to visualize the model.
  • You can use the ESTIMATE, LSMEANS, SLICE, and TEST statements to estimate parameters and perform hypothesis tests.
  • You can use the SHOW statement to display statistical tables such as parameter estimates and fit statistics.

For an introduction to PROC PLM, see "Introducing PROC PLM and Postfitting Analysis for Very General Linear Models" (Tobias and Cai, 2010). The documentation for the PLM procedure includes more information and examples.

To use PROC PLM you must first use the STORE statement in a regression procedure to create an item store that summarizes the model. The following procedures support the STORE statement: GEE, GENMOD, GLIMMIX, GLM, GLMSELECT, LIFEREG, LOGISTIC, MIXED, ORTHOREG, PHREG, PROBIT, SURVEYLOGISTIC, SURVEYPHREG, and SURVEYREG.
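
The STORE statement has the same form in each of these procedures. For example, here is a minimal sketch (the data set and item store name are chosen only for illustration) that stores a model from PROC GLM and then redisplays its parameter estimates with PROC PLM:

/* Hedged sketch: the STORE statement in PROC GLM, followed by PROC PLM */
proc glm data=sashelp.class;
   model Weight = Height Age;
   store GLMModel / label='Weight model for SASHELP.CLASS';
run; quit;
 
proc plm restore=GLMModel;
   show Parameters;        /* redisplay the parameter estimates from the stored model */
run;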

The example in this article uses PROC LOGISTIC to analyze data about pain management in elderly patients who have neuralgia. The example comes from the PROC LOGISTIC documentation, where PROC LOGISTIC fits the model and performs all the post-fitting analyses and visualization. In the following program, PROC LOGISTIC fits the model and stores it to an item store named PainModel. In practice, you might want to store the model to a permanent libref (rather than WORK) so that you can access the model days or weeks later.

Data Neuralgia;
   input Treatment $ Sex $ Age Duration Pain $ @@;
   datalines;
P F 68  1 No  B M 74 16 No  P F 67 30 No  P M 66 26 Yes B F 67 28 No  B F 77 16 No
A F 71 12 No  B F 72 50 No  B F 76  9 Yes A M 71 17 Yes A F 63 27 No  A F 69 18 Yes
B F 66 12 No  A M 62 42 No  P F 64  1 Yes A F 64 17 No  P M 74  4 No  A F 72 25 No
P M 70  1 Yes B M 66 19 No  B M 59 29 No  A F 64 30 No  A M 70 28 No  A M 69  1 No
B F 78  1 No  P M 83  1 Yes B F 69 42 No  B M 75 30 Yes P M 77 29 Yes P F 79 20 Yes
A M 70 12 No  A F 69 12 No  B F 65 14 No  B M 70  1 No  B M 67 23 No  A M 76 25 Yes
P M 78 12 Yes B M 77  1 Yes B F 69 24 No  P M 66  4 Yes P F 65 29 No  P M 60 26 Yes
A M 78 15 Yes B M 75 21 Yes A F 67 11 No  P F 72 27 No  P F 70 13 Yes A M 75  6 Yes
B F 65  7 No  P F 68 27 Yes P M 68 11 Yes P M 67 17 Yes B M 70 22 No  A M 65 15 No
P F 67  1 Yes A M 67 10 No  P F 72 11 Yes A F 74  1 No  B M 80 21 Yes A F 69  3 No
;
 
title 'Logistic Model on Neuralgia';
proc logistic data=Neuralgia;
   class Sex Treatment;
   model Pain(Event='Yes')= Sex Age Duration Treatment;
   store PainModel / label='Neuralgia Study';  /* or use mylib.PainModel for permanent storage */
run;
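
If you plan to restore the model days or weeks later, a hedged sketch of the permanent-library variation follows; the directory path is hypothetical, so substitute your own location:

/* Hedged sketch: store the model in a permanent library (the path is hypothetical) */
libname mylib 'C:\MyItemStores';
proc logistic data=Neuralgia;
   class Sex Treatment;
   model Pain(Event='Yes')= Sex Age Duration Treatment;
   store mylib.PainModel / label='Neuralgia Study';
run;
 
/* ...in a later SAS session... */
libname mylib 'C:\MyItemStores';
proc plm restore=mylib.PainModel;
   show Program;           /* confirm which program created the stored model */
run;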

The LOGISTIC procedure models the presence of pain based on a patient's medication (Drug A, Drug B, or placebo), gender, age, and duration of pain. After you fit the model and store it, you can use PROC PLM to perform all sorts of additional analyses, as shown in the subsequent sections.

Use PROC PLM to score new data

An important application of regression models is to predict the response variable for new data. The following DATA step defines three new patients. The first two are females who are taking Drug B. The third is a male who is taking Drug A:

/* 1.Use PLM to score future obs */
data NewPatients;
   input Treatment $ Sex $ Age Duration;
   datalines;
B F 63  5 
B F 79 16 
A M 74 12 
;
 
proc plm restore=PainModel;
   score data=NewPatients out=NewScore predicted LCLM UCLM / ilink; /* ILINK gives probabilities */
run;
 
proc print data=NewScore;
run;

The output shows the predicted pain level for the three patients. The younger woman is predicted to have a low probability (0.01) of pain. The model predicts a moderate probability of pain (0.38) for the older woman. The model predicts a 64% chance that the man will experience pain.

Notice that the PROC PLM statement does not use the original data. In fact, the procedure does not support a DATA= option but instead uses the RESTORE= option to read the item store. The PLM procedure cannot create plots or perform calculations that require the data because the data are not part of the item store.

Use PROC PLM to visualize the model

I've previously written about how to use the EFFECTPLOT statement to visualize regression models. The EFFECTPLOT statement has many options. However, because PROC PLM does not have access to the original data, the EFFECTPLOT statement in PROC PLM cannot add observations to the graphs.

Although the EFFECTPLOT statement is supported natively in the LOGISTIC and GENMOD procedures, it is not directly supported in other procedures such as GLM, MIXED, GLIMMIX, PHREG, or the SURVEY procedures. Nevertheless, because these procedures support the STORE statement, you can use the EFFECTPLOT statement in PROC PLM to visualize their models. The following statement uses the EFFECTPLOT statement to visualize the probability of pain for female and male patients who are taking each drug treatment:

/* 2. Use PROC PLM to create an effect plot */
proc plm restore=PainModel;
   effectplot slicefit(x=Age sliceby=Treatment plotby=Sex);
run;

The graphs summarize the model. For both men and women, the probability of pain increases with age. At a given age, the probability of pain is lower for the non-placebo treatments, and the probability is slightly lower for the patients who use Drug B as compared to Drug A. These plots are shown at the mean value of the Duration variable.
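
If you want to slice the model at a different value of Duration, my recollection is that the EFFECTPLOT statement supports an AT option for fixing covariates. The following is a hedged sketch (the value 20 is arbitrary); check the EFFECTPLOT documentation for the exact syntax:

/* Hedged sketch: fix Duration at an arbitrary value (20) instead of its mean.
   The AT option is my recollection of the EFFECTPLOT syntax; verify in the documentation. */
proc plm restore=PainModel;
   effectplot slicefit(x=Age sliceby=Treatment plotby=Sex) / at(Duration=20);
run;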

Use PROC PLM to compute contrasts and other estimates

One of the main purposes of PROC PLM is to perform postfit estimates and hypothesis tests. The simplest is a pairwise comparison that estimates the difference between two levels of a classification variable. For example, in the previous graph the probability curves for the Drug A and Drug B patients are close to each other. Is there a significant difference between the two effects? The following ESTIMATE statement estimates the (B vs A) effect. The EXP option exponentiates the estimate so that you can interpret the 'Exponentiated' column as the odds ratio between the drug treatments. The CL option adds confidence limits for the estimate of the odds ratio. The confidence interval for the odds ratio contains 1, so you cannot conclude that Drug B is significantly more effective than Drug A at reducing pain.

/* 3. Use PROC PLM to create contrasts and estimates */
proc plm restore=PainModel;
   /* 'Exponentiated' column is odds ratio between treatments */
   estimate 'Pairwise B vs A' Treatment 1 -1 / exp CL;
run;
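
The item store also supports the other postfitting statements that were listed earlier, such as LSMEANS and SLICE. My recollection is that those statements require the GLM parameterization of the CLASS variables, so the following hedged sketch refits and stores the model with PARAM=GLM (the item store name is hypothetical) before requesting pairwise LS-mean differences:

/* Hedged sketch: LSMEANS-type statements require the GLM parameterization (my recollection),
   so refit and store the model with PARAM=GLM. The item store name is hypothetical. */
proc logistic data=Neuralgia;
   class Sex Treatment / param=glm;
   model Pain(Event='Yes')= Sex Age Duration Treatment;
   store PainModelGLM;
run;
 
proc plm restore=PainModelGLM;
   lsmeans Treatment / diff cl;    /* pairwise differences of LS-means on the logit scale */
run;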

Use PROC PLM to display statistics from the analysis

One of the more useful features of PROC PLM is that you can use the SHOW statement to display tables of statistics from the original analysis. If you want to see the ParameterEstimates table again, you can do that (SHOW PARAMETERS). You can even display statistics that you did not compute originally, such as an estimate of the covariance of the parameters (SHOW COVB). Lastly, if you have the item store but have forgotten what program you used to generate the model, you can display the program (SHOW PROGRAM). The following statements demonstrate the SHOW statement. The results are not shown.

/* 4. Use PROC PLM to show statistics or the original program */
proc plm restore=PainModel;
   show Parameters COVB Program;
run;

Summary

In summary, the STORE statement in many SAS/STAT procedures enables you to store various regression models into an item store. You can use PROC PLM to perform additional postfit analyses on the model, including scoring new data, visualizing the model, hypothesis testing, and (re)displaying additional statistics. This technique is especially useful for long-running models, but it is also useful for confidential data because the data are not needed for the postfit analyses.

The post 4 reasons to use PROC PLM for linear regression models in SAS appeared first on The DO Loop.

January 28, 2019
 

This article shows how to use SAS to simulate data that fits a linear regression model that has categorical regressors (also called explanatory or CLASS variables). Simulating data is a useful skill for both researchers and statistical programmers. You can use simulation to answer research questions, but you can also use it to generate "fake" (or "synthetic") data for a presentation or blog post when the real data are confidential. I have previously shown how to use SAS to simulate data for a linear model and for a logistic model, but both articles used only continuous regressors in the model.

This discussion and SAS program are based on Chapter 11 in Simulating Data with SAS (Wicklin, 2013, p. 208-209). The main steps in the simulation are as follows. The comments in the SAS DATA step program indicate each step:

  1. Macro variables are used to define the number of continuous and categorical regressors. Another macro variable is used to specify the number of levels for each categorical variable. This program simulates eight continuous regressors (x1-x8) and four categorical regressors (c1-c4). Each categorical regressor in this simulation has three levels.
  2. The continuous regressors are simulated from a normal distribution, but you can use any distribution you want. The categorical levels are 1, 2, and 3, which are generated uniformly at random by using the "Integer" distribution. The discrete "Integer" distribution was introduced in SAS 9.4M5; for older versions of SAS, use the %RandBetween macro as shown in the article "How to generate random integers in SAS." You can also generate the levels non-uniformly by using the "Table" distribution, as shown in the short sketch after this list.
  3. The response variable, Y, is defined as a linear combination of some explanatory variables. In this simulation, the response depends linearly on the x1 and x8 continuous variables and on the levels of the C1 and C4 categorical variables. Noise is added to the model by using a normally distributed error term.
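
As a short hedged side sketch of step 2, the following DATA step uses the "Table" distribution to generate the levels 1, 2, and 3 with unequal probabilities (the probabilities 0.5, 0.3, and 0.2 are arbitrary):

/* Hedged side sketch: non-uniform categorical levels by using the "Table" distribution */
data TableDemo;
   call streaminit(54321);
   do k = 1 to 1000;
      c = rand("Table", 0.5, 0.3, 0.2);   /* P(c=1)=0.5, P(c=2)=0.3, P(c=3)=0.2 */
      output;
   end;
   drop k;
run;

The main simulation program follows: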
/* Program based on Simulating Data with SAS, Chapter 11 (Wicklin, 2013, p. 208-209) */
%let N = 10000;                  /* 1a. number of observations in simulated data */
%let numCont =  8;               /* number of continuous explanatory variables */
%let numClass = 4;               /* number of categorical explanatory variables */
%let numLevels = 3;              /* (optional) number of levels for each categorical variable */
 
data SimGLM; 
  call streaminit(12345); 
  /* 1b. Use macros to define arrays of variables */
  array x[&numCont]  x1-x&numCont;   /* continuous variables named x1, x2, x3, ... */
  array c[&numClass] c1-c&numClass;  /* CLASS variables named c1, c2, ... */
  /* the following statement initializes an array that contains the number of levels
     for each CLASS variable. You can hard-code different values such as (2, 3, 3, 2, 5) */
  array numLevels[&numClass] _temporary_ (&numClass * &numLevels);  
 
  do k = 1 to &N;                 /* for each observation ... */
    /* 2. Simulate value for each explanatory variable */ 
    do i = 1 to &numCont;         /* simulate independent continuous variables */
       x[i] = round(rand("Normal"), 0.001);
    end; 
    do i = 1 to &numClass;        /* simulate values 1, 2, ..., &numLevels with equal prob */
       c[i] = rand("Integer", numLevels[i]);        /* the "Integer" distribution requires SAS 9.4M5 */
    end; 
 
    /* 3. Simulate response as a function of certain explanatory variables */
    y = 4 - 3*x[1] - 2*x[&numCont] +                /* define coefficients for continuous effects */
       -3*(c[1]=1) - 4*(c[1]=2) + 5*c[&numClass]    /* define coefficients for categorical effects */
        + rand("Normal", 0, 3);                     /* normal error term */
    output;  
  end;  
  drop i k;
run;     
 
proc glm data=SimGLM;
   class c1-c&numClass;
   model y = x1-x&numCont c1-c&numClass / SS3 solution;
   ods select ModelANOVA ParameterEstimates;
quit;
Figure: Parameter estimates for synthetic (simulated) data that follows a regression model.

The ModelANOVA table from PROC GLM (not shown) displays the Type 3 sums of squares and indicates that the significant terms in the model are x1, x8, c1, and c4.

The parameter estimates from PROC GLM are shown to the right. You can see that each categorical variable has three levels, and you can use PROC FREQ to verify that the levels are uniformly distributed (a minimal PROC FREQ sketch appears after the following list). I have highlighted the parameter estimates for the significant effects in the model:

  • The estimates for the significant continuous effects are highlighted in red. The estimates for the coefficients of x1 and x8 are about -3 and -2, respectively, which are the parameter values that are specified in the simulation.
  • The estimates for the levels of the C1 CLASS variable are highlighted in green. They are close to (-3, -4, 0), which are the parameter values that are specified in the simulation.
  • The estimates for the Intercept and the C4 CLASS variable are highlighted in magenta. Notice that they seem to differ from the parameters in the simulation. As discussed previously, the simulation and PROC GLM use different parameterizations of the C4 effect. The simulation assigns Intercept = 4. The contribution of the first level of C4 is 5*1, the contribution for the second level is 5*2, and the contribution for the third level is 5*3. As explained in the previous article, the GLM parameterization reparameterizes the C4 effect as (4 + 15) + (5 - 15)*(C4=1) + (10 - 15)*(C4=2). The estimates are very close to these parameter values.
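
As mentioned before the list, a minimal PROC FREQ call verifies that the levels of the CLASS variables are approximately uniformly distributed:

/* Minimal sketch: check that the levels of c1-c4 are approximately uniform */
proc freq data=SimGLM;
   tables c1-c&numClass / nocum;
run;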

Although this program simulates a linear regression model, you can modify the program and simulate from a generalized linear model such as the logistic model. You just need to compute the linear predictor, eta (η), and then use the link function and the RAND function to generate the response variable, as shown in a previous article about how to simulate data from a logistic model.
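
To make that concrete, here is a hedged sketch that reuses the design-variable steps from the program above but generates a binary response from a logistic model. The coefficients in the linear predictor are arbitrary values chosen for illustration, and the sketch assumes that the macro variables from the previous program are still defined:

/* Hedged sketch: same design variables, but a binary response from a logistic model.
   The coefficients in the linear predictor are arbitrary (hypothetical) values. */
data SimLogistic;
  call streaminit(6789);
  array x[&numCont]  x1-x&numCont;
  array c[&numClass] c1-c&numClass;
  array numLevels[&numClass] _temporary_ (&numClass * &numLevels);
  do k = 1 to &N;
    do i = 1 to &numCont;   x[i] = round(rand("Normal"), 0.001);  end;
    do i = 1 to &numClass;  c[i] = rand("Integer", numLevels[i]); end;
    /* linear predictor with arbitrary coefficients; no additive error term */
    eta = -2 + 0.5*x[1] - 0.5*x[&numCont] + 1*(c[1]=1) - 1*(c[1]=2) + 0.8*(c[&numClass]=1);
    p = logistic(eta);               /* inverse logit link: p = 1/(1+exp(-eta)) */
    y = rand("Bernoulli", p);        /* binary response */
    output;
  end;
  drop i k eta p;
run;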

In summary, this article shows how to simulate data for a linear regression model in the SAS DATA step when the model includes both categorical and continuous regressors. The program simulates arbitrarily many continuous and categorical variables. You can define a response variable in terms of the explanatory variables and their interactions.

The post Simulate data for a regression model with categorical and continuous variables appeared first on The DO Loop.