data analysis

January 20, 2012

I encountered a wonderful survey article, "Robust statistics for outlier detection," by Peter Rousseeuw and Mia Hubert. Not only are the authors major contributors to the field of robust estimation, but the article is short and very readable. This blog post walks through the examples in the paper and shows how to compute each example by using SAS. In particular, this post shows how to compute robust estimates of location for univariate data. Future posts will show how to compute robust estimates of scale and multivariate estimates.

The Rousseeuw and Hubert article begins with a quote:

In real data sets, it often happens that some observations are different from the majority. Such observations are called outliers. ...They do not fit the model well. It is very important to be able to detect these outliers.

The quote explains why outlier detection is connected to robust estimation methods. Classical statistical estimators are so affected by the outliers that "the resulting fitted model does not allow [you] to detect the deviating observations." The goal of robust statistical methods is to "find a fit that is close to the fit [you] would have found without the [presence of] outliers." You can then identify the outliers by their large deviation from the robust model.

The simplest example is computing the "center" of a set of data, which is known as estimating location. Consider the following five measurements:
6.25, 6.27, 6.28, 6.34, 63.1
As the song says, one of these points is not like the other.... The last datum is probably a miscoding of 6.31.

Robust estimate of location in SAS/IML software

SAS/IML software contains several functions for robust estimation. For estimating location, the MEAN and MEDIAN functions are the primary computational tools. It is well known that the mean is sensitive to even a single outlier, whereas the median is not. The following SAS/IML statements compute the mean and median of these data:

proc iml;
x = {6.25, 6.27, 6.28, 6.34,  63.1};
mean = mean(x); /* or x[:] */
median = median(x);
print mean median;

The mean is not representative of the bulk of the data, but the median is.

Although the survey article doesn't mention it, there are two other robust estimators of location that have been extensively studied. They are the trimmed mean and the Winsorized mean:

trim = mean(x, "trimmed", 0.2);    /* 20% of obs */
winsor = mean(x, "winsorized", 1); /* one obs */
print trim winsor;

The trimmed mean is computed by excluding the k smallest and k largest values, and computing the mean of the remaining values. The Winsorized mean is computed by replacing the k smallest values with the (k+1)st smallest, and replacing the k largest values with the (k+1)st largest. The mean of these remaining values is the Winsorized mean. For both of these functions, you can specify either a number of observations to trim or Winsorize, or a percentage of values. Formulas for the trimmed and Winsorized means are included in the documentation of the UNIVARIATE procedure. If you prefer an example, here are the equivalent computations for the trimmed and Winsorized means:

trim2 = mean( x[2:4] );
winsor2 = mean( x[2] // x[2:4] // x[4] );
print trim2 winsor2;
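If you want to verify the arithmetic outside of SAS, here is a minimal sketch in Python using NumPy and SciPy (the data vector is the five-point example above; `trim_mean` cuts a proportion from each tail, and `winsorize` takes per-tail limits):

```python
import numpy as np
from scipy import stats

x = np.array([6.25, 6.27, 6.28, 6.34, 63.1])

mean = x.mean()                 # pulled far from the bulk of the data by 63.1
median = np.median(x)           # robust: the middle value, 6.28
trim = stats.trim_mean(x, 0.2)  # drop the smallest and largest (20% per tail)
# replace the extremes with their nearest neighbors, then average
winsor = stats.mstats.winsorize(x, limits=[0.2, 0.2]).mean()

print(mean, median, trim, winsor)
```

The trimmed mean is the mean of {6.27, 6.28, 6.34}, and the Winsorized mean is the mean of {6.27, 6.27, 6.28, 6.34, 6.34}, matching the TRIM2 and WINSOR2 computations above.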

Robust Estimates in the UNIVARIATE Procedure

The UNIVARIATE procedure also supports these robust estimators. The trimmed and Winsorized means are computed by using the TRIM= and WINSOR= options, respectively. Not only does PROC UNIVARIATE compute robust estimates, but it computes standard errors as shown in the following example.

data a;
input x @@;
datalines;
6.25 6.27 6.28 6.34 63.1
;
run;

proc univariate data=a trim=0.2 winsor=1;
   var x;
   ods select BasicMeasures TrimmedMeans WinsorizedMeans;
run;

Next time: robust estimates of scale.

tags: Data Analysis, Statistical Programming
January 13, 2012

A recent question on a SAS Discussion Forum was "how can you overlay multiple kernel density estimates on a single plot?" There are three ways to do this, depending on your goals and objectives.

Overlay different estimates of the same variable

Sometimes you have a single variable and want to overlay various density estimates, either parametric or nonparametric. You can use the HISTOGRAM statement in the UNIVARIATE procedure to accomplish this. The following SAS code overlays three kernel density estimates with different bandwidths on a histogram of the MPG_CITY variable in the SASHelp.Cars data set:

/* use UNIVARIATE to overlay different estimates of the same variable */
proc univariate data=sashelp.cars;
   var mpg_city;
   histogram / kernel(C=SJPI MISE 0.5); /* three bandwidths */
run;

In the same way, you can overlay various parametric estimates and combine parametric and nonparametric estimates.

Overlay estimates of different variables

Sometimes you might want to overlay the density estimates of several variables in order to compare their densities. You can use the KDE procedure to accomplish this by using the PLOTS=DENSITYOVERLAY option. The following SAS code overlays the density curves of two different variables: the miles per gallon for vehicles in the city and the miles per gallon for the same vehicles on the highway:

/* use KDE to overlay estimates of different variables */
proc kde data=sashelp.cars;
   univar mpg_city mpg_highway / plots=densityoverlay;
run;

Overlay arbitrary densities

Sometimes you might need to overlay density estimates that come from multiple sources. For example, you might use PROC UNIVARIATE to construct a parametric density estimate, but overlay it on a density estimate that you computed by using PROC KDE or that you computed yourself by writing an algorithm in PROC IML. In these cases, you want to write the density estimates to a data set, combine them with the DATA step, and plot them by using the SERIES statement in PROC SGPLOT.

There are three ways to get density estimates in a data set:

  • In PROC KDE, the UNIVAR statement has an OUT= option that you can use to write the density estimate to a SAS data set.
  • In PROC UNIVARIATE, the HISTOGRAM statement has an OUTKERNEL= option that you can use to write the kernel density estimate to a SAS data set.
  • For parametric estimates that are computed in PROC UNIVARIATE, you can use the ODS OUTPUT statement to save the ParameterEstimates table to a SAS data set. You can then use a DATA step in conjunction with the PDF function to create the (x,y) values along a parametric density curve.

For some of these situations, you might need to transpose a data set from a long format to a wide format. For extremely complicated graphs that overlay multiple density estimates on a histogram, you might need to use PROC SGRENDER and the Graphics Template Language (GTL).
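Outside of SAS, the same combine-then-overlay pattern can be sketched with SciPy's `gaussian_kde` (the simulated data here are illustrative stand-ins for MPG_City and MPG_Highway):

```python
import numpy as np
from scipy.stats import gaussian_kde

# illustrative stand-ins for mpg_city and mpg_highway
rng = np.random.default_rng(1)
city = rng.normal(20, 5, size=400)
highway = rng.normal(26, 6, size=400)

# evaluate both density estimates on one common grid, analogous to
# combining the OUT= data sets with the DATA step before plotting
grid = np.linspace(0, 50, 201)
dens_city = gaussian_kde(city)(grid)
dens_hwy = gaussian_kde(highway)(grid)
# (grid, dens_city) and (grid, dens_hwy) are now ready to overlay as two series
```

Each estimated density integrates to (approximately) 1 over the grid, which is a quick sanity check after any merge or transpose step.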

If you prefer to panel (rather than overlay) density estimates for different levels of a classification variable, the SAS & R blog shows an example that uses the SGPANEL procedure.

tags: Data Analysis, Statistical Programming
December 7, 2011
Yesterday, December 7, 1941, a date which will live in infamy...
- Franklin D. Roosevelt

Today is the 70th anniversary of the Japanese attack on Pearl Harbor. The very next day, America declared war.

During a visit to the Smithsonian National Museum of American History, I discovered the results of a 1939 poll that shows American opinions about war at the start of the European conflict. (I could not determine from the exhibit whether the poll was taken before or after the invasion of Poland by Germany in September 1939.) Although most Americans (83%) favored the Allies, more than 50% of those surveyed supported either providing no aid to either side (25%) or selling supplies to both sides (29%).

I was also amused by the 13.5% who thought the US should fight with the allies "if they are losing."

In addition to the wide range of opinions about whom to support in Europe, two statistical aspects of this table jumped out at me:

  • Why did the editor print "1/10 of 1%" for the "Help Germany" category? They could have printed 0.1%, or "less than 1%," or even lumped that response into the "others" category. I think the display was an intentional ploy to emphasize how little support there was for helping Germany.
  • Why did the editor use 13.5% when the rest of the table is rounded to the nearest percent? The value seems out of place. I suppose the editor did not want to round the value up to 14%, because then the total percentage would be 100.1%. But so what?

Why do you think the editor displayed the survey results as he did? Do you think this data would be better visualized as a graph, or does the table do a better job?

tags: Data Analysis, Statistical Thinking
December 2, 2011

Recently the "SAS Sample of the Day" was a Knowledge Base article with an impressively long title:

Sample 42165: Using a stored process to eliminate duplicate values caused by multiple group memberships when creating a group-based, identity-driven filter in SAS® Information Map Studio

"Wow," I thought. "This is the longest title on a SAS Sample that I have ever seen!"

This got me wondering whether anyone has run statistics on the SAS Knowledge Base. It would be interesting, I thought, to see a distribution of the length of the titles, to see words that appear most frequently in titles, and so forth.

I enlisted the aid of my friend Chris Hemedinger who is no dummy at reading data into SAS. A few minutes later, Chris had assembled a SAS data set that contained the titles of roughly 2,250 SAS Samples.

The length of titles

The first statistic I looked at was the length of the title, which you can compute by using the LENGTH function. A quick call to PROC UNIVARIATE and—presto!—the analysis is complete:

proc univariate data=SampleTitles;
   var TitleLength;
   histogram TitleLength;
run;

The table of basic statistical measures shows that the median title length is about 50 characters, with 50% of titles falling in the range 39–67 characters. Statistically speaking, a "typical" SAS Sample title has 50 characters, such as this one: "Calculating rolling sums and averages using arrays." A histogram of the title lengths indicates that the distribution has a long tail:

The shortest title is the pithy "Heat Maps," which contains only nine characters. The longest title is the mouth-filling behemoth mentioned at the beginning of this article, which tips the scales at an impressive 173 characters and crushes the nearest competitor, which has a mere 149 characters.

Frequency of words that appear most often in SAS Samples

The next task was to investigate the frequency of words in the titles. Which words appear most often? The visual result of this investigation is a Wordle word cloud, shown at the beginning of this article. (In the word cloud, capitalization matters, so Using and using both appear.) As you might have expected, SAS and PROC are used frequently, as are action words such as use/using and create/creating. Nouns such as data, variable, example, and documentation also appear frequently.

You can do a frequency analysis of the words in the titles by using the COUNTW, SCAN, and SUBSTR functions to decompose the titles into words. The following SAS code excludes certain simple words (such as "a," "the," and "to") and runs PROC FREQ to perform a frequency analysis on the words that remain. The UPCASE function is used to combine words that differ only in capitalization:

data words;
keep Word;
set SampleTitles;
length Word $20;
count = countw(title);
do i = 1 to count;
   Word = scan(title, i);
   if substr(Word,1,3)="SAS" then Word="SAS"; /* get rid of (R) symbol */
   if upcase(Word) NOT IN ("A" "THE" "TO" "WITH" "FOR" "IN" "OF"
           "AND" "FROM" "AN" "ON" "THAT" "OR" "WHEN" 
           "1" "2" "3" "4" "5" "6" "7" "8" "9")
      & Word NOT IN ("by" "By") then do;
      Word = upcase(Word);
      output;
   end;
end;
run;

proc freq data=words order=freq noprint;
tables Word / out=FreqOut(where=(count>=50));
run;

ods graphics / height=1200 width=750;
proc sgplot data=FreqOut;
dot Word / response=count categoryorder=respdesc;
xaxis values=(0 to 650 by 50) grid fitpolicy=rotate;
run;

As is often the case, the distribution of frequencies decreases quickly and then has a long tail. The graph shows the frequency counts of terms that appear in titles more than 50 times.
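For readers who want to experiment outside of SAS, the same word tabulation can be sketched in Python with `collections.Counter` (the titles and stop-word list here are illustrative, not the real Knowledge Base data):

```python
from collections import Counter
import re

# toy stand-ins for the sample titles
titles = [
    "Using arrays to calculate rolling sums in SAS",
    "Creating a histogram with PROC UNIVARIATE",
    "Using PROC FREQ to count distinct values",
]
stop_words = {"A", "THE", "TO", "WITH", "FOR", "IN", "OF", "AND",
              "FROM", "AN", "ON", "THAT", "OR", "WHEN", "BY"}

words = Counter()
for title in titles:
    for w in re.findall(r"[A-Za-z]+", title):
        # collapse SAS(R) variants, then uppercase to merge capitalizations
        w = "SAS" if w.upper().startswith("SAS") else w.upper()
        if w not in stop_words:
            words[w] += 1

print(words.most_common(3))
```

The `most_common` method plays the role of `ORDER=FREQ` in PROC FREQ: it returns the words sorted by descending count.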

tags: Data Analysis, Just for Fun, Statistical Graphics
November 18, 2011

Halloween night was rainy, so many fewer kids knocked on the door than usual. Consequently, I'm left with a big bucket of undistributed candy.

One evening as I treated myself to a mouthful of tooth decay, I chanced to open a package of Wonka® Bottle Caps. The package contained three little soda-flavored candies, and I was surprised to find that all three were the same flavor: grape. "Hey," I exclaimed to my youngest child, "they're all the same color! What do you think are the odds of that? You want to help me check some other packages?"

"Mom! Daddy's making me work on his blog," she complained.

After assuring her that she could devour the data when I finished my analysis, she helped me open the remaining packages of Bottle Caps and tabulate the flavors of the candies within.

The flavors of Bottle Caps candies are cola, root beer, grape, cherry, and orange. As we unwrapped the sugary data, my daughter and I hypothesized that the flavors of Bottle Caps were evenly distributed.

Although the hypothesis is the obvious one to make, it is not necessarily true for other candies. There have been dozens (hundreds?) of studies about the distribution of colors in M&M® candies. In fact, many math and statistics classes count the colors of M&M candies and compare the empirical counts to official percentages that are supplied by the manufacturer. The great thing about this classic project is that the colors of M&Ms are not uniformly distributed. Last time I checked, the official proportions were 30% brown, 20% yellow, 20% red, 10% orange, 10% green, and 10% blue.

So what about the Bottle Caps? There were 101 candies in 34 packages (one package contained only two candies). If our hypothesis is correct, the expected number of each flavor is 20.2 candies. We counted 23 Cherry, 15 Cola, 21 Grape, 25 Orange, and 17 Root Beer.

I entered the data into SAS for each package. (If you'd like to do further analysis, you can download the data.) Each package is a sample from a multinomial distribution, and I want to test whether the five flavors each have the same proportion, which is 1/5. I used the FREQ procedure to analyze the distribution of flavors and to run a chi-square test for equal proportions:

proc freq data=BottleCaps;
label Flavor = "Flavor";
weight Count;
tables Flavor / nocum chisq plots=DeviationPlot;
run;

The chi-square test gives a large p-value, so there is no statistical indication that the proportions of flavors of Bottle Caps are not uniform. All of the deviations can be attributed to random sampling.

The FREQ procedure can automatically produce plots as part of the analysis. For these data, I asked for a bar chart that shows the relative deviations of the observed frequencies (O) from the expected frequencies (E). The relative deviation is part of the chi-square computation, and is computed as (O-E)/E.
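You can reproduce both the chi-square statistic and the relative deviations outside of SAS; here is a sketch in Python with SciPy, using the flavor counts reported above:

```python
import numpy as np
from scipy.stats import chisquare

# Cherry, Cola, Grape, Orange, Root Beer
observed = np.array([23, 15, 21, 25, 17])
expected = np.full(5, observed.sum() / 5)   # 20.2 under equal proportions

stat, p = chisquare(observed, f_exp=expected)
rel_dev = (observed - expected) / expected  # the (O-E)/E relative deviations

print(stat, p)   # statistic about 3.41 with a large p-value
```

The large p-value (about 0.49 on 4 degrees of freedom) agrees with the PROC FREQ result: no evidence against equal flavor proportions.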

Although the Bottle Caps that I observed had fewer cola and root beer flavors than expected, the chi-square test shows that these deviations are not significant. Anyway, my favorite Bottle Caps are cherry and orange, so I think the sampling gods smiled on me during this experiment.

Conclusion? Looks like Wonka makes the same number of each flavor. Now back to my pile of sugary goodness. I wonder how many candies are in a typical box of Nerds...

tags: Data Analysis, Just for Fun
November 4, 2011

Being able to reshape data is a useful skill in data analysis. Most of the time you can use the TRANSPOSE procedure or the SAS DATA step to reshape your data. But the SAS/IML language can be handy, too.

I only use PROC TRANSPOSE a few times per year, so my skills never progress beyond the "beginner" stage. I always have to look up the syntax! Sometimes, when I am trying to meet a deadline, I resort to using the SAS/IML language, which I find more intuitive for reshaping data.

Recently I had data that contained two variables: a character categorical variable and a numerical variable. I wanted to reshape the data so that each level (category) of the categorical variable became a new variable, and I wanted to use the levels to name the new variables. (This is an example of converting data from a "long" description to a "wide" description.) Because the number of observations usually differs among categories, some of the new variables will have missing values.

A Canonical Example

A simple example is given by the Sashelp.Class data set. The SEX variable contains two values, F and M. There are several numerical variables; I'll use HEIGHT for this example. For these data, I want to reshape the data to create two variables named X_F and X_M, where the first variable contains the heights of the females and the second variable contains the heights of the males. The following DATA step shows one way to accomplish this:

data combo;
 keep x_F x_M;
 merge sashelp.class(where=(sex="F") rename=(height=x_F))
       sashelp.class(where=(sex="M") rename=(height=x_M));
run;

proc print; run;

Notice that the X_F variable contains a missing value because there are only nine females in the data, whereas there are ten males.

This technique works when you know the levels of the categorical variable, which you can discover by using PROC FREQ. However, what do you do if the categorical variable has dozens or hundreds of levels? It would be tedious to have to type in the generalization of this DATA step. I'd prefer to have code that works for an arbitrary categorical variable with k levels, and that automatically forms the names of the new variables as X_C1, X_C2, ..., X_Ck where C1, C2, ..., are the unique values of the categorical variable.

Obviously, this can be done, but I had a deadline to meet. Rather than mess with SAS macro, PROC SQL, and other tools that I do not use every day, I turned to the SAS/IML language.

SAS/IML to the Rescue

To reshape the data, I needed to do the following:

  1. Find the levels (unique values) of the categorical variable and count the number of observations in each level.
  2. Create the names of the new variables by appending the values of each level to the prefix "X_".
  3. Allocate a matrix large enough to hold the results.
  4. For the ith level of the categorical variable, copy the corresponding values from the continuous variable into the ith column of the matrix.

The following SAS/IML statements implement this algorithm:

proc iml;
use sashelp.class;
   read all var {sex} into C;
   read all var {height} into x;
/* TABULATE is SAS 9.3; you can also use UNIQUE + LOC */
call tabulate(u, count, C); /* 1. find unique values and count obs */
labels = "x_" + u;          /* 2. create new variable names */
y = j(max(count), ncol(count), .); /* 3. allocate result matrix */
do i = 1 to ncol(count);           /* 4. for each level... */
   y[1:count[i], i] = x[loc(C=u[i])]; /* copy values into i_th column */
end;
print y[colname=labels];

The y matrix contains the desired values. Each column corresponds to a level of the categorical variable. Notice how the LOC function (also known as "the most useful function you've never heard of") is used to identify the observations for each category; the output from the LOC function is used to extract the corresponding values of x.

The program not only works for the Sashelp.Class data, it works in general. The TABULATE function (new in SAS 9.3) finds the unique values of the categorical variable and counts the number of observations for each value. If you do not have SAS 9.3, you can use the UNIQUE-LOC technique to obtain these values (see Section 3.3.5 of my book, Statistical Programming with SAS/IML Software). The remainder of the program is written in terms of these quantities. So, for example, if I want to reshape the MPG_CITY variable in the Sashelp.Cars data according to levels of the ORIGIN variable, all I have to do is change the first few lines of the program:

   read all var {origin} into C;
   read all var {mpg_city} into x;
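The same long-to-wide reshaping can be sketched in pandas: number the observations within each level, then pivot the levels into columns (the data here are illustrative, with unequal group sizes as in the Sashelp.Class example):

```python
import pandas as pd

# long data: a categorical variable and a measurement, unequal group sizes
df = pd.DataFrame({
    "sex":    ["F", "F", "F", "M", "M", "M", "M"],
    "height": [60.5, 62.8, 61.2, 63.5, 64.8, 69.0, 57.3],
})

# number observations within each level, pivot levels to columns,
# and prefix the new variable names with "x_"
wide = (df.assign(row=df.groupby("sex").cumcount())
          .pivot(index="row", columns="sex", values="height")
          .add_prefix("x_"))

print(wide)   # columns x_F and x_M; x_F has a missing value in the last row
```

Because the "F" group is smaller, the x_F column is padded with a missing value, just like the X_F variable in the DATA step example.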

Obviously, this could also be made into a macro or a SAS/IML module, where the data set name, the name of the categorical variable, and the name of the numerical variable are included as parameters. It is also straightforward to support numerical categorical variables.

Yes, it can be done with Base SAS...

Although the SAS/IML solution is short, simple, and does not require any macro shenanigans, I might as well provide a generalization of the earlier DATA step program. The trick is to use PROC SQL to compute the number of levels and the unique values of the categorical variable, and to put this information into SAS macro variables, like so:

proc sql noprint;
select strip(put(count(distinct sex),8.)) into :Count 
       from sashelp.class;
select distinct sex into :C1- :C&Count
       from sashelp.class;
quit;

The macro variable Count contains the number of levels, and the macro variables C1, C2,... contain the unique values. Using these quantities, you can write a short macro loop that generalizes the KEEP and MERGE statements in the original DATA step:

/* create the string x_C1 x_C2 ... where C_i are unique values */
%macro keepvars;
   %do i=1 %to &Count; x_&&C&i %end;
%mend;

/* create the various data sets to merge together, and create 
   variable names x_C1 x_C2 ... where C_i are unique values */
%macro mergevars;
   %do i=1 %to &Count; 
      sashelp.class(where=(sex="&&C&i") rename=(height=x_&&C&i))
   %end;
%mend;

data combo;
 keep %keepvars;
 merge %mergevars;
run;

Again, this could be made into a macro where the data set name, the name of the categorical variable, and the name of the numerical variable are included as parameters.

I'd like to thank my colleagues Jerry and Jason for ideas that led to the formulation of the preceding macro code. My colleagues also suggested several other methods for accomplishing the same task. I invite you to post your favorite technique in the comments.

Addendum (11:00am): Several people asked why I want to do this. The reason is that some procedures do not support a classification variable (or don't handle classification variables the way I want). By using this transformation, you can create multiple variables and have a procedure operate on those. For example, the DENSITY statement in PROC SGPLOT does not support a GROUP= option, but you can use this trick to overlay the densities of subgroups.

In response to this post, several other techniques in Base SAS were submitted to the SAS-L discussion forum. I particularly like Nat Wooding's solution, which uses PROC TRANSPOSE.

tags: Data Analysis, Reading and Writing Data, Statistical Programming
October 28, 2011

"I think that my data are exponentially distributed, but how can I check?"

I get asked that question a lot. Well, not specifically that question. Sometimes the question is about the normal, lognormal, or gamma distribution. A related question is "Which distribution does my data have," which was recently discussed by John D. Cook on his blog.

Regardless of the exact phrasing, the questioner wants to know "What methods are available for checking whether a given distribution fits the data?" In SAS, I recommend the UNIVARIATE procedure. It supports three techniques that are useful for comparing the distribution of data to some common distributions: goodness-of-fit tests, overlaying a curve on a histogram of the data, and the quantile-quantile (Q-Q) plot. (Some people drop the hyphen and write "the QQ plot.")

Constructing a Q-Q Plot for any distribution

The UNIVARIATE procedure supports many common distributions, such as the normal, exponential, and gamma distributions. In SAS 9.3, the UNIVARIATE procedure supports five new distributions. They are the Gumbel distribution, the inverse Gaussian (Wald) distribution, the generalized Pareto distribution, the power function distribution, and the Rayleigh distribution.

But what if you want to check whether your data fits some distribution that is not supported by PROC UNIVARIATE? No worries, creating a Q-Q plot is easy, provided you can compute the quantile function of the theoretical distribution. The steps are as follows:

  1. Sort the data.
  2. Compute n evenly spaced points in the interval (0,1), where n is the number of data points in your sample.
  3. Compute the quantiles (inverse CDF) of the evenly spaced points.
  4. Create a scatter plot of the sorted data versus the quantiles computed in Step 3.

If the data are in a SAS/IML vector, the following statements carry out these steps:

proc iml;
y = {1.7, 1.0, 0.5, 3.5, 1.9, 0.7, 0.4, 
     5.1, 0.2, 5.6, 4.6, 2.8, 3.8, 1.4, 
     1.6, 0.9, 0.3, 0.4, 1.9, 0.5};
n = nrow(y);
call sort(y, 1); /* 1 */
v = ((1:n) - 0.375) / (n + 0.25);  /* 2 (Blom, 1958) */
q = quantile("Exponential", v, 2); /* 3 */

If you plot the data (y) against the quantiles of the exponential distribution (q), you get the following plot:

"But, Rick," you might argue, "the plotted points fall neatly along the diagonal line only because you somehow knew to use a scale parameter of 2 in Step 3. What if I don't know what parameter to use?!"

Ahh, but that is the beauty of the Q-Q plot! If you plot the data against the standardized distribution (that is, use a unit scale parameter), then the slope of the line in a Q-Q plot is an estimate of the unknown scale parameter for your data! For example, modify the previous SAS/IML statements so that the quantiles of the exponential distribution are computed as follows: q = quantile("Exponential", v); /* 3 */

The resulting Q-Q plot shows points that lie along a line with slope 2, which implies that the data are approximately exponentially distributed with a scale parameter close to 2.

Choice of quantiles for the theoretical distribution

The Wikipedia article on Q-Q plots states, "The choice of quantiles from a theoretical distribution has occasioned much discussion." Wow, is that an understatement! Literally dozens of papers have been written on this topic. SAS uses a formula suggested by Blom (1958): (i - 3/8) / (n + 1/4), i=1,2,...,n. Another popular choice is (i-0.5)/n, or even i/(n+1). For large n, the choices are practically equivalent. See O. Thas (2010), Comparing Distributions, p. 57–59 for a discussion of various choices. In SAS, you can use the RANKADJ= and NADJ= options to accommodate different choices.

Repeating the construction by using the DATA step

These computations are simple enough to perform by using the DATA step and PROC SORT. For completeness, here is the SAS code:

data A;
input y @@;
datalines;
1.7 1.0 0.5 3.5 1.9 0.7 0.4 
5.1 0.2 5.6 4.6 2.8 3.8 1.4 
1.6 0.9 0.3 0.4 1.9 0.5
;
run;
proc sort data=A; by y; run; /* 1 */
data Exp;
set A nobs=nobs;
v = (_N_ - 0.375) / (nobs + 0.25); /* 2 */
q = quantile("Exponential", v, 2); /* 3 */
run;

proc sgplot data=Exp noautolegend; /* 4 */
scatter x=q y=y;
lineparm x=0 y=0 slope=1; /* SAS 9.3 statement */
xaxis label="Exponential Quantiles" grid; 
yaxis label="Observed Data" grid;
run;

Use PROC UNIVARIATE for Simple Q-Q Plots

Of course, for this example, I don't need to do any computations at all, since PROC UNIVARIATE supports the exponential distribution and other common distributions. The following statements compute goodness-of-fit tests, overlay a curve on the histogram, and display a Q-Q plot:

proc univariate data=A;
var y;
histogram y / exp(sigma=2); 
QQplot y / exp(theta=0 sigma=2);
run;

However, if you think your data are distributed according to some distribution that is not built into PROC UNIVARIATE, the techniques in this article show how to construct a Q-Q plot to help you assess whether some "named" distribution might model your data.

tags: Data Analysis, Statistical Programming
October 21, 2011

When I learn a new statistical technique, one of the first things I do is to understand the limitations of the technique. This blog post shares some thoughts on modeling finite mixture models with the FMM procedure. What is a reasonable task for FMM? When are you asking too much?

I previously showed how you can use the FMM procedure to model Scrabble® scores as a mixture of three components: typical plays, triple-word scores, and penalties. That experience taught me three lessons about finite mixture models and their ability to "resolve" components that are close to each other:

  • The further away the components are, the easier it is to resolve them.
  • The more data you have, the better you can resolve components.
  • If you can give the software hints about the components, you can resolve them better.

These lessons are not surprising, and the first two are used to determine the "power" of a statistical test. In brief, the larger the "effect" you are trying to measure, the easier it is to measure it. Similarly, larger samples have less sampling error than smaller samples, so an effect is easier to detect in larger samples. The third statement (give hints) is familiar to anyone who has used maximum likelihood estimation: a good guess for the parameter estimates can help the algorithm converge to an optimal value.

To be concrete, suppose that a population is composed of a mixture of two normal densities. Given a sample from the population, can the FMM procedure estimate the mean and variance of the underlying components? The answer depends on how far apart the two means are (relative to their variances) and how large the sample is.

The distance between components

The following figure shows the exact density for a 50/50 mixture of normal distributions, where the first component is N(0,1) and the second component is N(δ, 1). The figure shows two cases: δ=1 and 2.

Would a human be able to look at the dark curve on the left and deduce that it is a mixture of two normal components? Unlikely. The problem is that the distance between the component densities, which is 1, is not sufficiently large relative to the variances of the components (also 1). What about the dark curve on the right? Perhaps. It depends on the person's experience with mixture models.

What happens if you generate a small number of random values (say, 100) from each probability distribution, and ask the FMM procedure to try to detect two normal components in the data? The following statements generate random values from a mixture of normals with δ=1 and δ=2, and use the FMM procedure to try to estimate the two components:

%let p = 0.5;
%let n = 100;
/* generate random values from mixture distribution */
data Sim(drop=i);
call streaminit(12345);
do delta = 1 to 2;
   do i = 1 to &n;
      c = rand("Bernoulli", &p);
      if c=0 then x = rand("normal");
      else        x = rand("normal", delta);
      output;
   end;
end;
run;

/* Doesn't identify underlying components for delta=1. 
   But DOES for delta=2! */
proc fmm data=Sim;
by delta;
model x= / k=2;
run;

With only 100 points in the sample, the FMM procedure can't correctly identify the N(0,1) and N(1,1) components in the data for δ=1. However, it does correctly identify the components that are separated by a larger amount, as shown below:

The conclusion is clear: if you have a small sample, don't expect the FMM procedure to estimate components that practically overlap.

More data = Better estimates

If you have a larger sample, the FMM procedure has more data to work with, and can detect components that do not appear distinct to the human eye. For example, if you rerun the previous example with, say, 10,000 simulated observations, then the FMM procedure has no trouble correctly identifying the N(0,1) and N(1,1) components.
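PROC FMM's fitting machinery is far more sophisticated, but the large-sample behavior is easy to demonstrate with a from-scratch EM sketch in Python (this is not the FMM algorithm; all names and starting values are illustrative). With 10,000 observations and δ=2, even this bare-bones EM recovers the two components:

```python
import numpy as np

def em_two_normals(x, iters=200):
    """Plain EM for a two-component univariate normal mixture (a sketch)."""
    # crude starting guesses: quartiles for the means, pooled SD for both
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior probability that each point came from each component
        dens = np.stack([
            pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                  / (sigma[k] * np.sqrt(2 * np.pi))
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0)
        # M-step: update the mixing weights, means, and standard deviations
        nk = resp.sum(axis=1)
        pi = nk / len(x)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return pi, mu, sigma

# simulate the delta=2 mixture with a large sample, as in the text
rng = np.random.default_rng(12345)
n = 10_000
c = rng.random(n) < 0.5
x = np.where(c, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))
pi, mu, sigma = em_two_normals(x)
print(np.sort(mu))   # component means near 0 and 2
```

Rerun the sketch with n = 100 and the estimates become erratic, which mirrors the small-sample behavior of PROC FMM described above.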

This "lesson" (more data leads to better estimates) is as old as statistics itself. Yet, when applied to the FMM procedure, you can get some pretty impressive results, such as those described in Example 37.2 in the documentation for the FMM procedure. In this example (reproduced below), two normal components and a Weibull component are identified in data that does not—to the untrained eye—look like a mixture of normals. From biological arguments, researchers knew that the density should have three components. Because the biologists had a large number of observations (141,414 of them!), the FMM procedure is able to estimate the components.

More hints = Better estimates

"Well, great," you might say, "but my sample is small and I can't collect more data. Is there any other way to coax the FMM procedure into finding relatively small effects?"

Yes, you can provide hints to the procedure. The sample and the choice of a model determine a likelihood function, and the procedure estimates the parameters for which the data are most likely. The likelihood function might have several local maxima, and you can use the PARMS option in the MODEL statement to provide initial guesses for the parameters. I used this technique to help PROC FMM estimate parameters for the three-component model of my mother's Scrabble scores.

There is another important way to give a hint: you can tell the FMM procedure that one or more observations belong to particular components by using the PARTIAL= option in the PROC FMM statement. For example, it is difficult for the FMM procedure to estimate three normal components for the PetalWidth variable in the famous Fisher's iris data. Why? Because there are only 150 observations, and the means of two components are close to each other (relative to their variance). But for these data, we actually have a classification variable, Species, that identifies to which component each observation belongs. By using this extra information, the FMM procedure correctly estimates the three components:

/* give partial information about component membership */
proc fmm data=sashelp.iris partial=Species;
   class Species;
   model PetalWidth = / K=3;
run;

Is this cheating? In this case, yes, because having completely labeled data is equivalent to doing three separate density estimates, one for each species. However, the PARTIAL= option is intended to be used when you have only partial information (hence its name) about which observations are members of which component. For example, the procedure can still estimate the three components when half of the Species variable is randomly set to missing:

data iris;
set sashelp.iris;
if ranuni(1)<0.5 then Species=" "; /* set to missing */
run;

It is important to understand the limitations of any statistical modeling procedure that you use. Modeling finite mixtures with PROC FMM is no exception. If you know which tasks are reasonable and which are unrealistic, you can be more successful as a data analyst.

tags: Data Analysis, Sampling and Simulation
October 12, 2011

A popular use of SAS/IML software is to optimize functions of several variables. One statistical application of optimization is estimating parameters that optimize the maximum likelihood function. This post gives a simple example for maximum likelihood estimation (MLE): fitting a parametric density estimate to data.

Which density curve fits the data?

If you plot a histogram for the SepalWidth variable in the famous Fisher's iris data, the data look normally distributed. The normal distribution has two parameters: μ is the mean and σ is the standard deviation. For each possible choice of (μ, σ), you can ask the question, "If the true population is N(μ, σ), how likely is it that I would get the SepalWidth sample?" Maximum likelihood estimation is a technique that enables you to estimate the "most likely" parameters. This is commonly referred to as fitting a parametric density estimate to data.

Visually, you can think of overlaying a bunch of normal curves on the histogram and choosing the parameters for the best-fitting curve. For example, the following graph shows four normal curves overlaid on the histogram of the SepalWidth variable:

proc sgplot data=Sashelp.Iris;
histogram SepalWidth;
density SepalWidth / type=normal(mu=35 sigma=5.5);
density SepalWidth / type=normal(mu=32.6 sigma=4.2);
density SepalWidth / type=normal(mu=30.1 sigma=3.8);
density SepalWidth / type=normal(mu=30.5 sigma=4.3);
run;

It is clear that the first curve, N(35, 5.5), does not fit the data as well as the other three. The second curve, N(32.6, 4.2), fits a little better, but mu=32.6 seems too large. The remaining curves fit the data better, but it is hard to determine which is the best fit. If I had to guess, I'd choose the brown curve, N(30.5, 4.3), as the best fit among the four.

The method of maximum likelihood provides an algorithm for choosing the best set of parameters: choose the parameters that maximize the likelihood function for the data. Parameters that maximize the log-likelihood also maximize the likelihood function (because the log function is monotone increasing), and it turns out that the log-likelihood is easier to work with. (For the normal distribution, you can explicitly solve the likelihood equations for the normal distribution. This provides an analytical solution against which to check the numerical solution.)

The likelihood and log-likelihood functions

The UNIVARIATE procedure uses maximum likelihood estimation to fit parametric distributions to data. It supports about a dozen common distributions, but you can use SAS/IML software to fit any parametric density to data. There are three steps:

  1. Write a SAS/IML module that computes the log-likelihood function. Optionally, since many numerical optimization techniques use derivative information, you can provide a module for the gradient of the log-likelihood function. If you do not supply a gradient module, SAS/IML software automatically uses finite differences to approximate the derivatives.
  2. Set up any constraints for the parameters. For example, the scale parameters for many distributions are restricted to be positive values.
  3. Call one of the SAS/IML nonlinear optimization routines. For this example, I call the NLPNRA subroutine, which uses a Newton-Raphson algorithm to optimize the log-likelihood.

Step 1: Write a module that computes the log-likelihood

A general discussion of log-likelihood functions, including formulas for some common distributions, is available in the documentation for the GENMOD procedure. The following module computes the log-likelihood for the normal distribution:

proc iml;
/* write the log-likelihood function for Normal dist */
start LogLik(param) global (x);
   mu = param[1];
   sigma2 = param[2]##2;
   n = nrow(x);
   c = x - mu;
   f = -n/2*log(sigma2) - 1/2/sigma2*sum(c##2);
   return ( f );
finish;

Notice that the arguments to this module are the two parameters mu and sigma. The data vector (which is constant during the optimization) is specified as the global variable x. When you use an optimization method, SAS/IML software requires that the arguments to the function be the quantities to optimize. Pass in other parameters by using the GLOBAL parameter list.

It's always a good idea to test your module. Remember those four curves that were overlaid on the histogram of the SepalWidth data? Let's evaluate the log-likelihood function at those same parameters. We expect the log-likelihood to be low for parameter values that do not fit the data well, and high for parameter values that do.

use Sashelp.Iris;
read all var {SepalWidth} into x;
close Sashelp.Iris;
/* optional: test the module */
params = {35   5.5,
          32.6 4.2,
          30.1 3.8,
          30.5 4.3};
LogLik = j(nrow(params),1);
do i = 1 to nrow(params);
   p = params[i,];
   LogLik[i] = LogLik(p);
end;
print Params[c={"Mu" "Sigma"} label=""] LogLik;

The log-likelihood values confirm our expectations. A normal density curve with parameters (35, 5.5) does not fit the data as well as the other parameters, and the curve with parameters (30.5, 4.3) fits the data best because its log-likelihood is largest.
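The same sanity check is easy to replicate outside SAS. The following Python sketch evaluates the same log-likelihood formula at the four candidate parameter pairs; because the SepalWidth data are not available here, a synthetic sample drawn from N(30.5, 4.3) stands in for it, and the function and variable names are my own:

```python
import math, random

def normal_loglik(x, mu, sigma):
    """Normal log-likelihood up to the additive constant -n/2*log(2*pi);
    note that -n/2*log(sigma**2) equals -n*log(sigma), so this is the
    same formula as the SAS/IML LogLik module."""
    n = len(x)
    ss = sum((xi - mu) ** 2 for xi in x)
    return -n * math.log(sigma) - ss / (2 * sigma ** 2)

# synthetic stand-in for the 150 SepalWidth observations
rng = random.Random(1)
x = [rng.gauss(30.5, 4.3) for _ in range(150)]

for mu, sigma in [(35, 5.5), (32.6, 4.2), (30.1, 3.8), (30.5, 4.3)]:
    print((mu, sigma), round(normal_loglik(x, mu, sigma), 1))
```

As with the SAS output, the parameters closest to the distribution that generated the data produce the largest log-likelihood.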

Step 2: Set up constraints

The SAS/IML User's Guide describes how to specify constraints for nonlinear optimization. For this problem, specify a matrix with two rows and k columns, where k is the number of parameters in the problem. For this example, k=2.

  • The first row of the matrix specifies the lower bounds on the parameters. Use a missing value (.) if the parameter is not bounded from below.
  • The second row of the matrix specifies the upper bounds on the parameters. Use a missing value (.) if the parameter is not bounded from above.

The only constraint on the parameters for the normal distribution is that sigma is positive. Therefore, the following statement specifies the constraint matrix:

/*     mu-sigma constraint matrix */
con = { .   0,  /* lower bounds: -infty < mu; 0 < sigma */
        .   .}; /* upper bounds:  mu < infty; sigma < infty */

Step 3: Call an optimization routine

You can now call an optimization routine to find the MLE estimate for the data. You need to provide an initial guess to the optimization routine, and you need to tell it whether you are finding a maximum or a minimum. There are other options that you can specify, such as how much printed output you want.

p = {35 5.5};/* initial guess for solution */
opt = {1,    /* find maximum of function   */
       4};   /* print a LOT of output      */
call nlpnra(rc, result, "LogLik", p, opt, con);

The following table and graph summarize the optimization process. Starting from the initial condition, the Newton-Raphson algorithm takes six steps to reach a maximum of the log-likelihood function. (The exact numbers might vary in different SAS releases.) At each step of the process, the log-likelihood function increases. The process stops when the log-likelihood stops increasing, or when the gradient of the log-likelihood is zero, which indicates a maximum of the function.

You can summarize the process graphically by plotting the path of the optimization on a contour plot of the log-likelihood function. You can see that the optimization travels "uphill" until it reaches a maximum.
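For readers who want to see the moving parts, here is a pure-Python sketch of the same idea: a Newton-Raphson iteration on the normal log-likelihood, with simple step-halving standing in for the line search that a production routine such as NLPNRA performs. This is my own illustration, not the NLPNRA algorithm itself, and the function names and starting values are my own:

```python
import math, random

def loglik(x, mu, s):
    """Normal log-likelihood up to an additive constant."""
    return -len(x) * math.log(s) - sum((xi - mu) ** 2 for xi in x) / (2 * s * s)

def newton_mle_normal(x, mu, s, tol=1e-10, max_iter=100):
    """Maximize the normal log-likelihood over (mu, sigma) by
    Newton-Raphson, halving the step until sigma stays positive and
    the log-likelihood does not decrease."""
    n = len(x)
    for _ in range(max_iter):
        r = [xi - mu for xi in x]
        sr = sum(r)
        srr = sum(ri * ri for ri in r)
        g1 = sr / s**2                      # d(logL)/d(mu)
        g2 = -n / s + srr / s**3            # d(logL)/d(sigma)
        h11 = -n / s**2                     # Hessian entries
        h12 = -2 * sr / s**3
        h22 = n / s**2 - 3 * srr / s**4
        det = h11 * h22 - h12 * h12
        d1 = (h22 * g1 - h12 * g2) / det    # Newton step H^{-1} g
        d2 = (-h12 * g1 + h11 * g2) / det
        f0, t, improved = loglik(x, mu, s), 1.0, False
        while t > 1e-12:                    # backtrack until an uphill step
            mu1, s1 = mu - t * d1, s - t * d2
            if s1 > 0 and loglik(x, mu1, s1) >= f0:
                improved = True
                break
            t /= 2
        if not improved:                    # no uphill direction found
            return mu, s
        mu, s = mu1, s1
        if t * (abs(d1) + abs(d2)) < tol:   # step is negligibly small
            return mu, s
    return mu, s

rng = random.Random(2)
data = [rng.gauss(30.5, 4.3) for _ in range(150)]
print(newton_mle_normal(data, 35.0, 5.5))
```

As discussed in the next section, the maximum for the normal distribution has a closed form (the sample mean and the unadjusted standard deviation), so you can verify that the iteration converges to the right answer.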

Check the optimization results

As I mentioned earlier, you can explicitly solve for the parameters that maximize the likelihood equations for the normal distribution. The optimal value of the mu parameter is the sample mean of the data. The optimal value of the sigma parameter is the unadjusted standard deviation of the sample. The following statements compute these quantities for the SepalWidth data:

OptMu = x[:];
OptSigma = sqrt( ssq(x-OptMu)/nrow(x) );

The values found by the NLPNRA subroutine agree with these exact values through seven decimal places.

tags: Data Analysis, Statistical Programming
October 5, 2011

When you misspell a word on your mobile device or in a word-processing program, the software might "autocorrect" your mistake. This can lead to some funny mistakes, such as the following:

I hate Twitter's autocorrect, although changing "extreme couponing" to "extreme coupling" did make THAT tweet more interesting. [@AnnMariaStat]

When you type "hte," how does the software know that you probably meant "the"? Even more impressively, when you type "satitsisc" into Microsoft Word, how does it create a list of the words you might have meant, a list that includes "statistics"?

For that matter, have you ever mistyped the name of a SAS procedure or an option, only to discover that your program still ran? For example, if you type dat a; run; the program will run, although the SAS log will warn you of the misspelling: "WARNING 14-169: Assuming the symbol DATA was misspelled as dat." How can SAS determine that what you typed is close enough to an actual keyword that it assumes that the keyword was misspelled?

There are various techniques for correcting misspellings, but most of them involve a notion of the distance between words. Given a string of letters, you can define a number that relates how "close" the string is to other words (for example, in a dictionary). The string that is typed is called the keyword. The words in the dictionary are called query words. If the keyword is not in the dictionary, then you can list the query words that are close to the keyword. Often the keyword is a misspelling of one of the words on this list.
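The idea is easy to prototype. The following Python sketch is my own code, not SAS's algorithm: it uses the classic Levenshtein edit distance (closer in spirit to the COMPLEV function than to SPEDIS, whose operation costs differ) to rank a small dictionary of query words by closeness to a keyword:

```python
def levenshtein(a, b):
    """Classic edit distance: minimum number of single-character
    insertions, deletions, and substitutions to turn a into b."""
    prev = list(range(len(b) + 1))          # distance from "" to b[:j]
    for i, ca in enumerate(a, 1):
        cur = [i]                           # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

keyword = "satitsisc"
query = ["statistics", "statistic", "sadistic", "satisfy", "Saturday"]
for w in sorted(query, key=lambda w: levenshtein(keyword, w)):
    print(w, levenshtein(keyword, w))
```

Ranking the dictionary by this distance puts the plausible corrections for "satitsisc" ahead of unrelated words, which is the core of any suggestion list.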

Base SAS has several functions that you can use to compare how close words are to each other. In this post, I describe the SPEDIS function, which I assume is an acronym for "spelling distance." Similar SAS functions include the COMPLEV function, the COMPGED function, and the SOUNDEX function.

What words are close to "satitsisc"

The SPEDIS function computes the "cost" of converting the keyword to a query word. Each operation—deletion, insertion, transposition, and so forth—is assigned a certain cost.

As an example, type the letters "satitsisc" into Microsoft Word and look at the top words that Word suggests for the misspelling. The words are statistics, statistic, and sadistic. The words satisfy and Saturday are not on the list of suggested words. By using the SAS DATA step or the SAS/IML language, you can examine how close each of these words is to the garbled keyword, as measured by the SPEDIS function:

proc iml;
keyword = "satitsisc";
query = {"statistics" "statistic" "sadistic" "satisfy" "Saturday"};
cost = spedis(query, keyword);
print cost;

According to the SPEDIS function, the words suggested by MS Word (the first three query words) have a lower cost than satisfy and Saturday, which were not suggested.

The cost returned by the SPEDIS function is not symmetric. For example, the cost of converting sadistic to statistic is different from the cost of converting statistic to sadistic, because inverse operations have different costs: inserting a letter costs a different amount than deleting one.

This lack of symmetry means that the SPEDIS function does not return a true distance between words, although some people refer to the cost matrix as an "asymmetric distance." You can use the SAS/IML language to compute the cost matrix, in which the ijth element is the cost of converting the ith word into the jth word:

words = query || keyword;
n = ncol(words);
/* compute (n x n) cost matrix (not symmetric) */
cost = j(n,n,0);
do i = 1 to ncol(words);
   key = words[i];
   cost[i,] = spedis(words, key);
end;
print cost[c=words r=words];
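To see why unequal operation costs break symmetry, consider a toy weighted edit distance in Python. The costs below are illustrative choices of my own, not the costs that SPEDIS actually uses:

```python
def wedit(a, b, ins=2, dele=1, sub=3):
    """Cost of converting a into b when an insertion costs `ins`, a
    deletion costs `dele`, and a substitution costs `sub`."""
    prev = [j * ins for j in range(len(b) + 1)]   # "" -> b[:j]: j insertions
    for i, ca in enumerate(a, 1):
        cur = [i * dele]                          # a[:i] -> "": i deletions
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + dele,                       # delete ca
                           cur[j - 1] + ins,                     # insert cb
                           prev[j - 1] + (0 if ca == cb else sub)))
        prev = cur
    return prev[-1]

print(wedit("sadistic", "statistic"), wedit("statistic", "sadistic"))
```

Converting sadistic to statistic requires an insertion (cost 2 here), while the reverse conversion uses a deletion (cost 1), so the two directions have different total costs, just like the asymmetric SPEDIS cost matrix.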

It is possible to visualize the distance between words by using the MDS procedure to compute how to position the words in the plane so that the planar distances most nearly match the (scaled) values in the cost matrix:

/* write SAS/IML cost matrix to data set */
create cost(type=DISTANCE) from cost[c=words r=words];
append from cost[r=words];
close cost;
quit;

/* find 2D configuration of points */
ods graphics on;
proc mds data=cost dimension=2 shape=square;
subject words;
run;

PROC MDS creates a graph that is similar to the one shown. Notice that the cluster in the upper right consists of the keyword and the words that MS Word suggests. These words are close to the keyword. On the other hand, the two words that were not suggested by MS Word are situated far from the keyword.

Further Reading

This post was inspired by Charlie Huang's interesting article titled Top 10 most powerful functions for PROC SQL. The SPEDIS and SOUNDEX functions are listed in item #6.

If you want to read funny examples of mistakes in text messages that were caused by the autocorrect feature of the iPhone, do an internet search for "funny iPhone autocorrect mistakes." Be aware, however, that many of the examples involve adult humor.

You can use the SPEDIS function to do fuzzy matching, as shown in the following papers:

tags: Data Analysis, Statistical Programming