Video killed the radio star....

We can't rewind, we've gone too far.

-- The Buggles (1979)

"You kids have it easy," my father used to tell me. "When I was a kid, I didn't have all the conveniences you have today." He's right, and I could say the same thing to my kids, especially about today's handheld technology. In particular, I've noticed that powerful handheld technology (especially the modern calculator) has killed the standard probability tables that were once ubiquitous in introductory statistics textbooks.

In my first probability and statistics course, I constantly referenced the 23 statistical tables (which occupied 44 pages!) in the appendix of my undergraduate textbook. Any time I needed to compute a probability or test a hypothesis, I would flip to a table of probabilities for the normal, t, chi-square, or F distribution and use it to compute a probability (area) or quantile (critical value). If the value I needed wasn't tabulated, I had to manually perform linear interpolation from two tabulated values. I had no choice: my calculator did not have support for these advanced functions.
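That manual interpolation step is easy to mimic today. As a quick sketch (in Python with SciPy, which is of course not what we had back then, nor the SAS workflow this post uses), here is how you would estimate an untabulated normal probability from its two tabulated neighbors:

```python
from scipy.stats import norm

# Tabulated values as they would appear in a printed normal CDF table
z_lo, z_hi = 0.67, 0.68
p_lo = round(norm.cdf(z_lo), 4)   # 0.7486, as printed in the table
p_hi = round(norm.cdf(z_hi), 4)   # 0.7517, as printed in the table

# Linear interpolation for an untabulated z-score
z = 0.675
p = p_lo + (z - z_lo) / (z_hi - z_lo) * (p_hi - p_lo)
print(round(p, 4))   # agrees with the exact value of norm.cdf(0.675) to 4 decimals
```

The interpolated value agrees with the exact CDF to four decimal places, which is why the technique was good enough for homework, if tedious.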

In contrast, kids today have it easy! When my son took AP statistics in high school, his handheld calculator (a TI-84, which costs about $100) could compute the PDF, CDF, and quantiles of all the important probability distributions.
Consequently, his textbook did not include an appendix of statistical tables.

It makes sense that publishers would choose to omit these tables, just as my own high school textbooks excluded the trig and logarithm tables that were prevalent in my father's youth. When handheld technology can reproduce all the numbers in a table, why waste the ink and paper?

In fact, by using SAS software, you can generate and display a statistical table with only a few statements. To illustrate this, let's use SAS to generate two common statistical tables: a normal probability table and a table of critical values for the chi-square statistic.

### A normal probability table

A normal probability table gives an area under the standard normal density curve. As discussed in a Wikipedia article about the standard normal table, there are three equivalent kinds of tables, but I will use SAS to produce the first table on the list. Given a standardized z-score, *z* > 0, the table gives the probability that a standard normal random variate is in the interval (0, *z*). That is, the table gives P(0 < Z < *z*) for a standard normal random variable Z.
The graph below shows the shaded area that is given in the body of the table.

You can create the table by using the SAS DATA step, but I'll use SAS/IML software.
The rows of the table indicate the z-score to the first decimal place. The columns of the table indicate the second decimal place of the z-score.
The key to creating the table is to recall that you can call any Base SAS function from SAS/IML, and you can use vectors of parameters. In particular, the following statements use the EXPANDGRID function to generate all two-decimal z-scores in the range [0, 3.4]. The program then calls the CDF function to evaluate the probability P(Z < z) and subtracts 1/2 to obtain the probability P(0 < Z < z). The SHAPE function reshapes the vector of probabilities into a 10-column table. Finally, the PUTN function converts the column and row headers into character values that are printed at the top and side of the table.

```sas
proc iml;
/* normal probability table similar to
   https://en.wikipedia.org/wiki/Standard_normal_table#Table_examples */
z1 = do(0, 3.4, 0.1);          /* z-score to first decimal place */
z2 = do(0, 0.09, 0.01);        /* second decimal place */
z = expandgrid(z1, z2)[,+];    /* sum of all pairs of z1 and z2 values */
p = cdf("Normal", z) - 0.5;    /* P( 0 < Z < z ) */
Prob = shape( p, ncol(z1) );   /* reshape into table with 10 columns */
z1Lab = putn(z1, "3.1");       /* formatted values of z1 for row headers */
z2Lab = putn(z2, "4.2");       /* formatted values of z2 for col headers */
print Prob[r=z1Lab c=z2Lab F=6.4
           L="Standard Normal Table for P( 0 < Z < z )"];
```

To find the probability between 0 and z=0.67, find the row for z=0.6 and then move over to the column labeled 0.07. The value of that cell is P(0 < Z < 0.67) = 0.2486.
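If you want to double-check a table entry without SAS, the same computation is a one-liner in, say, Python with SciPy (an illustration only; the post's code is SAS/IML):

```python
from scipy.stats import norm

# P(0 < Z < z) for a standard normal variable, as tabulated in the body of the table
z = 0.67
p = norm.cdf(z) - 0.5   # subtract 1/2 to convert P(Z < z) into P(0 < Z < z)
print(round(p, 4))      # 0.2486, the entry in row 0.6, column 0.07
```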

### Chi-square table of critical values

Some statistical tables display critical values for a test statistic instead of probabilities. This section shows how to construct a table of critical values of the chi-square test statistic for common significance levels (α). The rows of the table correspond to degrees of freedom; the columns correspond to significance levels. The following graph shows the situation: the shaded area in the tail corresponds to the significance level.

The corresponding table gives, for each significance level, the quantile (critical value) of the distribution. Again, you could use the DATA step, but I have chosen to use SAS/IML software to generate the table:

```sas
/* table of critical values for the chi-square distribution
   https://flylib.com/books/3/287/1/html/2/images/xatab02.jpg */
df = (1:30) || do(40, 100, 10);   /* degrees of freedom */
/* significance levels for common one- or two-sided tests */
alpha = {0.99 0.975 0.95 0.9 0.1 0.05 0.025 0.01};
g = expandgrid(df, alpha);        /* all pairs of (df, alpha) values */
p = quantile("ChiSquare", 1 - g[,2], g[,1]); /* quantiles for (df, 1-alpha) */
CriticalValues = shape( p, ncol(df) );       /* reshape into table */
dfLab = putn(df, "3.");           /* formatted row headers */
pLab = putn(alpha, "5.3");        /* formatted col headers */
print CriticalValues[rowname=dfLab colname=pLab F=6.3
                     L="Critical Values of Chi-Square Distribution"];
```

To illustrate how to use the table, suppose that you have computed a chi-square test statistic for 9 degrees of freedom. You want to determine if you should reject the (one-sided) null hypothesis at the α = 0.05 significance level. You trace down the table to the row for df=9 and then trace over to the column for 0.05. The value in that cell is 16.919, so you would reject the null hypothesis if your test statistic exceeds that critical value.
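That cell, too, is easy to verify outside of SAS. A hedged sketch in Python with SciPy (again, an illustration, not the post's SAS/IML workflow): the critical value for an upper-tail test is the (1 − α) quantile of the chi-square distribution.

```python
from scipy.stats import chi2

# Critical value for a one-sided chi-square test: df = 9, alpha = 0.05
df, alpha = 9, 0.05
crit = chi2.ppf(1 - alpha, df)   # the 0.95 quantile of a chi-square(9) distribution
print(round(crit, 3))            # 16.919, matching the table entry
```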

Just as digital downloads of songs have supplanted records and CDs, so, too, have modern handheld calculators replaced the statistical tables that used to feature prominently in introductory statistics courses.
However, if you ever feel nostalgic for the days of yore, you can easily resurrect your favorite table by writing a few lines of SAS code.
To be sure, there are some advanced tables (the Kolmogorov-Smirnov test comes to mind) that have not yet been replaced by a calculator key, but the simple tables are dead. Killed by the calculator.

It might be wrong to speak ill of the dead, but I say, "good riddance"; I never liked using those tables anyway. What are your thoughts? If you are old enough to remember statistical tables, do you have fond memories of using them? If you learned statistics recently, are you surprised that tables were once so prevalent? Share your experiences by leaving a comment.

The post Calculators killed the standard statistical table appeared first on The DO Loop.