sapply()

January 15, 2014
 
Hello, folks! I'm pleased to report that Nick and I have turned in the manuscript for the second edition of SAS and R: Data Management, Statistical Analysis, and Graphics. It should be available this summer. New material includes some of our more popular blog posts, plus reproducible analysis, RStudio, and more.

To celebrate, here's a new example. Parenthetically, I was fortunate to be able to present my course, "R Boot Camp for SAS users," at Boston University last week. One attendee cornered me after the course. She said: "Ken, R looks great, but you use SAS for all your real work, don't you?" Today's example might help a SAS diehard to see why it might be helpful to know R.

OK, the example: A colleague contacted me with a typical "5-minute" question. She needed to write a convincing power calculation for the sensitivity-- the probability that a test returns a positive result when the disease is present, for a fixed number of cases with the disease. I don't know how well this has been explored in the peer-reviewed literature, but I suggested the following process:
1. Guess at the true underlying sensitivity
2. Name a lower bound (less than the truth) which we would like the observed CI to exclude
3. Use basic probability results to report the probability of exclusion, marginally across the unknown number of observed positive tests.

This is not actually a power calculation, of course, but it provides some information about the kinds of statements that it's likely to be possible to make.

R

In R, this is almost trivial. We can get the probability of observing x positive tests using the dbinom() function applied to a vector of numerators and the fixed denominator. Finding the confidence limits is a little trickier. Well, finding them is easy, using lapply() on binom.test(), but extracting them requires sapply() on the results from lapply(). Then it's trivial to generate a logical vector indicating whether each CI includes the value we want to exclude, and summing the probabilities of the outcomes whose CIs include that value gives the desired result.
> truesense = .9
> exclude = .6
> npos = 20
> probobs = dbinom(0:npos,npos,truesense)
> cis = t(sapply(lapply(0:npos, binom.test, n=npos),
+   function(bt) return(bt$conf.int)))
> included = cis[,1] < exclude & cis[,2] > exclude
> myprob = sum(probobs*included)
> myprob
[1] 0.1329533
(Note that I calculated the inclusion probability, not the exclusion probability.)

Of course, the real beauty and power of R is how simple it is to turn this into a function:
> probinc = function(truesense, exclude, npos) {
+   probobs = dbinom(0:npos, npos, truesense)
+   cis = t(sapply(lapply(0:npos, binom.test, n=npos),
+     function(bt) return(bt$conf.int)))
+   included = cis[,1] < exclude & cis[,2] > exclude
+   return(sum(probobs*included))
+ }
> probinc(.9,.6,20)
[1] 0.1329533
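
Once probinc() exists, exploring the design is a one-liner. For instance (a sketch of my own, not part of the original example; the grid of case counts is arbitrary), sapply() sweeps the function across candidate numbers of diseased cases:

> sapply(c(20, 30, 40, 50), function(n) probinc(.9, .6, n))

Larger denominators tighten the CIs, so the inclusion probability should fall; that's exactly the trade-off my colleague needed to quantify.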


SAS

My SAS process took about 4 times as long to write.
I begin by making a data set with a variable recording both the number of events (positive tests) and non-events (false negatives) for each possible value. These serve as weights in the proc freq I use to generate the confidence limits.
%let truesense = .9;
%let exclude = .6;
%let npos = 20;

data rej;
do i = 1 to &npos;
w = i; event = 1; output;
w = &npos - i; event = 0; output;
end;
run;

ods output binomialprop = rej2;
proc freq data = rej;
by i;
tables event /binomial(level='1');
weight w;
run;
Note that I repeat the proc freq for each number of events using the by statement. After saving the results with the ODS system, I have to use proc transpose to make a table with one row for each number of positive tests-- before this, every statistic in the output has its own row.
proc transpose data = rej2 out = rej3;
where name1 eq "XL_BIN" or name1 eq "XU_BIN";
by i;
id name1;
var nvalue1;
run;
In my fourth data set, I can find the probability of observing each number of events and multiply it by my logical test of whether the CI included the target value or not. But here there is another twist. The proc freq approach won't generate a CI for both the situation where there are 0 positive tests and the setting where all are positive in the same run. My solution was to omit the case with 0 positives from my do loop above, but now I need to account for that possibility. Here I use the end= option to the set statement to detect when I've reached the case where all tests are positive (sensitivity = 1). The symmetry of the binomial CI then gives the confidence limits for the case with 0 events. Finally, I'm ready to sum up the probabilities associated with the numbers of positive tests where the CI includes the target value.
data rej4;
set rej3 end = eof;
prob = pdf('BINOMIAL',i,&truesense,&npos);
prob_include = prob * ((xl_bin < &exclude) and (xu_bin > &exclude));
output;
if eof then do;
prob = pdf('BINOMIAL',0,&truesense,&npos);
prob_include = prob * (((1 - xu_bin) < &exclude) and ((1 - xl_bin) > &exclude));
output;
end;
run;

proc means data = rej4 sum;
var prob_include;
run;
Elegance is a subjective thing, I suppose, but to my eye, the R solution is simple and graceful, while the SAS solution is rather awkward. And I didn't even make a macro out of it yet!

February 23, 2012
 


The so-called birthday paradox or birthday problem is simply the counter-intuitive discovery that the probability of (at least) two people in a group sharing a birthday goes up surprisingly fast as the group size increases. If the group has only 23 people, there is a 50% chance that two of them share a birthday, and with 40 people it's about 90%. There is an excellent Wikipedia page discussing this.
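
As a quick check on those numbers (an aside of mine; the formula is the standard one from the Wikipedia page), the no-match probability is a one-liner in R:

p.nomatch = function(n) exp(lchoose(365, n) + lfactorial(n) - n * log(365))
1 - p.nomatch(23)   # 0.507 -- just past even odds at 23 people
1 - p.nomatch(40)   # 0.891 -- roughly 90% at 40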

However, this analytically derived probability is based on the assumption that births are equally likely on any day of the year. (It also ignores the occasional February 29th, and any social factors that lead people born at the same time of year to seek like spouses, and so forth.) But this assumption does not appear to be true, as laid out anecdotally and in press.

As noted in the latter link, any disparity in the probability of birth between days will improve the chances of a match. But how much? An analytic solution seems quite complex, even if we approximate the true daily distribution with a constant birth probability per month. Simulation will be simpler. While we're at it, we'll include leap days as well, since February 29th approaches.

SAS

Our approach here is based on the observation that the probability of at least one match among N people equals the sum, over group sizes 2,...,N, of the probability that the first match occurs just as the group reaches that size. In addition, rather than simulating groups of 2, estimating the probability of a match, and repeating for groups of 3,...,N, we'll keep adding people to a group until we have a match, finding the probability of a match for all group sizes at once.
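
To make that observation concrete, here is a small check (again an aside, reusing the p.nomatch() helper above; the post's simulation itself is in SAS below) that the first-match probabilities sum to the familiar at-least-one-match formula:

firstmatch = function(i) p.nomatch(i - 1) * (i - 1) / 365
# probability that the first match arrives exactly with the i-th person
sum(sapply(2:40, firstmatch))   # these sum, telescoping, to...
1 - p.nomatch(40)               # ...the at-least-one-match answer for 40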

Here we use arrays (section 1.11.5) to keep track of the number of days in a month and of the people in our group. To reduce computation, we'll check for matches as we add people to the group, and only generate their birthdays if there is not yet a match. We also demonstrate the useful hyphen tool for referring to ranges of variables (1.11.4).

data bd1;
array daysmo [12] _temporary_ (31 28.25 31 30 31 30 31 31 30 31 30 31);
array dob [367] dob1 - dob367;
  * these variables will hold the birthdays --
    the hyphen includes all the variables in the sequence;

do group = 1 to 10000000;   * simulate this many groups;
  match = 0;   * initialize whether a match has been found in this group, yet;
  do i = 1 to 367;
    * loop through up to 367 subjects... the maximum possible, obviously;
    month = rantbl(0, 31*.0026123, 28*.0026785, 31*.0026838, 30*.0026426,
      31*.0026702, 30*.0027424, 31*.0028655, 31*.0028954, 30*.0029407,
      31*.0027705, 30*.0026842);
      * choose a month of birth, by probabilities reported
        in the Science News link, which are daily by month;
    day = ceil((4 * daysmo[month] * uniform(0))/4);
      * choose a day within the month,
        note the trick used to get Leap Days;
    dob[i] = mdy(month, day, 1960);
      * convert month and day into a day in the year--
        1960 is a convenient leap year;
    do j = 1 to (i-1) until (match gt 0);
      * compare each earlier person to the new one;
      if dob[j] = dob[i] then match = i;
        * if there was a match, we needed i people in the group to make it;
    end;
    if match gt 0 then leave;   * no need to generate the other 367-i people;
  end;
  output;
end;
run;

We note here that while we allow up to 367 birthdays before a match, the probability of more than 150 is so infinitesimal that we could save the space and speed up processing time by ignoring it. Now that the groups have been simulated, we just need to summarize and present them. We tabulate how many cases of groups of size N were recorded, generate the simple analytic answer, and merge them.
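
Incidentally, the infinitesimal claim is easy to quantify (a one-off check of mine, under the uniform-365 assumption the analytic answer also uses):

exp(lchoose(365, 150) + lfactorial(150) - 150 * log(365))
# probability of *no* match among 150 people -- on the order of 1e-16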

proc freq data = bd1;
tables match / out=bd2 outcum; * the bd2 data set has the results;
run;

data simpreal;
set bd2;
prob = 1 - ((fact(match) * comb (365,match)) / 365**match);
realprob = cum_freq/10000000;
diff = realprob-prob;
diffpct = 100 * (diff)/prob;
run;

It's easiest to interpret the results by way of a plot. We'll plot the absolute and the relative difference on the same image with two different axes. The axis and symbol statements will make it slightly prettier, and allow us to make 0 appear at the same point on both axes.

axis1 order = (-.75 to .75 by .25) minor = none;
axis2 order = (-.00025 to .00025 by .00005) minor = none;
symbol1 v = dot h = .75 c = blue;
symbol2 font=marker v = U h = .5 c = red;

proc gplot data= simpreal (obs = 89);
plot diffpct * match / vref = 0 vaxis=axis1 legend;
plot2 diff *match/ vaxis = axis2 legend;
run; quit;

The results, shown below, are very clear-- leap day and the disequilibrium in birth month probabilities do increase the probability of at least one match in any group of a given size, relative to the uniform distribution across days assumed in the analytic solution. But the difference is minuscule on both the absolute and the relative scale.

R
Here we mimic the approach used above, but use the apply() function family in place of some of the looping.

dayprobs = c(.0026123,.0026785,.0026838,.0026426,.0026702,.0027424,.0028655,
.0028954,.0029407,.0027705,.0026842,.0026864)
daysmo = c(31,28,31,30,31,30,31,31,30,31,30,31)
daysmo2 = c(31,28.25,31,30,31,30,31,31,30,31,30,31)
# need both: the former is how the probs are reported,
# while the latter allows leap days

moprob = daysmo * dayprobs

With the monthly probabilities established, we can sample a birth month for everyone, and then choose a birth day within month. We use the same trick as above to allow birthdays of February 29th. Here we show code for 10,000 groups; on the simple cloud R setup this code was developed on, larger runs caused a crash.

We've stopped referencing our book exhaustively, and doing so here would be tedious. Instead, we'll just comment that the tools we use here can be found in sections 1.4.5, 1.4.15, 1.4.16, 1.5.2, 1.8.3, 1.8.4, 1.9.1, 1.11.1, 5.2.1, 5.6.1, B.5.2, and probably others.

mob = sample(1:12,10000 * 367,rep=TRUE,prob=moprob)
dob = sapply(mob, function(x) ceiling(sample(4*daysmo2[x], 1)/4))
# sample() must draw from a different range for each month, so we
# apply the day-within-month trick elementwise via sapply().

mobdob = paste(mob,dob)
# concatenate the month and day to make a single variable to compare
# between people. The ISOdate() function would approximate the SAS mdy()
# function but would be much longer, and we don't need it.

# convert the vector into a matrix with the maximum
# group size as the number of columns
# as noted above, this could safely be truncated, with great savings
mdmat = matrix(mobdob, ncol=367, nrow=10000)

To find duplicate birthdays in each row of the matrix, we'll write a function that compares the number of unique values with the length of the vector, calling it repeatedly in a for() loop until there is a difference. Then, to save a lot of computation, we'll break out of the loop and report the number needed to make the match. Finally, we'll call this vector-based function with apply() to run it on each row of the birthday matrix.

matchn = function(x) {
  for (i in 1:367) {
    if (length(unique(x[1:i])) != i) break
  }
  return(i)
}

groups = apply(mdmat, 1, matchn)

bdprobs = cumsum(table(groups)/10000)
# find the N with each group number, divide by number of groups
# and get the cumulative sum

rgroups = as.numeric(names(bdprobs))
# extract the group sizes from the table
probs = 1 - ((factorial(rgroups) * choose(365,rgroups)) / 365**rgroups)
# calculate the analytic answer, for any group size
# for which there was an observed simulated value

diffs = bdprobs - probs
diffpcts = diffs/probs

To plot the differences and percent differences in probabilities, we modify (slightly) the functions for a multiple-axis scatterplot we show in our book in section 5.6.1. You can find the code for this and all the book examples on the book web site.

addsecondy <- function(x, y, origy, yname="Y2") {
  prevlimits <- range(origy)
  axislimits <- range(y)
  axis(side=4, at=prevlimits[1] + diff(prevlimits)*c(0:5)/5,
    labels=round(axislimits[1] + diff(axislimits)*c(0:5)/5, 3))
  mtext(yname, side=4)
  newy <- (y - axislimits[1])/(diff(axislimits)/diff(prevlimits)) +
    prevlimits[1]
  points(x, newy, pch=2)
  abline(h=(-axislimits[1])/(diff(axislimits)/diff(prevlimits)) +
    prevlimits[1])
}

plottwoy <- function(x, y1, y2, xname="X", y1name="Y1", y2name="Y2") {
  plot(x, y1, ylab=y1name, xlab=xname)
  abline(h=0)
  addsecondy(x, y2, y1, yname=y2name)
}

plottwoy(rgroups, diffs, diffpcts, xname="Number in group",
y1name="Diff in prob", y2name="Diff in percent")
legend(80, .0013, pch=1:2, legend=c("Diffs", "Pcts"))

The resulting plot (based on 100,000 groups, a tolerable compute time on a laptop) is shown at the top. Aligning the 0 on each axis was more of a hassle than seemed worth it for today. However, the message is equally clear-- a clearly larger probability with the observed birth distribution, but not a meaningful difference.
February 23, 2011
 
A student came with a question about how to snag data from a PDF report for analysis. Once she'd copied things, her text file looked like:

1 Las Vegas, NV --- 53.3 --- --- 1
2 Sacramento, CA --- 42.3 --- --- 2
3 Miami, FL --- 41.8 --- --- 3
4 Tucson, AZ --- 41.7 --- --- 4
5 Cleveland, OH --- 38.3 --- --- 5
6 Cincinnati, OH 15 36.4 --- --- 6
7 Colorado Springs, CO --- 36.1 --- --- 7
8 Memphis, TN --- 35.3 --- --- 8
8 New Orleans, LA --- 35.3 --- --- 8
10 Mesa, AZ --- 34.7 --- --- 10
11 Baltimore, MD --- 33.2 --- --- 11
12 Philadelphia, PA --- 31.7 --- --- 12
13 Salt Lake City, UT --- 31.9 17 --- 13

Here the --- means a missing value, and there's some complexity induced because some cities are made up of multiple words (so the number of space-delimited fields varies). Unfortunately, she had hundreds of such datasets to process.

While section 1.1.3 (p. 3) of the book describes reading more complex files, neither it nor the entry on finding the Amazon sales rank are directly relevant here.

R

In R, we first craft a function that processes a line and converts each field other than the city name into a numeric variable. The function works backwards to snag the appropriate elements, then calculates what is left over to stash in the city variable.

readellaline = function(thisline) {
  thislen = length(thisline)
  id = as.numeric(thisline[1])
  v1 = as.numeric(thisline[thislen-4])
  v2 = as.numeric(thisline[thislen-3])
  v3 = as.numeric(thisline[thislen-2])
  v4 = as.numeric(thisline[thislen-1])
  v5 = as.numeric(thisline[thislen])
  city = paste(thisline[2:(thislen-5)], collapse=" ")
  return(list(id=id, city=city, v1=v1, v2=v2, v3=v3, v4=v4, v5=v5))
}

However, before this function can work, it needs each line converted into a character vector containing each "word" (character strings divided by spaces) as a separate element. We'll do this by first reading each line, then using strsplit() to split it into words. This results in a list object, where the items in the list are the vectors of words. Then we can call the readellaline() function for each vector using an invocation of sapply() (section 1.3.2, p. 12), which avoids the need for a for loop. The resulting object can be transposed and then coerced into a data frame.

# read the input file
file = readLines("http://www.math.smith.edu/r/data/ella.txt")
split = strsplit(file, " ") # split up fields for each line
processed = as.data.frame(t(sapply(split, readellaline)))
processed

This generates the following output:

id city v1 v2 v3 v4 v5
1 1 Las Vegas, NV NA 53.3 NA NA 1
2 2 Sacramento, CA NA 42.3 NA NA 2
3 3 Miami, FL NA 41.8 NA NA 3
4 4 Tucson, AZ NA 41.7 NA NA 4
5 5 Cleveland, OH NA 38.3 NA NA 5
6 6 Cincinnati, OH 15 36.4 NA NA 6
7 7 Colorado Springs, CO NA 36.1 NA NA 7
8 8 Memphis, TN NA 35.3 NA NA 8
9 8 New Orleans, LA NA 35.3 NA NA 8
10 10 Mesa, AZ NA 34.7 NA NA 10
11 11 Baltimore, MD NA 33.2 NA NA 11
12 12 Philadelphia, PA NA 31.7 NA NA 12
13 13 Salt Lake City, UT NA 31.9 17 NA 13
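
Since she had hundreds of such files to process, the natural last step is to batch the pipeline. Here is a hypothetical sketch (the directory name and file pattern are invented for illustration; each file is assumed to share the layout above):

files = list.files("reports", pattern = "\\.txt$", full.names = TRUE)
alltables = lapply(files, function(f) {
  words = strsplit(readLines(f), " ")
  as.data.frame(t(sapply(words, readellaline)))
})
# alltables now holds one processed data frame per report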


SAS
The input tools in SAS can accommodate these data directly, without reading the data once and then processing it. However, two separate special tools are needed: 1) the dlm= option to the infile statement, which makes both commas and spaces act as field delimiters, and 2) the & format modifier, which allows spaces within a variable that's being read in.

data ella4;
infile "c:\book\ella.txt" dlm=", ";
input id city & $20. state $2. v1 - v5;
run;

proc print data=ella4; run;

In effect, the foregoing input statement instructs SAS that the field following id is to be named city, that it may have spaces in it, and that it is a character variable up to 20 characters long; SAS will read into that variable for up to 20 characters, or until the next non-space delimiter. The dlm= option additionally makes the comma act as a delimiter.

Examining the log resulting from running the preceding code will reveal many notes regarding "invalid data", corresponding to the dashes. However, these result (correctly) in missing data codes, so they can safely be ignored.

Obs id city state v1 v2 v3 v4 v5

1 1 Las Vegas NV . 53.3 . . 1
2 2 Sacramento CA . 42.3 . . 2
3 3 Miami FL . 41.8 . . 3
4 4 Tucson AZ . 41.7 . . 4
5 5 Cleveland OH . 38.3 . . 5
6 6 Cincinnati OH 15 36.4 . . 6
7 7 Colorado Springs CO . 36.1 . . 7
8 8 Memphis TN . 35.3 . . 8
9 8 New Orleans LA . 35.3 . . 8
10 10 Mesa AZ . 34.7 . . 10
11 11 Baltimore MD . 33.2 . . 11
12 12 Philadelphia PA . 31.7 . . 12
13 13 Salt Lake City UT . 31.9 17 . 13
December 13, 2010
 
In recent weeks, we've explored methods to fit logistic regression models when a state of quasi-complete separation exists. We considered Firth's penalized likelihood approach, exact logistic regression, and Bayesian models using Markov chain Monte Carlo (MCMC).

Today we'll show how to build a Monte Carlo experiment to compare these approaches. Suppose we have 100 observations with x=0 and 100 with x=1, and suppose that Pr(Y=1|X=0) = 0.001, while Pr(Y=1|X=1) = 0.05. Thus the true odds ratio is (0.05/0.95)/(0.001/0.999) = 52.6, and the log odds ratio we want to find is 3.96. But we will rarely observe any y=1 when x=0. Which of these approaches is most likely to give us acceptable results?
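
(To spell out that arithmetic in R -- a quick aside, not part of the simulation code below:)

p0 = 0.001; p1 = 0.05
or = (p1 / (1 - p1)) / (p0 / (1 - p0))
or        # about 52.6
log(or)   # about 3.96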

Note that in all of the MCMC analyses we use only 6000 iterations, which is likely too few to trust in practice.

The code is long enough here that we annotate within rather than write much text around the code.

SAS

All the SAS procedures used accept the events/trials syntax (section 4.1.1), so we'll generate example data sets as two observations of binomial random variates with the probabilities noted above. We also make extensive use of the ODS system to suppress all printed output (section A.7.1) and to save desired pieces of output as SAS data sets. The latter usage requires first using the ods trace on/listing statement to find the name of the output before saving it. Finally, we use the by statement (section A.6.2) to replicate the analysis for each simulated data set.

data rlog;
do trial = 1 to 100;
/* each "trial" is a simulated data set with two observations
containing the observed number of events with x=0 or x=1 */
x=0; events = ranbin(0,100,.001); n=100; output;
x=1; events = ranbin(0,100,.05); n=100; output;
end;
run;

ods select none; /* omit _all_ printed output */

ods output parameterestimates=glm; /* save the estimated betas */
proc logistic data = rlog;
by trial;
model events / n=x; /* ordinary logistic regression */
run;

ods output parameterestimates=firth; /* save the estimated betas */
/* note the output data set has the same name
as in the uncorrected glm */
proc logistic data = rlog;
by trial;
model events / n = x / firth; /* do the firth bias correction */
run;

ods output exactparmest=exact;
/* the exact estimates have a different name under ODS */
proc logistic data=rlog;
by trial;
model events / n = x;
exact x / estimate; /* do the exact estimation */
run;

data prior;
input _type_ $ Intercept x;
datalines;
Var 25 25
Mean 0 0
;
run;

ods output postsummaries=mcmc;
proc genmod data = rlog;
by trial;
model events / n = x / dist=bin;
bayes nbi=1000 nmc=6000
coeffprior=normal(input=prior) diagnostics=none
statistics=summary;
/* do the Bayes regression, using the prior made in the
previous data step */
run;

Now I have four data sets with parameter estimates in them. I could use them separately, but I'd like to merge them together. I can do this with the merge statement (section 1.5.7) in a data step. I also need to drop the lines with the estimated intercepts and rename the variables that hold the parameter estimates. The latter is necessary because the names are duplicated across the output data sets and desirable in that it allows names that are meaningful. In any event, I can use the where and rename data set options to include these modifications as I do the merge. I'll also add the number of events when x=0 and when x=1, which requires merging in the original data twice.

data lregsep;
merge
glm (where = (variable = "x") rename = (estimate = glm))
firth (where = (variable = "x") rename = (estimate = firth))
exact (rename = (estimate = exact))
mcmc (where = (parameter = "x") rename = (mean=mcmc))
rlog (where = (x = 1) rename = (events = events1))
rlog (where = (x = 0) rename = (events = events0));
by trial;
run;

ods select all; /* now I want to see the output! */
/* check to make sure the output dataset looks right */
proc print data = lregsep (obs = 5) ;
var trial glm firth exact mcmc;
run;

/* what do the estimates look like? */
proc means data=lregsep;
var glm firth exact mcmc;
run;

This produces the following output:

Obs trial glm firth exact mcmc

1 1 12.7866 2.7803 2.3186 3.9635
2 2 12.8287 3.1494 2.7223 4.0304
3 3 10.7192 1.6296 0.8885 2.5613
4 4 11.7458 2.2378 1.6906 3.3409
5 5 10.7192 1.6296 0.8885 2.5115

Variable Mean Std Dev
----------------------------------------
glm 10.6971252 3.4362801
firth 2.2666700 0.5716097
exact 1.8237047 0.5646224
mcmc 3.1388274 0.9620103
----------------------------------------

The ordinary logistic estimates are entirely implausible, while the three alternative approaches are more acceptable. The MCMC result has the least bias, but it's unclear to what degree this is a happy coincidence between the odds ratio and the prior precision. The Firth approach appears to be less biased than the exact logistic regression.

R
The R version is roughly analogous to the SAS version. The notable differences are that the data are needed in three forms: the "weights" version (see example 8.15) for the glm() and logistf() functions, the events/trials syntax for the elrm() function, and the expanded (one row per observation) version for the MCMClogit() function. The sapply() function (section B.5.3) serves a purpose similar to the by statement in SAS. Finally, rather than spelunking through the ods trace output to find the parameter estimates, I used the str() function (section 1.3.2) to figure out where they are stored in the output objects, and indexes (rather than data set options) to pull out the one estimate I need.


# make sure the needed packages are present
require(logistf)
require(elrm)
require(MCMCpack)
# the runlogist() function generates a dataset and runs each analysis
# the parameter "trial" keeps track of which time we're calling runlogist()
runlogist = function(trial) {
  # the result vector will hold the estimates temporarily
  result = matrix(0, 4)
  # generate the number of events once
  events.0 = rbinom(1, 100, .001)  # for x = 0
  events.1 = rbinom(1, 100, .05)   # for x = 1
  # following for glm and logistf "weights" format
  xw = c(0, 0, 1, 1)
  yw = c(0, 1, 0, 1)
  ww = c(100 - events.0, events.0, 100 - events.1, events.1)
  # run the glm and logistf, grab the estimates, and stick
  # them into the results vector
  result[1] = glm(yw ~ xw, weights=ww, binomial)$coefficients[2]
  result[2] = logistf(yw ~ xw, weights=ww)$coefficients[2]
  # elrm() needs a data frame in the events/trials syntax
  elrmdata = data.frame(events=c(events.0, events.1), x=c(0, 1),
    trials=c(100, 100))
  # run it and grab the estimate
  result[3] = elrm(events/trials ~ x, interest = ~ x, iter=6000,
    burnIn=1000, data=elrmdata, r=2)$coeffs
  # MCMClogit() needs expanded data
  x = c(rep(0, 100), rep(1, 100))
  y = c(rep(0, 100 - events.0), rep(1, events.0),
    rep(0, 100 - events.1), rep(1, events.1))
  # run it and grab the mean of the MCMC posteriors
  result[4] = summary(MCMClogit(y ~ as.factor(x), burnin=1000,
    mcmc=6000, b0=0, B0=.04,
    seed=list(c(781306, 78632467, 364981736, 6545634, 7654654,
      4584), trial)))$statistics[2, 1]
  # send back the four estimates, plus the number of events
  # when x=0 and x=1
  return(c(trial, events.0, events.1, result))
}

Note the construction of the seed= option to the MCMClogit() function. This allows a different seed in every call without actually using sequential seeds.

Now we're ready to call the function repeatedly. We'll do that with the sapply() function, but we need to nest that inside a t() function call to get the estimates to appear as columns rather than rows, and we'll also make it a data frame in the same command. Note that the parameters we change within the sapply() function are merely a list of trial numbers. Finally, we'll add descriptive names for the columns with the names() function (section 1.3.4).

res2 = as.data.frame(t(sapply(1:10, runlogist)))
names(res2) <- c("trial","events.0","events.1", "glm",
"firth", "exact-ish", "MCMC")
head(res2)
mean(res2[,4:7], na.rm=TRUE)


trial events.0 events.1 glm firth exact-ish MCMC
1 1 0 6 18.559624 2.6265073 2.6269087 3.643560
2 2 1 3 1.119021 0.8676031 1.1822296 1.036173
3 3 0 5 18.366720 2.4489268 2.1308186 3.555314
4 4 0 5 18.366720 2.4489268 2.0452446 3.513743
5 5 0 2 17.419339 1.6295391 0.9021854 2.629160
6 6 0 9 17.997524 3.0382577 2.1573979 4.017105

glm firth exact-ish MCMC
17.333356 2.278344 1.813203 3.268243

The results are notably similar to SAS, except for the unacceptable glm() results.

In most Monte Carlo experimental settings, one would also be interested in examining the confidence limits for the parameter estimates. Notes and code for doing this can be found here. In a later entry we'll consider plots for the results generated above. As a final note, there are few combinations of event numbers with any mass worth considering. One could compute the probability of each of these and the associated parameter estimates, deriving a more analytic answer to the question. However, this would be difficult to replicate for arbitrary event probabilities and Ns, and very awkward for continuous covariates, while the above approach could be extended with trivial ease.
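
To sketch what that more analytic answer might look like (my own elaboration, under the assumption that a small grid of event counts captures nearly all of the probability mass), one could enumerate the plausible counts, weight each glm() estimate by its joint binomial probability, and average:

grid = expand.grid(e0 = 0:3, e1 = 0:15)   # assumed to cover ~all the mass
grid$p = dbinom(grid$e0, 100, 0.001) * dbinom(grid$e1, 100, 0.05)
est = function(e0, e1) {
  # weighted-data glm, as in runlogist() above
  ww = c(100 - e0, e0, 100 - e1, e1)
  glm(c(0, 1, 0, 1) ~ c(0, 0, 1, 1), weights = ww,
    family = binomial)$coefficients[2]
}
grid$beta = mapply(est, grid$e0, grid$e1)
sum(grid$p * grid$beta) / sum(grid$p)   # expected glm estimate over the grid

The same weighting would work for the other estimators, though as noted, extending it to arbitrary probabilities, Ns, or continuous covariates quickly becomes unwieldy.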