My first R package was recently released on CRAN. It is named MonteCarlo and aims to make simulation studies as easy as possible – including parallelization and the generation of tables.

## What Are Simulation Studies Good For?

Monte Carlo simulations are an essential tool in statistics and related disciplines. They are routinely used to determine distributional properties where no analytical results are available. A typical example is to study the finite sample properties of a new statistical procedure. Finite sample properties are often unknown, since theoretical results usually rely on asymptotic theory that only applies to infinite sample sizes. This may be a good approximation in practice if the sample size is large, but it may also be bad if the sample is small or if the rate of convergence is very slow.

Simulation studies are also extremely useful for comparing the performance of alternative procedures for the same task. Imagine you want to test whether your sample comes from a Gaussian distribution and you have to decide whether to use the Jarque-Bera test or the Kolmogorov-Smirnov test. Both appear to be legitimate choices. A simulation study tailored to the situation at hand might reveal that one of the procedures is much more powerful than the other.
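Such a comparison can be sketched in a few lines of base R. This is only an illustration, not part of the MonteCarlo package: the Shapiro-Wilk test stands in for the Jarque-Bera test, since the latter is not in base R (it is provided by e.g. the *tseries* package), and the samples are drawn from a skewed chi-squared distribution so that the null of normality is false.

```r
# Sketch: compare the power of two normality tests by simulation.
# Shapiro-Wilk stands in for Jarque-Bera (not available in base R).
set.seed(1)
nrep <- 1000
n <- 50
# draw non-normal (chi-squared) samples and record each test's rejection
rejections <- replicate(nrep, {
  x <- rchisq(n, df = 3)
  c(sw = shapiro.test(x)$p.value < 0.05,
    ks = ks.test(x, "pnorm", mean(x), sd(x))$p.value < 0.05)
})
rowMeans(rejections)  # empirical power of each test
```

The proportion of rejections across repetitions estimates each test's power; whichever test rejects more often in this setting is the better choice for it.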

Finally, small-scale simulation studies are an essential tool for statistical programming. Testing is a central part of programming and software development. Usually, if one writes a function, it is good practice to let it run on some test cases. This is easy if the outcome is deterministic. But if your function is a statistical test or an estimator, the output depends on the sample and is stochastic. Therefore, the only way to check whether the implementation is correct is to generate a large number of samples and see whether the output has the expected statistical properties. For example, one might test whether the mean squared error of an estimator converges to zero as the sample size increases, or whether the test has the correct alpha error.
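The alpha-error check described above can be sketched with base R's `t.test()` (again, plain R rather than the package): under the null hypothesis, a correctly implemented 5%-level test should reject in about 5% of the simulated samples.

```r
# Sketch: check the empirical alpha error of the t-test.
# Under the null (true mean zero), rejections should occur ~5% of the time.
set.seed(123)
nrep <- 10000
pvals <- replicate(nrep, t.test(rnorm(50))$p.value)
mean(pvals < 0.05)  # should be close to 0.05
```

A rejection rate far from 0.05 would point to a bug in the implementation.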

Therefore, writing Monte Carlo simulations is an everyday task in many areas of statistics. This comes with considerable effort. It is not unusual for the code required to produce a simulation study to be several times longer than the code needed to implement the procedure of interest. As a consequence, simulation studies are also one of the main sources of errors. On top of this, the large computational effort often requires parallelization, which brings additional complications and programming effort. This effort can be prohibitive – especially for less advanced users.

## The MonteCarlo Package

The MonteCarlo package streamlines the process described above. It allows you to create simulation studies and to summarize their results in LaTeX tables quickly and easily.

There are only two main functions in the package:

- *MonteCarlo()* runs a simulation study for a user-defined parameter grid. It handles the generation of loops over the parameter grid and parallelizes the computation on a user-specified number of CPU units.
- *MakeTable()* creates LaTeX tables from the output of *MonteCarlo()*. It stacks high-dimensional output arrays into tables with a user-specified ordering of rows and columns.

To run a simulation study, the user has to nest both – the generation of a sample and the calculation of the desired statistics from this sample – in a single function. This function is passed to *MonteCarlo()*. No additional programming is required. This approach is not only very versatile – it is also very intuitive. The user formulates the experiment as if they were only interested in a single draw.

The aim of this approach is to allow the user full control and flexibility with regard to the design of the Monte Carlo experiment, but to take away all the effort of setting up the technical part of the simulation.

## A Simple Example: The t-test

Suppose we want to evaluate the performance of a standard t-test for the hypothesis that the mean is equal to zero. We are interested in how the size and power of the test change with the sample size (*n*), the distance from the null hypothesis (*loc*, for location), and the standard deviation of the distribution (*scale*). The sample is generated from a normal distribution.

To conduct this analysis, we proceed as follows. First, we load the MonteCarlo package.

```
library(MonteCarlo)
```

Then we define the following function.

```
#########################################
## Example: t-test
# Define function that generates data and applies the method of interest
ttest<-function(n,loc,scale){
  # generate sample:
  sample<-rnorm(n, loc, scale)
  # calculate test statistic:
  stat<-sqrt(n)*mean(sample)/sd(sample)
  # get test decision:
  decision<-abs(stat)>1.96
  # return result:
  return(list("decision"=decision))
}
```

As discussed above, *ttest()* is formulated as if we only wanted to generate a single test decision. The arguments of the function are the parameters we are interested in. *ttest()* carries out four steps:

- Draw a sample of *n* observations from a normal distribution with mean *loc* and standard deviation *scale*.
- Calculate the t-statistic.
- Determine the test decision.
- Return the desired result in the form of a list.

We then define the combinations of parameters that we are interested in and collect them in a list. The elements of the lists must have the same names as the parameters for which we want to supply grids.

```
# define parameter grid:
n_grid<-c(50,100,250,500)
loc_grid<-seq(0,1,0.2)
scale_grid<-c(1,2)
# collect parameter grids in list:
param_list=list("n"=n_grid, "loc"=loc_grid, "scale"=scale_grid)
```

To run the simulation, the function *ttest()* and the parameter grid (*param_list*) are passed to *MonteCarlo()*, together with the desired number of Monte Carlo repetitions (*nrep=1000*).

```
# run simulation:
MC_result<-MonteCarlo(func=ttest, nrep=1000, param_list=param_list)
```

There is no further coding required. All the mechanics of the Monte Carlo experiment are handled by the *MonteCarlo()* function.

Calling *summary()* prints a short description of the simulation.

```
summary(MC_result)
```

```
## Simulation of function:
##
## function(n,loc,scale){
##
## # generate sample:
## sample<-rnorm(n, loc, scale)
##
## # calculate test statistic:
## stat<-sqrt(n)*mean(sample)/sd(sample)
##
## # get test decision:
## decision<-abs(stat)>1.96
##
## # return result:
## return(list("decision"=decision))
## }
##
## Required time: 13.38 secs for nrep = 1000 repetitions on 1 CPUs
##
## Parameter grid:
##
## n : 50 100 250 500
## loc : 0 0.2 0.4 0.6 0.8 1
## scale : 1 2
##
##
## 1 output arrays of dimensions: 4 6 2 1000
```

As one can see from the summary, the simulation results are stored in an array of dimension `c(4,6,2,1000)`, where the Monte Carlo repetitions are collected in the last dimension of the array.
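The output array can also be processed by hand. The sketch below assumes the raw draws sit in `MC_result$results$decision` (a list element named after the returned statistic); since that assumption should not silently fail, a stand-in array of the same shape is constructed here for illustration. Averaging over the last dimension gives the rejection frequencies that *MakeTable()* arranges into a table.

```r
# Sketch: manual post-processing of the output array.
# Stand-in for MC_result$results$decision, dim c(4, 6, 2, 1000),
# where the last dimension indexes the Monte Carlo repetitions.
set.seed(42)
MC_result <- list(results = list(
  decision = array(rbinom(4 * 6 * 2 * 1000, 1, 0.5), dim = c(4, 6, 2, 1000))
))
# average over repetitions to get rejection frequencies per grid point:
rejection_rates <- apply(MC_result$results$decision, 1:3, mean)
dim(rejection_rates)  # 4 x 6 x 2: n x loc x scale
```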

To summarize the results in a reasonable way and to include them as a table in a paper or report, we have to represent them in a matrix. This is handled by the *MakeTable()* function that stacks the submatrices collected in the array in the rows and columns of a matrix and prints the result in the form of code to generate a LaTeX table.

To determine in which order the results are stacked in rows and columns, we supply the function arguments *rows* and *cols*. These are vectors of the names of the parameters in the order in which we want them to appear in the table (sorted from the inside to the outside).

```
# generate table:
MakeTable(output=MC_result, rows="n", cols=c("loc","scale"), digits=2, include_meta=FALSE)
```

```
## \begin{table}[h]
## \centering
## \resizebox{ 1 \textwidth}{!}{%
## \begin{tabular}{ rrrrrrrrrrrrrrr }
## \hline\hline\\\\
## scale && \multicolumn{ 6 }{c}{ 1 } & & \multicolumn{ 6 }{c}{ 2 } \\
## n/loc & & 0 & 0.2 & 0.4 & 0.6 & 0.8 & 1 & & 0 & 0.2 & 0.4 & 0.6 & 0.8 & 1 \\
## & & & & & & & & & & & & & & \\
## 50 & & 0.05 & 0.30 & 0.83 & 0.98 & 1.00 & 1.00 & & 0.05 & 0.10 & 0.28 & 0.55 & 0.79 & 0.94 \\
## 100 & & 0.05 & 0.51 & 0.98 & 1.00 & 1.00 & 1.00 & & 0.07 & 0.16 & 0.53 & 0.84 & 0.98 & 1.00 \\
## 250 & & 0.05 & 0.89 & 1.00 & 1.00 & 1.00 & 1.00 & & 0.05 & 0.35 & 0.90 & 1.00 & 1.00 & 1.00 \\
## 500 & & 0.05 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & & 0.06 & 0.58 & 1.00 & 1.00 & 1.00 & 1.00 \\
## \\
## \\
## \hline\hline
## \end{tabular}%
## }
## \caption{ decision }
## \end{table}
```

To change the ordering, just change the vectors *rows* and *cols*.

```
# generate table:
MakeTable(output=MC_result, rows=c("n","scale"), cols="loc", digits=2, include_meta=FALSE)
```

```
## \begin{table}[h]
## \centering
## \resizebox{ 1 \textwidth}{!}{%
## \begin{tabular}{ rrrrrrrrr }
## \hline\hline\\\\
## scale & n/loc & & 0 & 0.2 & 0.4 & 0.6 & 0.8 & 1 \\
## & & & & & & & & \\
## \multirow{ 4 }{*}{ 1 } & 50 & & 0.05 & 0.30 & 0.83 & 0.98 & 1.00 & 1.00 \\
## & 100 & & 0.05 & 0.51 & 0.98 & 1.00 & 1.00 & 1.00 \\
## & 250 & & 0.05 & 0.89 & 1.00 & 1.00 & 1.00 & 1.00 \\
## & 500 & & 0.05 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
## & & & & & & & & \\
## \multirow{ 4 }{*}{ 2 } & 50 & & 0.05 & 0.10 & 0.28 & 0.55 & 0.79 & 0.94 \\
## & 100 & & 0.07 & 0.16 & 0.53 & 0.84 & 0.98 & 1.00 \\
## & 250 & & 0.05 & 0.35 & 0.90 & 1.00 & 1.00 & 1.00 \\
## & 500 & & 0.06 & 0.58 & 1.00 & 1.00 & 1.00 & 1.00 \\
## \\
## \\
## \hline\hline
## \end{tabular}%
## }
## \caption{ decision }
## \end{table}
```

Now we can simply copy the code and add it to our paper, report or presentation. That is all. Just make sure that the *multirow* package is included in the header of the *.tex* file.
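A minimal preamble for a document that includes the generated tables might look as follows; note that `\resizebox` additionally requires the *graphicx* package.

```latex
% Minimal preamble for documents that include MakeTable() output
\documentclass{article}
\usepackage{multirow}  % row-spanning cells in the generated tables
\usepackage{graphicx}  % provides \resizebox, used to scale the tables
\begin{document}
% ... paste the generated table code here ...
\end{document}
```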

## Parallelised Simulation

If the procedure you are interested in is not that fast, or you need a large number of replications to produce very accurate results, you might want to use parallelized computation on multiple cores of your computer (or cluster). To achieve this, simply specify the number of CPUs by supplying a value for the argument *ncpus* of *MonteCarlo()* as shown below. Of course, your machine should actually have at least the specified number of CPUs.

```
# run simulation:
MC_result<-MonteCarlo(func=ttest, nrep=1000, param_list=param_list, ncpus=4)
```

This automatically sets up a *snow* cluster, including the export of functions and the loading of packages. The user does not have to take care of anything.

## Further Information

This is an introduction to get you up and running with the *MonteCarlo* package as quickly as possible. Therefore, I only included a short example. However, the functions *MonteCarlo()* and particularly *MakeTable()* provide much more functionality. This is described in more detail in the package vignette, which also provides additional examples.

[…] article was first published on R – first differences, and kindly contributed to […]


Hi, thanks for the package! It sounds very exciting. However, the code below takes 1.38 secs when ncpus=1, but 28.5 secs when ncpus=2. This is on Windows 10 64 bit. Any ideas why? Someone else had the same observation: https://twitter.com/uri_sohn/status/873205000545218561

```
library(MonteCarlo)

test_func<-function(n,loc,scale){
  sample<-rnorm(n, loc, scale)
  stat<-sqrt(n)*mean(sample)/sd(sample)
  decision<-abs(stat)>1.96
  return(list("decision"=decision))
}

# Example without parallelization
n_grid<-c(50,100,250,500)
loc_grid<-seq(0,1,0.2)
scale_grid<-c(1,2)
param_list=list("n"=n_grid, "loc"=loc_grid, "scale"=scale_grid)

erg<-MonteCarlo(func=test_func, nrep=250, param_list=param_list, ncpus=1)
summary(erg)

rows<-c("n")
cols<-c("loc","scale")
MakeTable(output=erg, rows=rows, cols=cols, digits=2)
```


Hi! I am very happy that you find the package exciting ;)! Parallelization is not necessarily faster. The reason for this behavior is that the computer needs time to initialize the cluster (start R instances, load packages there, export things from your workspace to these new instances, etc.). If this setup time exceeds the time saved by computing on more than one CPU, the performance gets worse, not better. The t-test simulated in this example is very quick to calculate. That is why you do not save much time if you parallelize it. This should change if you consider something bigger, for example nrep=5000 and n=2500, or if you evaluate more complicated functions that contain loops or numerical optimization steps. Best regards!


Hi Daniel:

Indeed, parallel computing can be slower, sometimes much slower. To see benefits, you need enough conditions to overcome the overhead time. For simpler simulation studies (fewer than 100 conditions), I usually use one core when using frequentist methods.

Donny


Do you have an example you would feel comfortable posting?



```
rm(list=ls())
library(MonteCarlo)

slow_mean<-function(data){
  sum_est<-0
  for(i in 1:length(data)){sum_est<-sum_est+data[i]}
  sum_est/length(data)
}

test_func<-function(n,loc,scale){
  sample<-rnorm(n, loc, scale)
  stat<-sqrt(n)*slow_mean(sample)/sd(sample)
  decision1.96
  return(list("decision"=decision))
}

n_grid<-c(1000,2500,5000)
loc_grid<-1
scale_grid<-c(1,2)
param_list=list("n"=n_grid, "loc"=loc_grid, "scale"=scale_grid)

erg1<-MonteCarlo(func=test_func, nrep=5000, param_list=param_list, ncpus=1)
erg2 erg1$meta$time
Time difference of 55.06 secs
> erg2$meta$time
Time difference of 34.71 secs
```


Hmmm, something is wrong. The example does not work 😦


Sorry, some of the code had disappeared after copying and pasting. The version below should work.

```
rm(list=ls())
library(MonteCarlo)

slow_mean<-function(data){
  sum_est<-0
  for(i in 1:length(data)){sum_est<-sum_est+data[i]}
  sum_est/length(data)
}

test_func<-function(n,loc,scale){
  sample<-rnorm(n, loc, scale)
  stat<-sqrt(n)*slow_mean(sample)/sd(sample)
  decision<-stat<1.96
  return(list("decision"=decision))
}

n_grid<-c(1000,2500,5000)
loc_grid<-1
scale_grid<-c(1,2)
param_list=list("n"=n_grid, "loc"=loc_grid, "scale"=scale_grid)

erg1<-MonteCarlo(func=test_func, nrep=5000, param_list=param_list, ncpus=1)
erg2<-MonteCarlo(func=test_func, nrep=5000, param_list=param_list, ncpus=2)

erg1$meta$time
erg2$meta$time
```


Thanks – this works, but I need to find a description to understand the underlying idea 🙂

Thanks

Lars


As I said – it looks like it works. However, I get time differences of 1.55 mins and 1.37 mins, respectively.

Something is still puzzling.


That is indeed puzzling. On my machine the version with 2 cpus is about twice as fast as the version with 1. Maybe yours is much faster. What happens if you set the number of repetitions to 50000 each?


First of all, it now took a veeeeery long time to calculate with nrep=50000.

The results were quite different: 15.98 and 8.27, respectively.


Ah great. So it works. It is nearly twice as fast with two CPUs instead of one. Whether parallelization makes sense really depends on the magnitude of the problem at hand. If the things you usually do only take a few microseconds to compute, you are probably fine without it, but if you have simulations that take hours or days to compute, it is a real time saver. As I said, I'll see that I provide a more detailed discussion soon.


I am going to write a longer post about the pros and cons of parallel computation and the factors that influence overhead time and time savings. Also, I am considering adding a second parallel mode that works better for very large numbers of CPUs and large parameter grids with relatively simple calculations. The current implementation is geared very much towards situations with a lower number of cores (say <10) and situations where a single repetition is time consuming (say several seconds or even minutes). It might take a couple of weeks until I find some time for that.


Looks promising, but I get the following error when running the example code.

MC_result eval -> eval -> func


Hi! That is odd. Could you post a mini example that reproduces the error?
