Title: | Metrics (with Uncertainty) for Simulation Studies that Evaluate Statistical Methods |
Description: | Allows users to quickly apply individual or multiple metrics to evaluate Monte Carlo simulation studies. |
Authors: | Rex Parsons [aut, cre] |
Maintainer: | Rex Parsons <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.1.1 |
Built: | 2024-11-17 03:24:01 UTC |
Source: | https://github.com/rwparsons/simmetric |
Calculates the bias of the model estimates from the true value and the Monte Carlo standard error for this estimate.
bias(true_value, estimates, get = c("bias", "bias_mcse"), na.rm = FALSE, ...)
true_value |
The true value which is being estimated. |
estimates |
A numeric vector containing the estimates from the model(s). |
get |
A character vector containing the values returned by the function. |
na.rm |
A logical value indicating whether NA values for estimates should be removed before the bias is estimated. |
... |
Additional arguments to be ignored. |
A named vector containing the estimate and the Monte Carlo standard error for the bias.
bias(true_value=0, estimates=rnorm(100))
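The two returned values follow the standard Monte Carlo formulas: the bias is the mean deviation of the estimates from the true value, and its MCSE is the sample standard deviation of the estimates divided by the square root of the number of repetitions. A minimal sketch of that computation (in Python, for illustration; the package's own implementation is not reproduced here):

```python
import math

def bias(true_value, estimates):
    """Bias of the estimates and its Monte Carlo standard error.

    bias      = mean(estimates) - true_value
    bias_mcse = sd(estimates) / sqrt(n)   (sample sd, n - 1 denominator)
    """
    n = len(estimates)
    mean_est = sum(estimates) / n
    var_est = sum((e - mean_est) ** 2 for e in estimates) / (n - 1)
    return {"bias": mean_est - true_value,
            "bias_mcse": math.sqrt(var_est / n)}
```

For an unbiased method the bias estimate shrinks toward zero as the number of simulation repetitions grows, and the MCSE quantifies how far from zero it can wander by chance.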
Estimate the bias-eliminated coverage and the Monte Carlo standard error of this estimate given vectors of estimates and confidence interval limits.
biasEliminatedCoverage(
  estimates,
  ll,
  ul,
  get = c("biasEliminatedCoverage", "biasEliminatedCoverage_mcse"),
  na.rm = FALSE,
  ...
)
estimates |
A numeric vector containing the estimates from the model(s). |
ll |
A numeric vector containing the lower limits of the confidence intervals. |
ul |
A numeric vector containing the upper limits of the confidence intervals. |
get |
A character vector containing the values returned by the function. |
na.rm |
A logical value indicating whether NA values for ll and ul should be removed before coverage estimation. |
... |
Additional arguments to be ignored. |
A named vector containing the estimate and the Monte Carlo standard error for the bias-eliminated coverage.
biasEliminatedCoverage(estimates=rnorm(4), ll=c(-1, -1, -1, -1), ul=c(1, 1, 1, -0.5))
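Bias-eliminated coverage checks whether the intervals contain the mean of the estimates rather than the true value, so that bias is not mistaken for poor interval construction. A sketch, assuming the usual binomial MCSE formula (the package's exact implementation may differ):

```python
import math

def bias_eliminated_coverage(estimates, ll, ul):
    """Proportion of intervals containing the *mean* estimate, with
    the binomial MCSE sqrt(c * (1 - c) / n)."""
    n = len(estimates)
    centre = sum(estimates) / n  # mean estimate replaces the true value
    c = sum(1 for lo, hi in zip(ll, ul) if lo <= centre <= hi) / n
    return {"biasEliminatedCoverage": c,
            "biasEliminatedCoverage_mcse": math.sqrt(c * (1 - c) / n)}
```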
Estimate the coverage and the Monte Carlo standard error of this estimate given a vector of confidence intervals and the true value.
coverage(
  true_value,
  ll,
  ul,
  get = c("coverage", "coverage_mcse"),
  na.rm = FALSE,
  ...
)
true_value |
The true value which should be covered by the interval. |
ll |
A numeric vector containing the lower limits of the confidence intervals. |
ul |
A numeric vector containing the upper limits of the confidence intervals. |
get |
A character vector containing the values returned by the function. |
na.rm |
A logical value indicating whether NA values for ll and ul should be removed before coverage estimation. |
... |
Additional arguments to be ignored. |
A named vector containing the estimate and the Monte Carlo standard error for the coverage.
coverage(true_value=0, ll=c(-1, -1, -1, -1), ul=c(1, 1, 1, -0.5))
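Coverage is the proportion of intervals that contain the true value, and its MCSE follows the binomial formula. A sketch of the computation behind the example above (Python, for illustration only):

```python
import math

def coverage(true_value, ll, ul):
    """Proportion of intervals containing the true value, with the
    binomial MCSE sqrt(c * (1 - c) / n)."""
    n = len(ll)
    c = sum(1 for lo, hi in zip(ll, ul) if lo <= true_value <= hi) / n
    return {"coverage": c, "coverage_mcse": math.sqrt(c * (1 - c) / n)}

# Three of the four intervals contain zero, so the coverage is 0.75.
coverage(0, [-1, -1, -1, -1], [1, 1, 1, -0.5])
```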
Calculates the empirical standard error of the model estimates and its Monte Carlo standard error.
empSE(estimates, get = c("empSE", "empSE_mcse"), na.rm = FALSE, ...)
estimates |
A numeric vector containing the estimates from the model(s). |
get |
A character vector containing the values returned by the function. |
na.rm |
A logical value indicating whether NA values for estimates should be removed before the empirical standard error is estimated. |
... |
Additional arguments to be ignored. |
A named vector containing the estimate and the Monte Carlo standard error for the empirical standard error.
empSE(estimates=rnorm(100))
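The empirical standard error is simply the sample standard deviation of the estimates; its MCSE is commonly approximated as empSE / sqrt(2(n - 1)). A sketch under those standard formulas (the package's internals are assumed, not shown):

```python
def emp_se(estimates):
    """Empirical SE (sample sd of the estimates) and the usual MCSE
    approximation empSE / sqrt(2 * (n - 1))."""
    n = len(estimates)
    mean_est = sum(estimates) / n
    sd = (sum((e - mean_est) ** 2 for e in estimates) / (n - 1)) ** 0.5
    return {"empSE": sd, "empSE_mcse": sd / (2 * (n - 1)) ** 0.5}
```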
Calculate and join selected evaluation metrics given a data.frame of simulation study results
Provides a fast way to add multiple metrics and their Monte Carlo standard errors.
join_metrics(
  data,
  id_cols,
  metrics = c("coverage", "mse", "modSE"),
  true_value = NULL,
  ll_col = NULL,
  ul_col = NULL,
  estimates_col = NULL,
  se_col = NULL,
  p_col = NULL,
  alpha = 0.05
)
data |
A data.frame containing the simulation study results. |
id_cols |
Column name(s) on which to group data and calculate metrics. |
metrics |
A vector of metrics to be calculated. |
true_value |
The true parameter to be estimated. |
ll_col |
Name of the column that contains the lower limit of the confidence intervals. (Required for calculating coverage.) |
ul_col |
Name of the column that contains the upper limit of the confidence intervals. (Required for calculating coverage.) |
estimates_col |
Name of the column that contains the parameter estimates. (Required for calculating bias, empSE, and mse.) |
se_col |
Name of the column that contains the standard errors. (Required for calculating modSE.) |
p_col |
Name of the column that contains the p-values. (Required for calculating rejection.) |
alpha |
The nominal significance level specified. (Required for calculating rejection.) |
data.frame
containing metrics and id_cols
simulations_df <- data.frame(
  idx = rep(1:10, 100),
  idx2 = sample(c("a", "b"), size = 1000, replace = TRUE),
  p_value = runif(1000),
  est = rnorm(n = 1000),
  conf.ll = rnorm(n = 1000, mean = -20),
  conf.ul = rnorm(n = 1000, mean = 20)
)
res <- join_metrics(
  data = simulations_df,
  id_cols = c("idx", "idx2"),
  metrics = c("rejection", "coverage", "mse"),
  true_value = 0,
  ll_col = "conf.ll",
  ul_col = "conf.ul",
  estimates_col = "est",
  p_col = "p_value"
)
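Conceptually, join_metrics is a group-and-summarise: split the results by id_cols, apply each requested metric to the named columns within each group, and bind the summaries back alongside id_cols. A pandas sketch of that grouping logic for two of the metrics (names are illustrative; the real function dispatches on its metrics argument and also appends the _mcse columns):

```python
import pandas as pd

def join_metrics(data, id_cols, true_value, estimates_col, p_col, alpha=0.05):
    """Group by id_cols and compute rejection and MSE per group
    (grouping logic only, not the package's full feature set)."""
    return (
        data.assign(
            _rej=data[p_col] <= alpha,                       # rejected at alpha?
            _sqerr=(data[estimates_col] - true_value) ** 2,  # squared error
        )
        .groupby(id_cols, as_index=False)
        .agg(rejection=("_rej", "mean"), mse=("_sqerr", "mean"))
    )
```

The returned frame has one row per unique combination of id_cols, which makes it easy to compare simulation scenarios side by side.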
Calculates the average model standard error and the Monte Carlo standard error of this estimate.
modSE(se, get = c("modSE", "modSE_mcse"), na.rm = FALSE, ...)
se |
A numeric vector containing the standard errors from the model(s). |
get |
A character vector containing the values returned by the function. |
na.rm |
A logical value indicating whether NA values for se should be removed before the average model standard error is estimated. |
... |
Additional arguments to be ignored. |
A named vector containing the estimate and the Monte Carlo standard error for the average model standard error.
modSE(se=runif(n=20, min=1, max=1.5))
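The average model SE is conventionally computed on the variance scale, modSE = sqrt(mean(se^2)), with a delta-method MCSE. A sketch assuming those standard formulas (the package's exact computation may differ):

```python
def mod_se(se):
    """Average model SE on the variance scale, sqrt(mean(se^2)), with a
    delta-method MCSE: sqrt(var(se^2) / (4 * n * mean(se^2)))."""
    n = len(se)
    se2 = [s ** 2 for s in se]
    mean_se2 = sum(se2) / n
    var_se2 = sum((x - mean_se2) ** 2 for x in se2) / (n - 1)
    return {"modSE": mean_se2 ** 0.5,
            "modSE_mcse": (var_se2 / (4 * n * mean_se2)) ** 0.5}
```

Averaging on the variance scale (rather than taking a plain mean of the SEs) matches how variances, not standard errors, combine.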
Calculates the Mean Squared Error of the model estimates from the true value and the Monte Carlo standard error for this estimate.
mse(true_value, estimates, get = c("mse", "mse_mcse"), na.rm = FALSE, ...)
true_value |
The true value which is being estimated. |
estimates |
A numeric vector containing the estimates from the model(s). |
get |
A character vector containing the values returned by the function. |
na.rm |
A logical value indicating whether NA values for estimates should be removed before the MSE is estimated. |
... |
Additional arguments to be ignored. |
A named vector containing the estimate and the Monte Carlo standard error for the MSE.
mse(true_value=0, estimates=rnorm(100))
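The MSE is the mean of the squared errors, and its MCSE is the sample standard deviation of those squared errors divided by sqrt(n). A sketch of that computation (illustrative Python, not the package source):

```python
def mse(true_value, estimates):
    """Mean squared error and its MCSE, sd(squared errors) / sqrt(n)."""
    n = len(estimates)
    sq_err = [(e - true_value) ** 2 for e in estimates]
    m = sum(sq_err) / n
    var_sq = sum((s - m) ** 2 for s in sq_err) / (n - 1)
    return {"mse": m, "mse_mcse": (var_sq / n) ** 0.5}
```

Since MSE equals bias squared plus the variance of the estimates, it penalises both systematic and random error at once.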
Calculates the rejection (%) of the model p-values, according to the specified alpha, and the Monte Carlo standard error for this estimate.
rejection(
  p,
  alpha = 0.05,
  get = c("rejection", "rejection_mcse"),
  na.rm = FALSE,
  ...
)
p |
P-values from the models. |
alpha |
The nominal significance level specified. The default is 0.05. |
get |
A character vector containing the values returned by the function. |
na.rm |
A logical value indicating whether NA values for p should be removed before the rejection is estimated. |
... |
Additional arguments to be ignored. |
A named vector containing the estimate and the Monte Carlo standard error for the rejection.
rejection(p=runif(200, min=0, max=1))
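Rejection is the proportion of p-values at or below alpha, with the binomial MCSE. A sketch assuming a non-strict comparison (whether the package uses p <= alpha or p < alpha is an implementation detail not shown here):

```python
def rejection(p, alpha=0.05):
    """Proportion of p-values at or below alpha, with binomial MCSE."""
    n = len(p)
    r = sum(1 for pv in p if pv <= alpha) / n
    return {"rejection": r, "rejection_mcse": (r * (1 - r) / n) ** 0.5}
```

Under a true null with uniform p-values, the rejection rate should sit near alpha; under an alternative, it estimates power.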
Calculates the relative (%) error in model standard error and the (approximate) Monte Carlo standard error of this estimate.
relativeErrorModSE(
  se,
  estimates,
  get = c("relativeErrorModSE", "relativeErrorModSE_mcse"),
  na.rm = FALSE,
  ...
)
se |
A numeric vector containing the standard errors from the model(s). |
estimates |
A numeric vector containing the estimates from the model(s). |
get |
A character vector containing the values returned by the function. |
na.rm |
A logical value indicating whether NA values for se and estimates should be removed before estimation. |
... |
Additional arguments to be ignored. |
A named vector containing the estimate and the Monte Carlo standard error for the relative (%) error in model standard error.
relativeErrorModSE(se=rnorm(n=1000, mean=10, sd=0.5), estimates=rnorm(n=1000))
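The point estimate is conventionally 100 * (modSE / empSE - 1): positive values mean the model-based standard errors overstate the true sampling variability, negative values that they understate it. A sketch of the point estimate only (the approximate MCSE formula is omitted here; the package's version is assumed, not shown):

```python
def relative_error_mod_se(se, estimates):
    """Relative (%) error of the average model SE against the
    empirical SE: 100 * (modSE / empSE - 1)."""
    n = len(estimates)
    mean_est = sum(estimates) / n
    emp = (sum((e - mean_est) ** 2 for e in estimates) / (n - 1)) ** 0.5
    mod = (sum(s ** 2 for s in se) / n) ** 0.5
    return 100 * (mod / emp - 1)
```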
Calculates the relative (%) increase in precision between two competing methods (B vs A). As this metric compares two methods directly, it cannot be used in join_metrics().
relativePrecision(
  estimates_A,
  estimates_B,
  get = c("relativePrecision", "relativePrecision_mcse"),
  na.rm = FALSE
)
estimates_A |
A numeric vector containing the estimates from model A. |
estimates_B |
A numeric vector containing the estimates from model B. |
get |
A character vector containing the values returned by the function. |
na.rm |
A logical value indicating whether NA values for estimates_A and estimates_B should be removed before estimation. |
A named vector containing the estimate and the Monte Carlo standard error for the relative (%) increase in precision of method B versus method A.
relativePrecision(estimates_A=rnorm(n=1000), estimates_B=rnorm(n=1000))
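A common definition of this point estimate is 100 * ((empSE_A / empSE_B)^2 - 1), so positive values indicate that method B is more precise than method A. A sketch of the point estimate only (the MCSE, which depends on the correlation between the two methods' estimates, is omitted; the package's formula is assumed, not shown):

```python
def relative_precision(estimates_A, estimates_B):
    """Relative (%) gain in precision of method B over method A:
    100 * ((empSE_A / empSE_B) ** 2 - 1)."""
    def emp_se(x):
        n = len(x)
        m = sum(x) / n
        return (sum((v - m) ** 2 for v in x) / (n - 1)) ** 0.5
    return 100 * ((emp_se(estimates_A) / emp_se(estimates_B)) ** 2 - 1)
```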