---
title: "Second Moment Method"
output:
  html_vignette:
    fig_width: 7
    fig_height: 5
vignette: >
  %\VignetteIndexEntry{Second Moment Method}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
set.seed(42)
```
The Second Moment Method (SMM) is a fast, analytical alternative to Monte Carlo simulation for estimating project cost or schedule uncertainty. Rather than running thousands of iterations, SMM propagates uncertainty through a project mathematically using only the mean and variance of each task, the "first two moments" of the probability distribution.
## When to Use SMM
SMM is best suited for early-stage estimates when:
- Speed matters and simulation run-time is a concern
- You have credible mean and variance estimates for each task
- Tasks are approximately independent or have well-characterized correlations
- You need a quick sensitivity check before committing to a full Monte Carlo run
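The sensitivity check mentioned above can be as simple as ranking tasks by their share of total variance: under independence, each task contributes exactly \(Var(X_i) / Var(X)\). A minimal sketch, with hypothetical task variances:
```{r}
# Variance-share sensitivity check (task variances here are illustrative)
task_vars <- c(4, 9, 6.25)
share <- task_vars / sum(task_vars)
round(share, 3)
```
The task with the largest share (task 2 in this sketch) is where tighter estimates reduce total uncertainty the most.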
## Method Overview
For a project with *n* tasks, SMM computes:
- **Total mean:** Sum of individual task means: \(E[X] = \sum_{i=1}^{n} E[X_i]\)
- **Total variance:** Sum of variances plus twice the sum of all pairwise covariances:
\[Var(X) = \sum_{i=1}^{n} Var(X_i) + 2 \sum_{i<j} Cov(X_i, X_j)\]
By the Central Limit Theorem, the project total can then be treated as approximately normal with this mean and variance, which yields confidence intervals directly.
## Worked Example
As a minimal example, take three tasks with the means and variances below (illustrative values). SMM sums the means and variances, and the normal approximation gives a 95% confidence interval for the total.
```{r}
# Illustrative task moments (e.g., cost in $k)
task_means <- c(10, 20, 15)
task_vars  <- c(4, 9, 6.25)
# SMM totals for independent tasks: sum the means and the variances
total_mean <- sum(task_means)
total_var  <- sum(task_vars)
total_sd   <- sqrt(total_var)
result <- list(total_mean = total_mean, total_var = total_var, total_sd = total_sd)
# 95% confidence interval under the normal approximation
ci_lower <- total_mean - 1.96 * total_sd
ci_upper <- total_mean + 1.96 * total_sd
```
```{r}
# Plot the normal density implied by the SMM moments
x_range <- seq(total_mean - 4 * total_sd, total_mean + 4 * total_sd, length.out = 500)
plot(x_range, dnorm(x_range, mean = total_mean, sd = total_sd),
  type = "l", col = "steelblue", lwd = 2,
  xlab = "Project total", ylab = "Density",
  main = "SMM Total with 95% Confidence Interval"
)
# Shade the 95% CI under the density curve
x_ci <- x_range[x_range >= ci_lower & x_range <= ci_upper]
y_ci <- dnorm(x_ci, mean = total_mean, sd = total_sd)
polygon(c(ci_lower, x_ci, ci_upper), c(0, y_ci, 0),
  col = "lightblue", border = NA
)
abline(v = total_mean, col = "black", lty = 2, lwd = 1.5)
legend("topright",
  legend = c("Normal density", "95% CI", "Mean"),
  col = c("steelblue", "lightblue", "black"),
  lty = c(1, NA, 2), lwd = c(2, NA, 1.5),
  pch = c(NA, 15, NA), pt.cex = 1.5,
  bty = "n"
)
```
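Because SMM summarizes the total as a normal distribution, exceedance probabilities come straight from `pnorm()`. A sketch, with illustrative total mean and standard deviation (hypothetical values, not computed by the package):
```{r}
# Probability of exceeding a budget under the SMM normal approximation
total_mean <- 45            # illustrative SMM total mean
total_sd   <- sqrt(19.25)   # illustrative SMM total sd
budget <- 50
p_over <- pnorm(budget, mean = total_mean, sd = total_sd, lower.tail = FALSE)
round(p_over, 3)
```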
## Comparison with Monte Carlo Simulation
Running Monte Carlo simulation with the same task distributions (and no correlation, for a clean comparison) validates the SMM results. With independent tasks, the two methods should yield very similar total means and variances; any small differences reflect Monte Carlo sampling error, which shrinks as the iteration count grows.
```{r}
# Represent each task as a normal distribution for MCS comparison (independent case)
task_dists_for_mcs <- list(
  list(type = "normal", mean = task_means[1], sd = sqrt(task_vars[1])),
  list(type = "normal", mean = task_means[2], sd = sqrt(task_vars[2])),
  list(type = "normal", mean = task_means[3], sd = sqrt(task_vars[3]))
)
# Run MCS without correlation (identity correlation matrix = fully independent)
mcs_result <- mcs(10000, task_dists_for_mcs)
```
```{r}
# SMM variance without correlation = sum of individual variances
smm_var_nocor <- sum(task_vars)
comparison <- data.frame(
  Method = c("SMM (independent)", "Monte Carlo (10,000 runs)"),
  Total_Mean = round(c(result$total_mean, mcs_result$total_mean), 2),
  Total_Variance = round(c(smm_var_nocor, mcs_result$total_variance), 2),
  Total_StdDev = round(c(sqrt(smm_var_nocor), mcs_result$total_sd), 2)
)
knitr::kable(comparison, caption = "SMM vs. Monte Carlo Comparison (independent tasks)")
The two methods agree closely on the mean and variance. SMM is faster but relies on the total being approximately normal; Monte Carlo is more flexible and can use any distribution type. When tasks are correlated, SMM adds covariance terms analytically while MCS uses a correlation-based sampling scheme.
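The analytical covariance adjustment is easy to compute by hand. A sketch, assuming a hypothetical uniform pairwise correlation `rho` and illustrative task variances:
```{r}
# SMM total variance with a uniform pairwise correlation (illustrative inputs)
task_vars <- c(4, 9, 6.25)
task_sds  <- sqrt(task_vars)
rho <- 0.3
n <- length(task_vars)
cov_sum <- 0
for (i in 1:(n - 1)) {
  for (j in (i + 1):n) {
    cov_sum <- cov_sum + rho * task_sds[i] * task_sds[j]  # Cov(X_i, X_j) = rho * sd_i * sd_j
  }
}
var_correlated <- sum(task_vars) + 2 * cov_sum
var_correlated
```
Positive correlation inflates the total variance (30.35 vs. 19.25 with these inputs), which is why ignoring correlation tends to understate project risk.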
## Benefits and Limitations
| | SMM | Monte Carlo |
|--|-----|-------------|
| **Speed** | Instant (analytical) | Slow (thousands of iterations) |
| **Inputs needed** | Mean + variance per task | Full distribution per task |
| **Distribution assumption** | Normal (by Central Limit Theorem) | Any distribution |
| **Correlation handling** | Explicit covariance formula | Cholesky decomposition |
| **Skewness / tails** | Ignored | Captured accurately |
| **Best for** | Early estimates, quick checks | Detailed risk analysis, non-normal tasks |