Bootstrap is a method for estimating the variance and other distributional characteristics of statistical estimators. On any given sample an estimator takes only one value, so it seems impossible to learn anything about the estimator's distribution unless we make parametric assumptions about the underlying data and the distribution of the estimator can be related analytically to the distribution of the data. Bootstrap is a way out when such extra assumptions and derivations are not an option; it is largely a nonparametric method. Bootstrap says: assume that the true distribution of the data is the empirical distribution dictated by the current sample. Literally, assume that the true cumulative distribution function of the studied variables equals the empirical cumulative distribution function. Then simulate many realizations of the estimator by Monte Carlo (resampling the data with replacement) and compute its moments and other characteristics.
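As an illustration, a minimal sketch of the nonparametric bootstrap described above (the data and the choice of estimator here are hypothetical; only the Python standard library is used):

```python
import random
import statistics

random.seed(0)

def bootstrap_se(data, estimator, n_boot=2000):
    """Estimate the standard error of `estimator` by resampling the data
    with replacement (i.e. treating the empirical CDF as the true
    distribution) and taking the standard deviation of the estimator's
    values across the bootstrap replicates."""
    n = len(data)
    replicates = []
    for _ in range(n_boot):
        resample = random.choices(data, k=n)  # n i.i.d. draws from the ECDF
        replicates.append(estimator(resample))
    return statistics.stdev(replicates)

# Hypothetical example: SE of the sample mean, for which theory
# gives s / sqrt(n), so the bootstrap answer can be sanity-checked.
sample = [random.gauss(5.0, 2.0) for _ in range(100)]
se_boot = bootstrap_se(sample, statistics.fmean)
se_theory = statistics.stdev(sample) / len(sample) ** 0.5
print(round(se_boot, 3), round(se_theory, 3))
```

The sample mean is used only because its standard error is known in closed form; the same loop works unchanged for estimators with no analytic formula, such as the median or a trimmed mean.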
Parametric bootstrap is a half-measure. It says: assume a parametric model for certain parts of the data, namely the parts that have been well studied and are unlikely to present surprises. All other distributions and relationships are taken from the empirical distribution function. The relevant characteristics of the estimator are again computed via Monte Carlo.
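A minimal sketch of a parametric bootstrap, assuming for illustration that the data follow a normal model (the model choice and the data here are hypothetical): the parameters are fitted to the sample, and replicate samples are simulated from the fitted model rather than resampled from the empirical distribution.

```python
import random
import statistics

random.seed(1)

def parametric_bootstrap_se(data, estimator, n_boot=2000):
    """Parametric bootstrap under an assumed normal model: fit the mean
    and standard deviation to the data, simulate fresh samples from the
    fitted normal, and take the standard deviation of the estimator's
    values across the simulated samples."""
    mu = statistics.fmean(data)
    sd = statistics.stdev(data)
    n = len(data)
    replicates = []
    for _ in range(n_boot):
        simulated = [random.gauss(mu, sd) for _ in range(n)]
        replicates.append(estimator(simulated))
    return statistics.stdev(replicates)

# Hypothetical example: SE of the sample median under a fitted normal.
sample = [random.gauss(0.0, 1.0) for _ in range(50)]
se_median = parametric_bootstrap_se(sample, statistics.median)
print(round(se_median, 3))
```

Only the simulation step differs from the nonparametric version: draws come from the fitted parametric family instead of the empirical distribution, so the result is only as trustworthy as the assumed model.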
BOOTSTRAP REFERENCES
Efron, B. & Tibshirani, R. (1993). An Introduction to the Bootstrap. Chapman and Hall, London.
Efron, B. & Hastie, T. (2017). Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. Cambridge University Press.
James, G., Witten, D., Hastie, T. & Tibshirani, R. (2017). An Introduction to Statistical Learning: with Applications in R (Corr. 7th printing). Springer, New York.
Efron, B. (1979). Bootstrap methods: another look at the jackknife. Annals of Statistics 7: pp. 1-26.
Tibshirani, R. & Knight, K. (1999). Model search and inference by bootstrap bumping. J. Comp. and Graph. Stat. 8: pp. 671-686.
Hall, P. (1992). The Bootstrap and Edgeworth Expansion. Springer-Verlag, New York.
BOOTSTRAP RESOURCES