Particle learning of Gaussian processes (plgp R package)
plgp is an R package implementing sequential Monte Carlo inference for fully Bayesian Gaussian process (GP) models by particle learning (PL). The sequential nature of inference, together with the active learning (AL) hooks provided, facilitates thrifty sequential design and optimization. The package provides a generic PL interface along with functions (supplied as arguments to the interface) that implement the GP models and AL heuristics. The current version supports
- static classification and regression GP models with three types of correlation functions
- isotropic Gaussian and separable Gaussian
- single-index Gaussian, implementing a GP-SIM model
- sequential design for optimization of (noisy) real-valued functions by expected improvement (EI) statistic using a regression GP model
- sequential design for exploring classification boundaries by the predictive entropy statistic via a classification GP model
- sequential design for optimization under known and unknown constraints by an integrated expected conditional improvement (IECI) statistic using a hybrid regression-classification GP model
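In code, driving the generic PL interface looks roughly like the following. The particular function and argument names shown (PL, data.GP, draw.GP, lpredprob.GP, propagate.GP) are assumptions based on the package's naming convention for GP methods, so consult the reference manual and the demos for the authoritative signatures; this is only a sketch.

> library(plgp)
> ## sketch: run PL on a GP regression data stream with 100 particles,
> ## processing stream indices 10 through 100 sequentially
> out <- PL(dstream=data.GP, start=10, end=100, init=draw.GP,
+     lpredprob=lpredprob.GP, propagate=propagate.GP, P=100)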
- Obtain R from cran.r-project.org by selecting the version for your operating system.
- Install the plgp, mvtnorm and tgp packages, from within R.
> install.packages(c("plgp", "mvtnorm", "tgp"))
- Optionally, install the akima, ellipse and splancs packages.
> install.packages(c("akima", "ellipse", "splancs"))
- Load the library as you would for any R library.
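For example, from the R prompt:

> library(plgp)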
- See the package documentation. A pdf version of the
reference manual (help pages) is also available,
and the help pages can be accessed from within
R. The best way to acquaint yourself with the functionality
of this package is to run the demos, which illustrate the examples
contained in the papers referenced below. Try starting with...
> ?plgp # follow the examples which point to the demos
> demo(package="plgp") # for a direct listing of the demos
- Particle learning of Gaussian process models for sequential design and optimization (2011) with Nicholas Polson. Journal of Computational and Graphical Statistics, 20(1), pp. 109-118; preprint on arXiv:0909.5262
- Gaussian process single-index models as emulators for computer experiments (2012) with Heng Lian; Technometrics, 54(1), pp. 30-41; preprint on arXiv:1009.4241
- Optimization under unknown constraints with H.K.H. Lee (2010). Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press; preprint on arXiv:1004.4027
- Carvalho, C., Johannes, M., Lopes, H., and Polson, N. (2008) Particle Learning and Smoothing. Discussion Paper 2008-32, Duke University Dept. of Statistical Science
Please send questions and comments to rbgramacy_AT_chicagobooth_DOT_edu. Enjoy!
Robert B. Gramacy -- 2011