TechRxiv

Mixture of Gaussian Processes for Bayesian Active Learning

This paper introduces a Mixture of Gaussian Processes (MGP) model and explores its application to Bayesian active learning. The MGP offers an alternative to fully Bayesian Gaussian processes: it retains the benefits of "fully" Bayesian active learning while avoiding the computationally expensive Monte Carlo sampling of the Gaussian process hyperparameters. Through a detailed empirical analysis, we demonstrate that the MGP equipped with Bayesian Active Learning by Disagreement (BALD) improves querying efficiency and delivers competitive performance compared to both standard and fully Bayesian Gaussian processes. Across six classic simulators, the MGP with BALD achieves, on average, the lowest negative log probability with the fewest iterations, and it runs more than seven times faster than a fully Bayesian Gaussian process with BALD. On a real-world simulator from the air traffic management domain, the MGP outperforms both standard and fully Bayesian Gaussian processes. Finally, we demonstrate the applicability of the MGP within the Bayesian optimization framework, where it attains the best minimum on five of the six simulators considered.
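
As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below approximates a fully Bayesian GP with a small, equally weighted mixture of GPs, each holding a different fixed length-scale, and selects queries with a BALD-style disagreement score. The toy simulator, the length-scale grid, and the equal component weights are illustrative assumptions; the paper's MGP may weight and construct components differently, and scikit-learn is used here only for convenience.

```python
# Hypothetical sketch: mixture-of-GPs active learning with a BALD-style acquisition.
# Not the paper's code; all names and settings below are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def simulator(x):
    # Toy 1-D "simulator" standing in for the benchmark simulators in the paper.
    return np.sin(3 * x) + 0.1 * rng.normal(size=x.shape)

# Mixture components: GPs with fixed length-scales (optimizer=None keeps them fixed),
# playing the role of hyperparameter-posterior samples in a fully Bayesian GP.
LENGTH_SCALES = [0.1, 0.3, 1.0, 3.0]

def make_mixture():
    return [
        GaussianProcessRegressor(
            kernel=RBF(length_scale=ls) + WhiteKernel(noise_level=1e-2),
            optimizer=None,
        )
        for ls in LENGTH_SCALES
    ]

def bald_scores(models, X_pool):
    # BALD with Gaussian predictives: entropy of the (moment-matched) mixture
    # predictive minus the average entropy of the individual components.
    mus, sigmas = zip(*(m.predict(X_pool, return_std=True) for m in models))
    mus, sigmas = np.stack(mus), np.stack(sigmas)
    mix_var = np.mean(sigmas**2 + mus**2, axis=0) - np.mean(mus, axis=0) ** 2
    h_mix = 0.5 * np.log(2 * np.pi * np.e * mix_var)
    h_comp = np.mean(0.5 * np.log(2 * np.pi * np.e * sigmas**2), axis=0)
    return h_mix - h_comp

# Active-learning loop: refit the mixture and query the most disagreed-upon point.
X_train = rng.uniform(-2, 2, size=(3, 1))
y_train = simulator(X_train).ravel()
X_pool = np.linspace(-2, 2, 200).reshape(-1, 1)

for _ in range(10):
    models = [m.fit(X_train, y_train) for m in make_mixture()]
    idx = int(np.argmax(bald_scores(models, X_pool)))
    X_train = np.vstack([X_train, X_pool[idx : idx + 1]])
    y_train = np.append(y_train, simulator(X_pool[idx : idx + 1]).ravel())

print(f"Labelled {len(X_train)} points after 10 BALD queries.")
```

The mixture's predictive entropy has no closed form, so this sketch moment-matches the mixture with a single Gaussian before taking the entropy; this is a common simplification and may differ from the approximation used in the paper.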

Email Address of Submitting Author

chrrii@dtu.dk

ORCID of Submitting Author

0000-0002-4540-6691

Submitting Author's Institution

Technical University of Denmark

Submitting Author's Country

Denmark