Mixture of Gaussian Processes for Bayesian Active Learning
  • Christoffer Riis ,
  • Filipe Rodrigues ,
  • Francisco Camara Pereira
This paper introduces a Mixture of Gaussian processes (MGP) model and explores its application in the context of Bayesian active learning. The MGP offers an alternative approach to fully Bayesian Gaussian processes by leveraging the benefits of ‘fully’ Bayesian active learning while circumventing the computationally expensive Monte Carlo sampling of the Gaussian process’s hyperparameters. Through a detailed empirical analysis, we demonstrate that the MGP equipped with Bayesian Active Learning by Disagreement (BALD) improves querying efficiency and delivers competitive performance compared to both standard and fully Bayesian Gaussian processes. Across six classic simulators, our experiments reveal that the MGP with BALD achieves, on average, the lowest negative log probability with the fewest iterations. Moreover, these models are more than seven times faster than fully Bayesian Gaussian processes with BALD. Furthermore, we extend our evaluation to a real-world simulator from the air traffic management domain, where MGP outperforms both Gaussian processes and fully Bayesian Gaussian processes. Additionally, we demonstrate the applicability of the MGP within the Bayesian optimization framework, where it yields the best minimum on five out of the six simulators considered.
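The idea of combining a mixture of Gaussian processes with BALD can be illustrated with a small sketch. This is not the authors' exact formulation: it assumes the mixture components are GPs with different fixed hyperparameters (standing in for hyperparameter samples), moment-matches the mixture to a Gaussian for a tractable entropy, and scores pool points by the BALD-style mutual information (mixture predictive entropy minus average component entropy). All names and parameter values are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(10, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(10)

# Mixture components: GPs with different length-scales, mimicking samples
# of the hyperparameters (optimizer=None keeps each kernel fixed).
length_scales = [0.3, 1.0, 3.0]
gps = [
    GaussianProcessRegressor(kernel=RBF(ls), alpha=1e-2, optimizer=None)
    .fit(X_train, y_train)
    for ls in length_scales
]

# Candidate pool for active learning.
X_pool = np.linspace(-3, 3, 200).reshape(-1, 1)
preds = [gp.predict(X_pool, return_std=True) for gp in gps]
mus = np.array([m for m, _ in preds])       # (n_components, n_pool)
sigmas = np.array([s for _, s in preds])    # (n_components, n_pool)

# Moment-match the equally weighted mixture to a Gaussian:
# mixture variance = average component variance + variance of the means.
mix_mu = mus.mean(axis=0)
mix_var = (sigmas**2 + mus**2).mean(axis=0) - mix_mu**2

def gauss_entropy(var):
    """Differential entropy of a univariate Gaussian with variance `var`."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

# BALD-style acquisition: total predictive entropy minus the expected
# entropy of the individual components (the "disagreement" term).
bald = gauss_entropy(mix_var) - gauss_entropy(sigmas**2).mean(axis=0)
x_next = X_pool[np.argmax(bald)]  # next point to query from the simulator
```

In a full active-learning loop, `x_next` would be evaluated by the simulator, appended to the training set, and the components refit, repeating until the query budget is exhausted; avoiding Monte Carlo sampling of the hyperparameters at each step is what makes the mixture approach fast.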