Abstract:
Optimizing an unknown function is central to many applications, from industrial design to model calibration. This task is particularly challenging when each evaluation of the function is expensive.
The goal of a sequential optimization procedure is to simultaneously explore the unknown function and converge to its optimum value using as few evaluations as possible.
To tackle this challenge, we assume that the unknown function is a sample from a Gaussian process (GP) whose smoothness is a parameter.
In this talk we focus on Upper Confidence Bound (UCB) algorithms built on the GP assumption, and present performance guarantees with explicit convergence rates.
We then show how this applies in two real scenarios: tsunami analysis and the optimization of a farm of wave energy converters.
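To fix ideas, the GP-UCB loop mentioned above can be sketched in a few lines of numpy. This is a minimal illustration, not the talk's exact setup: it assumes a squared-exponential kernel with a hand-picked length scale, a fixed exploration parameter beta, and acquisition maximization over a 1D grid, whereas the talk treats the smoothness as a parameter and derives how beta should grow to obtain regret guarantees.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.2):
    # Squared-exponential kernel; the smoothness assumption enters here.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-5):
    # Standard GP regression equations for posterior mean and variance.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    mu = Ks @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1)  # prior variance is 1 for this kernel
    return mu, np.maximum(var, 0.0)

def gp_ucb_maximize(f, n_iter=15, beta=2.0, seed=0):
    # Sequentially evaluate f at the maximizer of the upper confidence bound.
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 200)
    x = np.array([rng.uniform(0.0, 1.0)])
    y = np.array([f(x[0])])
    for _ in range(n_iter):
        mu, var = gp_posterior(x, y, grid)
        ucb = mu + beta * np.sqrt(var)   # optimism in the face of uncertainty
        x_next = grid[np.argmax(ucb)]
        x = np.append(x, x_next)
        y = np.append(y, f(x_next))
    best = np.argmax(y)
    return x[best], y[best]

# Toy expensive function (hypothetical stand-in): maximum at t = 0.3.
f = lambda t: -(t - 0.3) ** 2
x_best, y_best = gp_ucb_maximize(f)
```

The UCB rule trades off exploration (large posterior variance) against exploitation (large posterior mean); the convergence-rate results discussed in the talk quantify how fast this trade-off drives the simple regret to zero.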