[MUSIC] In this video, we will talk about the exploration-exploitation trade-off in Bayesian optimization. But first, let me remind you of the key points of Bayesian optimization.

Bayesian optimization is a method for finding the optimum of an expensive cost function. This cost function is also called the objective function and is denoted f. It is assumed that evaluating f at a single point is expensive, and that the derivatives of the function are unknown. The goal of Bayesian optimization is to find the optimum of the objective function using as few function evaluations as possible.

The optimization algorithm is as follows. In the first step, build an approximation of the objective function from the previously evaluated points by solving a regression problem. Then, using this approximation, find the optimum point of an acquisition function. After that, sample the objective function at this new point, and repeat all the steps.

There is a variety of acquisition functions; the one used in this video is the Lower Confidence Bound, which is suited for objective function minimization.

Bayesian optimization with Gaussian processes allows us to balance between exploration and exploitation of the objective function. Exploration is when you choose a point with high variance at each iteration of the optimization. Exploitation is when you choose a point with a high (for maximization) or low (for minimization) mean at each iteration. Adjustable parameters of the acquisition function provide a trade-off between exploration and exploitation.

To study the exploration-exploitation properties of Bayesian optimization, consider the following objective function f. The goal in this example is to find its minimum. Let's start the optimization from three observations.

First, consider the exploitation property. Exploitation can be achieved with small values of the parameter k in the lower confidence bound function. After ten iterations, almost all observations are in a local minimum.
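For the minimization setting used here, the lower confidence bound acquisition function is usually written as follows, where mu and sigma are the posterior mean and standard deviation of the Gaussian process approximation, and k is the adjustable trade-off parameter:

```latex
a_{\mathrm{LCB}}(x) = \mu(x) - k\,\sigma(x), \qquad x_{\text{next}} = \arg\min_x a_{\mathrm{LCB}}(x)
```

A small k makes the minimum of a_LCB follow the mean (exploitation); a large k rewards points with large sigma (exploration).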
Only a few observations lie in other regions of the function. After 20 iterations, we again have almost all observations in the minimum region, with just a few observations elsewhere. This is the exploitation property of Bayesian optimization with Gaussian processes.

The key features of exploitation are as follows. Exploitation takes a new point at each iteration close to the optimum found so far. There is no guarantee that this optimum is global. Other regions of the objective function are not explored, and it needs fewer iterations than exploration to find an optimum.

Now consider the exploration property of Bayesian optimization. Exploration can be achieved with large values of k in the lower confidence bound function. In the exploration case, the observations are distributed over the whole region, so the objective function is explored during the Bayesian optimization. After twenty iterations, all observations are again distributed over the whole region. This is the exploration property of Bayesian optimization with Gaussian processes.

During exploration, a new point is taken at each iteration where the variance of the approximation of the objective function is large. All regions of the objective function are explored, so it is more likely that the optimum found is global. However, it needs more iterations than exploitation to find the optimum.

Finally, we have learned about Bayesian optimization: a method for finding the optimum of expensive cost functions. We know how it works and how to use the exploration-exploitation trade-off during the optimization. In high energy physics, Bayesian optimization with Gaussian processes can be used for detector design optimization. In the next video, we will consider two examples of such optimization in high energy physics. [MUSIC]
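The loop described in this video can be sketched in plain NumPy. This is a minimal illustration, not the code from the course: the toy objective f, the RBF kernel length scale, the grid search over the acquisition function, and the initial points are all assumptions made for the example. The parameter k controls the exploitation (small k) versus exploration (large k) behavior discussed above.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    # Squared-exponential covariance between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_train, y_train, x_grid, noise=1e-6):
    # Gaussian process regression: posterior mean and std on a grid.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_grid, x_train)
    mu = Ks @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 0.0, None)
    return mu, np.sqrt(var)

def bayes_opt(f, x_grid, x_init, k=2.0, n_iter=10):
    # Minimize f with the lower confidence bound a(x) = mu(x) - k * sigma(x).
    # Small k -> exploitation (trust the mean); large k -> exploration
    # (prefer points where the predictive variance is large).
    x_obs = list(x_init)
    y_obs = [f(x) for x in x_init]
    for _ in range(n_iter):
        mu, sigma = gp_posterior(np.array(x_obs), np.array(y_obs), x_grid)
        lcb = mu - k * sigma
        x_next = x_grid[np.argmin(lcb)]  # optimum of the acquisition function
        x_obs.append(x_next)
        y_obs.append(f(x_next))          # the expensive evaluation happens here
    best = int(np.argmin(y_obs))
    return x_obs[best], y_obs[best]

# Toy objective (an assumption for illustration) on [0, 2].
f = lambda x: np.sin(5 * x) + 0.5 * x
grid = np.linspace(0.0, 2.0, 200)
x_best, y_best = bayes_opt(f, grid, x_init=[0.1, 1.0, 1.9], k=2.0, n_iter=10)
```

Changing `k` in the call reproduces the behavior from the video: with a small value such as `k=0.1` the new points cluster around the best observation found so far, while a large value such as `k=10` spreads them over the whole interval.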