Multi-Objective Optimization

Multi-objective optimization is a generalization of the standard optimization problem in which a decision vector \(x \in X\) must be chosen to solve the minimization problem

$$
\begin{equation*}
\min_{x \in X} (f_1(x),f_2(x),\ldots,f_k(x)),
\end{equation*}
$$

where \(f_i(x)\) are the different objectives one is trying to minimize. The problem can also be written by stacking all objectives into a vector function \(f \colon X \to Y\) that maps from the decision space \(X\) to the objective space \(Y \subset \mathbb R^k\).

However, unlike the mono-objective setting, there may not be a single best solution. If the objectives conflict, so that improving one worsens another, the result is not a single optimum but a Pareto set of solutions: each is better than the others in at least one objective and worse in at least one other, so none dominates the rest. This makes the optimization considerably harder.
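To make the dominance relation concrete, here is a minimal sketch in Python (the `dominates` and `pareto_set` helpers and the toy objectives are my own, purely for illustration, not standard library functions):

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    fa is no worse in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return np.all(fa <= fb) and np.any(fa < fb)

def pareto_set(F):
    """Indices of the non-dominated rows of the objective matrix F (n points x k objectives)."""
    F = np.asarray(F)
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]

# Two conflicting objectives on a scalar design variable x in [0, 1]:
# f1(x) = x^2 and f2(x) = (x - 1)^2 cannot be minimized simultaneously.
X = np.linspace(0.0, 1.0, 21)
F = np.stack([X**2, (X - 1.0)**2], axis=1)
print(pareto_set(F))  # every sampled point is non-dominated: the whole segment is Pareto-optimal
```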

Within this field, the main topics I’m currently tackling are:

  1. How to transform the multi-objective optimization problem into a mono-objective one, which allows standard optimization algorithms to be used.
  2. How to approximate the objective functions so that the number of function evaluations, which can be expensive, may be reduced.
  3. How to deal with dynamic environments, where the objectives may change through time.

Scalarizing objectives

There are two main traditional techniques for transforming multi-objective optimization problems into mono-objective ones.

The first one, the weighted-sum method, involves defining a vector of weights \(w \in \mathbb R^k\), with \(w_i > 0\), so that the optimization can be written as

$$
\begin{equation*}
\min_{x \in X} w^T f(x),
\end{equation*}
$$

which has a single objective given by the weighted sum of the original objectives. If the Pareto frontier is convex, then in theory we can recover it by varying \(w\) and solving multiple optimization problems. For concave (non-convex) frontiers, however, there are regions that no choice of weights can reach.
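As an illustration, the following sketch solves the weighted-sum scalarization with `scipy.optimize.minimize` for a few weight vectors; the toy objectives and the `weighted_sum` helper are assumptions of mine for demonstration only.

```python
import numpy as np
from scipy.optimize import minimize

# Bi-objective toy problem on x in R^2: f1 pulls x toward (0, 0), f2 toward (1, 1).
def f(x):
    return np.array([np.sum(x**2), np.sum((x - 1.0)**2)])

def weighted_sum(w, x0=np.zeros(2)):
    """Solve min_x w^T f(x) for a fixed weight vector w (hypothetical helper)."""
    res = minimize(lambda x: w @ f(x), x0)
    return res.x, f(res.x)

# Sweeping the weights traces out (the convex part of) the Pareto frontier.
for w1 in np.linspace(0.1, 0.9, 5):
    w = np.array([w1, 1.0 - w1])
    x_opt, f_opt = weighted_sum(w)
    print(np.round(w, 2), np.round(f_opt, 3))
```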

The other method is the \(\epsilon\)-constraint method, where we optimize one objective at a time while constraining the values of the others, so that the problem becomes:

$$
\begin{equation*}
\begin{aligned}
\min_{x \in X} & ~f_i(x)
\\
\text{s.t.} & ~f_j(x) \le \epsilon_{i,j}, \quad j \in \{1,\ldots,k\} \setminus \{i\},
\end{aligned}
\end{equation*}
$$

which can handle concave frontiers, although choosing suitable constraint parameters \(\epsilon_{i,j}\) may be difficult.
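For the same toy problem as above, the \(\epsilon\)-constraint formulation can be sketched as follows; the `eps_constraint` helper is illustrative, not a library routine.

```python
import numpy as np
from scipy.optimize import minimize

# Same toy bi-objective problem as above.
def f1(x):
    return np.sum(x**2)

def f2(x):
    return np.sum((x - 1.0)**2)

def eps_constraint(eps, x0=np.full(2, 0.5)):
    """Minimize f1 subject to f2(x) <= eps (illustrative helper)."""
    con = {"type": "ineq", "fun": lambda x: eps - f2(x)}  # "ineq" means fun(x) >= 0
    res = minimize(f1, x0, constraints=[con], method="SLSQP")
    return res.x, (f1(res.x), f2(res.x))

# Varying eps walks along the frontier, including any concave regions.
for eps in [0.1, 0.5, 1.0, 1.5]:
    x_opt, f_opt = eps_constraint(eps)
    print(eps, np.round(f_opt, 3))
```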

Given these problems, one might ask: is it possible to construct a single scalar objective that stands in for all of them? One answer is the hypervolume indicator (also known as the S-metric), which has been used in multi-objective optimization as an optimization target: the hypervolume of the estimated Pareto frontier should be maximized.

Although the hypervolume has some nice theoretical properties, like allowing comparison between sets of points approximating the Pareto frontier, it can be computationally expensive and hard to optimize directly.
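For intuition, here is a minimal sketch of the two-objective case, where the hypervolume reduces to the area dominated by the points up to a reference point; the `hypervolume_2d` function is my own toy implementation, not a library routine, and exact computation grows much more expensive as the number of objectives increases.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by a set of 2D objective vectors, bounded by the
    reference point `ref`, for a minimization problem (simple sweep)."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.all(pts <= ref, axis=1)]   # ignore points beyond the reference point
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]        # sweep along the first objective
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:                    # only non-dominated points contribute area
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

# Three points approximating a frontier, reference point (1, 1).
front = [[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]]
print(hypervolume_2d(front, ref=np.array([1.0, 1.0])))  # 0.37
```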

My research therefore looks for ways to improve hypervolume-based optimization and for new methods of transforming the multi-objective problem into a single-objective one.

Objective surrogates

Surrogates \(\hat f_i\) are approximations of the true objectives \(f_i\), also mapping from \(X\) to \(Y\); they let us avoid expensive evaluations of the real objectives while searching for a promising candidate, which is only then evaluated for real. The mono-objective literature contains many high-quality surrogate methods, and these have largely been carried over unchanged to the multi-objective optimization field.
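As a concrete sketch of this "one surrogate per objective" setup, the following uses scikit-learn's Gaussian process regressor on a toy problem; the objectives, sample sizes, and kernel choice are assumptions of mine, not a prescribed recipe.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy bi-objective problem on a scalar design variable.
def f(x):
    return np.column_stack([x**2, (x - 1.0)**2])

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(10, 1))   # a few "expensive" evaluations
Y_train = f(X_train[:, 0])

# One independent surrogate per objective, as in the mono-objective literature.
surrogates = [
    GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X_train, Y_train[:, i])
    for i in range(Y_train.shape[1])
]

# Cheap predictions are then used to pre-screen candidates before real evaluation.
X_cand = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
Y_hat = np.column_stack([s.predict(X_cand) for s in surrogates])
print(np.round(Y_hat, 3))
```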

However, these methods model each objective separately, which ignores the relationship between objectives imposed by the Pareto frontier and can lead to worse estimates. To address this, mono-surrogates have been proposed: a single model maps directly from the decision space \(X\) to an indicator of where a given point lies relative to the estimated Pareto frontier.
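One possible flavour of this idea is sketched below, assuming a simple dominance-count indicator; the indicator choice, the `dominance_indicator` helper, and the toy objectives are mine, and the literature uses several other indicators.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def dominance_indicator(Y):
    """For each point, count how many other points dominate it (0 = non-dominated).
    One simple choice of indicator among several used in practice."""
    worse = (Y[:, None, :] >= Y[None, :, :]).all(axis=2)
    strictly = (Y[:, None, :] > Y[None, :, :]).any(axis=2)
    dominated_by = worse & strictly           # [i, j] is True if point j dominates point i
    return dominated_by.sum(axis=1).astype(float)

rng = np.random.default_rng(1)
X_train = rng.uniform(0.0, 1.0, size=(20, 2))
Y_train = np.column_stack([np.sum(X_train**2, axis=1),
                           np.sum((X_train - 1.0)**2, axis=1)])  # toy objectives

# A single surrogate from the decision space straight to the indicator.
mono = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
mono.fit(X_train, dominance_indicator(Y_train))
print(np.round(mono.predict(rng.uniform(0.0, 1.0, size=(3, 2))), 2))
```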

Nonetheless, even these direct methods ignore the intrinsic shape of the Pareto frontier, and the frontier implied by the surrogate may considerably violate the very definition of a Pareto frontier (for example, by containing points that dominate one another).

Therefore, my research looks for a method similar to the mono-surrogate ones, but mapping from the objective space \(Y\) instead of the decision space \(X\). This allows integration with whichever existing surrogate method suits each objective best, together with a Pareto frontier estimate that avoids violating the frontier definition as much as possible.

Dynamic objectives

The problem of dynamic objectives has not, in my opinion, been explored much, yet I find it very important for real-world scenarios. If the problem has to be solved repeatedly rather than only once, it may be possible to extract information from previous iterations to predict the future behavior of the objectives and to anticipate some decisions.

For instance, if the objectives evolve over time with some structure, then surrogate models \(\hat f_i \colon X \times T \to Y\) that take the current time step into account may be able to learn the time-dependent relationship between the objectives and be used to predict the system's future behavior.
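A minimal sketch of that idea follows; the drifting toy objective and the use of the time step as an extra regression feature are assumptions of mine, not a prescribed method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy dynamic objective: the optimum drifts as the time step t advances.
def f1(x, t):
    return (x - 0.1 * t) ** 2

rng = np.random.default_rng(2)
X_hist = rng.uniform(0.0, 1.0, 30)
T_hist = rng.integers(0, 5, 30)
Y_hist = f1(X_hist, T_hist)

# Surrogate over the joint input (x, t); time is treated as just another feature.
XT = np.column_stack([X_hist, T_hist])
model = GaussianProcessRegressor(kernel=RBF(length_scale=[0.2, 2.0])).fit(XT, Y_hist)

# Predict the objective at a future time step to anticipate how candidates will rank.
X_query = np.linspace(0.0, 1.0, 5)
XT_future = np.column_stack([X_query, np.full_like(X_query, 6.0)])
print(np.round(model.predict(XT_future), 3))
```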

Hence my research tries to find ways of building these surrogates so that the estimated future objectives are useful at the current time, either by creating a diverse set of solutions beforehand or by guiding the choice of a solution from the Pareto set.