Approximation Analysis & Pareto Optimization

Modified: 2018/04/28 19:34 by admin
Pareto optimization is a class of evolutionary algorithms that has been shown to be a powerful approximate solver for constrained optimization problems over finite discrete domains.
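Concretely, the Pareto-optimization idea reformulates "maximize f(x) subject to a constraint" as a bi-objective task (maximize f, minimize constraint violation) and evolves an archive of non-dominated bitstrings. Below is a minimal GSEMO-style sketch under those assumptions; the function name, the bit-flip mutation rate, and the toy objectives are illustrative, not the exact algorithms analyzed in the papers.

```python
import random

def pareto_optimize(f, violation, n, iters=2000, seed=0):
    """Sketch of Pareto optimization over bitstrings (illustrative only).

    A constrained problem "maximize f(x) subject to violation(x) == 0" is
    treated as the bi-objective task (maximize f, minimize violation); an
    archive keeps all non-dominated solutions found so far.
    """
    rng = random.Random(seed)

    def objs(x):
        return (f(x), violation(x))  # (maximize, minimize)

    def dominates(a, b):
        # a is at least as good in both objectives and strictly better in one
        return a[0] >= b[0] and a[1] <= b[1] and a != b

    start = tuple(0 for _ in range(n))
    archive = {start: objs(start)}
    for _ in range(iters):
        parent = rng.choice(list(archive))
        # standard bit-wise mutation: flip each bit with probability 1/n
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
        c = objs(child)
        if any(dominates(v, c) for v in archive.values()):
            continue  # child is dominated by an archived solution
        # keep the child, dropping everything it dominates or duplicates
        archive = {x: v for x, v in archive.items()
                   if not (dominates(c, v) or v == c)}
        archive[child] = c
    feasible = [x for x, v in archive.items() if v[1] == 0]
    return max(feasible, key=f) if feasible else None
```

For example, maximizing the number of ones in an 8-bit string under the constraint "at most 3 ones" (violation = max(0, ones − 3)) reliably returns a string with exactly 3 ones within a couple of thousand iterations.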

Approximation analysis:

  • Analysis framework: In the AIJ'12 (PDF) paper, we proposed a framework for characterizing the approximation ability of a class of evolutionary algorithms, leading to a general approximation guarantee. On the k-set cover problem, this framework shows that evolutionary algorithms can achieve the currently best-achievable approximation ratio, revealing an advantage of evolutionary algorithms over the well-known greedy algorithm.

  • Crossover is helpful: In the AIJ'13 (PDF) paper, we showed that the crossover operator can help fill in the Pareto front in multi-objective optimization. As a consequence, on the bi-objective minimum spanning tree problem, a multi-objective EA with crossover is proved to reach a 2-approximate solution in less running time than the same EA without crossover.
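The operator itself is standard recombination over bitstrings. A minimal sketch (not the AIJ'13 algorithm) of the intuition for why crossover helps fill a Pareto front: recombining two complementary non-dominated parents can produce, in one step, a solution that single-bit mutations from either parent would need several steps to reach.

```python
import random

def one_point_crossover(p1, p2, rng):
    """Classic one-point crossover on two equal-length bitstrings."""
    cut = rng.randrange(1, len(p1))  # cut point strictly inside the strings
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

# Two complementary parents, e.g. extreme points of a bi-objective front:
rng = random.Random(3)
a, b = (1, 1, 0, 0), (0, 0, 1, 1)
children = set()
for _ in range(20):
    children.update(one_point_crossover(a, b, rng))
# cutting between the halves yields (1, 1, 1, 1), which is Hamming
# distance 2 from each parent, so no single bit-flip produces it
```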

Pareto optimization:

  • Theory: The AIJ'12 (PDF) framework described under approximation analysis above provides the theoretical basis: a general approximation guarantee that, on the k-set cover problem, matches the currently best-achievable approximation ratio.

  • For constrained optimization: In the IJCAI'15 (PDF) paper, we proved, for large classes of P-solvable and NP-hard constrained optimization problems, that Pareto optimization has better worst-case performance than a commonly employed penalty-function method.

  • Application in ensemble pruning: In the AAAI'15 (PDF) paper, we applied Pareto optimization to the ensemble pruning problem, which is NP-hard: it asks for the best-performing subset of a set of base classifiers. We showed both theoretically and empirically that the proposed Pareto ensemble pruning method is superior to the state of the art.

  • Application in subset selection: In the NIPS'15 (PDF) paper, we applied Pareto optimization to the NP-hard subset selection problem. On sparse regression, a typical subset selection task, we showed both theoretically and empirically that the proposed Pareto subset selection method is superior to the state of the art.

  • Parallel Pareto optimization: In the IJCAI'16 (PDF) paper, we proposed a very simple way to parallelize Pareto optimization. We proved that the parallelization leads to an almost linear speedup, and that the optimization time converges to a constant when sufficiently many machines are available.
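The subset-selection applications above share one loop: treat (solution quality, subset size) as two objectives, evolve a non-dominated archive, and read off the best archived solution within the size budget. A POSS-style sketch on a toy maximum-coverage instance follows; the function names, the oversize-infeasibility cutoff, and all parameters are illustrative, not the papers' code. The parallel variant would simply generate and evaluate several offspring per iteration on different machines.

```python
import random

def subset_select(items, value, k, iters=3000, seed=0):
    """Sketch of Pareto-style subset selection (illustrative only).

    Maximizes value(selected items) subject to selecting at most k items,
    by evolving non-dominated (value, subset size) trade-offs.
    """
    rng = random.Random(seed)
    n = len(items)

    def objs(x):
        size = sum(x)
        if size >= 2 * k:  # treat far-too-large subsets as infeasible
            return (float("-inf"), size)
        return (value([items[i] for i in range(n) if x[i]]), size)

    def dominates(a, b):
        return a[0] >= b[0] and a[1] <= b[1] and a != b

    empty = tuple(0 for _ in range(n))
    archive = {empty: objs(empty)}
    for _ in range(iters):
        parent = rng.choice(list(archive))
        # flip each bit with probability 1/n
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
        c = objs(child)
        if any(dominates(v, c) for v in archive.values()):
            continue
        archive = {x: v for x, v in archive.items()
                   if not (dominates(c, v) or v == c)}
        archive[child] = c
    # report the best archived solution within the size budget
    best = max((x for x, v in archive.items() if v[1] <= k),
               key=lambda x: archive[x][0])
    return [items[i] for i in range(n) if best[i]]
```

On a toy coverage instance (pick at most 2 of 5 sets to cover the most elements) this reliably recovers the optimal pair; sparse regression fits the same template, with value(S) measuring the fit quality of regressing on feature subset S.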

Codes:

