Model-based Derivative-free Methods for Optimization

Derivative-free methods can tackle complex optimization problems in real-world domains, such as non-convex, non-differentiable, and discontinuous problems with many local optima.

Derivative-free optimization:

  • General analysis: In the CEC'14 (PDF) paper, we proposed the sampling-and-learning (SAL) framework to capture the essence of model-based optimization algorithms, and analyzed its performance in terms of the query complexity of achieving approximate solutions with a given probability. We derived a general query complexity bound for SAL algorithms in which the learning model is a classifier.

  • Classification-based optimization: In the AAAI'16 (PDF) (Appendix) paper, we identified key factors for classification-based optimization methods and designed the RACOS algorithm accordingly. RACOS has been shown to be superior to several state-of-the-art derivative-free optimization algorithms (a simplified sketch of the classification-based loop is given after this list).

  • Classification-based optimization in discrete domains: In the CEC'16 (PDF) paper, we analyzed the performance of classification-based optimization in finite discrete spaces.

  • Sequential classification-based optimization: In the AAAI'17 (PDF) paper, we proposed the sequential version of RACOS, called SRACOS. Unlike the original RACOS, which samples and evaluates a batch of solutions at a time, SRACOS samples one solution at a time and updates the classifier immediately after that solution is evaluated. SRACOS shows a significant improvement over RACOS, both theoretically and empirically.
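
To make the classification-based loop concrete, below is a minimal sketch in Python. It follows the sampling-and-learning pattern: label the current best solutions as positive, learn a region that covers them, and draw the next batch mostly from that region while keeping some uniform exploration. The axis-aligned bounding box used as the learned region, and all function and parameter names (classification_based_minimize, positive_frac, explore_prob, etc.), are illustrative simplifications, not the published RACOS algorithm.

    import numpy as np

    def classification_based_minimize(f, lower, upper, budget=2000,
                                      batch=20, positive_frac=0.2,
                                      explore_prob=0.1, seed=None):
        """Minimize f over the box [lower, upper] without gradients.

        A simplified sampling-and-learning loop: the best solutions seen so
        far are treated as 'positive' examples, an axis-aligned box covering
        them plays the role of the learned classifier, and new solutions are
        sampled mostly from that box with some uniform exploration.
        """
        rng = np.random.default_rng(seed)
        lower = np.asarray(lower, dtype=float)
        upper = np.asarray(upper, dtype=float)
        dim = lower.size

        # Initial batch: uniform sampling over the whole search box.
        xs = rng.uniform(lower, upper, size=(batch, dim))
        ys = np.array([f(x) for x in xs])
        evals = batch

        while evals < budget:
            # Label the best fraction of evaluated solutions as positive.
            k = max(1, int(positive_frac * len(ys)))
            positive = xs[np.argsort(ys)[:k]]

            # "Learn" a region: the bounding box of the positive samples.
            region_lo = positive.min(axis=0)
            region_hi = positive.max(axis=0)

            # Next batch: mostly inside the learned region, occasionally
            # uniform over the whole domain to keep exploring.
            from_region = rng.uniform(region_lo, region_hi, size=(batch, dim))
            from_domain = rng.uniform(lower, upper, size=(batch, dim))
            explore = rng.random((batch, 1)) < explore_prob
            new_xs = np.where(explore, from_domain, from_region)
            new_ys = np.array([f(x) for x in new_xs])
            evals += batch

            xs = np.vstack([xs, new_xs])
            ys = np.concatenate([ys, new_ys])

        best = int(np.argmin(ys))
        return xs[best], ys[best]

    # Example: minimize the non-convex Ackley function in 10 dimensions.
    def ackley(x):
        return (-20.0 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
                - np.exp(np.mean(np.cos(2 * np.pi * x))) + 20.0 + np.e)

    x_best, y_best = classification_based_minimize(
        ackley, lower=-np.ones(10), upper=np.ones(10), budget=3000, seed=0)
    print(y_best)

Switching the inner loop to evaluate one solution at a time and update the positive set after each evaluation would give the sequential (SRACOS-style) variant of the same idea.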

High dimensionality:

  • Scaling to high dimensions by random embedding: In the AAAI'16 (PDF) paper, we considered solving high-dimensional optimization problems with a low effective dimension. We proved that the random embedding technique can reduce the regret bound of the simultaneous optimistic optimization (SOO) algorithm, a theoretically grounded derivative-free method, from depending on the high dimensionality to depending only on the low effective dimensionality (see the sketch after this list).

  • Sequential random embeddings: In the IJCAI'16 (PDF) paper, we extended the concept of effective dimension to the optimal epsilon-effective dimension, which allows all variables to be effective but many of them to have only a small impact. We then proposed the sequential random embedding (SRE) method to overcome the embedding gap of a single random embedding. SRE enables us to solve non-convex ramp-loss classification problems with up to 100,000 dimensions, achieving much better results than the concave-convex procedure (CCCP). For comparison, derivative-free methods were previously used mostly on problems with fewer than 1,000 dimensions.
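
Below is a minimal sketch of optimization via a single random embedding, assuming the objective has a low effective dimension: draw a random Gaussian matrix A and optimize g(y) = f(A y) over a small low-dimensional box. Plain random search is used as the inner solver only to keep the sketch short; any derivative-free optimizer (such as the classification-based loop above) could be substituted. The function names and the bound on the low-dimensional box are assumptions for illustration. The SRE method, roughly speaking, repeats this step with fresh embeddings, carrying the best solution found so far into the next round.

    import numpy as np

    def random_embedding_minimize(f, high_dim, effective_dim,
                                  low_bound=1.0, budget=2000, seed=None):
        """Minimize f: R^high_dim -> R, assuming a low effective dimension.

        Single random embedding: draw a Gaussian matrix A of shape
        (high_dim, effective_dim) and optimize g(y) = f(A y) over a small
        low-dimensional box, here with plain random search as the inner
        solver.
        """
        rng = np.random.default_rng(seed)
        A = rng.normal(size=(high_dim, effective_dim))

        best_x, best_val = None, np.inf
        for _ in range(budget):
            y = rng.uniform(-low_bound, low_bound, size=effective_dim)
            x = A @ y  # embed into the high-dimensional space
            val = f(x)
            if val < best_val:
                best_x, best_val = x, val
        return best_x, best_val

    # Example: a 10,000-dimensional objective that actually depends on
    # only its first 5 coordinates (a low effective dimension of 5).
    def mostly_flat_sphere(x):
        return float(np.sum(x[:5] ** 2))

    x_best, val = random_embedding_minimize(
        mostly_flat_sphere, high_dim=10_000, effective_dim=5,
        budget=2000, seed=0)
    print(val)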

Code:

  • Derivative-free optimization by classification: RACOS
  • Sequential random embeddings: SRE
