High-Dimensional Derivative-free Optimization

Derivative-free methods can tackle complex optimization problems; however, they are hard to scale to high-dimensional problems.

Papers:

  • Scaling to high dimensions by random embedding: In the AAAI'16 (PDF) paper, we consider solving high-dimensional optimization problems that have a low effective dimension. We prove that the random embedding technique reduces the regret bound of the simultaneous optimistic optimization (SOO) algorithm, a theoretically grounded derivative-free method, from depending on the high dimension to depending only on the low effective dimension. A minimal sketch of this idea appears after this list.

  • Sequential random embeddings: In the IJCAI'16 (PDF) paper, we extend the concept of effective dimension to the optimal epsilon-effective dimension, which allows all variables to be effective while many of them have only a small impact. We then propose the sequential random embedding (SRE) method to close the embedding gap left by a single random embedding. SRE enables us to solve non-convex ramp-loss classification problems with up to 100,000 dimensions, achieving much better results than the concave-convex procedure (CCCP); for comparison, derivative-free methods had previously been applied mostly to problems with fewer than 1,000 dimensions. A sketch of the sequential variant also follows below.
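
To make the single-embedding idea concrete, here is a minimal sketch, not the paper's implementation: a Gaussian matrix A maps a low-dimensional point y to x = Ay, so the derivative-free search runs in d dimensions only. Plain random search stands in for SOO, and the names and parameters (random_embedding_minimize, toy_objective, bound) are illustrative assumptions.

```python
import numpy as np

def random_embedding_minimize(f, D, d, n_iters=2000, bound=5.0, seed=0):
    """Minimize f over R^D by searching a random d-dimensional subspace.

    A Gaussian matrix A maps each low-dimensional candidate y to the
    high-dimensional point x = A @ y, so the derivative-free search
    runs in d dimensions only.  Plain random search stands in here
    for the SOO algorithm used in the paper.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((D, d))              # random embedding matrix
    best_x, best_val = None, np.inf
    for _ in range(n_iters):
        y = rng.uniform(-bound, bound, size=d)   # sample in the low-dim box
        x = A @ y
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy objective with low effective dimension: only the first 5 of
# 10,000 coordinates affect the value.
def toy_objective(x):
    return float(np.sum((x[:5] - 1.0) ** 2))

x_best, v = random_embedding_minimize(toy_objective, D=10_000, d=5)
print(v)  # best value found while sampling only 5-dimensional points
```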

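The sequential variant can be sketched the same way, assuming the stage update x_t = alpha * x_(t-1) + A_t y used in the paper: each stage draws a fresh embedding and additionally searches over a scalar weight on the previous solution, so later stages can reduce the residual that a single embedding cannot reach. The subroutine and names below are again placeholders, not the released SRE code.

```python
import numpy as np

def sre_minimize(f, D, d, n_stages=5, n_iters=1000, bound=5.0, seed=0):
    """Sequential random embeddings, sketched.

    Each stage draws a fresh embedding A and searches over a
    low-dimensional y together with a scalar alpha that rescales the
    previous solution, i.e. candidates have the form
    x_new = alpha * x_old + A @ y.  Random search again stands in
    for the derivative-free subroutine.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(D)
    best_val = f(x)
    for _ in range(n_stages):
        A = rng.standard_normal((D, d))          # fresh embedding per stage
        stage_x, stage_val = x, best_val
        for _ in range(n_iters):
            alpha = rng.uniform(0.0, 1.0)        # weight on the old solution
            y = rng.uniform(-bound, bound, size=d)
            cand = alpha * x + A @ y
            val = f(cand)
            if val < stage_val:
                stage_x, stage_val = cand, val
        x, best_val = stage_x, stage_val         # carry the stage winner forward
    return x, best_val

x_best, v = sre_minimize(lambda x: float(np.sum((x[:5] - 1.0) ** 2)),
                         D=10_000, d=5)
print(v)
```
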
Code:

  • Sequential random embeddings: SRE
