High-Dimensional Derivative-free Optimization
Derivative-free methods can tackle complex optimization problems; however, they are hard to scale to high-dimensional problems.

'''Papers''':
* '''Scaling to high dimensions by random embedding''': In the [{UP}papers/aaai16-resoo.pdf|AAAI'16 (PDF)] paper, we consider solving high-dimensional optimization problems that have a low ''effective dimension''. We prove that the random embedding technique reduces the regret bound of the simultaneous optimistic optimization (SOO) algorithm, a theoretically grounded derivative-free method, from depending on the size of the high-dimensional space to depending only on the size of the low effective dimension. (A code sketch of the random embedding idea appears after this list.)
* '''Sequential random embeddings''': In the [{UP}papers/ijcai16-sre.pdf|IJCAI'16 (PDF)] paper, we extend the concept of ''effective dimension'' to the ''optimal epsilon-effective dimension'', which allows all variables to be effective as long as many of them have only a small impact. We then propose the sequential random embedding (SRE) method to overcome the embedding gap of a single random embedding. SRE enables us to solve the non-convex Ramp loss classification problem with up to 100,000 dimensions, achieving much better results than the concave-convex procedure (CCCP). For comparison, derivative-free methods had previously been applied mostly to problems with fewer than 1,000 dimensions. (A sketch of SRE follows the random embedding sketch below.)

'''Codes''':
* Sequential random embeddings: [code_re|SRE]
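To make the random embedding idea concrete, here is a minimal Python sketch. It is hypothetical: a plain random search stands in for SOO, the function name random_embedding_minimize and all parameters are our own, and the Gaussian embedding matrix is the common generic choice rather than a setting taken from the paper.

{{{
import numpy as np

def random_embedding_minimize(f, D, d, n_iters=1000, low=-1.0, high=1.0, seed=0):
    """Minimize a D-dimensional function f with low effective dimension by
    searching a d-dimensional embedded space (d << D).

    A point y in R^d is mapped to x = A @ y in R^D, so the derivative-free
    optimizer (plain random search here, for brevity) only explores R^d.
    """
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(D, d))              # random Gaussian embedding matrix
    best_y, best_val = np.zeros(d), np.inf
    for _ in range(n_iters):
        y = rng.uniform(low, high, size=d)   # sample in the low-dim space
        val = f(A @ y)                       # evaluate in the original space
        if val < best_val:
            best_y, best_val = y, val
    return A @ best_y, best_val

# Example: a 10,000-dimensional function whose value depends only on the
# first two coordinates, i.e. effective dimension 2.
if __name__ == "__main__":
    f = lambda x: (x[0] - 0.5) ** 2 + (x[1] + 0.3) ** 2
    x_best, v_best = random_embedding_minimize(f, D=10_000, d=4, n_iters=2000)
    print(f"best value found: {v_best:.4f}")
}}}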
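Sequential random embeddings chain several such embeddings, each one searching around the best solution found by the previous one. The sketch below is again an illustrative assumption, not the paper's implementation: the name sre_minimize, the shrink factor alpha on the previous solution, and the inner random search are stand-ins for the optimizer used in the IJCAI'16 paper.

{{{
import numpy as np

def sre_minimize(f, D, d, n_embeddings=5, n_iters=500, seed=0):
    """Sequential random embeddings (sketch): at step t, draw a fresh
    embedding A_t and search for y minimizing f(alpha * x_prev + A_t @ y),
    where x_prev is the best solution found so far."""
    rng = np.random.default_rng(seed)
    x_prev = np.zeros(D)
    x_best, best_val = x_prev, f(x_prev)
    for _ in range(n_embeddings):
        A = rng.normal(size=(D, d))          # fresh random embedding each round
        for _ in range(n_iters):
            alpha = rng.uniform(0.0, 1.0)    # how much of the old solution to keep
            y = rng.uniform(-1.0, 1.0, size=d)
            x = alpha * x_prev + A @ y       # combine old solution and new embedding
            val = f(x)
            if val < best_val:
                x_best, best_val = x, val
        x_prev = x_best                      # next embedding refines the best so far
    return x_best, best_val
}}}

Each round can only improve on the previous one (the incumbent is kept when no better point is found), which is how chaining embeddings narrows the gap left by a single random embedding.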