High-dimensional derivative-free optimization


Page Revision: 2016/08/10 18:21



Derivative-free optimization methods are hard to scale up and are therefore difficult to apply to high-dimensional problems.

Sequential Random Embedding

Sequential random embedding applies random embedding several times in sequence, so that the optimization is carried out in low-dimensional spaces while the embedding loss is reduced. It can optimize problems with tens of thousands of dimensions on a single CPU thread. For details please see:
Hong Qian, Yi-Qi Hu, and Yang Yu. Derivative-free optimization of high-dimensional non-convex functions by sequential random embeddings. In: Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI'16), New York, NY, 2016. (PDF)
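The idea above can be sketched in a few lines of code. This is a minimal illustration, not the released SRE implementation: it assumes a toy sphere objective, uses plain random search as a stand-in for the paper's low-dimensional derivative-free optimizer, and keeps one extra coordinate `alpha` so each round can reuse the previous high-dimensional solution (the "sequential" part). All function and parameter names here are made up for the sketch.

```python
import numpy as np

def sphere(x):
    # Toy high-dimensional objective (not from the paper): shifted sphere.
    return float(np.sum((x - 0.5) ** 2))

def random_search(f, dim, budget, rng):
    # Stand-in low-dimensional derivative-free optimizer: plain random search.
    best_y = np.zeros(dim)
    best_v = f(best_y)
    for _ in range(budget):
        y = rng.normal(size=dim)
        v = f(y)
        if v < best_v:
            best_y, best_v = y, v
    return best_y, best_v

def sre_minimize(f, high_dim, low_dim=10, rounds=5, budget=200, seed=0):
    # Sequential random embedding (sketch): each round draws a fresh random
    # matrix A and optimizes f(alpha * x_prev + A @ y) over the
    # (low_dim + 1)-dimensional variable z = (alpha, y), so the search always
    # runs in a low-dimensional space while x accumulates progress.
    rng = np.random.default_rng(seed)
    x = np.zeros(high_dim)
    best_v = f(x)
    for _ in range(rounds):
        A = rng.normal(size=(high_dim, low_dim)) / np.sqrt(low_dim)
        g = lambda z: f(z[0] * x + A @ z[1:])  # objective in embedded space
        z, v = random_search(g, low_dim + 1, budget, rng)
        if v < best_v:
            x = z[0] * x + A @ z[1:]
            best_v = v
    return x, best_v

x, v = sre_minimize(sphere, high_dim=10000)
```

Each round only ever evaluates the objective at points of the form `alpha * x + A @ y`, so the search cost depends on the embedding dimension rather than the full dimension; only the matrix-vector product touches all 10,000 coordinates.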

The latest version is available on GitHub (implemented in Java): https://github.com/eyounx/SRE
  • The code is released under the GNU GPL 2.0 license. For commercial purposes, please contact the authors.
