Optuna vs Hyperopt: Which Hyperparameter Optimization Library Should You Choose?

Category: IT · Published: 4 years ago


To train a model on a set of parameters you need to run something like this:
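The author's actual training script was lost in extraction. A minimal sketch of what such a `train_evaluate(params)` function typically looks like, assuming a scikit-learn model and a fixed train/validation split (the model, dataset, and parameter names here are illustrative, not the author's):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for the author's data.
X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=42)

def train_evaluate(params):
    """Train a model on one parameter set and return the validation AUC."""
    model = GradientBoostingClassifier(**params, random_state=42)
    model.fit(X_train, y_train)
    return roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])

PARAMS = {"learning_rate": 0.1, "n_estimators": 100, "max_depth": 3}
score = train_evaluate(PARAMS)
print(f"validation AUC: {score:.3f}")
```

Every optimization library in this comparison then just calls a function like this once per sampled parameter set.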

For this study, I tried to find the best parameters within a 100-run budget.

I ran 6 experiments:

  • Random search (from hyperopt) as a reference
  • Tree of Parzen Estimator search strategies for both Optuna and Hyperopt
  • Adaptive TPE from Hyperopt
  • TPE from Optuna with a pruning callback for more runs but within the same time frame. It turns out that 400 runs with pruning take as much time as 100 runs without it.
  • Optuna with a Random Forest surrogate model from skopt.Sampler

You may want to scroll down to the Example Script at the end.

If you want to explore all of these experiments in more detail, you can simply go to the experiment dashboard.


Both Optuna and Hyperopt improved over the random search, which is good.

The TPE implementation from Optuna was slightly better than Hyperopt's Adaptive TPE, but not by much. Then again, when running hyperparameter optimization, those small improvements are exactly what you are going for.

What is interesting is that the TPE implementations from Hyperopt and Optuna give vastly different results on this problem. Maybe the cutoff point λ between good and bad parameter configurations is chosen differently, or the sampling methods have defaults that work better for this particular problem.

Moreover, using pruning decreased training time by 4x: I could run 400 searches in the time it took to run 100 without pruning. On the flip side, pruning resulted in a lower score. It may be different for your problem, but it is important to consider this trade-off when deciding whether to use pruning.

For this section, I assigned points based on the improvements over the random search strategy.

  • Hyperopt got (0.850 - 0.844) * 1000 = 6 points
  • Optuna got (0.854 - 0.844) * 1000 = 10 points

Experimental results: Optuna 10 points, Hyperopt 6 points.

