AutoML-Zero
Open source code for the paper: "AutoML-Zero: Evolving Machine Learning Algorithms From Scratch"
What is AutoML-Zero?
AutoML-Zero aims to automatically discover computer programs that can solve machine learning tasks, starting from empty or random programs and using only basic math operations. The goal is to simultaneously search for all aspects of an ML algorithm—including the model structure and the learning strategy—while employing minimal human bias.
Despite AutoML-Zero's challenging search space, evolutionary search shows promising results by discovering linear regression with gradient descent, 2-layer neural networks with backpropagation, and even algorithms that surpass hand-designed baselines of comparable complexity. The figure above shows an example sequence of discoveries from one of our experiments, evolving algorithms to solve binary classification tasks. Notably, the evolved algorithms can be interpreted. Below is an analysis of the best evolved algorithm: the search process "invented" techniques like bilinear interactions, weight averaging, normalized gradients, and data augmentation (by adding noise to the inputs).
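For intuition about the search method, below is a minimal Python sketch of the regularized (aging) evolution loop described in the paper. The random_program, mutate, and evaluate helpers are hypothetical stand-ins for the repository's C++ implementations, and the tournament size of 10 is an illustrative choice, not the paper's exact setting.

import random
from collections import deque

def regularized_evolution(population_size, num_cycles,
                          random_program, mutate, evaluate):
    # Seed the population with random programs (population_size >= 10).
    population = deque()
    for _ in range(population_size):
        program = random_program()
        population.append((program, evaluate(program)))
    best = max(population, key=lambda pair: pair[1])
    for _ in range(num_cycles):
        # Tournament selection: sample a few candidates, keep the fittest.
        tournament = random.sample(list(population), k=10)
        parent = max(tournament, key=lambda pair: pair[1])[0]
        # Mutate the parent, evaluate the child, and age out the oldest.
        child = mutate(parent)
        child_fitness = evaluate(child)
        population.append((child, child_fitness))
        population.popleft()  # Remove the oldest, not the worst ("aging").
        if child_fitness > best[1]:
            best = (child, child_fitness)
    return best  # (program, fitness) of the best individual seen.

Removing the oldest individual rather than the worst is what distinguishes regularized evolution from plain tournament selection.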
More examples, analysis, and details can be found in the paper.
5-Minute Demo: Discovering Linear Regression From Scratch
As a miniature "AutoML-Zero" experiment, let's try to automatically discover programs to solve linear regression tasks.
To get started, first install bazel following the instructions here (bazel>=2.2.0 and C++>=14 are required), then run the demo with:
git clone https://github.com/google-research/google-research.git
cd google-research/automl_zero
./run_demo.sh
This script runs evolutionary search on 10 linear tasks (T_search in the paper). After each experiment, it evaluates the best algorithm discovered on 100 new linear tasks (T_select in the paper). Once an algorithm attains a fitness (1 - RMS error) greater than 0.9999, it is selected for a final evaluation on 100 unseen tasks. To conclude, the demo prints the results of the final evaluation and shows the code for the automatically discovered algorithm.
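As a concrete reading of the fitness measure above, the following sketch evaluates a predictor on one synthetic linear task and returns 1 - RMS error. The task-generation settings (dimension, distributions) are illustrative assumptions, not the exact configuration used by run_demo.sh.

import numpy as np

def make_linear_task(num_examples=1000, dim=4, seed=0):
    # A synthetic linear regression task: labels are a fixed random
    # linear function of the features (illustrative settings only).
    rng = np.random.default_rng(seed)
    features = rng.normal(size=(num_examples, dim))
    true_weights = rng.normal(size=dim)
    labels = features @ true_weights
    return features, labels

def fitness(predict_fn, features, labels):
    # Fitness as defined above: 1 minus the root-mean-square error.
    predictions = np.array([predict_fn(x) for x in features])
    rms_error = np.sqrt(np.mean((predictions - labels) ** 2))
    return 1.0 - rms_error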
To make this demo quick, we use a much smaller search space than in the paper: only the math operations necessary to implement linear regression are allowed and the programs are constrained to a short, fixed length. Even with these limitations, the search space is quite sparse, as random search experiments show that only ~1 in 10^8 algorithms in the space can solve the tasks with the required accuracy. Nevertheless, this demo typically discovers programs similar to linear regression by gradient descent in under 5 minutes using 1 CPU (note that the runtime may vary due to random seeds and hardware). We have seen similar and more interesting discoveries in the unconstrained search space (see more details in the paper).
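To put that sparsity figure in perspective, here is a back-of-the-envelope calculation, assuming an illustrative throughput of 10,000 program evaluations per second (an assumption for this example, not a measured number):

hit_rate = 1e-8          # ~1 in 10^8 programs solve the tasks.
evals_per_second = 1e4   # Assumed throughput (illustrative only).
expected_seconds = 1 / (hit_rate * evals_per_second)
print(f"~{expected_seconds / 3600:.1f} hours of random search per expected hit")

Even under this generous throughput assumption, random search would need hours per expected hit, versus the demo's roughly 5 minutes with evolution.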
You can compare the automatically discovered algorithm with the solution from a human ML researcher (one of the authors):
def Setup():
  s2 = 0.001  # Init learning rate.

def Predict():  # v0 = features
  s1 = dot(v0, v1)  # Apply weights

def Learn():  # v0 = features; s0 = label
  s3 = s0 - s1  # Compute error.
  s4 = s3 * s2  # Apply learning rate.
  v2 = v0 * s4  # Compute gradient.
  v1 = v1 + v2  # Update weights.
In this human-designed program, the Setup function establishes a learning rate, the Predict function applies a set of weights to the inputs, and the Learn function corrects the weights in the opposite direction to the gradient; in other words, it is a linear regressor trained with gradient descent. Because of redundant instructions and different orderings, the evolved programs may look different even when they have the same functionality, which can make them challenging to interpret. See more details about how we address these problems in the paper.
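To make the correspondence concrete, here is a runnable NumPy translation of the human-designed program above; the variable names mirror the program's s/v registers, while the zero initialization of the weights and the epoch count are illustrative assumptions.

import numpy as np

def train_linear_regressor(features, labels, num_epochs=10):
    s2 = 0.001                         # Setup: init learning rate.
    v1 = np.zeros(features.shape[1])   # Weights (zero init is assumed).
    for _ in range(num_epochs):
        for v0, s0 in zip(features, labels):
            s1 = np.dot(v0, v1)        # Predict: apply weights.
            s3 = s0 - s1               # Learn: compute error.
            s4 = s3 * s2               # Apply learning rate.
            v2 = v0 * s4               # Compute gradient.
            v1 = v1 + v2               # Update weights.
    return v1

With the make_linear_task helper sketched earlier, train_linear_regressor(*make_linear_task()) should recover weights close to the task's ground truth.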
Reproducing Search Baselines
First install bazel following the instructions here (bazel>=2.2.0 and C++>=14 are required), then run the following command to reproduce the results in Supplementary Section 9 ("Baselines") with the "Basic" method on 1 process (1 CPU):
[To be continued, ETA: March, 2020]
If you want to use more than 1 process, you will need to create your own implementation to parallelize the computation based on your particular distributed-computing platform. A platform-agnostic description of what we did is given in our paper.
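As one possible starting point (deliberately simpler than the communicating-workers scheme the paper describes), the sketch below runs independent searches in parallel with Python's multiprocessing and collects each worker's best result; run_search is a dummy stand-in for an actual evolutionary loop.

import multiprocessing as mp
import random

def run_search(seed):
    # Dummy stand-in for one independent evolutionary search; a real
    # worker would evolve programs and return its best (program, fitness).
    rng = random.Random(seed)
    return max(rng.random() for _ in range(1000))

if __name__ == "__main__":
    # Run 4 independent searches and keep the best result overall.
    with mp.Pool(processes=4) as pool:
        best_per_worker = pool.map(run_search, range(4))
    print("Best fitness across workers:", max(best_per_worker))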
This directory omits pre-existing upgrades to the "Full" method (e.g., hurdles) but includes the upgrades introduced in this paper (e.g., FEC for ML algorithms).
Citation
If you use the code in your research, please cite:
TODO
Search keywords: machine learning, neural networks, evolution, evolutionary algorithms, regularized evolution, program synthesis, architecture search, NAS, neural architecture search, neuro-architecture search, AutoML, AutoML-Zero, algorithm search, meta-learning, genetic algorithms, genetic programming, neuroevolution, neuro-evolution.