Automating Machine Learning: Google AutoML-Zero Evolves ML Algorithms From Scratch



We often hear how widespread artificial intelligence has become and how it is increasingly affecting our daily lives. But for most people the nature of the tech is a mystery — we know it’s powerful but we don’t know what makes it tick, much less how it’s built. While research over the past decade has greatly advanced model structures and learning methods, creating algorithms remains relatively time-consuming and difficult. This has prompted research into automation efforts, or AutoML, aimed at the simplification and democratization of AI.

In a recent ICML paper, Google researchers propose an “AutoML-Zero” approach designed to automatically search for machine learning (ML) algorithms from scratch, requiring minimal human expertise or input. Starting from empty programs, AutoML-Zero uses only basic mathematical operations as building blocks and applies evolutionary methods to automatically find the code for complete ML algorithms.


The Google researchers note that previous work on AutoML has largely focused on the architecture of neural networks, which often relies on sophisticated expert-designed layers as building blocks. They aim to replace those expert-designed layers with simple mathematical operations, pushing AutoML a step further to automatically discover complete ML algorithms.

Ideally, AutoML would cover the complete pipeline, from raw datasets to deployable ML models, fully automating the process of applying ML to real-world problems. This is the ultimate goal: a level of automation that would enable even non-experts to make use of ML models and techniques.

Automating the process of applying ML end-to-end can not only boost model performance but also produce simpler solutions and accelerate their creation.

The researchers identify a couple of drawbacks of previous handmade AutoML approaches. First, human-designed components can bias search results in favour of human-designed algorithms, which may reduce the innovation potential of AutoML. Also, while some AutoML studies have constrained their search spaces to isolated algorithmic aspects, such constrained search spaces place a new burden on researchers and can undermine the original goal of saving their time.

Automatically Search for ML Algorithms From Scratch

To address these limitations, the researchers propose AutoML-Zero, which can simultaneously search a fine-grained space for the model, optimization procedure, initialization, and so on. The approach requires much less human design to automatically search for whole ML algorithms from basic operations with minimal restrictions on form, and even allows the discovery of non-neural network algorithms. The approach “demonstrates the plausibility of automatically discovering more novel ML algorithms to address harder problems in the future,” the researchers explain in a blog post.

In small image classification problems, for example, the proposed search method starts from scratch yet eventually “rediscovers” fundamental ML techniques such as linear regression and backpropagation that were developed by researchers years ago.

The Google researchers adopted a variant of classic evolutionary methods, which have been proven useful in discovering computer programs since the 1980s, to search the space of algorithms.

The Bet on Evolutionary Algorithms

An evolutionary algorithm (EA) is a subset of evolutionary computation, a family of population-based, trial-and-error problem solvers with a metaheuristic or stochastic optimization character. In evolutionary computation, an initial set of candidate solutions is first generated and then iteratively updated. Each new generation is produced by stochastically removing less-desirable solutions and introducing small random changes.

Evolutionary algorithms use mechanisms inspired by biological evolution such as reproduction, mutation, recombination, and selection. EAs often perform well in approximating solutions to a range of problems that would otherwise take too long to exhaustively process.
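To make this loop concrete, here is a minimal sketch of an evolutionary algorithm in Python. It is purely illustrative and not from the paper: the fitness task (matching a target vector) and all function names are invented for this example.

```python
import random

def fitness(candidate, target):
    """Negative squared error against the target: higher is better."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(parent, scale=0.1):
    """Clone the parent and randomly perturb one of its values."""
    child = list(parent)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, scale)
    return child

def evolve(target, pop_size=50, cycles=2000):
    # Generate an initial set of random candidate solutions.
    population = [[random.uniform(-1, 1) for _ in target] for _ in range(pop_size)]
    for _ in range(cycles):
        # Selection: the better of two random candidates becomes the parent.
        a, b = random.sample(population, 2)
        parent = max(a, b, key=lambda c: fitness(c, target))
        # Remove the oldest candidate and add a mutated copy of the parent.
        population.pop(0)
        population.append(mutate(parent))
    return max(population, key=lambda c: fitness(c, target))

print(evolve(target=[0.5, -0.2, 0.9]))
```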

The use of evolutionary principles for automated problem-solving was formally proposed and developed more than 50 years ago. Artificial evolution became a widely recognized optimization method as a result of the work of German researcher Ingo Rechenberg, who used evolution strategies to solve complex engineering problems in the 1960s and early 1970s.

In 1987, Jürgen Schmidhuber published his first paper on genetic programming, and later that year described the first general-purpose learning algorithms in his diploma thesis, Evolutionary Principles in Self-Referential Learning.

Since the 1990s, nature-inspired algorithms have become an increasingly significant part of evolutionary computation. With academic interest continuing to grow and the power of computers continuing to increase, evolutionary algorithms can now solve multi-dimensional problems more efficiently than software produced by human designers, and they can also be used to optimize the design of systems.

The Google researchers found the simplicity and scalability of evolutionary methods especially suitable for the discovery of learning algorithms, and their results demonstrated potential through the discovery of nuanced ML algorithms using evolutionary search.

Exploration of Vast and Sparse Search Spaces

Early research into learning algorithms from scratch focused on reducing the search space and the compute required, and this approach has not been revisited much since the early 1990s, the researchers wrote.

Existing AutoML search spaces have been constructed to be dense with good solutions, thus deemphasizing the search method itself. AutoML-Zero is different: its space is so generic that it ends up being extremely sparse, with an accurate algorithm perhaps as rare as 1 in 10¹² candidates. This genericity makes the AutoML-Zero space much more difficult to search than existing AutoML spaces.

In the AutoML-Zero setup, random search struggles to find a solution in a reasonable amount of time, while evolutionary methods can be tens of thousands of times faster, according to the researchers.

The team first initializes a population of empty programs, which then evolves over repeated cycles to produce better and better learning algorithms. In each cycle, two or more random models compete and the most accurate one becomes a “parent.” The parent clones itself to produce a child, which is then mutated: the child’s code is modified in a random way, for example by arbitrarily inserting, removing, or modifying a line of code. The mutated algorithm is then evaluated on image classification tasks.
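As a rough sketch of what such a mutation might look like in code, the snippet below represents a candidate program as a list of simple instructions and randomly inserts, removes, or modifies one of them. The instruction format and operation names are simplified stand-ins invented for illustration, not the paper’s actual instruction set.

```python
import random

# A toy program is a list of instructions: (operation, destination register, operand registers).
OPS = ["add", "sub", "mul", "div", "sin", "cos"]

def random_instruction(num_registers=8):
    return (random.choice(OPS),
            random.randrange(num_registers),
            tuple(random.randrange(num_registers) for _ in range(2)))

def mutate(program, num_registers=8):
    """Randomly insert, remove, or modify one line of the program."""
    child = list(program)
    action = random.choice(["insert", "remove", "modify"])
    if action == "insert" or not child:
        child.insert(random.randrange(len(child) + 1), random_instruction(num_registers))
    elif action == "remove":
        del child[random.randrange(len(child))]
    else:
        child[random.randrange(len(child))] = random_instruction(num_registers)
    return child

# Starting from an empty program, a few rounds of mutation build up random code.
program = []
for _ in range(5):
    program = mutate(program)
print(program)
```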


Evolutionary Methods Find Solutions in AutoML-Zero Search Space

Evolutionary search not only finds solutions in the AutoML-Zero search space despite its enormous size and sparsity, but also discovers more complex and effective techniques as time passes. Moreover, evolution adapts the algorithm to different task types: for instance, dropout-like operations emerge when a task needs regularization, and learning rate decay appears when a task requires faster convergence.

Starting with a population of empty programs, the evolutionary search can at first find only the simplest algorithms, which represent linear models with hard-coded weights. As time passes, however, more complex and accurate algorithms are automatically invented. For example, stochastic gradient descent (SGD), an iterative method for optimizing an objective function with suitable smoothness properties, is invented to learn the weights.
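In the paper, each candidate algorithm is represented by three component functions that the search fills in: Setup, Predict, and Learn. A hand-written Python equivalent of the kind of linear-model-plus-SGD algorithm the search rediscovers might look like the sketch below; it is illustrative only, not the evolved code itself, and the variable names and synthetic data are invented for the example.

```python
import numpy as np

def setup(num_features, rng):
    """Setup: initialize the model state (here, a weight vector)."""
    return {"w": rng.normal(0.0, 0.1, size=num_features)}

def predict(state, x):
    """Predict: a linear model's output is a dot product of weights and features."""
    return float(np.dot(state["w"], x))

def learn(state, x, y, lr=0.01):
    """Learn: one stochastic gradient descent step on the squared error of one example."""
    error = y - predict(state, x)
    state["w"] += lr * error * x

# Usage on synthetic data: the learned weights should approach the true ones.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
state = setup(num_features=3, rng=rng)
for _ in range(2000):
    x = rng.normal(size=3)
    learn(state, x, y=float(np.dot(true_w, x)))
print(state["w"])
```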

In their experiments, the first SGD invented was flawed but was automatically fixed quite quickly, triggering a series of improvements to the prediction and learning algorithm. The improvements over the baseline can also be transferred to datasets that are not used during search. In the end, the proposed approach managed to produce a “best evolved algorithm” and construct a model that outperformed hand-crafted designs of comparable complexity.

The final algorithm includes techniques such as noise injection as data augmentation, a bilinear model, gradient normalization, and weight averaging.
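The snippet below sketches, in a toy setting, what each of these named techniques generally means; it is not the evolved program itself, and the bilinear form, the synthetic task, and all hyperparameters are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
W = rng.normal(0, 0.1, size=(dim, dim))   # weights of a bilinear model
avg_W = W.copy()                          # running average of the weights
lr, noise_scale, avg_decay = 0.05, 0.1, 0.99

def predict(weights, x):
    """Bilinear model: the prediction is the quadratic form x^T W x."""
    return float(x @ weights @ x)

for step in range(2000):
    x = rng.normal(size=dim)
    y = float(x @ x)                                   # toy regression target
    # Noise injection as data augmentation: perturb the input before learning from it.
    x_noisy = x + rng.normal(0, noise_scale, size=dim)
    error = y - predict(W, x_noisy)
    grad = -error * np.outer(x_noisy, x_noisy)         # gradient of 0.5 * error^2 w.r.t. W
    # Gradient normalization: rescale the gradient to unit norm before the update.
    grad /= np.linalg.norm(grad) + 1e-8
    W -= lr * grad
    # Weight averaging: keep an exponential moving average of the weights for prediction.
    avg_W = avg_decay * avg_W + (1 - avg_decay) * W

x_test = rng.normal(size=dim)
print(predict(avg_W, x_test), float(x_test @ x_test))
```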

The researchers also describe how different lines in the evolved code implement each of these techniques and present ablation studies to verify their value. Through additional experiments, they show that it is possible to guide the evolutionary search by controlling “the habitat” — the tasks on which the evolutionary process evaluates the fitness of the algorithms.

“We consider this to be preliminary work,” the researchers explain. “We have yet to evolve fundamentally new algorithms, but it is encouraging that the evolved algorithm can surpass simple neural networks that exist within the search space.”

Currently, the search process still requires significant compute. But the researchers believe that with the increased availability of powerful hardware and more efficient search methods, the search space will become more inclusive and the results will improve.

The paper AutoML-Zero: Evolving Machine Learning Algorithms From Scratch is on arXiv, and the open-sourced code is on GitHub.

Journalist: Yuan Yuan | Editor: Michael Sarazen
