The Severe Limitations of Supervised Learning Are Piling Up



Is research turning in a different direction?

Jul 30 · 6 min read

Supervised learning has dominated the field of machine learning primarily because big tech companies came to need it. The field was first formalized in the 1950s and 1960s, but it has only really boomed recently, even though it has been studied in academia for almost three-quarters of a century and informally for centuries. Generally, research in data science, as in many other fields, goes wherever there is heavy corporate demand for it.

For a while, prediction tasks were the most valuable for companies, so they hired data scientists and machine learning engineers in droves to research more efficient, high-performance algorithms and to deploy them in each company’s unique applications.

But as public adoption of heavy data-collection channels rapidly increases (especially with the coronavirus, during which Internet usage spiked over 70%), supervised learning cannot keep up with the sheer amount of data. The reason lies in the realization that not all prediction tasks are valuable. Although it may be possible, with standard signal collection, to predict the chance a user will read a popup, that prediction may not be worth the cost of hiring a team to properly collect, analyze, model, and deploy it.

This means that, even among problems deemed valuable, finding proper labels (y) is harder still. Labels are so expensive that entire companies exist whose sole purpose is data annotation, or providing labels for datasets. Compared to the sheer amount of unlabeled data available in an era where everyone online continually generates data streams, using only labeled data for supervised learning seems like a waste of data.

Unsupervised learning methods have been relatively understudied compared to supervised learning, but as the quantity of unlabeled data that supervised algorithms cannot utilize shoots up, methods for using that traditionally wasted data are being studied more and more. There is significant corporate interest in studying, in great depth, how to make use of an abundance of unlabeled data.

For instance, semi-supervised learning combines insights mined by unsupervised algorithms with supervised algorithms, making fuller use of the abundance of data.

[Image created by author.]

Take the semi-supervised GAN, which uses multitask learning to utilize both labeled and unlabeled data. The discriminator model must perform an unsupervised task, classifying an image as real (drawn from the dataset) or fake (created by the generator), and simultaneously a supervised task, classifying the image into its labeled category.

[Image created by author.]
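To make the multitask setup concrete, here is a minimal sketch of such a discriminator, written in PyTorch. The architecture, layer sizes, and names are illustrative assumptions, not the published SGAN implementation:

```python
import torch
import torch.nn as nn

class SemiSupervisedDiscriminator(nn.Module):
    """Discriminator with a shared trunk and two output heads:
    one unsupervised (real vs. fake) and one supervised (class label)."""

    def __init__(self, in_features=784, num_classes=10):
        super().__init__()
        # Shared feature extractor: both tasks train these weights.
        self.trunk = nn.Sequential(
            nn.Linear(in_features, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 128), nn.LeakyReLU(0.2),
        )
        self.real_fake_head = nn.Linear(128, 1)        # unsupervised task
        self.class_head = nn.Linear(128, num_classes)  # supervised task

    def forward(self, x):
        features = self.trunk(x)
        return self.real_fake_head(features), self.class_head(features)

# The real/fake loss can be computed on every image (labeled, unlabeled,
# generated); the classification loss only on the labeled subset, so the
# shared trunk still benefits from all the data.
disc = SemiSupervisedDiscriminator()
images = torch.randn(16, 784)  # a dummy batch of flattened images
real_fake_logits, class_logits = disc(images)
```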

It turns out that the insights mined by the unsupervised task tremendously assist the supervised one, to the point that the discriminator can perform incredibly well on image classification datasets with only hundreds, or even dozens, of labeled examples. Semi-supervised methods like these turn an influx of unlabeled data and a proportionate shortage of labeled data into an optimal result. Various other applications of semi-supervised learning, including semi-supervised SVMs, are being developed. Read more about semi-supervised learning here.

On the other hand, supervised learning is in the process of revealing another limitation: at its best, it only does exactly what we want it to do. While deep neural networks that produce convincing human language seem impressive to us now, they soon will not. Historically, the criteria for considering machines ‘intelligent’ have always stayed one step ahead of then-current abilities. For instance, a machine that could perform arithmetic in a fraction of a second was considered supremely intelligent until it was accomplished; now the calculator is commonplace and nothing of great surprise. This pattern has proven true across several breakthroughs in computing and machine learning.

Throughout the evolution of supervised learning, algorithms have been developed with one single goal: to better model the relationships in the data. Many problems in supervised learning remain unsolved, but given the current exponential growth in research and the consistent breakthroughs in deep learning performance, at a certain point the corporate value of making existing algorithms perform a fraction of a percentage point better will shrink.

Algorithms that can do exactly what we want them to do, whether that is generating realistic art, speech, or music on demand, will be commonplace soon. The next step is developing algorithms that can ‘think for themselves’: algorithms that are more truly intelligent than ever before, devising unique and previously unthought-of strategies to optimize some evaluation metric.

For example, AlphaGo, the famous deep learning system that beat the world’s top Go player, did not operate on purely supervised models, because those are chained to the limitations of their data. If AlphaGo had been trained only on inputs from the top human Go players (given this board layout, play the move a top player made), it would only ever be as good as they were. To beat the top human players, AlphaGo incorporated elements of reinforcement learning, including self-play and exploration, unconstrained by the data and free to choose any strategy as long as the end result, winning, was achieved. Experts remarked on AlphaGo’s “unhuman” playing style as it beat over 60 of the top human Go players.

The purpose of supervised learning is to mine and replicate relationships in existing data. This is clear from the Universal Approximation Theorem, which (essentially) says that the neural network, the pinnacle of supervised learning algorithms, gains its power from its universal approximation capabilities: a sufficiently large architecture can model the underlying relationship in any dataset to any degree of accuracy.
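As a toy illustration of that approximation power, the sketch below (my own example, assuming PyTorch; not from the original article) fits a small one-hidden-layer network to a noisy sine curve and drives the training error toward the noise floor:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Dataset whose underlying relationship is y = sin(x), plus a little noise.
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + 0.05 * torch.randn_like(x)

# A small MLP; even one hidden layer is a universal approximator.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.01)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.4f}")  # approaches the 0.05**2 noise floor
```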

Currently, many actively researched areas in supervised learning involve machines automating human tasks: speaking, drawing, recognizing objects. Reinforcement learning can take us further by finding strategies to, say, treat cancer, engineer the best race car, or even create new, better algorithms; these are answers supervised learning cannot provide.

The guiding idea behind reinforcement learning is that we define an agent and an environment. The agent interacts with the environment and learns to behave such that some metric is optimized. For a self-driving car, for instance, the metric may be miles driven before a crash or before going off course. We build the agent around neural network architectures for decision-making and let it learn freely.
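As a minimal sketch of that agent-environment loop, here is tabular Q-learning on a toy corridor environment (the environment, rewards, and hyperparameters are invented for illustration; a real agent would use a neural network and a far richer environment):

```python
import random

# Toy environment: a corridor of six cells. The agent starts in cell 0
# and earns reward +1 only upon reaching the final cell.
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]  # move left, move right

def env_step(state, action):
    next_state = min(GOAL, max(0, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def pick_action(state, epsilon=0.2):
    # Epsilon-greedy exploration; greedy ties are broken randomly
    # so early episodes still explore.
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    best = max(Q[state])
    return random.choice([a for a, q in enumerate(Q[state]) if q == best])

# Tabular Q-learning: the agent improves purely through interaction,
# with no labeled examples of "correct" moves.
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
alpha, gamma = 0.1, 0.9

for episode in range(300):
    state, done = 0, False
    while not done:
        a = pick_action(state)
        next_state, reward, done = env_step(state, ACTIONS[a])
        # Update toward the reward plus discounted best future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # the right-moving action ends up with the higher value in every cell
```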

Fundamentally, this is the difference between interpolation (generalization) and extrapolation. Supervised learning is about generalizing patterns in data; the illusion of ‘intelligence’ is really a very good relationship-finder hidden under the guise of high dimensionality. There is a hard ceiling on how well supervised algorithms can perform (100% accuracy), but there is no ceiling on how far reinforcement learning, which extrapolates by finding creative solutions that maximize a metric, can go.

To illustrate the difference, consider a demonstration of interpolation:

[Illustration of interpolation; image created by author.]

And one of extrapolation:

[Illustration of extrapolation; image created by author.]

Supervised learning can only interpolate. Reinforcement learning, along with evolutionary algorithms, has the potential to extrapolate.
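To see the distinction numerically, the sketch below (my own, using NumPy; the function and ranges are arbitrary) fits a cubic polynomial to a sine curve and compares its predictions inside and outside the training range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Train a cubic polynomial on y = sin(x) over x in [0, 2*pi].
x_train = np.linspace(0, 2 * np.pi, 100)
y_train = np.sin(x_train) + 0.05 * rng.normal(size=x_train.size)
coeffs = np.polyfit(x_train, y_train, deg=3)

# Interpolation: inside the training range, the fit stays close to sin.
print(np.polyval(coeffs, np.pi / 2), np.sin(np.pi / 2))

# Extrapolation: outside the range, the cubic diverges toward infinity
# while the true sine stays bounded in [-1, 1].
print(np.polyval(coeffs, 4 * np.pi), np.sin(4 * np.pi))
```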

While this is no claim that supervised learning is ‘inferior’ to another subfield of machine learning (such a comparison would be silly), it is fair to say that many opportunities await solutions that address its limitations in a new era of big data and machine learning. Many such solutions actually incorporate ingenious breakthroughs from supervised learning, like modern deep learning architectures and practices. The fundamental ideas behind reinforcement learning, combined with neural networks as the basis for policy decisions, are the promising next step toward algorithms that think for themselves and surprise us.

Supervised learning is a field whose flurry of interest and research will not die away anytime soon, but unsupervised, semi-supervised, and reinforcement learning methods will be on the rise to address the shortcomings of supervised learning and the needs of corporate data science.

