Machine learning as a creative tool, and the quest for artificial general intelligence



Photo by sergio souza on Unsplash

Editor’s note: The Towards Data Science podcast’s “Climbing the Data Science Ladder” series is hosted by Jeremie Harris. Jeremie helps run a data science mentorship startup called SharpestMinds.

Most machine learning models are used in roughly the same way: they take a complex, high-dimensional input (like a data table, an image, or a body of text) and return something very simple (a classification or regression output, or a set of cluster centroids). That makes machine learning ideal for automating repetitive tasks that might historically have been carried out only by humans.

But this strategy may not be the most exciting application of machine learning in the future: increasingly, researchers and even industry players are experimenting with generative models, that produce much more complex outputs like images and text from scratch. These models are effectively carrying out a creative process — and mastering that process hugely widens the scope of what can be accomplished by machines.

My guest today is Xander Steenbrugge, and his focus is on the creative side of machine learning. In addition to consulting with large companies to help them put state-of-the-art machine learning models into production, he’s focused a lot of his work on more philosophical and interdisciplinary questions — including the interaction between art and machine learning. For that reason, our conversation went in an unusually philosophical direction, covering everything from the structure of language, to what makes natural language comprehension more challenging than computer vision, to the emergence of artificial general intelligence, and how all these things connect to the current state of the art in machine learning.

Here were some of my biggest take-homes:

  • An awful lot of machine learning models that are developed within companies never end up being deployed. Speaking from his consulting experience, Xander says that the reason this happens is that companies are often excited about the prospect of deploying “cool” state-of-the-art models rather than focusing on solving real business problems. This problem isn’t unique to companies though: the mark of a great data scientist is that they think about creating business value before they think about fancy models.
  • Language data is intrinsically more sensitive to change than image data. You can completely change the meaning of a sentence or paragraph by changing a word or two, but changing the color of a handful of pixels won’t have much of an impact on the meaning conveyed by an image. That’s a big part of the reason that language modelling hasn’t caught up to computer vision yet, despite major recent advances.
  • Currently, language models define the meaning of words only with reference to other words. This may feel like it’s what we do as well — after all, if I ask you to define what an apple is, the only way for you to answer is by referring to other words, like “fruit” or “sweet” or “tree”. And you might imagine — as did philosopher Jacques Derrida — that because each of those words is in turn defined only with respect to other words, all of language is basically just an arbitrary, self-referential mess that’s not tied to anything concrete.
  • This may not be the end of the story, however, because humans don’t just learn the meaning of words by reading millions of Wikipedia articles and connecting one word to another to build a web of interdependencies. Instead, we supplement this strategy with data from other sources, like vision, sound, smell and touch. So it’s possible that to create a true artificial general intelligence, a variety of input types will be needed as well.
  • There are two camps in the debate over what it would take to build an artificial general intelligence: one argues that we’ll get there primarily by improving our compute capabilities and throwing more RAM at the problem, and another argues that more complex models will be required. Xander argues it’s possible that some mix of both will be needed, and that combining models in new ways could also lead to significant jumps in AGI-like performance.
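The sensitivity point above can be made concrete with a toy sketch. The snippet below is purely illustrative (the lexicon, negation rule, and fake image are all made up for this example): swapping in a single token like “not” flips the output of a crude sentiment scorer, while recoloring a few pixels of an image barely changes it at all.

```python
import numpy as np

# Hypothetical sentiment lexicon, for illustration only.
LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0}

def toy_sentiment(sentence):
    """Sum word scores; 'not' flips the sign of subsequent words
    (a deliberately crude negation rule)."""
    score, sign = 0.0, 1.0
    for word in sentence.lower().split():
        if word == "not":
            sign = -sign
        else:
            score += sign * LEXICON.get(word, 0.0)
    return score

# Changing one token inverts the predicted meaning entirely.
print(toy_sentiment("this movie is good"))      # 1.0
print(toy_sentiment("this movie is not good"))  # -1.0

# By contrast, recoloring a handful of pixels barely moves an image.
rng = np.random.default_rng(0)
image = rng.random((64, 64))     # a fake 64x64 grayscale image
perturbed = image.copy()
perturbed[0, :5] = 0.0           # zero out five pixels
relative_change = np.abs(perturbed - image).sum() / image.sum()
print(f"relative change: {relative_change:.4f}")  # a tiny fraction
```

A single-word edit flips the text score from +1 to -1, while the five-pixel edit shifts the image’s total intensity by well under one percent, which is one intuition for why small perturbations are so much more dangerous in language models.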

You can follow Xander on Twitter here, and you can follow me on Twitter here.

