If AI's So Smart, Why Can't It Grasp Cause and Effect?


Here’s a troubling fact. A self-driving car hurtling along the highway and weaving through traffic has less understanding of what might cause an accident than a child who’s just learning to walk.

A new experiment shows how difficult it is for even the best artificial intelligence systems to grasp rudimentary physics and cause and effect. It also offers a path for building AI systems that can learn why things happen.

The experiment was designed “to push beyond just pattern recognition,” says Josh Tenenbaum, a professor at MIT’s Center for Brains, Minds and Machines, who led the work. “Big tech companies would love to have systems that can do this kind of thing.”

The most popular cutting-edge AI technique, deep learning, has delivered some stunning advances in recent years, fueling excitement about the potential of AI. It involves feeding a large approximation of a neural network copious amounts of training data. Deep-learning algorithms can often spot patterns in data beautifully, enabling impressive feats of image and voice recognition. But they lack other capabilities that are trivial for humans.
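As a rough illustration of what “feeding a neural network copious amounts of training data” means in practice, here is a minimal, hypothetical sketch of that supervised pattern-recognition loop in PyTorch. The toy data, network size, and training settings are assumptions made for illustration; they do not describe the systems discussed in this article.

```python
# Minimal sketch of supervised pattern recognition (illustrative assumptions only).
import torch
from torch import nn

# Toy "images": 100 random feature vectors, each with a label of 0 or 1.
inputs = torch.randn(100, 16)
labels = torch.randint(0, 2, (100,))

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                       # feed the labeled data repeatedly
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # measure how wrong the predictions are
    loss.backward()                        # nudge the weights to reduce that error
    optimizer.step()
```

Nothing in this loop represents why a label follows from an input; the network only learns correlations between inputs and labels, which is the limitation the rest of the article explores.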

To demonstrate the shortcoming, Tenenbaum and his collaborators built a kind of intelligence test for AI systems. It involves showing an AI program a simple virtual world filled with a few moving objects, together with questions and answers about the scene and what’s going on. The questions and answers are labeled, similar to how an AI system learns to recognize a cat by being shown hundreds of images labeled “cat.”
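To make that setup concrete, a single labeled example in a benchmark of this kind might be structured roughly like the sketch below. The field names, file name, and answers are hypothetical, not taken from the actual dataset used in the work.

```python
# Hypothetical structure of one labeled example in a video question-answering
# benchmark of this kind (all field names and values are illustrative only).
example = {
    "scene": "video_00042.mp4",           # a short clip of simple objects moving
    "objects": ["red ball", "blue cube"],
    "questions": [
        {
            "type": "descriptive",
            "question": "What color is the ball?",
            "answer": "red",
        },
        {
            "type": "counterfactual",
            "question": "What would have happened if the objects had not collided?",
            "answer": "the cube stays still",
        },
    ],
}

# Training mirrors image labeling: the model sees many (scene, question, answer)
# triples and is scored on how often its predicted answer matches the label.
for q in example["questions"]:
    print(q["type"], "->", q["question"])
```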

Systems that use advanced machine learning exhibited a big blind spot. Asked a descriptive question such as “What color is this object?” a cutting-edge AI algorithm will get it right more than 90 percent of the time. But when posed more complex questions about the scene, such as “What caused the ball to collide with the cube?” or “What would have happened if the objects had not collided?” the same system answers correctly only about 10 percent of the time.


David Cox, director of the MIT-IBM Watson AI Lab, which was involved with the work, says understanding causality is fundamentally important for AI. “We as humans have the ability to reason about cause and effect, and we need to have AI systems that can do the same.”

A lack of causal understanding can have real consequences, too. Industrial robots can increasingly sense nearby objects in order to grasp or move them. But they don't know that hitting something will cause it to fall over or break unless they’ve been specifically programmed, and it’s impossible to predict every possible scenario.

If a robot could reason causally, however, it might be able to avoid problems it hasn’t been programmed to understand. The same is true for a self-driving car. It could instinctively know that if a truck were to swerve and hit a barrier, its load could spill onto the road.

Causal reasoning would be useful for just about any AI system. Systems trained on medical information rather than 3-D scenes need to understand the cause of disease and the likely result of possible interventions. Causal reasoning is of growing interest to many prominent figures in AI. “All of this is driving towards AI systems that can not only learn but also reason,” Cox says.

The test devised by Tenenbaum is important, says Kun Zhang, an assistant professor who works on causal inference and machine learning at Carnegie Mellon University, because it provides a good way to measure causal understanding, albeit in a very limited setting. “The development of more-general-purpose AI systems will greatly benefit from methods for causal inference and representation learning,” he says.
