Biases in Machine Learning



Most common reasons why biases get introduced in ML models

Whether you like it or not, the impact of machine learning on your life is growing rapidly. Machine learning algorithms determine whether you get the mortgage for your dream home and whether your resume is shortlisted for your next job. Machine learning is also changing our workforce: robots are taking over warehouses and factories, and self-driving cars threaten to disrupt the jobs of millions of professional drivers across the world. Even law enforcement agencies increasingly use machine learning to screen for potential criminal leads and assess risks.

Photo by Lenny Kuhne on Unsplash

Unfortunately, all these advancements in technology may be perpetuating and exacerbating the biases ailing our society. In one of the earliest examples of algorithmic bias, from 1982 to 1986 a new computerized assessment system at St. George’s Hospital Medical School denied entry to roughly 60 applicants per year, women and men with “foreign-sounding names”, because it replicated historical trends in admissions. More recently, in 2016, Tay, a chatbot trained by Microsoft on Twitter data, started spouting racist tweets.


All these advancements raise very valid questions about how machine learning practitioners can ensure fairness in their algorithms. What is fair is an age-old question. Thankfully, a lot of research has been going on in this area. In this post, I am going to talk about the most common problems you might run into when trying to ensure that your machine learning model is bias-free.

Underrepresentation

One of the most common causes of bias in machine learning algorithms is training data that is missing samples for underrepresented groups or categories. This is why Siri frequently has a hard time understanding people with accents. It is also what caused the infamous Google Photos incident in which black people were tagged as gorillas. So it is really important to make sure the training data has representation from all the underrepresented groups. Another way to detect this early is to deploy a second algorithm that predicts whether the data seen in production is close to the training data, and to intervene early when it is not.
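The "second algorithm" idea can be sketched as a domain classifier, sometimes called adversarial validation: train a model to tell training rows apart from production rows. If it cannot do better than chance (ROC AUC near 0.5), the two samples look alike; a high AUC signals drift and is a cue to intervene. This is a minimal illustrative sketch using scikit-learn on synthetic data, not the original post's implementation; the function name and thresholds are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def drift_score(train_X, prod_X):
    """Mean cross-validated ROC AUC of a train-vs-production classifier.

    ~0.5 means the classifier cannot separate the two samples (no drift);
    values well above 0.5 mean production data looks unlike training data.
    """
    X = np.vstack([train_X, prod_X])
    y = np.r_[np.zeros(len(train_X)), np.ones(len(prod_X))]  # 0 = train, 1 = prod
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 4))    # reference (training) sample
same = rng.normal(0.0, 1.0, size=(500, 4))     # production sample, no drift
shifted = rng.normal(1.5, 1.0, size=(500, 4))  # production sample, mean-shifted

print(drift_score(train, same))     # close to 0.5: distributions match
print(drift_score(train, shifted))  # close to 1.0: flag for intervention
```

In practice you would run this check on a schedule against recent production batches and alert when the score crosses a threshold, rather than comparing two fixed arrays as above.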

