Most common reasons why biases get introduced in ML models
Whether you like it or not, the impact of machine learning on your life is growing rapidly. Machine learning algorithms determine whether you get the mortgage for your dream home, or whether your resume is shortlisted for your next job. It is also changing our workforce: robots are taking over warehouses and factories, and self-driving cars are threatening to disrupt the jobs of millions of professional drivers across the world. Even law enforcement agencies are increasingly using machine learning to screen for potential criminal leads and assess risks.
Unfortunately, all these advancements in technology may be perpetuating and exacerbating the biases ailing our society. In one of the earliest examples of algorithmic bias, from 1982 to 1986 St. George’s Hospital Medical School denied entry to as many as 60 women and ethnic-minority applicants per year, because a new computer-based assessment system, built on historical admissions trends, penalized women and men with “foreign-sounding names”. More recently, in 2016, TayTweets, a chatbot trained by Microsoft on Twitter data, started spouting racist tweets.
All these advancements raise valid questions about how machine learning practitioners can ensure fairness in their algorithms. What is fair is an age-old question. Thankfully, a lot of research has been going on in this area. In this post, I am going to talk about the most common problems you might run into when trying to ensure that your machine learning model is bias-free.
Underrepresentation
One of the most common causes of bias in machine learning algorithms is that the training data is missing samples for underrepresented groups or categories. This is the reason Siri frequently has a hard time understanding people with accents. It is also what caused the infamous Google Photos incident where black people were tagged as gorillas. So it is really important to make sure the training data includes adequate representation of all underrepresented groups. Another way to catch this problem early is to deploy a second algorithm that predicts whether the data in production is close to the training data, and to intervene when it is not; a minimal sketch of this idea follows below.
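One way to implement that "second algorithm" is to fit an outlier detector on the training data and flag production batches that look unlike it. The sketch below uses scikit-learn's IsolationForest; the synthetic features and the 10% alert threshold are illustrative assumptions of mine, not part of the original post.

```python
# Hypothetical sketch: alert when production data drifts away from training data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in for real training features (e.g. embeddings or tabular columns).
X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))

# Fit the detector on the same data the model was trained on.
detector = IsolationForest(random_state=0).fit(X_train)

def audit_batch(X_prod, alert_fraction=0.1):
    """Return the outlier rate of a production batch; warn if it is too high."""
    labels = detector.predict(X_prod)          # +1 = resembles training data, -1 = outlier
    outlier_rate = float(np.mean(labels == -1))
    if outlier_rate > alert_fraction:
        print(f"ALERT: {outlier_rate:.1%} of this batch looks unlike the training data")
    return outlier_rate

# A shifted batch, simulating an underrepresented group arriving in production,
# should trigger the alert.
X_prod = rng.normal(loc=2.0, scale=1.0, size=(1000, 8))
audit_batch(X_prod)
```

In practice the alert would feed a monitoring dashboard or pager rather than a print statement, and the threshold would be tuned on held-out data.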