Biases in Machine Learning
Most common reasons why biases get introduced in ML models
Jun 25 · 6 min read
Whether you like it or not, the impact of machine learning on your life is growing rapidly. Machine learning algorithms determine whether you get the mortgage for your dream home, or whether your resume is shortlisted for your next job. Machine learning is also changing our workforce: robots are taking over warehouses and factories, and self-driving cars threaten to disrupt the jobs of millions of professional drivers around the world. Even law enforcement agencies are increasingly using machine learning to screen for potential criminal leads and to assess risk.
Unfortunately, all these technological advancements may be perpetuating and exacerbating the biases ailing our society. In one of the earliest examples of algorithmic bias, as many as 60 women and ethnic-minority applicants per year were denied entry to St. George’s Hospital Medical School from 1982 to 1986, because a new computer-guided assessment system, built on historical admissions trends, rejected women and men with “foreign-sounding names”. More recently, in 2016, TayTweets, a chatbot trained by Microsoft on Twitter data, started spouting racist tweets.
All these advancements raise very valid questions about how machine learning practitioners can ensure fairness in their algorithms. What is fair is an age-old question, and thankfully a lot of research has been going on in this area. In this post, I am going to talk about the most common set of problems you might run into when trying to ensure that your machine learning model is bias-free.
Underrepresentation
One of the most common causes of bias in machine learning algorithms is training data that is missing samples for underrepresented groups or categories. This is why Siri frequently has a hard time understanding people with accents, and it is also what caused the infamous Google Photos incident in which black people were tagged as gorillas. So it is really important to make sure the training data has adequate representation of all underrepresented groups. Another way to detect this early is to deploy a second model that predicts whether the data seen in production is close to the training data, and to intervene as soon as it is not.
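To make that second check concrete, here is a minimal sketch of one common way to implement it: train a “domain classifier” to tell training rows apart from production rows. If it can do no better than chance (ROC AUC near 0.5), the two samples look alike; an AUC well above 0.5 suggests production data has drifted away from the training distribution. The function name, the synthetic data, and the choice of a random forest are my own illustrative assumptions, not something prescribed by the post.

```python
# A minimal sketch of drift detection via a "domain classifier"
# (assumes tabular features as NumPy arrays and scikit-learn installed).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def drift_auc(train_X: np.ndarray, prod_X: np.ndarray) -> float:
    """Cross-validated ROC AUC of a classifier separating train vs. production.

    ~0.5 means the two samples are indistinguishable; values well above 0.5
    mean production data no longer looks like the training data.
    """
    X = np.vstack([train_X, prod_X])
    y = np.concatenate([np.zeros(len(train_X)), np.ones(len(prod_X))])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

# Illustrative usage on synthetic data with one deliberately shifted feature.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(1000, 5))
prod_X = rng.normal(size=(1000, 5))
prod_X[:, 0] += 1.5  # simulate one feature drifting in production
print(f"drift AUC: {drift_auc(train_X, prod_X):.2f}")  # noticeably above 0.5
```

In practice you would run a check like this on a schedule against fresh production batches and alert when the score crosses a threshold you have calibrated for your data.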