Systematic Bias in Artificial Intelligence

Shining a light on one of the biggest problems of tomorrow

Does data really have a better idea? Photo credit: Franki Chamaki on Unsplash

Recently, Dr. Jennifer Lincoln made a TikTok highlighting the many ways African Americans face discrimination in healthcare, such as receiving less pain medication and waiting longer in emergency rooms. The video, based on this study published in the Proceedings of the National Academy of Sciences (PNAS), went viral with 400k+ views on TikTok and nearly 8 million views on Twitter. Now imagine an AI model trained on those same healthcare records to predict painkiller dosages for patients: it may recommend lower dosages for African American patients simply because it was trained on a dataset in which African American patients received lower dosages. This hypothetical use of AI would be deeply problematic, since it would further institutionalize racism.
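
To see how mechanically this happens, here is a minimal sketch on synthetic data (all variables and numbers are invented for illustration): a regression fit to records in which one group systematically received lower dosages will recommend lower dosages for that group, even at identical severity.

```python
# A minimal synthetic sketch (not real medical data): if the training
# labels encode a dosage gap between groups, the model learns the gap.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
severity = rng.uniform(0, 10, n)       # hypothetical pain-severity score
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B

# Biased historical labels: group B received ~20% less medication
# for the same severity.
dosage = severity * np.where(group == 1, 0.8, 1.0) + rng.normal(0, 0.5, n)

model = LinearRegression().fit(np.column_stack([severity, group]), dosage)

# Two otherwise identical patients, differing only in group membership:
patient_a = model.predict([[7.0, 0]])[0]
patient_b = model.predict([[7.0, 1]])[0]
print(f"Predicted dosage, group A: {patient_a:.2f}")
print(f"Predicted dosage, group B: {patient_b:.2f}")  # noticeably lower
```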

Consequences of Bias

As AI becomes increasingly integrated into current systems, systematic bias is an important risk that cannot be overlooked. When models are fed data in which a bias against a certain ethnicity or gender exists, they cannot serve their intended purpose effectively. A model evaluated on a metric such as accuracy or profit will attempt to maximize that metric without any regard for the biases it forms. If steps aren't taken to combat this issue, the public, and regulators in particular, might lose faith in AI, preventing us from unlocking the potential of the technology. To grasp the severity of the problem, consider two more frightening examples of bias in AI.

  • As outlined in the paper “The Risk of Racial Bias in Hate Speech Detection”, researchers at the University of Washington tested Google’s AI hate speech detector on over 5 million tweets and discovered that tweets written by African Americans were twice as likely to be classified as toxic speech as tweets written by members of other groups.
  • COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is an algorithm used by New York, California, and other states to predict the risk of released prisoners committing another crime. In the research article “The accuracy, fairness, and limits of predicting recidivism”, researchers at Dartmouth concluded that “Black defendants who did not recidivate were incorrectly predicted to reoffend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%; and white defendants who did recidivate were incorrectly predicted to not reoffend at a rate of 47.7%, nearly twice as high as their black counterparts at 28.0%.” This is extremely troubling given that COMPAS scores can influence the length of a defendant’s sentence. A sketch of this kind of per-group error audit follows this list.
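
The Dartmouth numbers above come from comparing error rates group by group. Below is a minimal sketch of that kind of audit on invented toy data (not the actual COMPAS dataset): a false positive here is a defendant who did not reoffend but was predicted to.

```python
# A hedged sketch of a per-group fairness audit; the arrays are toy data.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives that were incorrectly predicted positive."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])   # 1 = actually reoffended
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1])   # model's risk predictions
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# A model can score well on overall accuracy while its errors fall
# very unevenly across groups, which is exactly what the audit exposes.
for g in ("a", "b"):
    mask = group == g
    print(f"group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```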

Fighting Bias

A concern that many critics of AI are vocal about is the “black box” nature of artificial neural networks: a machine learning model can provide an answer to the question we ask, but we can’t understand how the model arrived at that answer due to the complexity of the calculations involved. This opaqueness allows bias to creep in unnoticed. Even beyond bias, consumers and businesses are interested in understanding how AI arrives at its conclusions.

One potential solution for elucidating how AI makes high-stakes decisions is interpretable machine learning. As the name suggests, interpretable machine learning involves creating models whose decision-making process is more understandable than that of black-box models. To become interpretable, these models are designed with additional constraints and input from domain experts. For example, an additional constraint to prevent bias in loan applications would be compliance: the model must adhere to fair lending laws by not discriminating against consumers of a certain race.
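
As an illustration, here is a minimal sketch of an interpretable loan-approval model on synthetic data. All features and numbers are invented; the point is that a plain logistic regression exposes a small set of readable weights, and a protected attribute such as race can be excluded from the feature set by construction (though proxies for it may still require auditing).

```python
# A minimal sketch on an invented toy loan dataset. An interpretable model
# (here, plain logistic regression) shows exactly which factors drive each
# decision, and protected attributes are excluded by construction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
income = rng.normal(50, 15, n)        # hypothetical applicant features
debt_ratio = rng.uniform(0, 1, n)
approved = (income / 50 - debt_ratio + rng.normal(0, 0.3, n) > 0.4).astype(int)

# Protected attributes (e.g., race) are deliberately NOT in the feature set.
X = np.column_stack([income, debt_ratio])
model = LogisticRegression().fit(X, approved)

# Every decision traces back to a small set of human-readable weights.
for name, coef in zip(["income", "debt_ratio"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

Note that dropping the protected attribute is only a starting point: correlated features can act as proxies, so a compliance check would still audit approval rates across groups, as in the error-rate audit sketched earlier.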

While interpretable machine learning models are more time-consuming and expensive to develop due to their increased complexity, the layer of interpretability is well worth it for applications like autonomous vehicles, healthcare, or criminal justice, where errors have serious repercussions. Human nature makes society resistant to change, but more transparent models can begin to make the public and government more receptive to widespread adoption of AI.

Other potential solutions focus on the data the model uses rather than on how the model uses that data. One proposed method involves taking large datasets that would typically remain confidential (e.g., medical data) and releasing them to the public after removing personally identifiable information, the idea being that bias can be spotted and filtered out of these anonymized datasets. Yet this tactic comes with its own risks, as attackers can cross-reference the released records with other data sources to break through the layer of anonymity. At the same time, consciously including underrepresented populations would bolster datasets that lack diversity. Finally, fostering diversity among the engineers who design these algorithms should help fight bias.
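
Below is a hedged sketch of the first step of such a release, using the pandas library. The column names and records are invented for illustration; real de-identification pipelines (e.g., HIPAA Safe Harbor or k-anonymity) go much further, precisely because of the cross-referencing risk described above.

```python
# A hedged sketch of de-identification before public release: drop direct
# identifiers from the records. All column names and rows are invented.
# Quasi-identifiers (like ZIP code + birth date) can still be
# cross-referenced to re-identify people, so this step alone is not enough.
import pandas as pd

records = pd.DataFrame({
    "name":      ["Ann Lee", "Bo Chan"],         # direct identifier
    "ssn":       ["123-45-6789", "987-65-4321"], # direct identifier
    "zip_code":  ["02139", "94110"],             # quasi-identifier: risky!
    "diagnosis": ["asthma", "diabetes"],         # the data worth studying
})

PII_COLUMNS = ["name", "ssn"]
released = records.drop(columns=PII_COLUMNS)
print(released)
```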

As AI rapidly advances, proactively combating bias has to remain a priority.

