Editor’s note: The Towards Data Science podcast’s “Climbing the Data Science Ladder” series is hosted by Jeremie Harris. Jeremie helps run a data science mentorship startup called SharpestMinds. You can listen to the podcast below:
Most of us believe that decisions that affect us should be reached by following a reasoning process that combines data we trust with a logic that we find acceptable.
As long as human beings are making these decisions, we can probe at that reasoning to find out whether we agree with it. We can ask why we were denied that bank loan, or why a judge handed down a particular sentence, for example.
But today, machine learning is automating away more and more of these important decisions. Our lives are increasingly governed by decision-making processes that we can’t interrogate or understand. Worse, machine learning algorithms can exhibit bias or make serious mistakes, so a world run by algorithms risks becoming a dystopian black-box-ocracy, potentially a worse outcome than even the most imperfect human-designed systems we have today.
That’s why AI ethics and AI safety have drawn so much attention in recent years, and why I was so excited to talk to Alayna Kennedy, a data scientist at IBM whose work is focused on the ethics of machine learning, and the risks associated with ML-based decision-making. Alayna has consulted with key players in the US government’s AI effort, and has expertise applying machine learning in industry as well, through previous work on neural network modelling and fraud detection.
Here are some of my biggest take-homes from the conversation:
- Machine learning practice comes with a handful of “standard” loss functions and evaluation metrics that everyone has agreed “work pretty well” (e.g. accuracy, AUC score, categorical cross-entropy, etc.). Unfortunately, the fact that we’ve settled on these standard metrics can make it tempting to stop thinking critically about what’s being optimized. Sometimes the model with the best accuracy or best F1 score only reaches that level of performance by sacrificing other things we should care about too (see the sketch after this list). Our tendency to go on autopilot and accept “standard” metrics because they’re standard can lead to dangerous outcomes.
- One of the biggest challenges with AI ethics is that we haven’t even come close to working out human ethics yet. That means we’re having to hard-code rules that we can’t even agree on into models whose reasoning we can’t even audit.
- Despite the lack of broad consensus on key ethical questions, many national governments have worked out ethical frameworks that are remarkably consistent.
- An area of AI safety that’s much less emphasized today is the risk of runaway artificial general intelligence; most of our attention on AI safety is directed at more immediate and practical concerns. Alayna and I disagreed about whether or not this is a good thing. Where you stand on this question depends on how likely you think AGI is to be developed in the near- or medium-term (I think it’s uncomfortably probable, while Alayna disagrees).
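To make the first point about standard metrics concrete, here is a minimal sketch, my own illustration rather than anything from the episode, using entirely synthetic data and hypothetical subgroups. It shows how a model’s headline accuracy can look healthy while one subgroup quietly absorbs most of the errors:

```python
# A toy illustration (synthetic data, hypothetical groups) of how a single
# "standard" metric can hide what a model sacrifices.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic ground-truth labels for a majority group A and a minority group B.
group = np.array(["A"] * 900 + ["B"] * 100)
y_true = rng.integers(0, 2, size=1000)

# A hypothetical classifier that is wrong 8% of the time on group A
# but 40% of the time on group B.
flip = np.where(group == "A",
                rng.random(1000) < 0.08,
                rng.random(1000) < 0.40)
y_pred = np.where(flip, 1 - y_true, y_true)

print(f"Overall accuracy: {accuracy_score(y_true, y_pred):.3f}")
for g in ("A", "B"):
    mask = group == g
    print(f"Group {g} accuracy: {accuracy_score(y_true[mask], y_pred[mask]):.3f}")

# By construction the overall accuracy sits near 0.89, a number many teams
# would accept on autopilot, while group B fares dramatically worse. The
# headline metric never reveals the disparity unless you slice by group.
```

Nothing in this sketch is specific to accuracy; the same masking happens with F1, AUC, or any other single aggregate score, which is why auditing performance across subpopulations matters.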
You can follow Alayna on Twitter here and you can follow me on Twitter here.