The Four Components of Trusted Artificial Intelligence

Building trust into AI systems is hard. How about establishing a factsheet for AI systems?

Trust and transparency are at the forefront of conversations related to artificial intelligence (AI) these days. While we intuitively understand the idea of trusting AI agents, we are still trying to figure out the specific mechanics for translating trust and transparency into programmatic constructs. After all, what does trust mean in the context of an AI system?

Trust is a foundational building block of human socio-economic dynamics. In software development, over the last few decades, we have steadily built mechanisms for asserting trust in specific applications. When we board planes that fly on autopilot, or ride in cars driven entirely by robots, we are implicitly expressing trust in the creators of a specific software application. In software, trust mechanisms are fundamentally based on the deterministic nature of most applications: their behavior is uniquely determined by the code workflow, which makes it intrinsically predictable. The non-deterministic nature of artificial intelligence (AI) systems breaks the pattern of traditional software applications and introduces new dimensions to enabling trust in AI agents. One of the most viable ideas proposed for establishing trust in AI systems came from IBM Research, in a well-known research paper published over a year ago.

Trust is a dynamic derived from the process of minimizing risk. In software development, trust is built through mechanisms such as testability, auditability, documentation and many other elements that help establish the reputation of a piece of software. While all those mechanisms are relevant to AI systems, they are notoriously difficult to implement. In traditional software applications, behavior is dictated by explicit rules expressed in the code; in the case of AI agents, behavior is based on knowledge that evolves over time. The former approach is deterministic and predictable; the latter is non-deterministic and difficult to understand.

If we accept that AI is going to be a relevant part of our future, it is important to establish the foundations of trust in AI systems. Today, we regularly rely on AI models without having a clear understanding of their capabilities, knowledge or training processes. The concept of trust in AI systems remains highly subjective and hasn’t been incorporated as part of popular machine learning frameworks or platforms. What is AI trust and how can we measure it?

The Four Pillars of Trusted AI

Trust in human interaction is based not only on our interpretation of specific actions but also on social knowledge built over centuries. We understand that a behavior is discriminatory not only by judging it in real time but also by factoring in the socially accepted understanding that discrimination is degrading to human beings. How can we extrapolate these ideas to the world of artificial intelligence (AI)? In their paper, the IBM team proposed four fundamental pillars of trusted AI:

· Fairness: AI systems should use training data and models that are free of bias, to avoid the unfair treatment of certain groups.

· Robustness: AI systems should be safe and secure, not vulnerable to tampering or to having the data they are trained on compromised.

· Explainability: AI systems should provide decisions or suggestions that can be understood by their users and developers.

· Lineage: AI systems should include details of their development, deployment, and maintenance so they can be audited throughout their lifecycle.

Fairness

AI fairness is typically associated with minimizing bias in AI agents. Bias can be described as a mismatch between the training data distribution and a desired fair distribution. Unwanted bias in training data can produce unfair outcomes. Establishing tests for identifying, curating and minimizing bias in training datasets should be a key element of establishing fairness in AI systems. Naturally, fairness matters most in AI applications with tangible social impact, such as credit scoring or legal applications.
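One common bias test is the statistical parity difference: the gap in favorable-outcome rates between an unprivileged and a privileged group. The sketch below is a minimal illustration in Python; the function name and toy data are hypothetical, and toolkits such as IBM's AI Fairness 360 provide this and many related metrics.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Gap in favorable-outcome rates between two groups.

    y_pred: binary predictions (1 = favorable outcome, e.g. loan approved)
    group:  binary protected attribute (1 = privileged group)
    A value near 0 suggests the model treats both groups similarly.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_unprivileged = y_pred[group == 0].mean()
    rate_privileged = y_pred[group == 1].mean()
    return rate_unprivileged - rate_privileged

# Toy example: approvals from a hypothetical credit model
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(y_pred, group))  # -0.5 -> strong disparity
```

A test like this can run over every training or scoring batch, flagging the model for review whenever the disparity exceeds a policy threshold.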

Explainability

Understanding how AI models arrive at specific decisions is another key principle of trusted AI. Arriving at meaningful explanations of what an AI model knows reduces uncertainty and helps quantify its accuracy. While explainability might seem an obvious way to improve trust in AI systems, its implementation is far from trivial. There is a natural tradeoff between the explainability of AI models and their accuracy: highly explainable models tend to be very simple and, therefore, not very accurate. From that perspective, striking the right balance between explainability and accuracy is essential to improving trust in an AI model.
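To see the tradeoff in practice, the sketch below (using scikit-learn, an assumption rather than anything prescribed by the paper) compares a depth-2 decision tree that a human can read end to end against a random forest that is typically more accurate but opaque.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Highly explainable: a depth-2 tree whose few rules can be printed and audited.
simple = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

# Usually more accurate but opaque: an ensemble of hundreds of trees.
forest = RandomForestClassifier(n_estimators=300).fit(X_train, y_train)

print("interpretable tree:", simple.score(X_test, y_test))
print("random forest:     ", forest.score(X_test, y_test))
```

On most datasets the forest scores higher, which is exactly the gap a trusted-AI process has to weigh against the value of a human-readable model.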

Robustness

The concept of AI robustness is determined by two underlying factors: safety and security.

Safety

An AI system might be fair and explainable but still unsafe to use. AI safety is typically associated with the ability of an AI model to build knowledge that incorporates societal norms, policies, or regulations that correspond to well-established safe behaviors. Increasing the safety of AI models is another key element of trusted AI systems.

Security

AI models are highly susceptible to all sorts of attacks, including many based on adversarial AI methods. The accuracy of AI models is directly correlated with their vulnerability to small perturbations in their inputs. That relationship is often exploited by malicious actors, who can alter specific inputs in order to influence the behavior of an AI model. Testing and benchmarking AI models against adversarial attacks is key to establishing trust in AI systems. IBM has been doing some interesting work in this area.
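To make this concrete, here is a minimal sketch of a fast-gradient-sign-style perturbation against a linear classifier, in plain numpy and scikit-learn. The model, data and epsilon are illustrative, not from the IBM paper; libraries such as IBM's Adversarial Robustness Toolbox implement attacks of this kind at production scale.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def fgsm_step(x, y_true, eps=0.5):
    """One fast-gradient-sign step: nudge x in the direction that increases
    the log-loss on the true label. For logistic regression the input
    gradient has the closed form (p - y) * w."""
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    grad = (p - y_true) * model.coef_[0]
    return x + eps * np.sign(grad)

x0, y0 = X[0], y[0]
x_adv = fgsm_step(x0, y0)
# The per-feature change is small, yet confidence in the true label drops.
print("P(true class) before:", model.predict_proba(x0.reshape(1, -1))[0, y0])
print("P(true class) after: ", model.predict_proba(x_adv.reshape(1, -1))[0, y0])
```

Benchmarking how far accuracy falls as eps grows is one concrete way to turn "robustness" into a number a factsheet can report.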

Lineage

AI models are constantly evolving, which makes it challenging to trace their history. Establishing and tracking the provenance of training datasets, hyperparameter configurations and other metadata artifacts over time is important to establishing the lineage of an AI model. Understanding the lineage of AI models helps us establish trust from a historical perspective, which is difficult to achieve by factoring in fairness, explainability and robustness alone.
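As an illustration, a lineage record could be as simple as a structured entry capturing the dataset fingerprint, hyperparameters and timestamp of each training run. The schema below is hypothetical; production metadata stores track far more.

```python
import hashlib, json, time
from dataclasses import dataclass, field, asdict

@dataclass
class LineageRecord:
    model_name: str
    dataset_sha256: str          # fingerprint of the exact training data used
    hyperparameters: dict
    created_at: float = field(default_factory=time.time)

# Hash the raw training bytes so a later audit can verify what was used.
train_bytes = b"age,income,label\n34,52000,1\n"   # stand-in for the real dataset
record = LineageRecord(
    model_name="credit-scorer-v3",                # hypothetical model name
    dataset_sha256=hashlib.sha256(train_bytes).hexdigest(),
    hyperparameters={"learning_rate": 0.01, "max_depth": 6},
)
print(json.dumps(asdict(record), indent=2))       # append to an audit log
```

Appending one such record per run yields an auditable history of exactly which data and configuration produced each version of a model.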

A Factsheet for AI Systems

The subject of disclosures and transparency in AI systems is a nascent area of research, but one that is key to the mainstream adoption of AI. Just as we use information sheets for hardware appliances or nutrition labels on foods, we should consider establishing a factsheet for AI models. In their paper, IBM proposes a Supplier’s Declaration of Conformity (SDoC, or factsheet for short) that helps provide information about the four key pillars of trusted AI. IBM’s SDoC methodology should help answer basic questions about AI models, such as the following:

· Does the dataset used to train the service have a datasheet or data statement?

· Were the dataset and model checked for biases? If “yes,” describe the bias policies that were checked, the bias-checking methods, and the results.

· Was any bias mitigation performed on the dataset? If “yes” describe the mitigation method.

· Are the algorithm’s outputs explainable/interpretable? If yes, explain how explainability is achieved (e.g. a directly explainable algorithm, local explainability, explanations via examples).

· Describe the testing methodology.

· Was the service checked for robustness against adversarial attacks? If “yes” describe robustness policies that were checked, checking methods, and results.

· Is usage data from service operations retained/stored/kept?

The idea of establishing a factsheet for AI models is as simple as it is relevant to establishing trusted AI systems. Some deep learning frameworks have been exploring building blocks of SDoC as part of their programming models in order to enable better levels of transparency. Enabling trust in deep learning systems is going to be a long journey, but concepts such as IBM’s SDoC are a welcome step in the right direction.
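As a sketch of what such a factsheet might look like programmatically, the snippet below encodes answers to the questions above as structured metadata that could be published alongside a model. The schema and every answer are illustrative, not IBM's actual SDoC format.

```python
import json

# Hypothetical factsheet for a hypothetical service; fields mirror the
# SDoC-style questions listed above.
factsheet = {
    "service": "sentiment-analysis-v2",
    "dataset_has_datasheet": True,
    "bias_check": {
        "performed": True,
        "methods": ["statistical parity difference"],
        "results": "disparity within 0.05 across protected groups",
    },
    "bias_mitigation": {"performed": False, "method": None},
    "explainability": {
        "provided": True,
        "how": "local explanations via examples",
    },
    "adversarial_robustness_check": {
        "performed": True,
        "methods": ["FGSM benchmark"],
        "results": "accuracy drops 12% at eps=0.1",
    },
    "usage_data_retained": False,
}
print(json.dumps(factsheet, indent=2))  # publish alongside the model
```

A machine-readable factsheet like this could be validated in CI and versioned with the model, keeping the disclosure in sync with what was actually shipped.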

