Knowledge Graphs For eXplainable AI

On the Integration of Semantic Technologies and Symbolic Systems into Deep Learning Models for a More Comprehensible Artificial Intelligence

Figure: Schematic representation of an eXplainable AI system that integrates semantic technologies into deep learning models. The traditional pipeline of an AI system is depicted in blue; the Knowledge Matching process of deep learning components with Knowledge Graphs (KGs) and ontologies in orange; and the Cross-Disciplinary and Interactive Explanations enabled by query and reasoning mechanisms in red.

Deep learning models have contributed to unprecedented results in the prediction and classification tasks of Artificial Intelligence (AI) systems. Alongside this notable progress, however, they provide no human-understandable insight into how a specific result was achieved. In contexts where the impact of AI on human life is significant (e.g., recruitment tools, medical diagnoses), explainability is not only a desirable property: it is, or in some cases will soon be, a legal requirement.

Most of the available approaches to implementing eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts who can manipulate the mathematical functions inside deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are the elements of a lingua franca between humans and deep learning models. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while less flexible and less robust to noise than deep learning models, KGs are natively developed to be explainable.

Limits of current XAI and the opportunity of KGs

XAI is the field of research where mathematicians, computer scientists, and software engineers design, develop, and test techniques for making AI systems more transparent and comprehensible to their stakeholders. Most of the approaches developed in this field require very specific technical expertise to manipulate the algorithms that implement the mathematical functions at the roots of deep learning. Moreover, understanding this mathematical scaffolding is not enough to gain insight into the internal workings of a model. To be more understandable, deep-learning-based systems should instead be able to emit and manipulate symbols, enabling explanations to users of how a specific result is achieved.
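As a minimal, hypothetical sketch of what emitting a symbol can mean in practice, the snippet below maps the numeric output of a classifier to an ontology concept identifier that downstream semantic components can manipulate. The class labels and concept URIs are invented for illustration, not taken from the article.

```python
# Minimal sketch: turning a classifier's numeric output into a symbol.
# The labels and concept URIs below are hypothetical placeholders.
import numpy as np

# Hypothetical mapping from output indices to ontology concept URIs.
CLASS_TO_CONCEPT = {
    0: "http://example.org/onto#BenignLesion",
    1: "http://example.org/onto#Melanoma",
}

def emit_symbol(logits: np.ndarray) -> str:
    """Return the ontology concept (a symbol) for the top-scoring class."""
    return CLASS_TO_CONCEPT[int(np.argmax(logits))]

# A semantic component can now reason over this URI instead of an
# opaque class index.
print(emit_symbol(np.array([0.2, 2.3])))  # http://example.org/onto#Melanoma
```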

In the context of symbolic systems, KGs and their underlying semantic technologies are a promising solution for the issue of understandability. In fact, these large networks of semantic entities and relationships provide a useful backbone for several reasoning mechanisms, ranging from consistency checking to causal inference. These reasoning procedures are enabled by ontologies, which provide a formal representation of semantic entities and relationships relevant to a specific sphere of knowledge.
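To make the querying and reasoning idea concrete, here is a small sketch using the rdflib Python library: a toy ontology with a subclass hierarchy, plus a SPARQL query whose property path (rdfs:subClassOf*) performs a lightweight form of subsumption reasoning. All entity and class names are assumptions made for the example.

```python
from rdflib import Graph, Namespace, RDF, RDFS

# Toy ontology and KG; every name here is an invented example.
EX = Namespace("http://example.org/onto#")
g = Graph()
g.bind("ex", EX)

# Ontology axiom: Melanoma is a kind of SkinLesion.
g.add((EX.Melanoma, RDFS.subClassOf, EX.SkinLesion))

# KG fact: a specific entity is typed as Melanoma.
g.add((EX.lesion42, RDF.type, EX.Melanoma))

# The property path rdfs:subClassOf* walks the class hierarchy, so
# lesion42 is retrieved although it is not directly typed as SkinLesion.
query = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex: <http://example.org/onto#>
SELECT ?entity WHERE {
    ?entity rdf:type/rdfs:subClassOf* ex:SkinLesion .
}
"""
for row in g.query(query):
    print(row.entity)  # http://example.org/onto#lesion42
```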

The role of KGs for a better XAI

Implementations of symbolic systems based on semantic technologies are well suited to improving explanations for non-insiders. The input features, the hidden layers and computational units, and the predicted output of a deep learning model can be mapped to entities of KGs or to concepts and relationships of ontologies (knowledge matching). Traditionally, these ontology artifacts are the result of conceptualizations and practices adopted by experts from various disciplines, such as biology, finance, and law. As a consequence, they are very comprehensible to people with expertise in a specific domain (cross-disciplinary explanations), even if those people have no skills in AI technologies. Moreover, KGs and ontologies are natively built to be queried, so they can answer user requests (interactive explanations) and provide a symbolic level at which to interpret the behavior and results of a deep learning model, as sketched below.
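Here is a hypothetical sketch of knowledge matching and an interactive explanation, in the same rdflib style as above: the model's input features are linked to ontology concepts, and a user's "why" question about a prediction is answered with a SPARQL query over those links. The feature names, concepts, and the mapsTo/influencedBy relations are all assumptions for illustration.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/onto#")
g = Graph()
g.bind("ex", EX)

# Knowledge matching (hypothetical): link model input features to
# ontology concepts that a domain expert would recognize.
g.add((EX.feature_asymmetry, EX.mapsTo, EX.LesionAsymmetry))
g.add((EX.feature_border, EX.mapsTo, EX.BorderIrregularity))

# Record which features drove one prediction (e.g., obtained from a
# feature-attribution method); the links here are made up.
g.add((EX.prediction_7, EX.influencedBy, EX.feature_asymmetry))
g.add((EX.prediction_7, EX.influencedBy, EX.feature_border))

# Interactive explanation: "which domain concepts explain prediction_7?"
query = """
PREFIX ex: <http://example.org/onto#>
SELECT ?concept WHERE {
    ex:prediction_7 ex:influencedBy ?feature .
    ?feature ex:mapsTo ?concept .
}
"""
for row in g.query(query):
    print(row.concept)
# http://example.org/onto#LesionAsymmetry
# http://example.org/onto#BorderIrregularity
```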

Starting from these points, there are specific trajectories for future work on XAI: exploiting symbolic techniques to design novel deep neural architectures that natively encode explanations; developing multi-modal explanation models able to provide insights from different perspectives, combining visual and textual artifacts; and defining a common explanation framework, based on KGs and ontologies, for comparing deep learning models and enabling proper validation strategies.

Reference

More information on this topic is available in our journal article entitled “On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI — Three Challenges for Future Research”.

