Knowledge Graphs For eXplainable AI


On the Integration of Semantic Technologies and Symbolic Systems into Deep Learning Models for a More Comprehensible Artificial Intelligence


Schematic representation of an eXplainable AI system that integrates semantic technologies into deep learning models. The traditional pipeline of an AI system is shown in blue. The Knowledge Matching process that links deep learning components with Knowledge Graphs (KGs) and ontologies is shown in orange. The Cross-Disciplinary and Interactive Explanations enabled by query and reasoning mechanisms are shown in red.

Deep learning models have contributed to unprecedented results in the prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights into how a specific result was achieved. In contexts where the impact of AI on human life is significant (e.g., recruitment tools, medical diagnoses), explainability is not only a desirable property: it is, or in some cases will soon be, a legal requirement.

Most of the available approaches to implementing eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the mathematical functions inside deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are the elements of a lingua franca between humans and deep learning models. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while they are less flexible and less robust to noise than deep learning models, KGs are natively developed to be explainable.

Limits of current XAI and the opportunity of KGs

XAI is the field of research where mathematicians, computer scientists, and software engineers design, develop, and test techniques for making AI systems more transparent and comprehensible to their stakeholders. Most of the approaches developed in this field require very specific technical expertise to manipulate the algorithms that implement the mathematical functions at the roots of deep learning. Moreover, understanding this mathematical scaffolding is not enough to gain insight into a model's internal workings. To be more understandable, deep-learning-based systems should be able to emit and manipulate symbols, enabling explanations to users of how a specific result is achieved.

In the context of symbolic systems, KGs and their underlying semantic technologies are a promising solution to the issue of understandability. These large networks of semantic entities and relationships provide a useful backbone for several reasoning mechanisms, ranging from consistency checking to causal inference. These reasoning procedures are enabled by ontologies, which provide a formal representation of the semantic entities and relationships relevant to a specific sphere of knowledge.
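As a minimal illustration of the kind of inference an ontology makes possible, the sketch below (Python with the rdflib library; the class names, instance, and URIs are hypothetical, not taken from the article) encodes a tiny subclass hierarchy and uses a SPARQL property path to recover every class an instance belongs to, directly or by inheritance:

```python
# A sketch only: a toy ontology with hypothetical URIs, queried with rdflib.
# The property path rdfs:subClassOf* performs a simple form of taxonomic
# reasoning by climbing the class hierarchy declared in the ontology.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/onto#")  # hypothetical namespace
g = Graph()

# Ontology fragment: Melanoma is a kind of SkinDisease, which is a Disease.
g.add((EX.SkinDisease, RDFS.subClassOf, EX.Disease))
g.add((EX.Melanoma, RDFS.subClassOf, EX.SkinDisease))

# KG fact: one specific diagnosis is typed as Melanoma.
g.add((EX.diagnosis42, RDF.type, EX.Melanoma))

# Which classes does the diagnosis belong to, directly or by inheritance?
query = """
SELECT ?cls WHERE {
  ex:diagnosis42 a/rdfs:subClassOf* ?cls .
}
"""
for row in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.cls)  # Melanoma, SkinDisease, Disease
```

Richer mechanisms such as consistency checking would normally be delegated to an OWL reasoner rather than hand-written queries; the point here is only that the ontology itself carries the logic needed to justify an answer.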

The role of KGs for a better XAI

Implementations of symbolic systems based on semantic technologies are well suited to improving explanations for non-insiders. The input features, hidden layers and computational units, and predicted output of deep learning models can be mapped onto entities of KGs or onto concepts and relationships of ontologies (knowledge matching). Traditionally, these ontology artifacts are the result of conceptualizations and practices adopted by experts from various disciplines, such as biology, finance, and law. As a consequence, they are very comprehensible to people with expertise in a specific domain (cross-disciplinary explanations), even if those people have no skills in AI technologies. Moreover, in the context of semantic technologies, KGs and ontologies are natively built to be queried: they can therefore answer user requests (interactive explanations) and provide a symbolic level at which to interpret the behavior and results of a deep learning model.
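To make the knowledge-matching and interactive-explanation ideas concrete, here is a minimal sketch under assumed names (the label-to-entity mapping, the typicalFeature property, and all URIs are hypothetical): the class predicted by a deep model is matched to a KG entity, and a query over that entity returns facts a domain expert can read as an explanation.

```python
# A sketch only: "knowledge matching" as a hand-curated map from model output
# classes to KG entities, plus a SPARQL query that surfaces related facts as a
# user-facing explanation. All URIs, labels, and properties are hypothetical.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/kg#")
kg = Graph()
kg.add((EX.Melanoma, EX.typicalFeature, EX.IrregularBorder))
kg.add((EX.Melanoma, EX.typicalFeature, EX.AsymmetricShape))
kg.add((EX.Nevus, EX.typicalFeature, EX.UniformColor))

# Knowledge matching: output index of the classifier -> KG entity.
label_to_entity = {0: EX.Nevus, 1: EX.Melanoma}

def explain(predicted_class: int) -> list:
    """Return KG facts about the entity matched to the model's prediction."""
    entity = label_to_entity[predicted_class]
    rows = kg.query(
        "SELECT ?feature WHERE { ?entity ex:typicalFeature ?feature . }",
        initNs={"ex": EX},
        initBindings={"entity": entity},
    )
    return [str(row.feature) for row in rows]

# If the deep model predicts class 1, the user sees matched domain facts
# instead of raw activations.
print(explain(1))
```

In a real system the mapping would cover input features and hidden units as well as outputs, but the design choice is the same: the deep model stays unchanged, and the KG supplies the queryable, human-readable layer on top of it.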

Starting from these points, there are specific trajectories for future work on XAI, including: the exploitation of symbolic techniques to design novel deep neural architectures that natively encode explanations; the development of multi-modal explanation models able to provide insights from different perspectives, combining visual and textual artifacts; and the definition of a common explanation framework, based on KGs and ontologies, for comparing deep learning models and enabling proper validation strategies.

Reference

More information on this topic is available in our journal article, “On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI — Three Challenges for Future Research”.

