On the Integration of Semantic Technologies and Symbolic Systems into Deep Learning Models for a More Comprehensible Artificial Intelligence
Deep learning models have contributed to reaching unprecedented results in the prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights into how a specific result was achieved. In contexts where the impact of AI on human life is significant (e.g., recruitment tools, medical diagnoses, etc.), explainability is not only a desirable property; it is already, or in some cases will soon be, a legal requirement.
Most of the available approaches to implementing eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the mathematical functions inside deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are elements of a lingua franca between humans and deep learning. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while less flexible and less robust to noise than deep learning models, KGs are natively designed to be explainable.
Limits of current XAI and the opportunity of KGs
XAI is the field of research in which mathematicians, computer scientists, and software engineers design, develop, and test techniques for making AI systems more transparent and comprehensible to their stakeholders. Most of the approaches developed in this field require very specific technical expertise to manipulate the algorithms that implement the mathematical functions at the roots of deep learning. Moreover, understanding this mathematical scaffolding is not enough to gain insight into the internal workings of the models. To be more understandable, deep-learning-based systems should be able to emit and manipulate symbols, enabling explanations addressed to users on how a specific result is achieved.
In the context of symbolic systems, KGs and their underlying semantic technologies are a promising solution to the issue of understandability. These large networks of semantic entities and relationships provide a useful backbone for several reasoning mechanisms, ranging from consistency checking to causal inference. These reasoning procedures are enabled by ontologies, which provide a formal representation of the semantic entities and relationships relevant to a specific sphere of knowledge.
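As a concrete illustration of how such a reasoning mechanism can look in practice, the following minimal sketch builds a toy knowledge graph with the rdflib library and performs a hand-rolled consistency check, flagging any individual asserted to belong to two classes that are declared disjoint. The ontology, the entity names, and the namespace are hypothetical and serve only to illustrate the idea; they are not taken from the original article.

```python
# Minimal, hand-rolled consistency check over a toy knowledge graph.
# All class and individual names below are hypothetical.
from rdflib import Graph, Namespace, RDF, OWL

EX = Namespace("http://example.org/onto#")
g = Graph()
g.bind("ex", EX)

# Tiny ontology: Benign and Malignant are declared to be disjoint classes.
g.add((EX.Benign, RDF.type, OWL.Class))
g.add((EX.Malignant, RDF.type, OWL.Class))
g.add((EX.Benign, OWL.disjointWith, EX.Malignant))

# Instance data, e.g. obtained by mapping a model's outputs to KG entities.
g.add((EX.lesion_42, RDF.type, EX.Benign))
g.add((EX.lesion_42, RDF.type, EX.Malignant))  # contradictory assertion

# SPARQL query: find individuals typed with two classes declared disjoint.
check = """
SELECT ?ind ?c1 ?c2 WHERE {
    ?c1 owl:disjointWith ?c2 .
    ?ind a ?c1 .
    ?ind a ?c2 .
}
"""
for ind, c1, c2 in g.query(check, initNs={"owl": OWL}):
    print(f"Inconsistency: {ind} is asserted as both {c1} and {c2}")
```

A full OWL reasoner would derive many more consequences, but even this small query shows how symbolic assertions can be checked mechanically and reported back in terms a domain expert can read.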
The role of KGs for a better XAI
Symbolic systems implemented with semantic technologies are well suited to improving explanations for non-experts. The input features, hidden layers and computational units, and predicted output of deep learning models can be mapped onto entities of KGs or onto concepts and relationships of ontologies (knowledge matching). Traditionally, these ontology artifacts are the result of conceptualizations and practices adopted by experts from various disciplines, such as biology, finance, and law. As a consequence, they are highly comprehensible to people with expertise in a specific domain (cross-disciplinary explanations), even if those people have no skills in AI technologies. Moreover, KGs and ontologies are natively built to be queried; they can therefore answer user requests (interactive explanations) and provide a symbolic layer for interpreting the behavior and the results of a deep learning model.
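To make the idea of knowledge matching and interactive explanations more tangible, here is a minimal sketch, again using rdflib, in which a classifier's numeric output is mapped onto an entity of a small, hypothetical KG, and the KG is then queried to compose a human-readable explanation. The triples, the index-to-entity mapping, and all names are illustrative assumptions, not part of the original article.

```python
# Illustrative sketch of knowledge matching + interactive explanations.
# The KG content and the index-to-entity mapping are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/onto#")
g = Graph()

# A toy slice of a domain KG, written by hand for the example.
g.add((EX.Melanoma, RDF.type, RDFS.Class))
g.add((EX.Melanoma, RDFS.subClassOf, EX.SkinCancer))
g.add((EX.Melanoma, RDFS.comment,
       Literal("A malignant tumour arising from melanocytes.")))
g.add((EX.SkinCancer, RDFS.label, Literal("skin cancer")))

# Hypothetical mapping from the classifier's output indices to KG entities
# (the "knowledge matching" step).
class_index_to_entity = {0: EX.Benign, 1: EX.Melanoma}

def explain(predicted_index: int) -> str:
    """Turn a raw class index into a symbolic, human-readable explanation."""
    entity = class_index_to_entity[predicted_index]
    comment = g.value(entity, RDFS.comment)    # textual definition, if any
    parent = g.value(entity, RDFS.subClassOf)  # broader concept, if any
    parent_label = g.value(parent, RDFS.label) if parent else None
    text = f"The model predicted {entity.split('#')[-1]}"
    if parent_label:
        text += f", a kind of {parent_label}"
    if comment:
        text += f": {comment}"
    return text

print(explain(1))
# -> The model predicted Melanoma, a kind of skin cancer:
#    A malignant tumour arising from melanocytes.
```

The same graph could answer follow-up questions (for example, which other conditions share the same superclass), which is what makes the explanation interactive rather than a fixed caption.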
Starting from these points, there are specific trajectories for future work on XAI, including: the exploitation of symbolic techniques to design novel deep neural architectures that natively encode explanations; the development of multi-modal explanation models able to provide insights from different perspectives, combining visual and textual artifacts; and the definition of a common explanation framework for comparing deep learning models, based on KGs and ontologies, to enable proper validation strategies.
Reference
More information on this topic is available in our journal article entitled "On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI — Three Challenges for Future Research".