Biasing Rule-Based Explanations Towards User Preferences

Bibliographic Details
Main Authors: Parisa Mahya, Johannes Fürnkranz
Format: Article
Language: English
Published: MDPI AG 2025-06-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/16/7/535
Description
Summary: With the growing prevalence of Explainable AI (XAI), the effectiveness, transparency, usefulness, and trustworthiness of explanations have come into focus. However, recent work in XAI often still falls short in integrating human knowledge and preferences into the explanatory process. In this paper, we aim to bridge this gap by proposing a novel method that personalizes rule-based explanations to the needs of different users based on their expertise and background knowledge, formalized as a set of weighting functions over a knowledge graph. While we assume that user preferences are provided as a weighting function, our focus is on generating explanations tailored to the user's background knowledge. The method transforms rule-based interpretable models into personalized explanations that respect user preferences regarding the granularity of knowledge. Evaluation on multiple datasets demonstrates that the generated explanations align more closely with simulated user preferences than non-personalized explanations do.
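
To make the summarized idea concrete, the following is a minimal Python sketch of personalization as a user weighting function over a knowledge graph that steers the granularity of rule conditions. It is an illustration under stated assumptions, not the authors' implementation: the toy concept hierarchy, the generalizations helper, and the greedy per-condition choice are all hypothetical.

    # Illustrative sketch only: a user-supplied weighting function over
    # knowledge-graph concepts selects the granularity of each rule condition.

    # Toy concept hierarchy: child concept -> parent (coarser) concept.
    KG_PARENT = {
        "golden_retriever": "dog",
        "dog": "mammal",
        "mammal": "animal",
    }

    def generalizations(concept):
        """Yield the concept and all of its coarser ancestors in the hierarchy."""
        while concept is not None:
            yield concept
            concept = KG_PARENT.get(concept)

    def personalize(rule, user_weight):
        """Replace each condition's concept with the generalization the user weights highest.

        `rule` is a list of concept names; `user_weight` maps a concept to a
        preference score, mirroring the paper's weighting functions over the
        knowledge graph.
        """
        return [max(generalizations(c), key=user_weight) for c in rule]

    # Example: a simulated low-expertise user who prefers coarse, familiar concepts.
    weights = {"golden_retriever": 0.1, "dog": 0.9, "mammal": 0.5, "animal": 0.3}
    rule = ["golden_retriever"]
    print(personalize(rule, lambda c: weights.get(c, 0.0)))  # -> ['dog']

With a different weighting function (e.g., an expert who scores specific concepts highest), the same rule would be rendered at a finer granularity, which is the personalization effect the abstract describes.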
ISSN:2078-2489