Argumentative Conversational Agents for Explainable Artificial Intelligence

Author:
  1. Stepin, Ilia
Supervised by:
  1. José M. Alonso Moral (Supervisor)
  2. Alejandro Catalá Bolós (Supervisor)

Defended at: Universidade de Santiago de Compostela

Date of defense: 22 September 2023

Examination committee:
  1. Corrado Mencar (Chair)
  2. Alberto José Bugarín Diz (Secretary)
  3. Zoe Falomir Llansola (Member)
Department:
  1. Departamento de Electrónica e Computación

Type: Doctoral thesis

Abstract

Recent years have witnessed a striking rise in artificial intelligence algorithms that achieve outstanding performance. However, such performance often comes at the expense of explainability. The lack of algorithmic explainability can not only undermine users' trust in the algorithmic output but also lead to adverse consequences. In this thesis, we advocate the use of interpretable rule-based models that can serve both as stand-alone applications and as proxies for black-box models. More specifically, we design an explanation generation framework that produces contrastive, selected, and social explanations for interpretable classifiers (decision trees and rule-based systems). We show that the resulting explanations enhance the effectiveness of AI algorithms while preserving their transparent structure.
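
To make the kind of output concrete, the sketch below shows, under simplifying assumptions, how a factual ("why?") rule and a naive contrastive ("why not?") hint could be read off a trained decision tree. The dataset (iris), the scikit-learn model settings, and the helper names factual_rule and contrastive_hint are illustrative choices for this example only, not the framework developed in the thesis.

```python
# Illustrative sketch only (not the thesis framework): given a trained
# scikit-learn decision tree, build a factual rule ("why?") for one
# instance and search its decision path for a naive contrastive hint
# ("why not?") by flipping a single threshold. Dataset, model settings,
# and helper names are assumptions made for this example.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
X, y = data.data, data.target
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def factual_rule(instance):
    """Collect the conditions tested along the instance's decision path."""
    tree = clf.tree_
    conditions = []
    for node in clf.decision_path(instance.reshape(1, -1)).indices:
        if tree.children_left[node] == -1:   # reached a leaf
            break
        feat, thr = tree.feature[node], tree.threshold[node]
        op = "<=" if instance[feat] <= thr else ">"
        conditions.append(f"{data.feature_names[feat]} {op} {thr:.2f}")
    label = data.target_names[clf.predict(instance.reshape(1, -1))[0]]
    return f"Predicted '{label}' because " + " and ".join(conditions)

def contrastive_hint(instance):
    """Flip one threshold on the decision path and report the first flip
    that changes the predicted class (a naive counterfactual candidate)."""
    tree = clf.tree_
    original = clf.predict(instance.reshape(1, -1))[0]
    for node in clf.decision_path(instance.reshape(1, -1)).indices:
        if tree.children_left[node] == -1:   # leaves test nothing
            break
        feat, thr = tree.feature[node], tree.threshold[node]
        alt = instance.copy()
        alt[feat] = thr + 1e-3 if instance[feat] <= thr else thr - 1e-3
        alt_label = clf.predict(alt.reshape(1, -1))[0]
        if alt_label != original:
            return (f"If {data.feature_names[feat]} were {alt[feat]:.2f}, "
                    f"the model would predict "
                    f"'{data.target_names[alt_label]}' instead.")
    return "No single-threshold change on the path flips the prediction."

print(factual_rule(X[100]))
print(contrastive_hint(X[100]))
```

A full contrastive explanation framework would search for minimal, plausible feature changes and select which conditions to present to the user, rather than flipping a single threshold; the sketch only illustrates the underlying idea of pairing factual and contrastive explanations for an interpretable classifier.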