Argumentative Conversational Agents for Explainable Artificial Intelligence

  1. Stepin, Ilia
Supervised by:
  1. José M. Alonso Moral (Supervisor)
  2. Alejandro Catalá Bolós (Supervisor)

Defended at: Universidade de Santiago de Compostela

Date of defense: 22 September 2023

Examination committee:
  1. Corrado Mencar (Chair)
  2. Alberto José Bugarín Diz (Secretary)
  3. Zoe Falomir Llansola (Member)
Department:
  1. Departamento de Electrónica y Computación

Type: Thesis

Abstract

Recent years have witnessed a striking rise of artificial intelligence algorithms that achieve outstanding performance. However, such performance often comes at the expense of explainability. Not only can the lack of algorithmic explainability undermine the user's trust in the algorithmic output, but it can also lead to adverse consequences. In this thesis, we advocate the use of interpretable rule-based models that can serve both as stand-alone applications and as proxies for black-box models. More specifically, we design an explanation generation framework that outputs contrastive, selected, and social explanations for interpretable classifiers (decision trees and rule-based models). We show that the resulting explanations enhance the effectiveness of AI algorithms while preserving their transparent structure.
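
As a rough illustration of the kind of factual/contrastive explanations the abstract refers to (this is not the thesis framework itself), the sketch below extracts the rule fired by a scikit-learn decision tree for a single instance and pairs it with a naive contrastive hint obtained by negating the final split on the decision path. The dataset, function names, and the choice of instance are illustrative assumptions.

```python
# Minimal sketch, assuming a scikit-learn decision tree; not the method
# proposed in the thesis, only an illustration of factual vs. contrastive rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)


def path_conditions(model, feature_names, x):
    """Collect the threshold conditions along the path taken by instance x."""
    t = model.tree_
    node = 0
    conditions = []
    while t.children_left[node] != -1:  # -1 marks a leaf node
        feat, thr = t.feature[node], t.threshold[node]
        if x[feat] <= thr:
            conditions.append(f"{feature_names[feat]} <= {thr:.2f}")
            node = t.children_left[node]
        else:
            conditions.append(f"{feature_names[feat]} > {thr:.2f}")
            node = t.children_right[node]
    return conditions


x = data.data[84]  # an arbitrary instance, chosen only for illustration
conditions = path_conditions(clf, data.feature_names, x)
predicted = data.target_names[clf.predict([x])[0]]

# Factual explanation: the conjunction of conditions that fired for this instance.
print(f"Predicted class: {predicted}")
print("Because:", " AND ".join(conditions))

# Naive contrastive hint: negating the last split sends the instance down a
# different branch, and thus possibly toward a different class.
print("It would follow a different branch if: NOT (" + conditions[-1] + ")")
```

A full framework would additionally select which conditions to present (selected explanations) and phrase them in a dialogical, user-oriented form (social explanations); the snippet only shows the rule-extraction step on which such explanations could be built.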