


xAI 2023: Lisbon, Portugal
- Luca Longo: Explainable Artificial Intelligence - First World Conference, xAI 2023, Lisbon, Portugal, July 26-28, 2023, Proceedings, Part III. Communications in Computer and Information Science 1903, Springer 2023, ISBN 978-3-031-44069-4
xAI for Time Series and Natural Language Processing
- Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger: Opening the Black Box: Analyzing Attention Weights and Hidden States in Pre-trained Language Models for Non-language Tasks. 3-25
- Milan Bhan, Nina Achache, Victor Legrand, Annabelle Blangero, Nicolas Chesneau: Evaluating Self-attention Interpretability Through Human-Grounded Experimental Protocol. 26-46
- Sargam Yadav, Abhishek Kaushik, Kevin McDaid: Understanding Interpretability: Explainable AI Approaches for Hate Speech Classifiers. 47-70
- Van Bach Nguyen, Jörg Schlötterer, Christin Seifert: From Black Boxes to Conversations: Incorporating XAI in a Conversational Agent. 71-96
- Muhammad Deedahwar Mazhar Qureshi, Muhammad Atif Qureshi, Wael Rashwan: Toward Inclusive Online Environments: Counterfactual-Inspired XAI for Detecting and Interpreting Hateful and Offensive Tweets. 97-119
- Amir Miraki, Austeja Dapkute, Vytautas Siozinys, Martynas Jonaitis, Reza Arghandeh: Causal-Based Spatio-Temporal Graph Neural Networks for Industrial Internet of Things Multivariate Time Series Forecasting. 120-130
- Carlos Gómez-Tapia, Bojan Bozic, Luca Longo: Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification. 131-152
- Yiran Huang, Chaofan Li, Hansen Lu, Till Riedel, Michael Beigl: State Graph Based Explanation Approach for Black-Box Time Series Model. 153-164
- Udo Schlegel, Daniel A. Keim: A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI. 165-180
Human-Centered Explanations and xAI for Trustworthy and Responsible AI
- Ivania Donoso-Guzmán, Jeroen Ooge, Denis Parra, Katrien Verbert: Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI. 183-204
- Giulia Vilone, Luca Longo: Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods. 205-232
- Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, Pietro Liò: Concept Distillation in Graph Neural Networks. 233-255
- Lutz Terfloth, Michael Erol Schaffer, Heike M. Buhl, Carsten Schulte: Adding Why to What? Analyses of an Everyday Explanation. 256-279
- Ulrike Kuhl, André Artelt, Barbara Hammer: For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI. 280-300
- Tobias M. Peters, Roel W. Visser: The Importance of Distrust in AI. 301-317
- Giacomo De Bernardi, Sara Narteni, Enrico Cambiaso, Marco Muselli, Maurizio Mongelli: Weighted Mutual Information for Out-Of-Distribution Detection. 318-331
- Alessandro Castelnovo, Nicole Inverardi, Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica, Andrea Seveso: Leveraging Group Contrastive Explanations for Handling Fairness. 332-345
- Andres Algaba, Carmen Mazijn, Carina Prunkl, Jan Danckaert, Vincent Ginis: LUCID-GAN: Conditional Generative Models to Locate Unfairness. 346-367
Explainable and Interpretable AI with Argumentation, Representational Learning and Concept Extraction for xAI
- Nicoletta Prentzas, Constantinos S. Pattichis, Antonis C. Kakas: Explainable Machine Learning via Argumentation. 371-398
- Lucas Rizzo: A Novel Structured Argumentation Framework for Improved Explainability of Classification Tasks. 399-414
- Stephan Wäldchen: Hardness of Deceptive Certificate Selection. 415-427
- Alexandre Goossens, Jan Vanthienen: Integrating GPT-Technologies with Decision Models for Explainability. 428-448
- Eric Yeh, Pedro Sequeira, Jesse Hostetler, Melinda T. Gervasio: Outcome-Guided Counterfactuals from a Jointly Trained Generative Latent Space. 449-469
- Anastasia Natsiou, Seán O'Leary, Luca Longo: An Exploration of the Latent Space of a Convolutional Variational Autoencoder for the Generation of Musical Instrument Tones. 470-486
- Daisuke Yasui, Hiroshi Sato, Masao Kubo: Improving Local Fidelity of LIME by CVAE. 487-511
- Andres Felipe Posada-Moreno, Kai Müller, Florian Brillowski, Friedrich Solowjow, Thomas Gries, Sebastian Trimpe: Scalable Concept Extraction in Industry 4.0. 512-535
