News

Combining XAI and semiotics to interpret hallucinations in deep generative models: a new AI4CCAM paper

Clément Arlotti and Kevin Pasini, IRT SystemX, will present the AI4CCAM paper “Combining XAI and semiotics to interpret hallucinations in deep generative models” on 19 September at the Human and Artificial Rationalities (HAR) conference. The conference focuses on comparing human and artificial rationalities, investigating how they interact in practice as well as the theoretical and ethical aspects of rationality, from three main perspectives: Philosophy, Psychology, and Computer Science.
About the paper
Deep Generative Models (DGMs) are increasingly used across application sectors, as they make it possible to automate the production of image, text or video content. However, their operation suffers from a major drawback: they are prone to so-called “hallucinations”, i.e. they may generate plausible yet factually incoherent outputs that lack proper understanding of context. Characterizing and mitigating hallucinations is therefore essential for DGM deployment, but it presents an important pitfall: assessing whether an output is coherent and compliant with a given context is a non-univocal, ambiguous task that remains open to interpretation. As a consequence, existing hallucination taxonomies are application-dependent and model-specific.
The paper is part of AI4CCAM WP2, which tackled the question of data augmentation to enrich driving simulations. However, when new synthetic data are produced with deep generative models (DGMs), the question of hallucinations arises: how can one validate or discard these new data, knowing that DGMs can produce plausible but unfaithful data? This work develops technical and conceptual tools to draw the line between relevant generated data and hallucinations.
Check the AI4CCAM library and read the full paper!