News
AI4CCAM at ATRACC 2025!
AI4CCAM will be at the AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC) 2025 symposium, held 6-8 November 2025 in Arlington, USA.
AI systems, including those built on large language and foundation/multi-modal models, have proven their value across human society, rapidly transforming traditional robotics and computational systems into intelligent systems with emergent, and often unanticipated, beneficial behaviors. However, the rapid embrace of AI-based critical systems introduces new classes of errors that increase risk and limit trustworthiness. Designing AI-based critical systems requires proving their trustworthiness; such systems must therefore be assessed across many dimensions, by different parties (researchers, developers, regulators, customers, insurance companies, end-users, etc.) and for different reasons. Trustworthiness should be assessed both at the level of the full system and at the level of individual AI components. At the theoretical and foundational level, such methods must go beyond explainability to deliver uncertainty estimates and formalisms that can bound the limits of the AI, provide traceability, and quantify risk.
The symposium focuses broadly on AI trustworthiness and on methods that help bound fairness, reproducibility, reliability, and accountability in the context of quantifying AI-system risk, spanning the entire AI lifecycle from theoretical research formulations to system implementation, deployment, and operation. It will bring together researchers and practitioners from industry, academia, and government who are vested stakeholders in addressing these challenges in applications where an a priori understanding of risk is critical.
AI4CCAM will present the paper “Rashomon in the Streets: Explanation Ambiguity in Scene Understanding”. Explainable AI (XAI) is essential for validating and trusting models in safety-critical applications such as autonomous driving. However, the reliability of XAI is challenged by the Rashomon effect, where multiple, equally accurate models can offer divergent explanations for the same prediction. The paper provides the first empirical quantification of this effect for the task of action prediction in real-world driving scenes. Using Qualitative Explainable Graphs (QXGs) as a symbolic scene representation, the authors train Rashomon sets from two distinct model classes; the findings suggest that explanation ambiguity is an inherent property of the problem, not just a modeling artifact.
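As a rough illustration of the underlying idea only, and not of the paper's actual pipeline, the sketch below builds a small Rashomon set of near-equally accurate models on synthetic data and measures how much their feature-importance explanations disagree. The dataset, the two model classes, the accuracy tolerance, and the rank-agreement metric are all assumptions chosen for illustration.

```python
# Minimal sketch (illustrative assumptions throughout): quantify explanation
# ambiguity across a "Rashomon set" of near-equally accurate models.
import numpy as np
from scipy.stats import kendalltau
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data (the paper works on real-world driving scenes).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train candidate models from two distinct model classes with different seeds.
candidates = [RandomForestClassifier(random_state=s).fit(X_tr, y_tr) for s in range(5)]
candidates += [GradientBoostingClassifier(random_state=s).fit(X_tr, y_tr) for s in range(5)]

# Keep the Rashomon set: models within a small accuracy tolerance of the best.
accs = np.array([m.score(X_te, y_te) for m in candidates])
rashomon = [m for m, a in zip(candidates, accs) if a >= accs.max() - 0.01]

# Compare global explanations (feature-importance rankings) pairwise.
imps = [m.feature_importances_ for m in rashomon]
taus = [kendalltau(imps[i], imps[j])[0]
        for i in range(len(imps)) for j in range(i + 1, len(imps))]
print(f"{len(rashomon)} near-equivalent models; "
      f"mean explanation rank agreement (Kendall tau): {np.mean(taus):.2f}")
```

Low pairwise agreement among models that perform almost identically is, in this toy setting, the kind of explanation ambiguity the paper quantifies on real driving data.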

