The AI4CCAM Participatory Space continues its work, and the focus now moves to the VRU–CAV Virtual Reality Interaction Experiment on User Acceptance.

The User Acceptance Questionnaire for Use Case 3 of the AI4CCAM project is a comprehensive tool designed to evaluate participants’ perceptions, comfort, and trust regarding Connected and Automated Vehicles (CAVs) in Virtual Reality (VR) environments. It focuses on realistic urban scenarios, such as T-junctions, crossroads, roundabouts, lane closures, and busy commercial streets, under both standard and adverse conditions. These scenarios are crafted to assess the key factors influencing user acceptance: risk perception, comfort level, and realism of the VR experience.

The questionnaire is structured into baseline and post-experiment segments for each scenario, enabling researchers to capture participants’ expectations before the VR experience and their reflections afterward. It addresses demographic details, familiarity with technology, and specific attitudes toward CAVs, incorporating scenarios with varying complexity, weather conditions, and visibility. The VR setup aims to replicate real-world interactions with immersive detail, enhancing participants’ ability to evaluate CAV behavior and their own comfort levels in a simulated but realistic environment.

Through this detailed approach, the questionnaire gathers critical data to guide the development of CAV technology that meets the needs of Vulnerable Road Users (VRUs), a category comprising pedestrians, cyclists, e-scooter riders, and people with reduced mobility, and builds trust in automated systems. The experiment combines questionnaires, VR-based scenarios, and physiological measurements to evaluate three key metrics: risk perception, comfort level, and realism of the experience.

Participants first complete baseline questionnaires to capture socio-demographics, attitudes, and intentions toward CAVs. They are then exposed to immersive VR scenarios simulating CAV–VRU interactions in realistic settings: T-junctions, crossroads, roundabouts, lane closures, and busy commercial streets. Adverse conditions (e.g., poor lighting, rain, low visibility) are applied only to the T-junction and roundabout scenarios. During these simulations, physiological responses are monitored to assess stress and comfort.
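
As a rough sketch of how such a scenario matrix might be encoded, the snippet below enumerates the scenario/condition pairings described above, gating adverse conditions to T-junctions and roundabouts. All names are hypothetical illustrations, not the project’s actual configuration format:

```python
# Hypothetical sketch of the VR scenario matrix (names are illustrative).
from dataclasses import dataclass
from itertools import product

SCENARIOS = ["t_junction", "crossroads", "roundabout",
             "lane_closure", "commercial_street"]
CONDITIONS = ["standard", "poor_lighting", "rain", "low_visibility"]
# Adverse conditions are applied only to T-junction and roundabout scenarios.
ADVERSE_ELIGIBLE = {"t_junction", "roundabout"}

@dataclass(frozen=True)
class Trial:
    scenario: str
    condition: str

def build_trial_matrix() -> list[Trial]:
    """Enumerate every scenario/condition pairing used in a VR session."""
    trials = []
    for scenario, condition in product(SCENARIOS, CONDITIONS):
        if condition != "standard" and scenario not in ADVERSE_ELIGIBLE:
            continue  # adverse variants exist only for eligible scenarios
        trials.append(Trial(scenario, condition))
    return trials

if __name__ == "__main__":
    for trial in build_trial_matrix():
        print(trial)
```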

Finally, post-experiment questionnaires compare pre- and post-interaction results, allowing the project to measure acceptability (initial willingness to use) and acceptance (approval after actual experience) of CAV technologies among diverse user groups across France and Italy.

Read more and be part of the Participatory Space activities!

AI4CCAM will be at the AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC) 2025, held 6-8 November 2025 in Arlington, USA.

AI systems, including those built on large language and foundational/multi-modal models, have proven their value in all aspects of human society, rapidly transforming traditional robotics and computational systems into intelligent systems with emergent, and often unanticipated, beneficial behaviors. However, the rapid embrace of AI-based critical systems introduces new dimensions of error that increase risk and limit trustworthiness. The design of AI-based critical systems requires proving their trustworthiness. Thus, AI-based critical systems must be assessed across many dimensions by different parties (researchers, developers, regulators, customers, insurance companies, end-users, etc.) for different reasons. Assessment of trustworthiness should be made both at the full-system level and at the level of individual AI components. At the theoretical and foundational level, such methods must go beyond explainability to deliver uncertainty estimations and formalisms that can bound the limits of the AI, provide traceability, and quantify risk.

The focus of this symposium is on AI trustworthiness broadly and methods that help provide bounds for fairness, reproducibility, reliability, and accountability in the context of quantifying AI-system risk, spanning the entire AI lifecycle from theoretical research formulations all the way to system implementation, deployment, and operation. This symposium will bring together industry, academia, and government researchers and practitioners who are vested stakeholders in addressing these challenges in applications where a priori understanding of risk is critical.

AI4CCAM will be presenting the paper “Rashomon in the Streets: Explanation Ambiguity in Scene Understanding”: explainable AI (XAI) is essential for validating and trusting models in safety-critical applications like autonomous driving. However, the reliability of XAI is challenged by the Rashomon effect, whereby multiple, equally accurate models can offer divergent explanations for the same prediction. This paper provides the first empirical quantification of this effect for the task of action prediction in real-world driving scenes. Using Qualitative Explainable Graphs (QXGs) as a symbolic scene representation, we train Rashomon sets of two distinct model classes; the findings suggest that explanation ambiguity is an inherent property of the problem, not merely a modeling artifact.
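
The paper works with QXGs and real driving scenes; as a self-contained toy illustration of the underlying Rashomon effect, the sketch below builds a pool of near-equally-accurate models on synthetic data and checks whether they agree on the most important feature. Everything here is an illustrative assumption, not the paper’s pipeline:

```python
# Toy Rashomon-effect illustration: several near-equally-accurate models
# can disagree on which feature matters most for the same task.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Candidate pool: identical architecture, varying only the random seed.
models = [RandomForestClassifier(n_estimators=50, random_state=s).fit(X_tr, y_tr)
          for s in range(20)]
accs = np.array([m.score(X_te, y_te) for m in models])

# Rashomon set: models within epsilon of the best test accuracy.
eps = 0.01
rashomon = [m for m, a in zip(models, accs) if a >= accs.max() - eps]

# Compare "explanations": which feature does each model rank first?
top_features = {int(np.argmax(m.feature_importances_)) for m in rashomon}
print(f"{len(rashomon)} near-optimal models, "
      f"{len(top_features)} distinct top-ranked features: {top_features}")
```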

Read more about the event!

AI4CCAM will be attending the Automated Transportation Symposium 2025 – November 3-6, 2025 in Tempe, Arizona.

The Automated Transportation Symposium (ATS25, formerly ARTS) is the leading global forum for advancing automated vehicle development at SAE Levels 4 and 5. Now produced by SAE International, it brings together innovators from industry, government, and academia to address the most pressing technical, regulatory, and policy challenges shaping the future of automated mobility.

Over three days of dynamic plenaries, interactive workshops, poster sessions, and high-value networking, ATS delivers cutting-edge insights into the latest R&D breakthroughs, real-world deployment data, and evolving standards. The event places a strong emphasis on issues affecting U.S. and international transportation agencies, providing a platform for shaping the future of AV regulation and deployment strategies.

On Wednesday, November 5th, 1:30-5:00 PM, Arnaud Gotlieb, Simula Research Laboratory, AI4CCAM coordinator, will be speaking at the session “Cover Your AI – ODD Coverage and Validation, Challenge for AI-Centric AVs”, organized by Sagar Behere, VP Safety, and Gil Amid, Chief Regulatory Affairs Officer, Foretellix Inc.

AI is becoming more and more prominent in highly automated driving systems (ADS). Many new ADS architectures pursue an “end-to-end” approach, from sensors to vehicle movement. For such AI-centric (or AI-heavy) ADS, the validation challenges multiply beyond the traditional ones. For example, it is important to achieve sufficient ODD coverage not just for validation, but also for the training data, to ensure it has sufficient breadth, diversity, and complexity for a given ODD. Furthermore, there are challenges pertaining to the validation of bug fixes, changes, and other enhancements. The “traditional” AI challenges of hallucinations, unpredictability, and lack of explainability also remain.
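
As a rough illustration of what measuring ODD coverage can look like, here is a minimal sketch that discretizes a toy ODD into cells and reports the fraction hit by a set of scenario records. The dimensions and values are illustrative assumptions, not an actual ODD taxonomy from the session:

```python
# Hypothetical ODD coverage sketch: discretize the ODD into cells and
# check which cells the training/validation scenarios actually hit.
from itertools import product

# Illustrative ODD dimensions (a real ODD taxonomy is far richer).
ODD = {
    "road_type": ["urban", "rural", "highway"],
    "weather": ["clear", "rain", "fog"],
    "lighting": ["day", "dusk", "night"],
}

def odd_coverage(records: list[dict]) -> float:
    """Fraction of ODD cells hit by at least one scenario record."""
    dims = list(ODD)
    all_cells = set(product(*(ODD[d] for d in dims)))
    hit = {tuple(r[d] for d in dims) for r in records}
    return len(hit & all_cells) / len(all_cells)

dataset = [
    {"road_type": "urban", "weather": "clear", "lighting": "day"},
    {"road_type": "urban", "weather": "rain", "lighting": "night"},
    {"road_type": "highway", "weather": "clear", "lighting": "day"},
]
print(f"ODD coverage: {odd_coverage(dataset):.0%}")  # 3 of 27 cells hit
```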

This session focuses on challenges and approaches to effective and efficient validation of safe AI-centric ADS. It brings together perspectives from national and international government authorities, industry solution providers, and academic research. The goal is to share relevant approaches and requirements while providing a thoughtful way forward. The session will cover:

  • Unique characteristics of AI-centric AVs
  • The challenges of validating AI-centric AVs and ensuring their safety
  • Proposed solutions to those challenges
  • How to achieve sufficient ODD coverage for validation and training of AI-centric AVs

Read more about the event!

The AI4CCAM Participatory Space continues its work on the creation of a glossary of key and common terms used in the project.

This participatory process aims to create a glossary of terms with the involvement of different CCAM stakeholders. It starts with proposals of terms to be included, along with their definitions. Discussions about the correctness of these definitions then take place, and a final survey decides the best option. Results are monitored, and agreed definitions are incorporated into a document that constitutes the glossary.

A third release of terms and definitions related to Ethics and Governance is now available: participants will be asked to share feedback on the correctness and understandability of the proposed definitions.

Join the discussion!

The AI4CCAM project coordinator, Arnaud Gotlieb, Simula Research Lab, will be speaking at the HIDDEN online workshop on CCAM Insights on 21 October 2025.

HIDDEN is a new EU Research & Innovation Action project focused on advancing urban mobility through safer, smarter, and more ethical automation. The project develops collective awareness systems for connected and automated vehicles, using hybrid intelligence (AI combined with human intelligence) to detect occluded objects and support advanced, ethically aligned decision-making. The project kicked off in July 2025, and although still at its onset, it can benefit significantly from the experience and work of relevant EU projects.

Therefore, AI4CCAM, along with the EU projects BERTHA, i4Driving, and AIthena, will share insights and expertise and discuss implementation challenges in CCAM technologies, providing valuable input to HIDDEN and helping boost the quality of its developments.

The workshop results will allow HIDDEN to explore challenges and results together, supporting the consortium in defining system requirements, final use cases, and the ethical and legal framework.

AI4CCAM will focus in particular on its VRU simulation environment, including ethical dilemmas.

On 29 September, at Institut Mines Télécom (IMT) – Télécom SudParis, AI4CCAM organized a workshop on data poisoning attacks.

The widespread adoption of 3D point-cloud deep learning has greatly improved Connected and Autonomous Vehicles’ (CAVs) ability to perceive, classify, and react to road scenes. Validation of these systems relies on large simulated environments built from massive datasets. However, such datasets are in short supply, which is why practitioners commonly resort to data augmentation techniques such as Generative Adversarial Networks (GANs) to expand training corpora.

AI4CCAM believes that this reliance on shared datasets and augmentation pipelines creates a critical attack surface, where a malicious actor who introduces poisoned samples into the dataset ecosystem can have their influence amplified by augmentation, producing highly compromised scenarios and degraded downstream behavior.
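
To make the amplification argument concrete, here is a toy Python experiment, not the workshop’s actual pipeline: a few label-flipped poisons are injected into a synthetic dataset, a naive jitter-based augmentation then multiplies them, and a simple nearest-neighbour classifier is evaluated on each variant:

```python
# Toy poisoning-amplification experiment (illustrative, not the workshop's
# pipeline): each poisoned sample spawns several poisoned copies when a
# naive augmentation step jitters the whole dataset, poisons included.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Two well-separated classes standing in for point-cloud descriptors.
X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(4, 1, (200, 3))])
y = np.array([0] * 200 + [1] * 200)

# Poison 5% of class-0 samples by flipping their labels.
poison_idx = rng.choice(200, size=10, replace=False)
y_poisoned = y.copy()
y_poisoned[poison_idx] = 1

# Naive augmentation: K jittered copies of every sample, labels reused,
# so each poisoned point now has K poisoned neighbours around it.
K = 5
X_aug = np.vstack([X] + [X + rng.normal(0, 0.1, X.shape) for _ in range(K)])
y_aug = np.tile(y_poisoned, K + 1)

# Held-out clean test set.
X_test = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])
y_test = np.array([0] * 100 + [1] * 100)

for name, (Xt, yt) in {"poisoned only": (X, y_poisoned),
                       "poisoned + augmented": (X_aug, y_aug)}.items():
    acc = KNeighborsClassifier(5).fit(Xt, yt).score(X_test, y_test)
    print(f"{name}: test accuracy {acc:.3f}")
```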

The workshop pursued two primary objectives through practical sessions:
– Experimentally evaluate whether common augmentation techniques exacerbate poisoning attacks on 3D point-cloud data.
– Quantify the impact of poisoning attacks on CAV perception and downstream decision-making.

Go to photo gallery!

The AI4CCAM Participatory Space continues its work on the creation of a glossary of key and common terms used in the project.

This participatory process aims to create a glossary of terms with the involvement of different CCAM stakeholders. It starts with proposals of terms to be included, along with their definitions. Discussions about the correctness of these definitions then take place, and a final survey decides the best option. Results are monitored, and agreed definitions are incorporated into a document that constitutes the glossary.

Before September ends, here is the third release of terms and definitions related to Artificial Intelligence. Participants will be asked to share feedback on the correctness and understandability of the proposed definitions.

What do you think about the Explainability, Auditability, and Opacity of an AI system?

Be part of the discussion!

The AI4CCAM third newsletter is out!

This is the last AI4CCAM newsletter, a moment for the project coordinator, Arnaud Gotlieb, to share some thoughts on what we have achieved together. Over the past three years, AI4CCAM has worked at the forefront of research on trustworthy AI for automated driving, contributing to the wider CCAM ecosystem with results that will endure well beyond the project’s lifetime.

Read the newsletter and learn more about:
– The AI4CCAM final event in November, and how to register
– The AI4CCAM demos, and where to watch the videos
– The latest AI4CCAM outcomes
– The first EU Road Safety Cluster webinar, available to watch in case you missed it

Read it and subscribe!

On 16 September, Arnaud Gotlieb, Simula Research Lab, Project Coordinator, will be presenting AI4CCAM at the Trustworthy AI Summit in Paris.

The Trustworthy AI Summit is Europe’s flagship event dedicated to trustworthy and industrial AI. It brings together global leaders – from industry, research, and regulation – to explore the latest breakthroughs, address key challenges, and share practical tools to drive adoption. Building on the legacy of Confiance.ai Day, the Trustworthy AI Summit marks a new milestone in the European AI landscape. The event, powered by the European Trustworthy AI Association, aims to shape the future of responsible and industrial AI.

Arnaud will be speaking at the session “From research challenges to outcomes: a closer look at available assets from European Trustworthy AI Association and other initiatives”, which presents a range of technologies and methodologies already available for the development of trustworthy AI systems and components, introduced by their creators from academia and industry.