The AI4CCAM Participatory Space continues its work on the creation of a glossary of key and common terms used in the project.

This participatory process aims to create a glossary of terms with the involvement of different CCAM stakeholders. Terms are proposed for inclusion along with their definitions; discussions about the correctness of these definitions then take place, followed by a final survey to select the best option. Results are monitored, and agreed definitions are incorporated into a document that constitutes the glossary.

A third release of terms and definitions related to Ethics and Governance is now available: participants will be asked to share feedback on the correctness and understandability of the proposed definitions.

Join the discussion!

The AI4CCAM project coordinator, Arnaud Gotlieb, Simula Research Lab, will be speaking at the HIDDEN online workshop on CCAM Insights on 21 October 2025.

HIDDEN is a new EU Research & Innovation Action project focused on advancing urban mobility through safer, smarter, and more ethical automation. The project develops collective awareness systems for connected and automated vehicles, using hybrid intelligence (AI combined with human intelligence) to detect occluded objects and support advanced, ethically aligned decision-making. The project kicked off in July 2025 and, although still in its early stages, can benefit significantly from the experience and work of relevant EU projects.

Therefore, AI4CCAM, together with the EU projects BERTHA, i4Driving and AIthena, will share insights and expertise and discuss implementation challenges in CCAM technologies, providing valuable input to HIDDEN and helping boost the quality of its developments.

The workshop will allow HIDDEN and the participating projects to explore challenges and results together, supporting the consortium in defining system requirements, final use cases, and the ethical and legal framework.

AI4CCAM will focus in particular on its Vulnerable Road User (VRU) simulation environment, including ethical dilemmas.

On 29 September, at Institut Mines Télécom (IMT) – Télécom SudParis, AI4CCAM organized a workshop on data poisoning attacks.

The widespread adoption of 3D point-cloud deep learning has greatly improved Connected and Autonomous Vehicles’ (CAVs) ability to perceive, classify, and react to road scenes. Validation of these systems relies on large simulated environments built from massive datasets. However, such datasets are in short supply, which is why practitioners commonly resort to data augmentation techniques such as Generative Adversarial Networks (GANs) to expand training corpora.
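For readers unfamiliar with such pipelines, here is a minimal sketch of classical geometric augmentation for point clouds using NumPy. The GAN-based techniques discussed at the workshop are far more sophisticated, but the principle of multiplying one scan into many training variants is the same; all names and parameter values below are illustrative, not taken from the workshop material.

```python
import numpy as np

def augment_point_cloud(points: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return one augmented variant of an (N, 3) point cloud:
    random z-axis rotation, small random scaling, per-point Gaussian jitter."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    scale = rng.uniform(0.9, 1.1)
    jitter = rng.normal(0.0, 0.01, size=points.shape)
    return (points @ rot.T) * scale + jitter

rng = np.random.default_rng(42)
scan = rng.uniform(-1.0, 1.0, size=(1024, 3))  # stand-in for a real LiDAR scan
variants = [augment_point_cloud(scan, rng) for _ in range(8)]  # 1 scan -> 8 samples
```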

AI4CCAM believes that this reliance on shared datasets and augmentation pipelines creates a critical attack surface, where a malicious actor who introduces poisoned samples into the dataset ecosystem can have their influence amplified by augmentation, producing highly compromised scenarios and degraded downstream behavior.

The workshop pursued two primary objectives through hands-on practical sessions (a minimal illustrative sketch follows the list below):
– Experimentally evaluate whether common augmentation techniques exacerbate poisoning attacks on 3D point-cloud data.
– Quantify the impact of poisoning attacks on CAV perception and downstream decision-making.
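As a hedged sketch of what the first experiment might look like (the trigger shape, the 5% poisoning rate, and the eight variants per scan are hypothetical choices, not the workshop's actual protocol), the snippet below plants a toy backdoor trigger into a few scans and shows that rigid augmentation preserves it in every variant, multiplying the number of poisoned training items:

```python
import numpy as np

def augment(points, rng):
    """One rigid augmentation: random z-rotation plus per-point jitter."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T + rng.normal(0.0, 0.01, size=points.shape)

def plant_trigger(points, rng):
    """Toy backdoor: append a tight 16-point cluster at a fixed offset."""
    return np.vstack([points, rng.normal(0.0, 0.02, (16, 3)) + [0.5, 0.5, 0.5]])

rng = np.random.default_rng(0)
dataset = [rng.uniform(-1, 1, (1024, 3)) for _ in range(100)]  # 100 clean scans
poisoned_ids = set(rng.choice(100, size=5, replace=False).tolist())  # poison 5%
dataset = [plant_trigger(pc, rng) if i in poisoned_ids else pc
           for i, pc in enumerate(dataset)]

K = 8  # augmented variants generated per scan
augmented = [(i, augment(pc, rng)) for i, pc in enumerate(dataset) for _ in range(K)]
n_poisoned = sum(1 for i, _ in augmented if i in poisoned_ids)
print(f"{len(poisoned_ids)} poisoned scans -> {n_poisoned} poisoned training items")
# Rigid transforms move the trigger but keep its geometry, so every variant
# of a poisoned scan remains poisoned: 5 scans become 40 poisoned items here.
```

Generative augmentation may make matters worse: if poisoned scans enter a GAN's training set, even "new" synthetic scans may inherit the trigger.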

Go to photo gallery!

The AI4CCAM Participatory Space continues its work on the creation of a glossary of key and common terms used in the project.

Before September ends, here is the third release of terms and definitions related to Artificial Intelligence. Participants will be asked to share feedback on the correctness and understandability of the proposed definitions.

What do you think about the Explainability, Auditability, and Opacity of an AI system?

Be part of the discussion!

The AI4CCAM third newsletter is out!

This is the last AI4CCAM newsletter, a moment for the project coordinator, Arnaud Gotlieb, to share some thoughts on what we have achieved together. Over the past three years, AI4CCAM has worked at the forefront of research on trustworthy AI for automated driving, contributing to the wider CCAM ecosystem with results that will endure well beyond the project’s lifetime.

Read the newsletter and learn more about:
– The AI4CCAM final event in November, and how to register
– The AI4CCAM demos, with links to the videos
– The latest AI4CCAM outcomes
– The first EU Road Safety Cluster webinar, available to watch in case you missed it

Read it and subscribe!

On 16 September, Arnaud Gotlieb, Simula Research Lab, Project Coordinator, will present AI4CCAM at the Trustworthy AI Summit in Paris.

The Trustworthy AI Summit is Europe’s flagship event dedicated to trustworthy and industrial AI. It brings together global leaders – from industry, research, and regulation – to explore the latest breakthroughs, address key challenges, and share practical tools to drive adoption. Building on the legacy of Confiance.ai Day, the Trustworthy AI Summit marks a new milestone in the European AI landscape. The event, powered by the European Trustworthy AI Association, aims to shape the future of responsible and industrial AI.

Arnaud will speak in the session “From research challenges to outcomes: a closer look at available assets from European Trustworthy AI Association and other initiatives”, which presents technologies and methodologies already available for the development of trustworthy AI systems and components, introduced by their creators from academia and industry.

Clément Arlotti and Kevin Pasini, IRT SystemX, will present the AI4CCAM paper “Combining XAI and semiotics to interpret hallucinations in deep generative models” on 19 September at the Human and Artificial Rationalities (HAR) conference. The conference focuses on comparing human and artificial rationalities, investigating how they interact in practice, as well as the theoretical and ethical aspects of rationality from three main perspectives: Philosophy, Psychology, and Computer Science.

About the paper

Deep Generative Models (DGMs) are increasingly used across many application sectors, as they offer the possibility of automating the production of image, text, or video content. However, their operation suffers from a major drawback: they are prone to so-called “hallucinations”, i.e., they may generate plausible yet factually incoherent outputs that lack proper contextual understanding. Characterizing and mitigating hallucinations is therefore essential for DGM deployment, but presents an important pitfall: assessing whether an output is coherent and compliant with a given context is a non-univocal, ambiguous task that remains open to interpretation. As a consequence, existing hallucination taxonomies are application-dependent and model-specific.

The paper is part of AI4CCAM WP2, which tackled the question of data augmentation to enrich driving simulations. However, when new synthetic data are produced with deep generative models (DGMs), the question of hallucinations arises: how can one validate or discard these new data, knowing that DGMs can produce plausible but unfaithful data? This work develops technical and conceptual tools to draw the line between relevant generated data and hallucinations.
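The paper's actual approach combines XAI and semiotics; as a purely illustrative stand-in (every name, feature dimension, and threshold below is hypothetical and not taken from the paper), the sketch flags generated samples as candidate hallucinations when they sit unusually far from the real-data manifold in feature space:

```python
import numpy as np

def nn_distance(sample: np.ndarray, reference: np.ndarray) -> float:
    """Distance from one feature vector to its nearest neighbor in `reference`."""
    return float(np.min(np.linalg.norm(reference - sample, axis=1)))

def flag_hallucinations(generated: np.ndarray, real: np.ndarray,
                        quantile: float = 0.95) -> np.ndarray:
    """Flag generated samples whose nearest-real distance exceeds the
    `quantile` level of real-to-real nearest-neighbor distances."""
    # Baseline: how far real samples typically lie from each other.
    real_dists = [nn_distance(real[i], np.delete(real, i, axis=0))
                  for i in range(len(real))]
    threshold = float(np.quantile(real_dists, quantile))
    gen_dists = np.array([nn_distance(g, real) for g in generated])
    return gen_dists > threshold  # True = candidate hallucination

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(200, 16))             # stand-in embeddings
generated = np.vstack([rng.normal(0.0, 1.0, (45, 16)),  # in-distribution
                       rng.normal(6.0, 1.0, (5, 16))])  # off-manifold outliers
flags = flag_hallucinations(generated, real)
print(f"{flags.sum()} of {len(generated)} generated samples flagged")
```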

Check the AI4CCAM library and read the full paper!

Let’s meet in Brussels on 17 and 18 November 2025, at Autoworld (Parc du Cinquantenaire, 11) and experience Trustworthiness in AI-driven Automated Vehicle User Interactions!

Through collaborative research, AI4CCAM contributed to the CCAM ecosystem by addressing trustworthy AI in automated driving.

What to expect over the two days?

17 November

AI4CCAM Stakeholder Forum
The AI4CCAM Stakeholder Forum #2 is a side event focusing on challenges and opportunities for integrating CCAM into public transport and shared mobility. From technological development to service deployment, the aim is to connect advancements with real-world applications, ensuring research outcomes meet industry needs and helping operators deploy AI-driven CCAM. This will be a half-day, in-person event, accessible by invitation only, with participation confirmed at a later stage.

Networking drink
From 18:00 to 19:30, those who have registered for the AI4CCAM final event will meet for some nice networking to warm up the engines (well, we will be at Autoworld!) for the next day!

18 November

AI4CCAM final event
The AI4CCAM final event will be the perfect day to discover project insights and the demonstrations developed from the very beginning up to now, highlighting key innovations in the use of artificial intelligence for safer, more ethical automated mobility, and reflecting the project’s multidisciplinary approach and its commitment to open-source tools, scalable validation environments, and human-machine interaction research. The programme includes:

• A session on AI4CCAM user acceptance, focusing on the project Participatory Space, on levels and barriers of automated vehicle user acceptance, and on the interoperable data-driven digital framework
• A panel discussion on Advancing the state of the art in research in trustworthy AI for CCAM
• An Innovation Corner
• An exhibition with demo booths to experience virtual reality

Together, these sessions will guide you through the progress AI4CCAM has brought to the CCAM sector!

Registrations are open! Book your place here!

Get the practical information here!

The AI4CCAM Participatory Space continues its work on the creation of a glossary of key and common terms used in the project.

A first release of terms and definitions related to Model Development is now available: participants will be asked to share feedback on the correctness and understandability of the proposed definitions.

Be part of the discussion!