The AI4CCAM consortium continues to work closely towards delivering innovative AI solutions in CCAM. During Akkodis’ internal workshop, partners exchanged insights on GSGFormer and Pollux, two of the key technological developments driving the project’s results forward.

“Pauses Techniques” are weekly webinars internal to Akkodis, where the Akkodis Research department presents its projects to the rest of the company.

Pavan Vasishta and Bastien Brugger presented the project, focusing on three key questions:

  • What is AI4CCAM and its objective?
  • What is “trustworthy AI” and why is it crucial?
  • What are the roles of GSGFormer and Pollux?

GSGFormer is an AI model developed by Akkodis to predict pedestrian trajectories in dense urban areas from an aerial view, while Pollux is a digital twin capable of building a virtual replica of any city and integrating trajectories captured in virtual reality.
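GSGFormer’s transformer architecture is not detailed here, so as an illustrative point of reference only, the sketch below shows the simplest baseline any trajectory predictor must outperform: constant-velocity extrapolation of a pedestrian track seen from an aerial view. The function name, horizon, and coordinates are all hypothetical.

```python
import numpy as np

def predict_trajectory(observed, horizon=12):
    """Extrapolate future 2-D positions from an observed track.

    observed: (T, 2) array of past (x, y) positions in the aerial view.
    Returns a (horizon, 2) array of predicted positions.
    """
    observed = np.asarray(observed, dtype=float)
    # The average step over the observed window approximates current velocity.
    velocity = np.diff(observed, axis=0).mean(axis=0)
    steps = np.arange(1, horizon + 1).reshape(-1, 1)
    return observed[-1] + steps * velocity

# Toy pedestrian walking diagonally at roughly constant speed.
track = np.array([[0.0, 0.0], [0.5, 0.4], [1.0, 0.8], [1.5, 1.2]])
future = predict_trajectory(track, horizon=3)
```

Learned models such as GSGFormer aim to beat this baseline precisely where it fails: crowded scenes where pedestrians stop, turn, or react to one another.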

Stay tuned for more achievements from the AI4CCAM project!

AI4CCAM was selected as a project in the spotlight on the CCAM Association website!

In the evolving landscape of connected and automated mobility, artificial intelligence (AI) plays a decisive role in shaping the future of transportation. Yet, as vehicles become smarter, questions of trust, transparency, and ethics come to the forefront.

The CCAM Association article explains how AI4CCAM addresses these challenges head-on, developing trustworthy and explainable AI models designed to make roads safer for everyone, especially Vulnerable Road Users (VRUs) such as pedestrians and cyclists.

The article focuses on:

  • the challenge of trustworthy AI
  • the project’s dual approach to trustworthiness
  • the Participatory Space
  • breakthrough results, such as a suite of innovative AI models addressing critical CCAM needs.

Read the full article!

The AI4CCAM consortium gathered transport operators, policymakers, researchers and industry partners for two days of dialogue and demonstration on the future of trustworthy artificial intelligence in automated mobility. Held at Autoworld in Brussels, the programme combined a participatory Stakeholder Forum with a full-day Final Event, offering both a societal reflection on user needs and a technical overview of the tools and research developed throughout the project.
Over three years, AI4CCAM developed a suite of AI-based tools and methodologies to enhance the safety, transparency and human acceptance of connected and automated mobility. These outcomes formed the core of discussions and demonstrations during the event.

The Stakeholder Forum began on 17 November with an introduction by Arnaud Gotlieb, the project coordinator, who outlined the project’s objectives and the value of scenario modelling, digital twins and explainable AI in developing safer and more transparent automated mobility systems.
Discussions of the day explored human trajectory prediction, scene understanding and qualitative reasoning, alongside important considerations related to data privacy, cybersecurity and risks such as data poisoning in V2X communication. The Forum also featured a public discussion guided by Guido di Pasquale of PAVE Europe, who led participants through key questions on public trust, user acceptance and the human side of automation, reflecting on why some citizens may be hesitant about automated vehicles and how communication, exposure and inclusive design can help strengthen trust. Key takeaways from these discussions highlighted the need for clearer user communication, stronger safeguards for data-driven systems and closer cooperation between public transport authorities and technology developers.
The day ended with an interactive World Café session, hosted by Jacques Ferrière of UTFP, where participants exchanged views on road space sharing, AI adoption, collaboration among operators and authorities, the evolving roles of operators and staff, and the need for closer engagement between public transport authorities and AI developers. These exchanges reaffirmed the importance of developing AI systems that respond directly to operational realities and societal expectations.

The Final Event on 18 November placed a stronger emphasis on results, demonstrations and future policy direction. The opening session featured remarks from Kristóf Almásy of the European Commission’s DG CONNECT, who outlined Europe’s ambitions to accelerate safe, responsible and competitive AI development. Arnaud Gotlieb returned on stage to present the consolidated outcomes of the project, reaffirming that AI offers significant opportunities for automated mobility when supported by transparent methods, rigorous testing environments and a strong human-centred approach. This included the introduction of AI4CCAM’s core tools, such as advanced simulation frameworks, digital twin environments and explainable AI models developed collectively across the consortium. Throughout the day, experts and partners presented the project’s simulation tools, digital twin environments and AI models, with discussions covering scene understanding, uncertainty detection, interactions with vulnerable road users and the robustness of automated systems. Panel discussions underscored three central messages: the importance of transparent and traceable AI decision-making, the need for robust testing environments that reflect real-world complexity and the continuing relevance of human oversight in automated mobility.
A noteworthy session dedicated to user acceptance included insights from a related EU-funded initiative, with John Paddington from the SINFONICA project sharing lessons learned on public engagement and perceptions of autonomous shuttles. While external perspectives enriched the discussion, the event placed strong focus on AI4CCAM’s own achievements and the contributions of consortium partners throughout the project lifetime. Following this, demonstrations and exhibition booths allowed participants to experience the project’s output firsthand, from virtual reality environments to prototype interaction systems designed to enhance safety and predictability around vulnerable road users. The extensive work invested by partners in preparing these demonstrations, including fully functioning technical prototypes and immersive testing platforms, was widely recognised by attendees and illustrated the depth of the consortium’s expertise.
The two-day event concluded with closing reflections from Pedro Alfonso Pérez Losa of the European Commission’s DG CINEA, who underlined the importance of continued European collaboration in advancing trustworthy and inclusive AI for mobility.

The AI4CCAM consortium extends its sincere thanks to the entire organising team, all presenters, guest speakers, moderators and every attendee whose engagement, expertise and contributions made the Stakeholder Forum and Final Event a success. Together, they helped showcase how multidisciplinary collaboration can support the development of safer, more transparent and more inclusive automated mobility solutions across Europe. Above all, the event demonstrated the substantial results achieved by the AI4CCAM partnership and the strong foundation it has laid for future work in trustworthy AI for mobility.

Go to our photo gallery!

The AI4CCAM Participatory Space continues its work on the creation of a glossary of key and common terms used in the project.

This participatory process aims to create a glossary of terms with the involvement of different CCAM stakeholders. Terms are proposed for inclusion along with their definitions; discussions on the correctness of these definitions then take place, and a final survey is run to decide on the best option. Results are monitored, and agreed definitions are incorporated into the document that constitutes the glossary.

This is the fourth release of terms and definitions related to Artificial Intelligence. Participants will be asked to share feedback on the correctness and understandability of the proposed definitions.
Let’s focus on:

  • Generative adversarial network (GAN)
  • Large language model (LLM)
  • Neural network

Have your say!

The AI4CCAM Participatory Space continues its work and the focus moves now to VRU–CAV Virtual Reality Interaction Experiment on User Acceptance.

The User Acceptance Questionnaire for Use Case 3 in the AI4CCAM project is a comprehensive tool designed to evaluate participants’ perceptions, comfort, and trust regarding Connected and Automated Vehicles (CAVs) in Virtual Reality (VR) environments. It focuses on realistic urban scenarios such as T-junctions, crossroads, roundabouts, lane closures, and busy commercial streets, under both standard and adverse conditions. These scenarios are crafted to assess key factors influencing user acceptance: risk perception, comfort level, and realism of the VR experience.

The questionnaire is structured into baseline and post-experiment segments for each scenario, enabling researchers to capture participants’ expectations before the VR experience and their reflections afterward. It addresses demographic details, familiarity with technology, and specific attitudes toward CAVs, incorporating scenarios with varying complexity, weather conditions, and visibility. The VR setup aims to replicate real-world interactions with immersive detail, enhancing participants’ ability to evaluate CAV behavior and their own comfort levels in a simulated but realistic environment.

Through this detailed approach, the questionnaire gathers critical data to guide the development of CAV technology that meets the needs of Vulnerable Road Users (VRUs), a category comprising pedestrians, cyclists, e-scooter riders and people with reduced mobility, and builds trust in automated systems. The experiment combines questionnaires, VR-based scenarios, and physiological measurements to evaluate three key metrics: risk perception, comfort level, and realism of the experience.

Participants first complete baseline questionnaires to capture socio-demographics, attitudes, and intentions toward CAVs. They are then exposed to immersive VR scenarios simulating CAV–VRU interactions in realistic settings: T-junctions, crossroads, roundabouts, lane closures, and busy commercial streets. Adverse conditions (e.g., poor lighting, rain, low visibility) are applied only to the T-junction and roundabout scenarios. During these simulations, physiological responses are monitored to assess stress and comfort.

Finally, post-experiment questionnaires compare pre- and post-interaction results, allowing the project to measure acceptability (initial willingness to use) and acceptance (approval after actual experience) of CAV technologies among diverse user groups across France and Italy.
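The project’s actual questionnaire items and scales are not given here, but the pre/post comparison described above can be sketched as follows, assuming hypothetical 5-point Likert responses per scenario (scenario names, response values, and the function are illustrative):

```python
# Hypothetical Likert responses (1 = not willing/accepting, 5 = fully).
pre = {"T-junction": [3, 4, 2, 3], "roundabout": [2, 3, 3, 2]}   # acceptability
post = {"T-junction": [4, 4, 3, 4], "roundabout": [3, 4, 3, 3]}  # acceptance

def mean(xs):
    return sum(xs) / len(xs)

def acceptance_shift(pre, post):
    """Per-scenario change from acceptability (pre-VR) to acceptance (post-VR)."""
    return {s: round(mean(post[s]) - mean(pre[s]), 2) for s in pre}

shift = acceptance_shift(pre, post)
```

A positive shift would suggest that the immersive experience increased approval relative to initial willingness; a negative one would flag scenarios where exposure undermined trust.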

Read more and be part of the Participatory Space activities!

AI4CCAM will be at the AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC) 2025, held 6-8 November 2025 in Arlington, USA.

AI systems, including those built on large language and foundational/multi-modal models, have proven their value in all aspects of human society, rapidly transforming traditional robotics and computational systems into intelligent systems with emergent, and often unanticipated, beneficial behaviors. However, the rapid embrace of AI-based critical systems introduces new dimensions of error that induce increased levels of risk, limiting trustworthiness. The design of AI-based critical systems requires proving their trustworthiness. Thus, AI-based critical systems must be assessed across many dimensions by different parties (researchers, developers, regulators, customers, insurance companies, end-users, etc.) for different reasons. Assessment of trustworthiness should be made both at the full-system level and at the level of individual AI components. At the theoretical and foundational level, such methods must go beyond explainability to deliver uncertainty estimations and formalisms that can bound the limits of the AI, provide traceability, and quantify risk.

The focus of this symposium is on AI trustworthiness broadly and methods that help provide bounds for fairness, reproducibility, reliability, and accountability in the context of quantifying AI-system risk, spanning the entire AI lifecycle from theoretical research formulations all the way to system implementation, deployment, and operation. This symposium will bring together industry, academia, and government researchers and practitioners who are vested stakeholders in addressing these challenges in applications where a priori understanding of risk is critical.

AI4CCAM will be presenting the paper “Rashomon in the Streets: Explanation Ambiguity in Scene Understanding”. Explainable AI (XAI) is essential for validating and trusting models in safety-critical applications like autonomous driving. However, the reliability of XAI is challenged by the Rashomon effect, where multiple, equally accurate models can offer divergent explanations for the same prediction. The paper provides the first empirical quantification of this effect for the task of action prediction in real-world driving scenes. Using Qualitative Explainable Graphs (QXGs) as a symbolic scene representation, the authors train Rashomon sets of two distinct model classes, and their findings suggest that explanation ambiguity is an inherent property of the problem, not just a modeling artifact.
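The paper’s QXG-based setup is not reproduced here, but the Rashomon effect itself can be shown in its simplest form: with a duplicated (perfectly correlated) feature, two linear models achieve identical zero error while attributing the prediction to entirely different features. This toy construction is ours, not the paper’s experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1.copy()                    # perfectly correlated duplicate feature
X = np.column_stack([x1, x2])
y = x1                            # target depends on the shared signal

# Two equally accurate models with divergent "explanations" (weight vectors).
w_a = np.array([1.0, 0.0])        # attributes the prediction entirely to x1
w_b = np.array([0.0, 1.0])        # attributes it entirely to x2

mse_a = np.mean((X @ w_a - y) ** 2)
mse_b = np.mean((X @ w_b - y) ** 2)
```

Both models fit the data perfectly, yet a feature-importance explanation derived from `w_a` contradicts one derived from `w_b`, which is exactly the ambiguity the paper quantifies for driving-scene action prediction.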

Read more about the event!

AI4CCAM will be attending the Automated Transportation Symposium 2025 – November 3-6, 2025 in Tempe, Arizona.

The Automated Transportation Symposium, ATS25 (formerly ARTS), is the leading global forum for advancing automated vehicle development at SAE Levels 4 and 5. Now produced by SAE International, it brings together innovators from industry, government, and academia to address the most pressing technical, regulatory, and policy challenges shaping the future of automated mobility.

Over three days of dynamic plenaries, interactive workshops, poster sessions, and high-value networking, ATS delivers cutting-edge insights into the latest R&D breakthroughs, real-world deployment data, and evolving standards. The event places a strong emphasis on issues affecting U.S. and international transportation agencies, providing a platform for shaping the future of AV regulation and deployment strategies.

On Wednesday, November 5th, 1:30-5:00 PM, Arnaud Gotlieb, Simula Research Laboratory, AI4CCAM coordinator, will be speaking at the session “Cover Your AI – ODD Coverage and Validation, Challenge for AI-Centric AVs”, organized by Sagar Behere, VP Safety, and Gil Amid, Chief Regulatory Affairs Officer, Foretellix Inc.

AI is becoming more and more prominent in highly automated driving systems (ADS). Many new ADS architectures pursue an “end-to-end” approach, from sensors to vehicle movement. For such AI-centric (or AI-heavy) ADS, the validation challenges multiply beyond the traditional ones. For example, it is important to achieve sufficient ODD coverage not just for validation, but also for the training data, to ensure it has sufficient breadth, diversity, and complexity for a given ODD. Furthermore, there are challenges pertaining to the validation of bug fixes, changes, and other enhancements. The “traditional” AI challenges of hallucinations, unpredictability, lack of explainability, etc. also remain.
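One simplified way to think about ODD coverage is combinatorial: enumerate the cells of a discretised ODD and check which ones the dataset actually exercises. The dimensions and values below are hypothetical (real ODD taxonomies have far more parameters, and practical approaches weight cells by exposure and risk rather than counting them equally):

```python
from itertools import product

# Hypothetical ODD dimensions and values.
odd = {
    "weather": ["clear", "rain", "fog"],
    "lighting": ["day", "night"],
    "road": ["urban", "highway"],
}

# Scenario attributes observed in a toy training/validation dataset.
dataset = [
    ("clear", "day", "urban"),
    ("clear", "night", "highway"),
    ("rain", "day", "urban"),
    ("rain", "day", "urban"),   # duplicates add no new coverage
]

required = set(product(*odd.values()))   # 3 * 2 * 2 = 12 ODD cells
covered = set(dataset) & required
coverage = len(covered) / len(required)
missing = sorted(required - covered)     # cells needing more data or tests
```

Here the dataset covers only a quarter of the grid; the `missing` cells point at where training data or test scenarios lack breadth for the stated ODD.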

This session focuses on challenges and approaches to the effective and efficient validation of safe AI-centric ADS. It brings together perspectives from national and international government authorities, industry solution providers, and academic research. The goal is to share relevant approaches and requirements while providing a thoughtful way forward. The session will cover:

  • Unique characteristics of AI-centric AVs
  • The challenges of validating AI-centric AVs and ensuring their safety
  • Proposed solutions to those challenges
  • How to achieve sufficient ODD coverage for validation and training of AI-centric AVs

Read more about the event!

The AI4CCAM Participatory Space continues its work on the creation of a glossary of key and common terms used in the project.

This participatory process aims to create a glossary of terms with the involvement of different CCAM stakeholders. Terms are proposed for inclusion along with their definitions; discussions on the correctness of these definitions then take place, and a final survey is run to decide on the best option. Results are monitored, and agreed definitions are incorporated into the document that constitutes the glossary.

A third release of terms and definitions related to Ethics and Governance is now available: participants will be asked to share feedback on the correctness and understandability of the proposed definitions.

Join the discussion!

The AI4CCAM project coordinator, Arnaud Gotlieb, Simula Research Lab, will be speaking at the HIDDEN online workshop on CCAM Insights on 21 October 2025.

HIDDEN is a new EU Research & Innovation Action project focused on advancing urban mobility through safer, smarter, and more ethical automation. The project develops collective awareness systems for connected and automated vehicles, using hybrid intelligence (AI combined with human intelligence) to detect occluded objects and support advanced, ethically aligned decision-making. The project kicked off in July 2025, and although still at its onset, it can significantly benefit from the experience and work performed in relevant EU projects.

Therefore, AI4CCAM, along with the EU projects BERTHA, i4Driving and AIthena, will share its insights and expertise and discuss implementation challenges in CCAM technologies, providing valuable input to HIDDEN and helping boost the quality of its developments.

The workshop results will allow HIDDEN to explore challenges and results together, supporting the consortium in defining system requirements, final use cases, and the ethical and legal framework.

AI4CCAM will focus in particular on the VRU simulation environment, including ethical dilemmas.