AI4CCAM interview! CNRS tells us more about the interaction between humans and automated cars
— 27 September 2023

AI4CCAM interviewed Ganesh Gowrishankar, CNRS (Centre National de la Recherche Scientifique), leader of WP2 of the project, working on “Advanced AI-driven CCAM sense-plan-act predictive models”.

Ganesh Gowrishankar is a Senior Researcher (Directeur de Recherche) in the Interactive Digital Human group,
CNRS-UM Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM).

In this interview, Ganesh tells us more about the interaction between humans and automated cars, and about human behavior.

As leader of WP2, can you tell us about the research directions explored in the project?

WP2 is the scientific WP of the project. In this WP we are specifically interested in VRU (vulnerable road user) prediction, which is a major challenge for automated vehicles. The WP aims to develop a more explainable and trustworthy AI framework to predict VRU movements. We plan to do this by developing a ‘hybrid AI model’ that integrates traditional end-to-end, data-based AI models with human behavioral models developed using neuroscientific psychophysical experiments and techniques. WP2 involves DEEPS and AKKODIS, who will provide AI models for VRU prediction. VIF and SKODA will help develop the scenarios to be tested and will augment data using GANs to help train these models. CNRS will provide a behavioral model of VRUs that will be integrated with the AI model(s). SIMULA will develop techniques for, and test, the explainability of the developed model(s).

You are a specialist in human-machine interaction, and in particular in the role of neuroscience in this important field of research. Can you summarize the research challenges of the project in this area?

When we are driving and see a pedestrian near the road, for example, we are able to make a good prediction of their next moves just by looking at their physical features and the environment. We predict differently for a child than for an adult, for example, and differently for an adult walking alone than for one walking in a group. To interact efficiently with humans, automated cars need to do the same.

However, this is a major challenge for automated cars (and for machines and robots that interact with humans) because human behaviors are complex. Human behaviors, both with the environment and with other humans, are characterized by complex dynamics that change with an individual’s physiology, age and pathology, and also depend on emotional factors like fear and anxiety. Furthermore, human behaviors are determined by current observations as well as by predictions from the behavioral models humans possess of their environment and of the agents they interact with (often investigated as theory of mind), which are themselves continuously adapted through day-to-day experience.

Due to this complexity, and to the diversity of behaviors across humans, VRU behaviors are very difficult for AI systems to predict. We will therefore use behavioral experiments to gain better insight into these aspects of VRU movement prediction. Using virtual reality, we will develop experiments in which participants are placed in everyday situations of interaction with cars (in virtual reality), and we will evaluate how their future behaviors can be predicted from their current behaviors and environmental conditions. We will then integrate the resulting model with the AI model to improve overall VRU prediction.
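To give a flavor of what such an integration could look like, here is a minimal, purely illustrative sketch (not the project’s actual method): a stand-in “data-driven” predictor (constant-velocity extrapolation in place of an end-to-end learned model) is fused with a stand-in “behavioral prior” (an attentive pedestrian is assumed to steer toward a crossing point). All function names, the `attentiveness` parameter, and the weighted-average fusion are hypothetical choices for this sketch.

```python
import numpy as np

def data_driven_prediction(positions):
    # Stand-in for an end-to-end learned model: constant-velocity
    # extrapolation from the pedestrian's two most recent 2D positions.
    velocity = positions[-1] - positions[-2]
    return positions[-1] + velocity

def behavioral_prior(positions, crossing_point, attentiveness):
    # Stand-in for a neuroscience-informed behavioral model: an attentive
    # pedestrian (attentiveness in [0, 1]) is assumed to take a step
    # toward the marked crossing point.
    direction = crossing_point - positions[-1]
    step = direction / max(np.linalg.norm(direction), 1e-9)
    return positions[-1] + attentiveness * step

def hybrid_prediction(positions, crossing_point, attentiveness, weight=0.5):
    # Fuse the two predictions; `weight` balances the data-driven
    # component against the behavioral one (hypothetical fusion rule).
    dd = data_driven_prediction(positions)
    bp = behavioral_prior(positions, crossing_point, attentiveness)
    return weight * dd + (1.0 - weight) * bp

track = np.array([[0.0, 0.0], [1.0, 0.0]])  # recent positions (metres)
crossing = np.array([5.0, 3.0])             # marked crossing location
print(hybrid_prediction(track, crossing, attentiveness=0.8))
```

In a real system the two components would of course be far richer, but the sketch shows the appeal of the hybrid structure: the behavioral term carries interpretable parameters (here, attentiveness and a goal location) that can be inspected and fitted from psychophysical experiments.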

What are the main research breakthroughs that can be achieved by the project, and how will they impact the future on a 5-year horizon?

Ideally, we would develop a behavioral model that can be integrated with current state-of-the-art AI models of VRU prediction to form a hybrid model. Such a model can improve VRU prediction while being more explainable thanks to its neuroscientific components.
