European AI research is getting organised


Behind the mass of acronyms and projects interconnected to varying degrees, French and European scientists are stepping up to the plate for artificial intelligence. However, the aims of the CLAIRE, HumanE-AI and TAILOR networks are not simply performance-related. These networks are supporting the emergence of an approach that reflects European values: one that protects data and pays attention to the implications, particularly ethical ones, of interactions between humans and machines.

European research is getting organised to compete with the efforts and resources deployed by China and the United States in the field of artificial intelligence. There is a real multi-layered environment of networks and projects which consider ethical issues to be of central importance. "In recent years the European Commission has launched several calls for ambitious AI projects," explains François Yvon, a CNRS senior researcher at the LIMSI1. "Several networks of researchers have been set up and have continued to exist, involving both winners of these calls and those who were not selected."

François Yvon, an automatic language processing specialist, is taking part in the multidisciplinary HumanE-AI programme, dedicated to improving interactions between robots and humans. One application is an AI that learns directly from an operator, for example a worker demonstrating a technical gesture it needs to reproduce. This task is increasingly complicated in industrial settings, where ambient noise forces employees to communicate differently and disturbs the AI's voice recognition. HumanE-AI is thus focusing on the integration of human cognitive and social models.

  • 1Computer Science Laboratory for Mechanics and Engineering Sciences (CNRS)
Face-to-face communication triggers many unconscious social microadaptation phenomena and robots need to be taught these.

Adapting to a rate of speech, keeping the right distance, avoiding sudden movements near people, spotting non-verbal signals of acquiescence, taking cultural differences into account when expressing emotions... These questions differ from the purely mathematical aspects of optimising deep learning or big data, but they are nonetheless essential to the development of a responsible AI: one capable of understanding its users' intentions and of explaining what it does and what it aims to do. HumanE-AI brings together around fifty academic and industrial partners, including the CNRS, the National Institute for Research in Computer Science and Control (INRIA), Sorbonne University, Université Grenoble Alpes, Thalès and Airbus.

CLAIRE1 is also a fine example of this type of cooperation and is keen to distinguish itself from the lively international competition in the field. "Machine learning requires a huge amount of data to be collected," emphasises Nicolas Sabouret, a professor at Université Paris-Saclay and member of both CLAIRE and the LIMSI. "The Americans don't really care where this data comes from, or how to protect it, while the Chinese are taking over their citizens' data. Europe needs its own, more responsible vision."

It was because he is a member of CLAIRE that Andreas Herzig, a senior researcher at the IRIT2, heard about the TAILOR project3, which he has since joined. This programme aims to bring the two main types of AI together. Symbolic AI represents information as logical statements and then attempts to deduce its answers from the available data. The very popular deep learning methods, on the other hand, belong to the subsymbolic category, in which neural networks consume millions of examples to learn to give the best solution.
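The contrast between the two families can be sketched in a few lines of code. The toy "anomaly detection" task below, its thresholds and its data are invented for illustration and are not drawn from the TAILOR project: a symbolic rule base returns an answer together with the rule that justifies it, while a minimal perceptron (a one-layer example of the subsymbolic approach) learns the same boundary from labelled examples but produces only a score.

```python
# Illustrative sketch only: a toy anomaly check solved both ways.
# All thresholds, data and names here are invented for illustration.

# --- Symbolic AI: knowledge written as explicit logical rules. ---
def symbolic_is_anomalous(temp, vibration):
    """Readings are scaled to [0, 1]; returns (verdict, reason)."""
    rules = [
        ("temperature above safe limit", lambda t, v: t > 0.9),
        ("vibration above safe limit",   lambda t, v: v > 0.7),
    ]
    for reason, rule in rules:
        if rule(temp, vibration):
            return True, reason        # the explanation comes for free
    return False, "no rule fired"

# --- Subsymbolic AI: a minimal perceptron learning from examples. ---
def train_perceptron(examples, epochs=1000, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (t, v), label in examples:
            pred = 1 if w[0] * t + w[1] * v + b > 0 else 0
            err = label - pred         # zero when the prediction is right
            w[0] += lr * err * t
            w[1] += lr * err * v
            b += lr * err
    return w, b

# Labelled examples consistent with the rules above.
examples = [((0.20, 0.1), 0), ((0.95, 0.2), 1),
            ((0.30, 0.9), 1), ((0.50, 0.3), 0)]
w, b = train_perceptron(examples)

def subsymbolic_is_anomalous(t, v):
    return w[0] * t + w[1] * v + b > 0  # a score, with no stated reason
```

The symbolic version can always say which rule fired; the trained weights `w` and `b` separate the same examples but offer no human-readable justification, which is the gap a hybrid approach aims to close.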

"We don't have access to an explanation of how subsymbolic AIs reach their conclusions," regrets Andreas Herzig. "Even if they give good results, how can we trust them? We want to accompany their learning with a logical explanation, but we don't know how to do that yet." For the moment, TAILOR is establishing a state-of-the-art review of research, but its ultimate goal is to build respect for privacy and a theory of mind (the ability to put oneself in someone else's place) into AIs from the outset. In France, the project involves members of the IRIT, the LIRMM4 and the CRIL5.

"The Chinese and the Americans are ahead in terms of AI," continues Andreas Herzig, "but we are betting on this hybrid form of AI to progress further. An AI people can trust will still be very fast, but we will also know why it makes its choices rather than simply letting it act unchecked. This is a characteristic goal of European efforts in this area."

  • 1Confederation of Laboratories for Artificial Intelligence Research in Europe
  • 2Institut de Recherche en Informatique de Toulouse (Toulouse Research Institute in Computer Science, CNRS/Université Toulouse 3 Paul Sabatier/INP Toulouse)
  • 3Trustworthy AI - Integrating Reasoning, Learning and Optimization
  • 4Laboratory of Computer Science, Robotics and Microelectronics of Montpellier (CNRS/University of Montpellier)
  • 5Lens Computer Science Research Lab (CNRS/Artois University)

Contact

François Yvon
CNRS senior researcher at LISN
Nicolas Sabouret
Professor at Université Paris-Saclay, member of LISN
Andreas Herzig
CNRS senior researcher at IRIT