Welcome to the National Transport Library Catalogue


Augmented reality interface design for autonomous driving

By: Pokam, Raïssa; Chauvin, Christine; Debernard, Serge; Langlois, Sabine
Publication details: Göteborg: Chalmers University of Technology, SAFER Vehicle and Traffic Safety Centre, 2015
Description: p. 145-146
In: FAST-zero'15: 3rd International symposium on future active safety technology toward zero traffic accidents: September 9-11, 2015, Gothenburg, Sweden: proceedings
Notes: Conference: FAST-zero'15: 3rd International symposium on future active safety technology toward zero traffic accidents, 2015, Gothenburg

Abstract: Recently, a new kind of vehicle has appeared: autonomous, self-driving, or driverless vehicles. The best known is the Google car, prototyped by Google. These autonomous vehicles have changed the traditional driving paradigm: instead of the usual unilateral configuration, in which drivers are solely responsible for the driving task, a bilateral configuration has emerged in which drivers rely totally or partially on the vehicle. In the Google car, for example, the driver is completely out of the driving loop. As a consequence, such fully automated vehicles are not authorized to drive on the open road because of safety issues. A number of projects are therefore working on lower levels of automation in vehicles, among them the LRA (French acronym for Localization and Augmented Reality) project. The LRA project, which involves about 10 partners from industry and academia, is sponsored by the French government within the Institute for Technological Research (IRT) SystemX. In its automotive part, the project deals with technical and behavioral issues related to automotive human-machine interface (HMI) design, and our work is part of this task.

HMI design for autonomous vehicles is an important issue in two particular situations: autonomous mode and handover processing. In autonomous mode, where drivers can perform a few secondary tasks, they should be able to establish, at any time, a mental representation of the automated system's state and of the whole context (traffic, infrastructure type, ambient characteristics, etc.). Besides fostering trust, this representation enables mutual control, assumptions about how the system works, etc. Before and during handover processing, drivers should be aware of all the information that will ensure their safety and that of other road users; they need to be re-engaged in the control loop, since they may have lost proper situation awareness while in autonomous mode (Endsley, 1995).

Many modern cars (e.g. the Audi Q7 and the BMW M3 Berline) are equipped with Head-Up Display (HUD) technology, which enables Augmented Reality (AR) implementation (Tonnis, Sandor, Lange, & Bubb, 2005). AR is usually defined along a continuum from reality to Virtual Reality (Milgram, 1994). We adapt this definition to introduce what we call “Registrated Augmented Reality”: the AR annotation first matches the real objects and then adapts itself to the driver’s glance; it is thus a kind of dynamic AR. In cars, AR generally deals with “the problem of directing a user’s attention to a point of interest” and can “alert drivers and guide their attention to dangerous situations” (Tonnis et al., 2005). We thus assume that AR can enhance global awareness and local guidance by conveying the right information at the right moment. The issue then becomes: what if we combine AR with a classical interface to ensure safety on the road during autonomous driving and handover processing?

This summary presents the research questions that arise when defining the problem of interface specification. It also presents our methodology, namely building an algorithm from a cognitive approach to support the design of the final adaptive interface for autonomous driving in complex environments. Finally, it presents the expected output of this work.
Item type: Reports, conferences, monographs
No physical items for this record
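For readers unfamiliar with the concept, the following minimal sketch illustrates the idea behind “Registrated Augmented Reality” as described in the abstract: an AR annotation is first anchored to the real object it marks and its on-screen placement is then adjusted toward the driver’s current glance point. The sketch is not from the paper; the projection model, parameters, and all names are hypothetical assumptions for illustration only.

    # Illustrative sketch only; all names and parameters are hypothetical.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Annotation:
        label: str
        world_pos: np.ndarray  # object position in the vehicle frame (x right, y up, z forward), metres

    def project_to_hud(world_pos, focal=800.0, hud_center=(640.0, 360.0)):
        # Simple pinhole-style projection onto a 1280x720 HUD plane.
        x, y, z = world_pos
        return np.array([hud_center[0] + focal * x / z,
                         hud_center[1] - focal * y / z])

    def place_annotation(ann, gaze_px, gaze_weight=0.2):
        # "Registered" placement: anchor the cue on the object it marks,
        # then bias it slightly toward the driver's current glance point.
        anchor = project_to_hud(ann.world_pos)
        return (1.0 - gaze_weight) * anchor + gaze_weight * np.asarray(gaze_px)

    if __name__ == "__main__":
        pedestrian = Annotation("pedestrian ahead", np.array([2.0, 0.5, 25.0]))
        gaze = (700.0, 340.0)  # where the driver is currently looking, in HUD pixels
        print(place_annotation(pedestrian, gaze))  # pixel coordinates at which to draw the label

In a real system the projection would come from the HUD calibration and the glance point from an eye tracker; the fixed weighting here is only meant to convey the “dynamic” aspect of the registered annotation.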

