ESR9 – Assessing Interactions between AVs/VRUs using Virtual/Augmented Reality

Popular scientific abstract

The urban road environment is rife with communication between different participants such as pedestrians, cyclists and car drivers. When crossing the road, a pedestrian knows that it is fine to go ahead when the driver behind the wheel gives a signal, which could be a flash of the headlamps or even a gesture with their arm or eyes. However, in a future where AVs drive in urban environments, such communication would be lacking, as the driver is now a passenger and would probably not be paying much attention to the road. Such a situation gives rise to a dangerous scenario. To mitigate this, researchers and industry have been experimenting with external human-machine interfaces (eHMIs), which are found on the exterior of the AV and communicate its intentions to other road users. Examples include LED lights, LED screens, projections on the road, robotic attachments and communication with a pedestrian’s smartphone, amongst others. With so many different approaches, it would be difficult for people to understand the many different eHMIs they would encounter. Moreover, it could become confusing to whom the car is trying to communicate.

In comes augmented reality technology. What if the communication arrived individually to the pedestrian through AR glasses? These wearables are expected to penetrate the mass market in the coming years, with the possibility of fundamentally changing the way we experience our surroundings. This would also give machines the opportunity to communicate with us in a manner we would personally understand.

Using virtual reality simulations, I will be investigating the use of such technology to improve the communication between robotic cars and humans. This is my contribution to SHAPE-IT, the project which my fellow colleagues and I are working on to support new interactions between these machines and people. Together, we are working to prepare for the environment of tomorrow.

Who am I?

I’m Wilbert Tabone from the island nation of Malta. I graduated with a first-class honours BSc (Hons) in Creative Computing from Goldsmiths, University of London, and later read for an MSc in Artificial Intelligence at the University of Malta, conducting my research at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen in the Netherlands.

Back home, I am actively involved in the cultural, technology and education sectors and serve as an activist for a number of Maltese and international NGOs, including the Commonwealth Youth Council.

Prior to my PhD position, I spearheaded creative computing development in the Maltese heritage sector and formed part of the core team that developed the new Malta National Community Art Museum (MUŻA), which subsequently hosted the 2018 Network of European Museum Organizations (NEMO) conference. Furthermore, I have served as a quality assurance auditor for the National Commission for Further and Higher Education (NCFHE). Lastly, I was also part of Malta.AI, the Malta National Task Force on Artificial Intelligence, which was tasked with formulating Malta’s national strategy on AI. I have a keen interest in user experience (UX) design, artificial intelligence, digital cultural heritage, algorithmic computational art and generative design.

My affiliation

Personal page at TU Delft

Contact details of supervisors:

Dr. Riender Happee:

Dr. Joost C.F. de Winter:


Previous work on eHMIs has experimented with different technologies, modalities, and placement techniques. Approaches have included LED strip lights [1]–[14], LED screens [11], [12], [15]–[19], projections [5], [7], [8], [10], [20], [21], additional hardware that anthropomorphises the AV [22]–[25] and street infrastructure [3], [25], [26], amongst others. Information is usually communicated to the VRU visually, auditorily, or through haptic feedback [27]–[29]. Studies have contrasted the effectiveness of these modalities while also exploring which methodology works best. Questions have included whether textual information is more effective than iconographic information [3], [5], [11], [15], [17], [19], [20], [30], whether auditory feedback is of any help [3], [9], [16], [17], whether haptics could be effectively utilised for AV-VRU communication [31], and whether anthropomorphic interfaces convey their message to VRUs more effectively than their counterparts [8], [29], [32]. To answer these questions, researchers have conducted experiments with human participants using a diverse set of tools, such as real vehicles with eHMIs installed [3], [15], [33], [34], Wizard of Oz vehicles [3], [4], [15], [35], on-screen VR [7], [20], [36], CAVE environments [37], [38] and, more recently, head-mounted VR displays [9], [11], [17], [18], [39], [40]. Such experiments seek to establish which eHMI design strategy works best and which research and production direction should be taken.

More recently, research has shifted to the next phase of eHMI designs, which make use of extended reality technologies such as augmented reality (AR). AR methods have previously been used in internal vehicle interfaces targeted at the driver and passengers [41] and, more frequently, for pedestrian navigation [42]–[46] through handheld or wearable devices.

Wearable head-mounted displays are slowly penetrating the consumer market, with the outlook that a future of pervasive AR may be possible [47]. In this scenario, the AR system would adapt to changing requirements and user constraints in order to provide context-sensitive information and allow for continuous usage [47]. It is this outlook of a context-aware AR future that motivates what is believed to be the next stage of eHMI designs.

Aims and objectives

1). To assess whether augmented reality is a suitable technology for the development of AV eHMIs.

2). To identify user-preferred design elements for an AR eHMI.

3). To identify the best use of AR eHMIs and the best combination of augmented and real environmental elements to promote safe interactions.

4). To develop Virtual/Augmented Reality simulation methods to investigate the interaction between cars and VRUs.

5). To compare virtual and real-world VRU interactions to validate the simulation methods.

Research description

This study proposes a novel approach where the eHMI is displayed as an augmented layer that the VRU accesses using AR glasses. The eHMI is therefore no longer exclusively present on the AV or street infrastructure, but rather on the VRU. Information pertaining to the AV’s status and intentions will be displayed as head-up display (HUD) information on the user’s augmented layer.

It is envisioned that this approach may alleviate current problems in VRU-AV interaction using eHMIs. The first of these is ambiguity in scenarios where an AV attempts to communicate with multiple VRUs or vice versa: VRUs are uncertain whether a particular AV yielded to them, to another VRU, or stopped for an entirely different reason [48], [49]. Since the eHMI would be displayed to each VRU individually, each VRU would receive a separate message indicating whether a particular AV is communicating directly with them. Secondly, the problem of VRUs not deciphering visual eHMIs in time [15] and the language barrier problem [10], [11], [20], [27], [29], [32], [48]–[51] would also potentially be alleviated, since the system would allow various customisations to the HUD, such as the user’s preferred information modality (i.e., text or iconography) and language. Furthermore, this approach may reduce the cognitive load that VRUs face when encountering eHMI designs that vary by car brand, since an individual VRU would experience the same eHMI for every AV. This complies with the vision of standardising eHMIs and having a universal system for every car [49].
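To make the personalisation idea concrete, the following is a minimal sketch of how an AR layer could select a message variant per user. It is purely illustrative: the `VRUProfile` and `render_hud_message` names, the message catalogue and the two-preference (modality and language) model are assumptions for this sketch, not part of any existing system.

```python
from dataclasses import dataclass

# Hypothetical message catalogue, keyed by AV intent, language, and modality.
# A real system would fetch these from the AV or a standardised service.
MESSAGES = {
    ("yielding", "en", "text"): "Vehicle 12 is yielding to you",
    ("yielding", "en", "icon"): "✓🚶",
    ("yielding", "nl", "text"): "Voertuig 12 stopt voor u",
    ("not_yielding", "en", "text"): "Vehicle 12 is NOT stopping",
}

@dataclass
class VRUProfile:
    """Per-user HUD preferences stored on the AR glasses."""
    language: str = "en"
    modality: str = "text"  # "text" or "icon"

def render_hud_message(intent: str, profile: VRUProfile) -> str:
    """Pick the variant matching the user's preferences,
    falling back to English text if no variant exists."""
    key = (intent, profile.language, profile.modality)
    return MESSAGES.get(key, MESSAGES.get((intent, "en", "text"), ""))

# Two pedestrians receive the same AV intent, each in their preferred format.
print(render_hud_message("yielding", VRUProfile(language="nl")))   # → Voertuig 12 stopt voor u
print(render_hud_message("yielding", VRUProfile(modality="icon"))) # → ✓🚶
```

The point of the sketch is that the same AV intent is broadcast once, while each VRU’s own device resolves it into their preferred modality and language, which is what makes the individualised, standard-looking eHMI possible.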

The proposed solution not only offers a novel way of displaying the eHMI to the VRU but also enables a new testing paradigm whereby multiple VRUs can simultaneously interact with the same AV while experiencing distinct eHMIs. Such simultaneous testing could yield a better understanding of VRU-AV interaction, since multiple VRUs would be present in the urban environment at once, thereby more closely representing the real-world scenario.

Lastly, AR offers a novel approach to validating VR simulators by contrasting VRU behaviour in the simulator with behaviour in the real world. Utilising real AVs has ethical implications, while Wizard of Oz cars are costly and lengthy to reset between experiments. With AR, virtual cars can be superimposed on the real world, so testing can be conducted anywhere and efficiently, with the potential to cut experiment cost and time. Unlike AR eHMI designs, AR-simulated cars for pedestrian behaviour experiments have already been proposed [52], [53].

Therefore, the proposed approach is threefold: it proposes a novel AR eHMI design, a way to test multiple VRUs simultaneously in VR, and a way to validate the simulator in the real world using AR technology. The proposed work plan commences with interviews with field experts to gauge whether AR has a place in AV-VRU interaction research, while exploring design directions, possibilities, and general future directions. Following this, a group user-centred design exercise would be conducted to generate user-preferred interfaces for the AR layer. Chosen designs would then be implemented and tested in a virtual environment to gauge their effectiveness in various VRU-AV scenarios (yielding, crossing, etc.). Other experiments analysing the effectiveness of combining the AR eHMI with physical eHMIs on the car and/or street infrastructure would also be conducted, in order to identify which non-AR eHMI is best suited for VRUs who are not wearing AR glasses. These experiments would be conducted in a VR pedestrian simulator, developed specifically to allow AR or hybrid eHMIs to be evaluated simultaneously. The final research step would be to validate the simulator by repeating the experiments in the real domain; in this case, the effectiveness of generating and augmenting virtual cars for eHMI testing would also be gauged. Although this approach faces a number of challenges, both technical and related to the acceptance of wearable devices, it is believed that it would make a contribution to the sector, as the effectiveness of an upcoming modality would be assessed and analysed.


To come…

My publications

To come…

References and links

[1]        J. Zhang, “Evaluation of an Autonomous Vehicle External Communication System Concept: A Survey Study,” Adv. Hum. Asp. Transp., vol. 1, pp. 242–250, 2017.

[2]        T. Petzoldt, K. Schleinitz, and R. Banse, “Potential safety effects of a frontal brake light for motor vehicles,” IET Intell. Transp. Syst., vol. 12, no. 6, pp. 449–453, 2018.

[3]        K. Mahadevan, S. Somanath, and E. Sharlin, “Communicating awareness and intent in autonomous vehicle-pedestrian interaction,” Conf. Hum. Factors Comput. Syst. – Proc., vol. 2018-April, pp. 1–12, 2018.

[4]        A. Habibovic et al., “Communicating Intent of Automated Vehicles to Pedestrians,” Front. Psychol., vol. 9, no. August, 2018.

[5]        L. Fridman, B. Mehler, L. Xia, Y. Yang, L. Y. Facusse, and B. Reimer, “To Walk or Not to Walk: Crowdsourced Assessment of External Vehicle-to-Pedestrian Displays,” 2017.

[6]        C. Ackermann, M. Beggiato, L.-F. Bluhm, and J. Krems, “Vehicle Movement and its Potential as Implicit Communication Signal for Pedestrians and Automated Vehicles,” Proc. 6th Humanist Conf., no. June, pp. 1–7, 2018.

[7]        C. M. Chang, K. Toda, T. Igarashi, M. Miyata, and Y. Kobayashi, “A video-based study comparing communication modalities between an autonomous car and a pedestrian,” Adjun. Proc. – 10th Int. ACM Conf. Automot. User Interfaces Interact. Veh. Appl. AutomotiveUI 2018, pp. 104–109, 2018.

[8]        V. Charisi, A. Habibovic, J. Andersson, J. Li, and V. Evers, “Children’s views on identification and intention communication of self-driving vehicles,” IDC 2017 – Proc. 2017 ACM Conf. Interact. Des. Child., pp. 399–404, 2017.

[9]        M. P. Böckle, M. Klingegard, A. Habibovic, and M. Bout, “SAV2P – Exploring the impact of an interface for shared automated vehicles on pedestrians’ experience,” AutomotiveUI 2017 – 9th Int. ACM Conf. Automot. User Interfaces Interact. Veh. Appl. Adjun. Proc., no. October, pp. 136–140, 2017.

[10]      A. Dietrich, J.-H. Willrodt, K. Wagner, and K. Bengler, “Projection-based external human-machine interfaces – enabling interaction between automated vehicles and pedestrian,” Proc. Driv. Simul. Conf. Eur., no. September, pp. 43–50, 2018.

[11]      K. de Clercq, A. Dietrich, J. P. Núñez Velasco, J. de Winter, and R. Happee, “External Human-Machine Interfaces on Automated Vehicles: Effects on Pedestrian Crossing Decisions,” Hum. Factors, vol. 61, no. 8, pp. 1353–1370, 2019.

[12]      F. Weber, R. Chadowitz, K. Schmidt, J. Messerschmidt, and T. Fuest, “Crossing the Street Across the Globe: A Study on the Effects of eHMI on Pedestrians in the US, Germany and China,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, vol. 11596 LNCS, pp. 515–530.

[13]      Nissan, “IDS Concept – Experience Nissan | Nissan,” 2015. [Online]. Available: [Accessed: 07-Apr-2020].

[14]      Ford, “Ford, Virginia Tech Go Undercover to Develop Signals That Enable Autonomous Vehicles to Communicate with People | Ford Media Center,” 2017. [Online]. Available: [Accessed: 07-Apr-2020].

[15]      M. Clamann, “Evaluation of Vehicle-to-Pedestrian Communication Displays for Autonomous Vehicles,” pp. 1–10, 2016.

[16]      S. Deb, L. J. Strawderman, and D. W. Carruth, “Investigating pedestrian suggestions for external features on fully autonomous vehicles: A virtual reality experiment,” Transp. Res. Part F Traffic Psychol. Behav., vol. 59, pp. 135–149, 2018.

[17]      C. R. Hudson, Pedestrian Perception of Autonomous Vehicles with External Interacting Features, vol. 781. Springer International Publishing, 2019.

[18]      I. Othersen, A. S. Conti-Kufner, A. Dietrich, P. Maruhn, and K. Bengler, “Designing for Automated Vehicle and Pedestrian Communication: Perspectives on eHMIs from Older and Younger Persons,” Hum. Factors Ergon. Soc. Eur. Chapter 2018 Annu. Conf., vol. 4959, 2018.

[19]      Y. E. Song, C. Lehsing, T. Fuest, and K. Bengler, “External HMIs and their effect on the interaction between pedestrians and automated vehicles,” Adv. Intell. Syst. Comput., vol. 722, pp. 13–18, 2018.

[20]      C. Ackermann, M. Beggiato, S. Schubert, and J. F. Krems, “An experimental study to investigate design and assessment criteria: What is important for communication between pedestrians and automated vehicles?,” Appl. Ergon., vol. 75, no. November 2018, pp. 272–282, 2019.

[21]      Mercedes-Benz, “The Mercedes-Benz F 015 Luxury in Motion.,” 2015. [Online]. Available: [Accessed: 07-Apr-2020].

[22]      C. M. Chang, K. Toda, D. Sakamoto, and T. Igarashi, “Eyes on a car: An interface design for communication between an autonomous car and a pedestrian,” AutomotiveUI 2017 – 9th Int. ACM Conf. Automot. User Interfaces Interact. Veh. Appl. Proc., no. Figure 1, pp. 65–73, 2017.

[23]      Jaguar, “THE VIRTUAL EYES HAVE IT | JLR Corporate Website,” 2018. [Online]. Available: [Accessed: 07-Apr-2020].

[24]      Semcon, “The Smiling Car – Self driving car that sees you | Semcon,” 2016. [Online]. Available: [Accessed: 07-Apr-2020].

[25]      K. Mahadevan, E. Sanoubari, S. Somanath, J. E. Young, and E. Sharlin, “AV-pedestrian interaction design using a pedestrian mixed traffic simulator,” DIS 2019 – Proc. 2019 ACM Des. Interact. Syst. Conf., pp. 475–486, 2019.