HCI Deep Dives

Author: Kai Kunze


Description

HCI Deep Dives is your go-to podcast for exploring the latest trends, research, and innovations in Human Computer Interaction (HCI). Auto-generated using the latest publications in the field, each episode dives into in-depth discussions on topics like wearable computing, augmented perception, cognitive augmentation, and digitalized emotions. Whether you’re a researcher, practitioner, or just curious about the intersection of technology and human senses, this podcast offers thought-provoking insights and ideas to keep you at the forefront of HCI.
47 Episodes
Our bodies experience a wide variety of kinesthetic forces as we go about our daily lives, including the weight of held objects, contact with surfaces, gravitational loads, and acceleration and centripetal forces while driving, to name just a few. These forces are crucial to realism, yet simply cannot be rendered with today’s consumer haptic suits, which primarily rely on arrays of vibration actuators built into vests. Rigid exoskeletons have more kinesthetic capability to apply forces directly to users’ joints, but are generally cumbersome to wear and cost many thousands of dollars. In this work, we present Kinethreads: a new full-body haptic exosuit design built around string-based motor-pulley mechanisms, which keeps our suit lightweight (<5kg), soft and flexible, quick-to-wear (<30 seconds), comparatively low-cost (~$400), and yet capable of rendering expressive, distributed, and forceful (up to 120N) effects. We detail our system design, implementation, and results from a multi-part performance evaluation and user study. Vivian Shen and Chris Harrison. 2025. Kinethreads: Soft Full-Body Haptic Exosuit using Low-Cost Motor-Pulley Mechanisms. In Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology (UIST '25). Association for Computing Machinery, New York, NY, USA, Article 1, 1–16. https://doi.org/10.1145/3746059.3747755
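The force figures above can be sanity-checked with simple pulley arithmetic. The sketch below is a back-of-the-envelope illustration, not the authors' design: in a string-and-spool mechanism, string tension is roughly motor torque divided by spool radius. Only the 120 N peak-force target comes from the abstract; the 5 mm spool radius is an assumed value.

```python
# Back-of-the-envelope model of a string-and-spool (motor-pulley)
# actuator: the tension a motor can put on the string is roughly its
# torque divided by the spool radius. The 5 mm radius is assumed for
# illustration; only the 120 N target comes from the paper.

def string_tension(torque_nm: float, spool_radius_m: float) -> float:
    """Tension (N) on the string for a given motor torque and spool radius."""
    return torque_nm / spool_radius_m

def torque_required(tension_n: float, spool_radius_m: float) -> float:
    """Motor torque (N*m) needed to reach a target string tension."""
    return tension_n * spool_radius_m

# Reaching a 120 N peak through a hypothetical 5 mm spool takes about
# 0.6 N*m of motor torque, well within hobby-motor territory.
print(torque_required(120.0, 0.005))
```

The inverse relationship also shows the design trade-off: a smaller spool multiplies force but pays for it in string speed per motor revolution.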
Xiaru Meng, Yulan Ju, Christopher Changmok Kim, Yan He, Giulia Barbareschi, Kouta Minamizawa, Kai Kunze, and Matthias Hoppe. 2025. A Placebo Concert: The Placebo Effect for Visualization of Physiological Audience Data during Experience Recreation in Virtual Reality. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 807, 1–16. https://doi.org/10.1145/3706598.3713594 A core use case for Virtual Reality applications is recreating real-life scenarios for training or entertainment. Promoting physiological responses for users in VR that match those of real-life spectators can maximize engagement and contribute to greater co-presence. Current research focuses on visualizations and measurements of physiological data to ensure experience accuracy. However, placebo effects are known to influence performance and self-perception in HCI studies, creating a need to investigate the effect of visualizing different types of data (real, unmatched, and fake) on user perception during event recreation in VR. We investigate these conditions through a balanced between-groups study (n=44) of uninformed and informed participants. The informed group was provided with the information that the data visualizations represented previously recorded human physiological data. Our findings reveal a placebo effect, where the informed group demonstrated enhanced engagement and co-presence. Additionally, the fake data condition in the informed group evoked a positive emotional response.
Perceiving and altering the sensation of internal physiological states, such as heartbeats, is key for biofeedback and interoception. Yet, wearable devices used for this purpose can feel intrusive and typically fail to deliver stimuli aligned with the heart’s location in the chest. To address this, we introduce Heartbeat Resonance, which uses low-frequency sound waves to create non-contact haptic sensations in the chest cavity, mimicking heartbeats. We conduct two experiments to evaluate the system’s effectiveness. The first experiment shows that the system created realistic heartbeat sensations in the chest, with 78.05 Hz being the most effective frequency. In the second experiment, we evaluate the effects of entrainment by simulating faster and slower heart rates. Participants perceived the intended changes and reported high confidence in their perceptions for +15% and -30% heart rates. This system offers a non-intrusive solution for biofeedback while creating new possibilities for immersive VR environments. Waseem Hassan, Liyue Da, Sonia Elizondo, and Kasper Hornbæk. 2025. Heartbeat Resonance: Inducing Non-contact Heartbeat Sensations in the Chest. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 913, 1–22. https://doi.org/10.1145/3706598.3713959
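As a toy illustration of the kind of stimulus described, the sketch below synthesizes beat-like bursts of a 78.05 Hz sine wave at a chosen heart rate. Only the 78.05 Hz figure comes from the abstract; the sample rate, burst length, and envelope are assumptions, and this is not the authors' system.

```python
# Illustrative sketch (not the paper's implementation): heartbeat-like
# bursts of a 78.05 Hz carrier, repeated at a target heart rate.
# Sample rate, burst duration, and the Hann envelope are assumed values.
import math

SAMPLE_RATE = 44100
CARRIER_HZ = 78.05  # most effective frequency per the first experiment

def heartbeat_burst(duration_s=0.15):
    """One 'beat': a short carrier burst shaped by a Hann envelope."""
    n = int(SAMPLE_RATE * duration_s)
    return [
        math.sin(2 * math.pi * CARRIER_HZ * i / SAMPLE_RATE)
        * 0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))  # Hann window
        for i in range(n)
    ]

def heartbeat_track(bpm=60, seconds=2.0):
    """Repeat bursts at the target heart rate, padding with silence."""
    period = int(SAMPLE_RATE * 60 / bpm)
    beat = heartbeat_burst()
    samples = []
    while len(samples) < int(SAMPLE_RATE * seconds):
        samples.extend(beat)
        samples.extend([0.0] * (period - len(beat)))
    return samples[: int(SAMPLE_RATE * seconds)]
```

Varying the `bpm` argument (e.g. +15% or -30% of the user's resting rate) mirrors the entrainment manipulation described in the second experiment.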
To enhance focused eating and dining socialization, previous Human-Food Interaction research has indicated that external devices can support these dining objectives and immersion. However, methods that focus on the food itself and the diners themselves have remained underdeveloped. In this study, we integrated biofeedback with food, using diners’ heart rates to drive the food’s appearance and promote focused eating and dining socialization. By employing LED lights, we dynamically displayed diners’ real-time physiological signals through the transparency of the food. Results revealed significant effects on various aspects of dining immersion, such as awareness perceptions, attractiveness, attentiveness to each bite, and emotional bonds with the food. Furthermore, to promote dining socialization, we established a “Sharing Bio-Sync Food” dining system to strengthen emotional connections between diners. Based on these findings, we developed tableware that integrates biofeedback into the culinary experience. Weijen Chen, Qingyuan Gao, Zheng Hu, Kouta Minamizawa, and Yun Suen Pai. 2025. Living Bento: Heartbeat-Driven Noodles for Enriched Dining Dynamics. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 353, 1–18. https://doi.org/10.1145/3706598.3713108
When several individuals collaborate on a shared task, their brain activities often synchronize. This phenomenon, known as Inter-brain Synchronization (IBS), is notable for inducing prosocial outcomes such as enhanced interpersonal feelings, including closeness, trust, empathy, and more. Further strengthening the IBS with the aid of external feedback would be beneficial for scenarios where those prosocial feelings play a vital role in interpersonal communication, such as rehabilitation between a therapist and a patient, motor skill learning between a teacher and a student, and group performance art. This paper investigates whether visual, auditory, and haptic feedback of the IBS level can further enhance its intensity, offering design recommendations for feedback systems in IBS. We report findings when three different types of feedback were provided: IBS level feedback by means of on-body projection mapping, sonification using chords, and vibration bands attached to the wrist. Jamie Ngoc Dinh, Snehesh Shrestha, You-Jin Kim, Jun Nishida, and Myungin Lee. 2025. NeuResonance: Exploring Feedback Experiences for Fostering the Inter-brain Synchronization. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 363, 1–16. https://doi.org/10.1145/3706598.3713872
Yulan Ju, Xiaru Meng, Harunobu Taguchi, Tamil Selvan Gunasekaran, Matthias Hoppe, Hironori Ishikawa, Yoshihiro Tanaka, Yun Suen Pai, and Kouta Minamizawa. 2025. Haptic Empathy: Investigating Individual Differences in Affective Haptic Communications. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 501, 1–25. https://doi.org/10.1145/3706598.3714139 Nowadays, touch remains essential for emotional conveyance and interpersonal communication as more interactions are mediated remotely. While many studies have discussed the effectiveness of using haptics to communicate emotions, incorporating affect into haptic design still faces challenges due to individual user tactile acuity and preferences. We assessed the conveying of emotions using a two-channel haptic display, emphasizing individual differences. First, 24 participants generated 187 haptic messages reflecting their immediate sentiments after watching 8 emotionally charged film clips. Afterwards, 19 participants were asked to identify emotions from haptic messages designed by themselves and others, yielding 593 samples. Our findings suggest potential links between haptic message decoding ability and emotional traits, particularly Emotional Competence (EC) and Affect Intensity Measure (AIM). Additionally, qualitative analysis revealed three strategies participants used to create touch messages: perceptive, empathetic, and metaphorical expression.
Riku Kitamura, Kenji Yamada, Takumi Yamamoto, and Yuta Sugiura. 2025. Ambient Display Utilizing Anisotropy of Tatami. In Proceedings of the Nineteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '25). Association for Computing Machinery, New York, NY, USA, Article 3, 1–15. https://doi.org/10.1145/3689050.3704924 Recently, digital displays such as liquid crystal displays and projectors have enabled high-resolution and high-speed information transmission. However, their artificial appearance can sometimes detract from natural environments and landscapes. In contrast, ambient displays, which transfer information to the entire physical environment, have gained attention for their ability to blend seamlessly into living spaces. This study aims to develop an ambient display that harmonizes with traditional Japanese tatami rooms by proposing an information presentation method using tatami mats. By leveraging the anisotropic properties of tatami, which change their reflective characteristics according to viewing angles and light source positions, various images and animations can be represented. We quantitatively evaluated the color change of tatami using color difference. Additionally, we created both static and dynamic displays as information presentation methods using tatami.
Yuhan Hu, Peide Huang, Mouli Sivapurapu, and Jian Zhang. 2025. ELEGNT: Expressive and Functional Movement Design for Non-anthropomorphic Robot. arXiv preprint arXiv:2501.12493. https://arxiv.org/abs/2501.12493 Nonverbal behaviors such as posture, gestures, and gaze are essential for conveying internal states, both consciously and unconsciously, in human interaction. For robots to interact more naturally with humans, robot movement design should likewise integrate expressive qualities—such as intention, attention, and emotions—alongside traditional functional considerations like task fulfillment, spatial constraints, and time efficiency. In this paper, we present the design and prototyping of a lamp-like robot that explores the interplay between functional and expressive objectives in movement design. Using a research-through-design methodology, we document the hardware design process, define expressive movement primitives, and outline a set of interaction scenario storyboards. We propose a framework that incorporates both functional and expressive utilities during movement generation, and implement the robot behavior sequences in different function- and social-oriented tasks. Through a user study comparing expression-driven versus function-driven movements across six task scenarios, our findings indicate that expression-driven movements significantly enhance user engagement and perceived robot qualities. This effect is especially pronounced in social-oriented tasks.
K. Brandstätter, B. J. Congdon and A. Steed, "Do you read me? (E)motion Legibility of Virtual Reality Character Representations," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bellevue, WA, USA, 2024, pp. 299-308, doi: 10.1109/ISMAR62088.2024.00044. We compared the body movements of five virtual reality (VR) avatar representations in a user study (N=53) to ascertain how well these representations could convey body motions associated with different emotions: one head-and-hands representation using only tracking data, one upper-body representation using inverse kinematics (IK), and three full-body representations using IK, motion capture, and the state-of-the-art deep-learning model AGRoL. Participants’ emotion detection accuracies were similar for the IK and AGRoL representations, highest for the full-body motion-capture representation and lowest for the head-and-hands representation. Our findings suggest that from the perspective of emotion expressivity, connected upper-body parts that provide visual continuity improve clarity, and that current techniques for algorithmically animating the lower body are ineffective. In particular, the deep-learning technique studied did not produce more expressive results, suggesting the need for training data specifically made for social VR applications. https://ieeexplore.ieee.org/document/10765392
The Oscar best picture winning movie CODA has helped introduce Deaf culture to many in the hearing community. The capital "D" in Deaf is used when referring to the Deaf culture, whereas small "d" deaf refers to the medical condition. In the Deaf community, sign language is used to communicate, and sign has a rich history in film, the arts, and education. Learning about the Deaf culture in the United States and the importance of American Sign Language in that culture has been key to choosing projects that are useful and usable for the Deaf.   
J. Lee et al., "Whirling Interface: Hand-based Motion Matching Selection for Small Target on XR Displays," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bellevue, WA, USA, 2024, pp. 319-328, doi: 10.1109/ISMAR62088.2024.00046. We introduce “Whirling Interface,” a selection method for XR displays using bare-hand motion matching gestures as an input technique. We extend the motion matching input method, by introducing different input states to provide visual feedback and guidance to the users. Using the wrist joint as the primary input modality, our technique reduces user fatigue and improves performance while selecting small and distant targets. In a study with 16 participants, we compared the whirling interface with a standard ray casting method using hand gestures. The results demonstrate that the Whirling Interface consistently achieves high success rates, especially for distant targets, averaging 95.58% with a completion time of 5.58 seconds. Notably, it requires a smaller camera sensing field of view of only 21.45° horizontally and 24.7° vertically. Participants reported lower workloads on distant conditions and expressed a higher preference for the Whirling Interface in general. These findings suggest that the Whirling Interface could be a useful alternative input method for XR displays with a small camera sensing FOV or when interacting with small targets. https://ieeexplore.ieee.org/abstract/document/10765156
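The general idea behind motion-matching selection, picking the target whose on-screen motion best correlates with the user's own movement, can be sketched in a few lines. This is an illustrative toy, not the authors' implementation; the oscillation frequencies and window length are assumptions.

```python
# Illustrative sketch of motion-matching selection (not the paper's
# system): each candidate target oscillates with a distinct frequency,
# and the target whose recent trajectory correlates best with the
# user's wrist motion is selected.
import math

def pearson(a, b):
    """Pearson correlation between two equal-length motion traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def match_target(wrist_trace, target_traces):
    """Return the index of the target whose motion best matches the wrist."""
    scores = [pearson(wrist_trace, t) for t in target_traces]
    return max(range(len(scores)), key=scores.__getitem__)
```

Because selection depends on correlation over time rather than precise pointing, the approach tolerates small targets and a narrow sensing field of view, which is the property the study exploits.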
Z. Chang et al., "Perceived Empathy in Mixed Reality: Assessing the Impact of Empathic Agents’ Awareness of User Physiological States," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bellevue, WA, USA, 2024, pp. 406-415, doi: 10.1109/ISMAR62088.2024.00055. In human-agent interaction, establishing trust and a social bond with the agent is crucial to improving communication quality and performance in collaborative tasks. This paper investigates how a Mixed Reality Agent’s (MiRA) ability to acknowledge a user’s physiological state affects perceptions such as empathy, social connectedness, presence, and trust. In a within-subject study with 24 subjects, we varied the companion agent’s awareness during a mixed-reality first-person shooting game. Three agents provided feedback based on the users’ physiological states: (1) No Awareness Agent (NAA), which did not acknowledge the user’s physiological state; (2) Random Awareness Agent (RAA), offering feedback with varying accuracy; and (3) Accurate Awareness Agent (AAA), which provided consistently accurate feedback. Subjects reported higher scores on perceived empathy, social connectedness, presence, and trust with AAA compared to RAA and NAA. Interestingly, despite exceeding NAA in perception scores, RAA was the least favored as a companion. The findings and implications for the design of MiRA interfaces are discussed, along with the limitations of the study and directions for future work. https://ieeexplore.ieee.org/document/10765390
Uğur Genç and Himanshu Verma. 2024. Situating Empathy in HCI/CSCW: A Scoping Review. Proc. ACM Hum.-Comput. Interact. 8, CSCW2, Article 513 (November 2024), 37 pages. https://doi.org/10.1145/3687052 Empathy is considered a crucial construct within HCI and CSCW, yet our understanding of this complex concept remains fragmented and lacks consensus in existing research. In this scoping review of 121 articles from the ACM Digital Library, we synthesize the diverse perspectives on empathy and scrutinize its current conceptualization and operationalization. In particular, we examine the various interpretations and definitions of empathy, its applications, and the methodologies, findings, and trends in the field. Our analysis reveals a lack of consensus on the definitions and theoretical underpinnings of empathy, with interpretations ranging from understanding the experiences of others to an affective response to the other's situation. We observed that despite the variety of methods used to gauge empathy, the predominant approach remains self-assessed instruments, highlighting the lack of novel and rigorously established and validated measures and methods to capture the multifaceted manifestations of empathy. Furthermore, our analysis shows that previous studies have used a variety of approaches to elicit empathy, such as experiential methods and situational awareness. These approaches have demonstrated that shared stressful experiences promote community support and relief, while situational awareness promotes empathy through increased helping behavior. Finally, we discuss a) the potential and drawbacks of leveraging empathy to shape interactions and guide design practices, b) the need to find a balance between the collective focus of empathy and the (existing and dominant) focus on the individual, and c) the careful testing of empathic designs and technologies with real-world applications.
Isna Alfi Bustoni, Mark McGill, and Stephen Anthony Brewster. 2024. Exploring the Alteration and Masking of Everyday Noise Sounds using Auditory Augmented Reality. In Proceedings of the 26th International Conference on Multimodal Interaction (ICMI '24). Association for Computing Machinery, New York, NY, USA, 154–163. https://doi.org/10.1145/3678957.3685750 While noise-cancelling headphones can block out or mask environmental noise with digital sound, this costs the user situational awareness and information. With the advancement of acoustically transparent personal audio devices (e.g. headphones, open-ear audio frames), Auditory Augmented Reality (AAR), and real-time audio processing, it is feasible to preserve user situational awareness and relevant information whilst diminishing the perception of the noise. Through an online survey (n=124), this research explored users’ attitudes and preferred AAR strategy (keep the noise, make the noise more pleasant, obscure the noise, reduce the noise, remove the noise, and replace the noise) toward different types of noises from a range of categories (living beings, mechanical, and environmental) and varying degrees of relevance. It was discovered that respondents’ degrees of annoyance varied according to the kind of noise and its relevance to them. Additionally, respondents had a strong tendency to reduce irrelevant noise and retain more relevant noise. Based on our findings, we discuss how AAR can assist users in coping with noise whilst retaining relevant information through selectively suppressing or altering the noise, as appropriate.
Pratheep Kumar Chelladurai, Ziming Li, Maximilian Weber, Tae Oh, and Roshan L Peiris. 2024. SoundHapticVR: Head-Based Spatial Haptic Feedback for Accessible Sounds in Virtual Reality for Deaf and Hard of Hearing Users. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '24). Association for Computing Machinery, New York, NY, USA, Article 31, 1–17. https://doi.org/10.1145/3663548.3675639 Virtual Reality (VR) systems use immersive spatial audio to convey critical information, but these audio cues are often inaccessible to Deaf or Hard-of-Hearing (DHH) individuals. To address this, we developed SoundHapticVR, a head-based haptic system that converts audio signals into haptic feedback using multi-channel acoustic haptic actuators. We evaluated SoundHapticVR through three studies: determining the maximum tactile frequency threshold on different head regions for DHH users, identifying the ideal number and arrangement of transducers for sound localization, and assessing participants’ ability to differentiate sound sources with haptic patterns. Findings indicate that tactile perception thresholds vary across head regions, necessitating consistent frequency equalization. Adding a front transducer significantly improved sound localization, and participants could correlate distinct haptic patterns with specific objects. Overall, this system has the potential to make VR applications more accessible to DHH users.
Giulia Barbareschi, Ando Ryoichi, Midori Kawaguchi, Minato Takeda, and Kouta Minamizawa. 2024. SeaHare: An omnidirectional electric wheelchair integrating independent, remote and shared control modalities. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '24). Association for Computing Machinery, New York, NY, USA, Article 9, 1–16. https://doi.org/10.1145/3663548.3675657 Depending on one’s needs, electric wheelchairs can feature different interfaces and driving paradigms, with control handed to the user, a remote pilot, or shared. However, these systems have generally been implemented on separate wheelchairs, making comparison difficult. We present the design of an omnidirectional electric wheelchair that can be controlled using two sensing seats detecting changes in the centre of gravity. One of the sensing seats is used by the person on the wheelchair, whereas the other is used as a remote control by a second person. We explore the use of the wheelchair under different control paradigms (independent, remote, and shared) from both the wheelchair and the remote control seat with 5 dyads and 1 triad of participants, including wheelchair users and non-users. Results highlight key advantages and disadvantages of the SeaHare in different paradigms, with participants’ perceptions affected by their skills and lived experiences, and reflections on how different control modes might suit different scenarios.
Giulia Barbareschi, Songchen Zhou, Ando Ryoichi, Midori Kawaguchi, Mark Armstrong, Mikito Ogino, Shunsuke Aoiki, Eisaku Ohta, Harunobu Taguchi, Youichi Kamiyama, Masatane Muto, Kentaro Yoshifuji, and Kouta Minamizawa. 2024. Brain Body Jockey project: Transcending Bodily Limitations in Live Performance via Human Augmentation. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '24). Association for Computing Machinery, New York, NY, USA, Article 18, 1–14. https://doi.org/10.1145/3663548.3675621 Musicians with significant mobility limitations face unique challenges in using their bodies to interact with fans during live performances. In this paper we present the results of a collaboration between a professional DJ with advanced Amyotrophic Lateral Sclerosis and a group of technologists and researchers, culminating in two public live performances leveraging human augmentation technologies to enhance the artist’s stage presence. Our system combines a Brain Machine Interface and an accelerometer-based trigger to select pre-programmed moves performed by robotic arms during a live event, as well as to facilitate direct physical interaction during a “Meet the DJ” event. Our evaluation includes ethnographic observations and interviews with the artist and members of the audience. Results show that the system allowed artist and audience to feel a sense of unity, expanded the imagination of creative possibilities, and challenged conventional perceptions of disability in the arts and beyond.
F. Chiossi, I. Trautmannsheimer, C. Ou, U. Gruenefeld and S. Mayer, "Searching Across Realities: Investigating ERPs and Eye-Tracking Correlates of Visual Search in Mixed Reality," in IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 11, pp. 6997-7007, Nov. 2024, doi: 10.1109/TVCG.2024.3456172. Mixed Reality allows us to integrate virtual and physical content into users' environments seamlessly. Yet, how this fusion affects perceptual and cognitive resources and our ability to find virtual or physical objects remains uncertain. Displaying virtual and physical information simultaneously might lead to divided attention and increased visual complexity, impacting users' visual processing, performance, and workload. In a visual search task, we asked participants to locate virtual and physical objects in Augmented Reality and Augmented Virtuality to understand the effects on performance. We evaluated search efficiency and attention allocation for virtual and physical objects using event-related potentials, fixation and saccade metrics, and behavioral measures. We found that users were more efficient in identifying objects in Augmented Virtuality, while virtual objects gained saliency in Augmented Virtuality. This suggests that visual fidelity might increase the perceptual load of the scene. Reduced amplitude in distractor positivity ERP, and fixation patterns supported improved distractor suppression and search efficiency in Augmented Virtuality. We discuss design implications for mixed reality adaptive systems based on physiological inputs for interaction. https://ieeexplore.ieee.org/document/10679197  
S. Cheng, Y. Liu, Y. Gao and Z. Dong, "“As if it were my own hand”: inducing the rubber hand illusion through virtual reality for motor imagery enhancement," in IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 11, pp. 7086-7096, Nov. 2024, doi: 10.1109/TVCG.2024.3456147. Brain-computer interfaces (BCI) are widely used in the field of disability assistance and rehabilitation, and virtual reality (VR) is increasingly used for visual guidance of BCI-MI (motor imagery). Therefore, how to improve the quality of electroencephalogram (EEG) signals for MI in VR has emerged as a critical issue. People can perform MI more easily when they visualize the hand used for visual guidance as their own, and the Rubber Hand Illusion (RHI) can increase people's ownership of the prosthetic hand. We proposed to induce RHI in VR to enhance participants' MI ability and designed five methods of inducing RHI: active movement, haptic stimulation, passive movement, active movement mixed with haptic stimulation, and passive movement mixed with haptic stimulation. We constructed a first-person training scenario to train participants' MI ability through the five induction methods. The experimental results showed that through the training, the participants' feeling of ownership of the virtual hand in VR was enhanced, and the MI ability was improved. Among them, the method mixing active movement and tactile stimulation proved to have a good effect on enhancing MI. Finally, we developed a BCI system in VR utilizing the above training method, and the performance of the participants improved after the training. This also suggests that our proposed method is promising for future application in BCI rehabilitation systems. https://ieeexplore.ieee.org/document/10669780
Pavel Manakhov, Ludwig Sidenmark, Ken Pfeuffer, and Hans Gellersen. 2024. Filtering on the Go: Effect of Filters on Gaze Pointing Accuracy During Physical Locomotion in Extended Reality. IEEE Transactions on Visualization and Computer Graphics 30, 11 (Nov. 2024), 7234–7244. https://doi.org/10.1109/TVCG.2024.3456153 Eye tracking filters have been shown to improve accuracy of gaze estimation and input for stationary settings. However, their effectiveness during physical movement remains underexplored. In this work, we compare common online filters in the context of physical locomotion in extended reality and propose alterations to improve them for on-the-go settings. We conducted a computational experiment where we simulate performance of the online filters using data on participants attending visual targets located in world-, path-, and two head-based reference frames while standing, walking, and jogging. Our results provide insights into the filters' effectiveness and factors that affect it, such as the amount of noise caused by locomotion and differences in compensatory eye movements, and demonstrate that filters with saccade detection prove most useful for on-the-go settings. We discuss the implications of our findings and conclude with guidance on gaze data filtering for interaction in extended reality. https://ieeexplore.ieee.org/document/10672561
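The filtering idea the study highlights, combining smoothing with saccade detection, can be sketched as follows. This is an illustrative toy filter, not one of the filters evaluated in the paper; the smoothing factor and velocity threshold are assumed values.

```python
# Illustrative sketch (not the paper's implementation): exponential
# smoothing of gaze samples that resets on detected saccades, so the
# output suppresses fixation jitter without lagging behind rapid eye
# movements. Alpha and the velocity threshold are assumptions.
import math

class SaccadeAwareFilter:
    def __init__(self, alpha=0.1, saccade_threshold_deg_s=130.0):
        self.alpha = alpha                  # smoothing factor for fixations
        self.threshold = saccade_threshold_deg_s
        self.prev_raw = None                # last raw gaze sample (deg)
        self.state = None                   # filtered output (deg)

    def update(self, x, y, dt):
        """Feed one gaze sample (in degrees) and the time step in seconds."""
        if self.prev_raw is None:
            self.prev_raw, self.state = (x, y), (x, y)
            return self.state
        # Angular velocity between consecutive raw samples
        vel = math.hypot(x - self.prev_raw[0], y - self.prev_raw[1]) / dt
        self.prev_raw = (x, y)
        if vel > self.threshold:
            # Saccade: jump to the new sample instead of lagging behind
            self.state = (x, y)
        else:
            # Fixation: smooth out tracker noise and locomotion jitter
            sx, sy = self.state
            self.state = (sx + self.alpha * (x - sx),
                          sy + self.alpha * (y - sy))
        return self.state
```

During locomotion the jitter amplitude grows, so a practical on-the-go variant would likely adapt `alpha` and the threshold to the noise level rather than keep them fixed.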