Towards Understanding the Capability of Spatial Audio Feedback in Virtual Environments for People with Visual Impairments

Virtual Reality (VR) applications have been developed for numerous diverse fields, such as physical rehabilitation, education, and healthcare. Beneficiaries of VR technology include children, the elderly, and persons with physical and mobility impairments. One of VR technology's most important purposes is its use in safety-critical training, such as pilot training simulators. However, the great majority of VR research has been based on visual feedback, and very little work in this area has provided Virtual Environments (VEs) for persons with visual impairments, who are consequently unable to experience Virtual Reality. Therefore, given the dearth of studies of VEs as they apply to persons with visual impairments, this proposed research project investigates whether and how audio feedback can provide proper Human-Computer Interaction (HCI) methods for people with visual impairments in VEs.

The proposed project includes two phases. The first phase will address the question of whether current virtual audio techniques can provide enough useful feedback in VEs for people with visual impairments to adapt to them and map them onto their real-life experiences. For example, people with visual impairments can use their echolocation skills to estimate distance; they can also recognize the direction a sound is coming from and use its echo to determine the moving direction of an object. How could we build a virtual world that people with visual impairments can use? In the virtual world, we could use sound volume to simulate the distance between objects and the user, and we could use a Head-Related Transfer Function (HRTF) to simulate the positions of sound sources in the VE. However, almost all existing VEs use audio only as a supplement to visual feedback, and it is unknown whether audio feedback by itself can provide a useful sense of presence for visually impaired users. To investigate this, we will design a usability study to test whether users can obtain sufficient depth information (estimates of the distance between themselves and objects) and location information (estimates of where sound sources are, including direction and distance) from audio feedback alone. The hypotheses are: 1) there is no significant difference between depth estimates made in the real world and in the virtual world; 2) there is no significant difference between location estimates made in the real world and in the virtual world. We will collect both subjective questionnaires and objective data from the VEs to evaluate the study.
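As a concrete illustration of the volume-for-distance cue described above, here is a minimal Unity3D (C#) sketch; the component name, the listener reference, and the audible-range cutoff are illustrative assumptions rather than part of the proposal.

```csharp
using UnityEngine;

// Hypothetical sketch: attenuate an AudioSource's volume with distance from
// the listener so that loudness serves as a depth cue.
[RequireComponent(typeof(AudioSource))]
public class DistanceVolumeCue : MonoBehaviour
{
    public Transform listener;              // the user's head/camera (assumed reference)
    public float maxAudibleDistance = 10f;  // silent beyond this range (assumed cutoff)

    private AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    void Update()
    {
        float d = Vector3.Distance(listener.position, transform.position);
        // Linear rolloff clamped to [0, 1]; closer objects sound louder.
        source.volume = Mathf.Clamp01(1f - d / maxAudibleDistance);
    }
}
```

A linear mapping is only one possible choice; a study implementation might instead use Unity's built-in logarithmic rolloff, or an HRTF spatializer that renders both distance and direction.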

The second phase depends on at least one of the first phase's hypotheses being accepted. Its research question is what kinds of interaction methods we can use for users navigating in VEs with only audio feedback. To investigate this, we will use the HRTF technique to design three different interaction methods for navigating a VE: 1) Passive Objects Audio Feedback (POAF), in which users can hear audio feedback from a wall only when they are close to it (< 1.5 meters). 2) Active Objects Audio Feedback (AOAF), in which users can hear, at any time, audio feedback from the wall in front of them (< 15 meters); the audio feedback conveys the distance between the wall and the user. 3) Active Route Audio Feedback (ARAF), in which users can hear a sound source placed on the route they need to navigate; the audio feedback conveys the location of the sound source and its distance from the user. The hypotheses are: 1) participants prefer either of the active audio feedback conditions to the passive audio feedback condition; and 2) participants prefer ARAF to AOAF. We will again collect subjective questionnaires and objective data from the VEs to evaluate this usability study.
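The trigger logic for the three conditions might look like the following Unity3D (C#) sketch. The 1.5 m and 15 m thresholds come from the proposal; the class and field names, the wall collider layer, and the volume-for-distance mapping in AOAF are illustrative assumptions.

```csharp
using UnityEngine;

// Hypothetical sketch of the three feedback modes' trigger logic, attached
// to the user's avatar in the VE.
public class AudioFeedbackMode : MonoBehaviour
{
    public enum Mode { POAF, AOAF, ARAF }
    public Mode mode = Mode.POAF;

    public AudioSource wallSource;   // spatialized source for wall feedback (assumed)
    public AudioSource routeBeacon;  // sound source placed on the route (assumed)
    public LayerMask wallLayer;      // collider layer holding the walls (assumed)

    void Update()
    {
        switch (mode)
        {
            case Mode.POAF:
                // Passive: audible only when some wall is within 1.5 m.
                wallSource.mute = !Physics.CheckSphere(transform.position, 1.5f, wallLayer);
                break;

            case Mode.AOAF:
                // Active: report the wall ahead, up to 15 m, via a forward raycast.
                if (Physics.Raycast(transform.position, transform.forward,
                                    out RaycastHit hit, 15f, wallLayer))
                {
                    wallSource.mute = false;
                    // One possible mapping: louder as the wall gets closer (assumed).
                    wallSource.volume = Mathf.Clamp01(1f - hit.distance / 15f);
                }
                else
                {
                    wallSource.mute = true;
                }
                break;

            case Mode.ARAF:
                // Active route: keep the beacon playing; HRTF spatialization
                // conveys its direction and distance to the user.
                if (!routeBeacon.isPlaying) routeBeacon.Play();
                break;
        }
    }
}
```

How distance is encoded in the audio signal (volume, pitch, or pulse rate) would itself be a design variable in the actual study.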

The ultimate goal of this research is to allow people with visual impairments to use sound for perception and movement in VEs, and to provide guidelines for developers who design VEs for them. Potential applications of this work include using VR technology to design training environments and serious/sedentary games that build life skills and improve the daily activities of people with visual impairments.

Project Department: 

 
Project Status: 
 
Seeking Researchers
 
Researcher Requirements: 

The project includes four stages:

  1. Virtual environment development for the study (using Unity3D): the researcher needs to know how to create a 3D environment in Unity3D and to implement a Head-Related Transfer Function (HRTF) in it (see the sketch after this list). The project will use a whole-body motion capture system as an input device.
  2. Designing and running two or more usability studies with more than 40 participants and collecting the data
  3. Statistical analysis (using SPSS or another statistical analysis tool)
  4. Writing two or more publications
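For stage 1, here is a minimal sketch of how HRTF spatialization might be enabled on a Unity3D AudioSource; it assumes an HRTF-capable spatializer plugin has been selected in the project's audio settings, and the component name is hypothetical.

```csharp
using UnityEngine;

// Hypothetical sketch: configure an AudioSource so the selected spatializer
// (HRTF) renders its direction, with distance-based attenuation.
[RequireComponent(typeof(AudioSource))]
public class SpatializedSource : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;    // fully 3D: position drives panning/filtering
        source.spatialize = true;    // route through the selected spatializer plugin
        source.rolloffMode = AudioRolloffMode.Logarithmic; // distance attenuation
        source.loop = true;
        source.Play();
    }
}
```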

Students can choose to participate in one or more stages of this project. A weekly group meeting is required, at which the student researchers report progress or show demos and discuss problems and potential solutions.

Project Duration: 
 
Thursday, September 17, 2015 to Saturday, April 30, 2016