
Health Systems Institute
Georgia Institute of Technology
828 West Peachtree Street, NW
2nd Floor
Atlanta, GA 30332-0477
404.385.8193 (phone)
404.385.7452 (fax)



Adaptive, Multimodal Human-Computer Interfaces for the Visually Impaired

With the growing reach of the Internet and other information technology systems, there is a critical need for all citizens to be able to access information electronically. As a result, the concept of universal accessibility has emerged in the fields of human-computer interaction and interface design. To achieve universal access to electronic information technologies, designs must overcome the barriers perpetuated by traditional “one-size-fits-all” philosophies and accommodate people with disabilities, the elderly, and technologically unsophisticated users. Making information technologies universally accessible requires a paradigm shift in human-computer interaction (HCI): the burden of interpreting behavior must move from the human to the computer. One ever-growing population that stands to benefit greatly from this shift is people with impaired vision; by age 45, one in every six Americans will develop some form of uncorrectable visual impairment.

To move the burden of interpreting behavior from the human to the computer, the notion of adaptive interfaces has emerged. For an interface to adapt successfully to users who are visually impaired, it must be (see the sketch after this list):

  • intelligent, so that it can dynamically assess a person’s visual capabilities,
  • personalized, so that it can morph dynamically to accommodate multiple users representing the full range of visual capabilities, and
  • multimodal, so that it can accommodate visual, auditory, and haptic interaction modalities.
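
The profile stops at these three properties and publishes no implementation, so the Python below is purely an illustrative sketch pairing each property with a minimal mechanism. Every name in it (Modality, VisionProfile, AdaptiveInterface) and every threshold is a hypothetical stand-in, not part of the project.

```python
# Illustrative sketch only: all names and thresholds are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto


class Modality(Enum):
    """The three interaction modalities named in the project description."""
    VISUAL = auto()    # text, images, video, iconic graphics
    AUDITORY = auto()  # speech, vocalizations, other aural cues
    HAPTIC = auto()    # tactile feedback


@dataclass
class VisionProfile:
    """Per-user model of visual capability, in arbitrary 0.0-1.0 units."""
    acuity: float
    contrast_sensitivity: float

    def update(self, acuity: float, contrast: float) -> None:
        # "Intelligent": capability is reassessed at run time (e.g., from
        # task performance), rather than fixed by a one-time setting.
        self.acuity = acuity
        self.contrast_sensitivity = contrast


class AdaptiveInterface:
    """Chooses display modalities from the current user's profile."""

    def __init__(self, profile: VisionProfile) -> None:
        # "Personalized": one profile per user, so the same interface can
        # morph across the full range of visual capabilities.
        self.profile = profile

    def active_modalities(self) -> list[Modality]:
        # "Multimodal": fall back from visual to auditory/haptic output as
        # measured visual capability decreases. Thresholds are made up.
        if self.profile.acuity > 0.7:
            return [Modality.VISUAL]
        if self.profile.acuity > 0.2:
            return [Modality.VISUAL, Modality.AUDITORY]
        return [Modality.AUDITORY, Modality.HAPTIC]


if __name__ == "__main__":
    user = VisionProfile(acuity=0.15, contrast_sensitivity=0.3)
    ui = AdaptiveInterface(user)
    print(ui.active_modalities())  # auditory + haptic for low acuity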

The objective of this research is to develop the methodology and tools necessary to implement adaptive, multimodal human-computer interfaces that are personalized for individual users across the full spectrum of visual capabilities. The specific modes considered for display and control interactions are: (1) speech, vocalizations, and other aural information; (2) vision, including text, images, video, and computer graphics for iconic displays; and (3) haptics, including tactile information.
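
As a complementary sketch of the display side, the fragment below presents one piece of information through whichever of the three modes are active. The render function and its string-tag modes are hypothetical, and the speech and braille lines are print() stand-ins for a text-to-speech engine and a refreshable braille display, neither of which this profile specifies.

```python
# Hypothetical display dispatch; the back ends are print() stand-ins,
# not real TTS or braille APIs.
def render(message: str, modes: set[str]) -> None:
    """Present one message through every active interaction mode."""
    if "visual" in modes:
        print(f"[screen]  {message}")                       # text / iconic graphics
    if "auditory" in modes:
        print(f"[speech]  (TTS would say) {message}")       # aural output
    if "haptic" in modes:
        print(f"[braille] (display would show) {message}")  # tactile output

# A user with little usable vision gets auditory and haptic presentation.
render("2 new messages", {"auditory", "haptic"})
```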


Sponsor: Intel Research
