Title: Appliance displays: accessibility challenges and proposed solutions
Legend: Sample LED display containing a string of digits, each of which is a standard seven-segment digit. For each digit, our algorithm estimates its bounding box (shown in white) and reads the recognized value aloud using text-to-speech (recognition results shown in yellow).
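For readers unfamiliar with the display format, the canonical seven-segment encoding can be sketched in a few lines. This is purely illustrative of how segment patterns map to digits; it is not the Display Reader algorithm itself, which recognizes digits from camera images using computer vision. The segment labels a–g and the function names are assumptions for the sketch.

```python
# Canonical seven-segment encoding. Segments are conventionally labeled:
#   a = top, b = upper right, c = lower right, d = bottom,
#   e = lower left, f = upper left, g = middle.
DIGIT_SEGMENTS = {
    frozenset("abcdef"): "0",
    frozenset("bc"): "1",
    frozenset("abdeg"): "2",
    frozenset("abcdg"): "3",
    frozenset("bcfg"): "4",
    frozenset("acdfg"): "5",
    frozenset("acdefg"): "6",
    frozenset("abc"): "7",
    frozenset("abcdefg"): "8",
    frozenset("abcdfg"): "9",
}

def decode_digit(lit_segments):
    """Map a set of lit segments to a digit character, or '?' if unrecognized."""
    return DIGIT_SEGMENTS.get(frozenset(lit_segments), "?")

def decode_display(per_digit_segments):
    """Decode a left-to-right sequence of per-digit segment sets into a string."""
    return "".join(decode_digit(s) for s in per_digit_segments)
```

For example, `decode_display(["bc", "abdeg", "abcdefg"])` returns `"128"`; once each digit's bounding box is located and its lit segments are determined, a mapping like this yields the string passed to text-to-speech.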
Citation: Fusco, G., Tekin, E., Giudice, N. A., & Coughlan, J. M. (2015, October). Appliance Displays: Accessibility Challenges and Proposed Solutions. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (pp. 405-406). ACM.
Abstract: People who are blind or visually impaired face difficulties using a growing array of everyday appliances because these appliances are equipped with inaccessible electronic displays. To address this problem, we report developments on our “Display Reader” smartphone app, which uses computer vision to help a user acquire a usable image of a display and have its contents read aloud. Drawing on feedback from past and new studies with visually impaired volunteer participants, as well as from blind accessibility experts, we have improved and simplified our user interface and added the ability to read seven-segment digit displays. Our system works fully automatically and in real time, and we compare it with general-purpose assistive apps, such as Be My Eyes, which recruit remote sighted assistants (RSAs) to answer questions about video captured by the user. Our discussions and preliminary experiment highlight the advantages and disadvantages of fully automatic approaches compared with RSAs, and suggest possible hybrid approaches to investigate in the future.
About the lab: One of the main interests of the Tekin lab is using emerging technologies to improve communication aids for persons with vision and hearing loss, a fast-growing segment of the population in developed countries as life expectancies increase. Whereas individuals who have hearing loss but good vision can make use of facial cues to improve their speech reception, those who have combined vision and hearing loss cannot compensate for the lost information in communication. We are exploring the combination of audio and video inputs to improve speech-enhancement algorithms that aid speech reception for persons with such dual sensory loss.