Visualizing for the Non‐Visual: Enabling the Visually Impaired to Use Visualization

The majority of visualizations on the web are still stored as raster images, making them inaccessible to visually impaired users. We propose a deep-neural-network-based approach that automatically recognizes key elements in a visualization, including the visualization type, graphical elements, labels, legends, and, most importantly, the original data conveyed in the visualization. We leverage this extracted information to provide visually impaired users with a readable description of the chart. Based on interviews with visually impaired users, we built a Google Chrome extension designed to work with screen reader software to automatically decode charts on a webpage using our pipeline. We compared the performance of the back-end algorithm with existing methods and evaluated the tool's utility through qualitative feedback from visually impaired users.
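
To make the described pipeline concrete, here is a minimal sketch (not the authors' code) of how a browser extension might expose extracted chart information to a screen reader such as NVDA. The `ExtractedChart` shape, `describeChart`, and `annotateImage` are hypothetical names assumed for illustration; the sketch only shows the general idea of turning recovered data into an ARIA description attached to the raster image.

```typescript
// Hypothetical sketch: the shape of data a chart-decoding pipeline might return,
// and how a content script could surface it to screen readers via ARIA.

interface ExtractedChart {
  chartType: "bar" | "line" | "pie" | "scatter";  // recognized visualization type
  title?: string;                                  // chart title, if detected
  axisLabels?: { x?: string; y?: string };         // axis labels, if detected
  legend?: string[];                               // legend entries, if detected
  data: { label: string; value: number }[];        // recovered data points
}

// Turn the extracted structure into a plain-language description.
function describeChart(chart: ExtractedChart): string {
  const points = chart.data.map((d) => `${d.label}: ${d.value}`).join("; ");
  const title = chart.title ? ` titled "${chart.title}"` : "";
  return `${chart.chartType} chart${title} with ${chart.data.length} data points. ${points}.`;
}

// Attach the description to the raster image so a screen reader announces it
// instead of skipping the inaccessible pixels.
function annotateImage(img: HTMLImageElement, chart: ExtractedChart): void {
  img.setAttribute("role", "img");
  img.setAttribute("aria-label", describeChart(chart));
}
```

In this sketch, a content script would run `annotateImage` on each chart image after the back-end pipeline returns its extraction result, so the screen reader reads the recovered data in place.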

Focus: Tool
Source: Computer Graphics Forum
Readability: Expert
Type: Website Article
Open Source: Yes
Keywords: Human-centred computing, Visual analytics, Visualization toolkits, Consume Content, Chart Data Extraction, Screen Reader, NVDA
Learn Tags: Data Tools, Design/Methods, AI and Machine Learning, Assistive Technology, Disability
Summary: A paper that proposes a deep‐neural‐network‐based approach to automatically recognize key elements in a visualization, including the visualization type, graphical elements, labels, legends, and the original data conveyed in the visualization, so that charts can be read aloud by screen readers.