Imagine you’re visually impaired and you rely on a screen reader to read text aloud and interpret images for you on your computer. Would you be able to make sense of scientific charts and graphs? Or get any information about what they look like and the information they convey?

For many researchers in this position, the answer has been “no” – or only in a limited way that is far from ideal. Typically they have to pay a reader or find a volunteer to assist. After all, alternative text alone is unlikely to convey what is contained in the x and y axes, for example, or the detailed contours and underlying data of trend lines.

Web developers have traditionally addressed this issue by providing chart data in a separate table view the screen reader can pick up. However, this solution requires the user to switch to an alternate view to access the data – and the tabular view lacks the visual comparison of elements that sighted users benefit from. Meanwhile, the inaccessible graphical chart remains on the page like a speed bump to understanding the overall content.

So colleagues at Elsevier set out to find a user-friendly way to make these charts and graphs fully accessible. “One of the first design goals we agreed upon was to create a single accessible version of a chart and not a separate accessible version,” said Ted Gies, Elsevier’s User Experience Lead Specialist.

In the case of Scopus – which millions of researchers rely on as the world’s largest abstract and citation database of peer-reviewed literature – the system uses a third-party software library, Highcharts, to produce its charts and graphs. As a result, members of the Scopus technology and product teams feared there wasn’t much they could do about the problem.

But when they discussed the limitations with Ted, he offered to contact Highsoft, the Norway-based company that makes Highcharts. Highcharts and other Highsoft products are used by many of the world’s technology leaders, including IBM, Microsoft, GE, Facebook and MasterCard. And when they heard from Ted, they were eager to work with him to find a solution for Elsevier.

“Instead of following established solutions, Ted was eager to explore new ideas in an attempt to improve the accessibility experience of charts,” said Øystein Moseng, Core Developer for Highsoft, “and we embarked on a long process of trial, error and user testing to find a better solution.”

In their 8-month collaboration, Ted used his accessibility expertise to guide the creation of an improved system of descriptive tags for charts and graphs that is setting a new industry standard for accessibility.

“This project has sparked a discussion among experts in the field towards more accessible data visualization,” Øystein said. “We are hoping that our work can not only be a concrete improvement to our products but also help bring a new generation of accessibility to charts and graphics on the web.”

For Lucy Greco, a test user for Highcharts who has been blind since birth, the difference was substantial. “This innovation allows me to interact with the chart and understand the relationships of all the components of the chart to all other components rather than just getting a description of the chart,” said Lucy, a Web Accessibility Evangelist for Information Services and Technology (IST) at the University of California, Berkeley. “Charts and graphs are really important for us in academia, and Ted’s and Øystein’s work has filled a gap, providing us with a solution we haven’t had access to before.”

In her case, being able to interpret charts on her own is saving her time and money while enabling her to come up with an independent interpretation of the visualization. “I used to have to pay a reader or find a volunteer,” she said. “Somebody else had to be involved, so I had to schedule the time to get the information. The other risk you take in using sighted assistance is that the person who’s giving you the information has their own interpretation of the chart, so you’re subject to their bias; you don’t get to form your own opinions on the information.”

The new accessible charts give users with visual impairments the ability to understand dynamic charts and get the information they need independently. They also ensure that Elsevier products are more fully accessible, which is a requirement for a large and growing number of customer contracts as well as a legal issue.
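As a rough illustration of the traditional table-view fallback described above, here is a minimal sketch that renders a chart’s underlying data as an HTML table a screen reader can traverse. This is illustrative only – the function name, data shape, and example figures are assumptions for the sketch, not Elsevier’s or Highsoft’s actual code.

```typescript
// Illustrative sketch only -- not Elsevier's or Highsoft's implementation.
// Renders a chart's underlying data as an HTML table, the traditional
// screen-reader fallback for an otherwise inaccessible graphical chart.

interface DataPoint {
  label: string; // x-axis category, e.g. a year
  value: number; // y-axis value
}

function chartDataToTable(
  caption: string,
  xHeader: string,
  yHeader: string,
  points: DataPoint[]
): string {
  // A <caption> plus scoped <th> cells let screen readers announce each
  // value together with its row and column headers.
  const rows = points
    .map((p) => `    <tr><th scope="row">${p.label}</th><td>${p.value}</td></tr>`)
    .join("\n");
  return [
    "<table>",
    `  <caption>${caption}</caption>`,
    "  <thead>",
    `    <tr><th scope="col">${xHeader}</th><th scope="col">${yHeader}</th></tr>`,
    "  </thead>",
    "  <tbody>",
    rows,
    "  </tbody>",
    "</table>",
  ].join("\n");
}

// Hypothetical example: a "citations per year" chart from a Scopus-style page.
const tableHtml = chartDataToTable("Citations per year", "Year", "Citations", [
  { label: "2014", value: 12 },
  { label: "2015", value: 30 },
]);
```

The drawback the article points out applies here as well: the table is a separate view, so the user loses the at-a-glance comparisons the visual chart provides – which is why the collaboration described here aimed for a single accessible chart rather than a parallel accessible version.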