Part of our Siegel Research Fellow Series
Dr. Samantha Bradshaw, a member of our inaugural class of Siegel Research Fellows and a Postdoctoral Fellow at Stanford’s Program on Democracy and the Internet and Digital Civil Society Lab, has published a blog post with David Thiel, Chief Technical Officer at the Stanford Internet Observatory, examining how trust and safety measures on social media platforms can compromise accessibility for visually impaired users.
Trust and safety measures are tools or features designed to mitigate a variety of harms on social media platforms. For example, these tools may label misinformation or direct users to reliable sources of information on critical subjects such as elections and public health. Social media companies such as Facebook and Twitter have taken a number of steps to improve trust and safety on their platforms, but how well do these strategies work for users who access social media through accessibility technologies? Bradshaw and Thiel looked at how these efforts affect screen readers, software that reads the text of websites and apps aloud for blind or visually impaired users, or renders it on a braille display.
By examining the user interface design of these platforms, they were able to assess whether and how newly introduced trust and safety measures compromise accessibility tools. Changes to an app or platform’s visual interface do not automatically carry over to how an accessibility technology interprets that information. A warning label displayed as a visual overlay, for example, may not be announced by a screen reader at all, depending on how the label is coded. This can create a disconnect between a feature or flag and the content to which it refers.
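To make that failure mode concrete, here is a minimal, hypothetical sketch in TypeScript; the function, element names, and warning text are illustrative, not any platform’s actual code. The label covers the post on screen, but nothing in the markup connects it to the post, so a screen reader can read the flagged content straight through without ever encountering the warning.

```typescript
// Hypothetical sketch: a misinformation warning added as a purely visual
// overlay. Names and markup are illustrative, not any platform's real code.
function addVisualOnlyWarning(post: HTMLElement): void {
  const overlay = document.createElement("div");
  overlay.className = "warning-overlay"; // CSS positions this over the post
  overlay.textContent = "This claim is disputed";

  // The overlay is appended *after* the post text and marked aria-hidden,
  // so assistive technology skips it entirely: a screen reader user hears
  // the flagged content with no warning attached.
  overlay.setAttribute("aria-hidden", "true");
  post.appendChild(overlay);
}
```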
Bradshaw and Thiel analyzed how screen readers interact with these warning labels and other mechanisms intended to alert users to harmful content. They found that, in general, “accessibility remains poorly implemented for various trust and safety features,” consistent with an “industry-wide problem where accessibility features are implemented in an ad-hoc fashion instead of built into the development process.”
The post offers recommendations for developers, managers, and testers, ranging from easy fixes, such as placing the warning label before the flagged content (sketched below), to broader cultural changes, such as encouraging design teams to work with and familiarize themselves with accessibility tools. In the long term, it is clear to us that deeper systemic needs must also be addressed, such as recruiting more diverse design teams and building capacity for user research that prioritizes accessibility.
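The ordering fix could look something like the following hypothetical sketch, again in TypeScript with illustrative names: the warning is real text placed before the flagged content in the document structure and linked to it, so screen readers and braille displays encounter the warning first.

```typescript
// Hypothetical sketch of the "warning label first" fix: the label precedes
// the flagged content in the DOM and is exposed to assistive technology.
function addAccessibleWarning(post: HTMLElement): void {
  const warning = document.createElement("p");
  warning.id = "warning-note";
  warning.setAttribute("role", "note");
  warning.textContent = "Warning: this claim is disputed.";

  // Insert the warning ahead of the post content and associate the two,
  // so the relationship survives for screen readers and braille displays.
  post.insertBefore(warning, post.firstChild);
  post.setAttribute("aria-describedby", "warning-note");
}
```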
Read the full research brief on the Stanford Internet Observatory Cyber Policy Center’s blog.
Dr. Samantha Bradshaw completed her D.Phil. at the Oxford Internet Institute, where she examined the producers and drivers of disinformation and computational propaganda. Through her fellowship with Siegel Family Endowment, Samantha’s work offers insight into how socio-technological infrastructure can be designed so that all people are able to participate, contribute, and thrive amid ongoing cultural and technological change.