Let's explore the rapid adoption of AI in automating accessibility tasks, addressing ethical concerns, reducing data bias, and transforming complex texts into plain language for broader comprehension.
If you aren't familiar with A11yTalks and you're interested in creating inclusive experiences, I suggest tuning in for their monthly sessions exploring the current landscape of digital accessibility. I have been tuning in since their inception and always find them quite insightful; they get me thinking about new ways to push myself as an accessibility advocate.
This month's topic, with Sheri Byrne-Haber (she/her), was one I was more excited for than I have been in a long time, because it bridges two worlds I am very passionate about: the intersection of AI and accessibility.
Sheri's talk, Improving Accessibility through Leveraging Large Language Models (LLMs), provides a quick overview of what AI is and then dives deep into some great use cases for leveraging these models to create more inclusive experiences.
If you haven't had a chance to hear Sheri talk, her brilliance in this space is unmatched.
Key Highlights
- AI can automate tasks such as generating test plans for screen readers and summarizing complex bug tickets, thereby saving time and resources.
- The adoption of AI technology is rapid, often outpacing ethical considerations. Sheri highlights the issue of data bias, particularly in terms of gender, race, and disability, and emphasizes the need for careful oversight to prevent unethical use of AI.
- AI's capability to convert text into plain language can make content more accessible to people with varying levels of literacy and cognitive abilities.
- While AI can accurately predict medical conditions from diagnostic tests, the misuse of AI by insurance companies to deny care is a significant ethical issue.
- AI can translate text into different languages more efficiently, helping reduce the knowledge gap between the content author and the translator.
I think my favorite part of listening to Sheri was right near the end, when an audience member asked about the pace at which this technology is being adopted. She replied, "the adoption of technology always happens faster than the ethical use of that technology," and I couldn't have been more aligned with her response.
In the accessibility space we see growing pushback against automation removing the human element, and being overly cautious about ethical concerns can cause us to look the other way. It is critical in these moments that the people with the knowledge and the right mindset step up to help retrain and better align the technology to behave in a more inclusive way.
Use AI to Address the Mundane
My key takeaway from this, though, was not about capabilities or ethics, but about how to best leverage AI within the accessibility space. Tasks like VPAT reports, writing up personas, and creating test stories can all be expedited with the use of AI.
We can build prompt libraries that enable folks without extensive knowledge in this space to become testing experts faster and more efficiently.
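To make the idea concrete, here is a minimal sketch of what such a prompt library could look like. The task names and template wording are my own illustrative assumptions, not examples from Sheri's talk; the point is simply that reusable, named templates let less experienced testers generate consistent prompts.

```python
# A minimal sketch of a reusable prompt library for accessibility tasks.
# Task names and template wording are illustrative assumptions.

PROMPTS = {
    "screen_reader_test_plan": (
        "Write a test plan for verifying the {component} component with a "
        "screen reader. Cover focus order, announced roles and names, and "
        "keyboard operation."
    ),
    "plain_language": (
        "Rewrite the following text in plain language at roughly an "
        "8th-grade reading level, keeping the original meaning:\n\n{text}"
    ),
    "persona": (
        "Draft a user persona for a person who {context}, including the "
        "assistive technology they use and common barriers they encounter."
    ),
}


def build_prompt(task: str, **fields: str) -> str:
    """Fill in a named prompt template, failing loudly on unknown tasks."""
    try:
        template = PROMPTS[task]
    except KeyError:
        raise ValueError(f"Unknown task: {task!r}") from None
    return template.format(**fields)
```

A tester could then call something like `build_prompt("screen_reader_test_plan", component="date picker")` and paste the result into whatever LLM their team uses, keeping the expertise in the templates rather than in each individual's head.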
This, as Sheri mentioned, would free up the folks with more experience to perform testing on assistive technology devices that are often ignored due to time constraints, enabling a more robust experience for everyone.