
<session />
Tue, April 21 · FrontEnd · DeepTech · TechLead
Images carry meaning, but that meaning is often lost for people who rely on alternative text. While AI is widely used to generate images from prompts, its role in describing existing images is frequently overlooked. This session focuses on how modern AI models can generate useful, accurate alternative text for complex visuals, including infographics, charts, and data-heavy graphics. The talk examines where automated descriptions work well, where they fall short, and how to use them responsibly to improve accessibility without introducing confusion or bias.
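To give a flavor of the kind of workflow the session explores, the sketch below asks a vision-capable model to draft alt text for a chart image. It is illustrative only, not a method prescribed by the talk: it assumes the OpenAI Node SDK and the "gpt-4o" model, and the prompt wording, the generateAltText helper, and the image URL are hypothetical placeholders. Drafts produced this way still need human review, which is one of the limitations the session addresses.

```typescript
// Illustrative sketch only; the session does not prescribe a specific model or vendor.
// Assumes the OpenAI Node SDK is installed and OPENAI_API_KEY is set in the environment.
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical helper: send an image URL plus an accessibility-focused prompt
// to a vision-capable model and return its draft alt text.
async function generateAltText(imageUrl: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // assumed vision-capable model
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text:
              "Write concise alt text for a screen reader user. " +
              "If the image is a chart or infographic, summarize the key " +
              "data points and the overall trend in one or two sentences.",
          },
          { type: "image_url", image_url: { url: imageUrl } },
        ],
      },
    ],
    max_tokens: 200,
  });
  return response.choices[0].message.content ?? "";
}

// Example usage with a hypothetical URL; a human should review the draft before publishing.
generateAltText("https://example.com/quarterly-revenue-chart.png").then((alt) =>
  console.log(alt)
);
```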
What You Will Learn
How AI models can generate alternative text for complex images such as charts and infographics
Best practices for producing clear, meaningful, and accessible image descriptions
Limitations and ethical considerations when using AI for automated alt text
Who Should Attend
Front-end and web developers
UX and accessibility practitioners
Designers working with data visualizations
Engineering teams responsible for inclusive digital experiences
<speaker_info />
Scott Davis is a Web Architect and Digital Accessibility Advocate, focusing on the multisensory aspects of web development. In a world where half of all Google searches are done by voice, and 80% of all social media videos are watched with the sound off and closed captions on, accessibility is a springboard for innovation.