AI Generated Art For Visually Impaired Screen Reader Users - What Is The Risk?

How do we draw the line between what is real and what is fake for non-sighted users? If you haven't thought about this before, you aren't alone.

The majority of the folks who create digital content tend to forget that not everyone who consumes it does so with their eyes. Around 4 million internet users are visually impaired, and the most common assistive technology they rely on is the screen reader.

So the question then becomes, how does one consume visuals with a screen reader?

This is done with something known as alt text. Alt text, short for alternative text, is a description of an image that is added to the HTML code of a webpage. It provides a textual alternative to visual content, such as images, graphs, and charts, for people who cannot see or otherwise access that content.
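For those curious what that actually looks like, here is a minimal sketch of alt text in HTML markup (the file name and description are invented for illustration):

    <!-- The alt attribute holds the description a screen reader announces -->
    <img src="team-photo.jpg" alt="Five coworkers smiling around a conference table in an office">

A screen reader landing on this image would announce something like "Five coworkers smiling around a conference table in an office, graphic."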

Understanding the basic principles of alt text might help us answer the question "What is the risk for non-sighted users consuming generated art?" Alt text should be concise, descriptive, and relevant to the image it describes. It is written by the person who uploads the image. Are you posting on Twitter? You've been asked to write alt text before. The thing is, most people don't understand why, and I believe many people confuse it with a caption. Again, alt text isn't visible to you on the front end. Alt text lives in your code and is programmatically conveyed to assistive technology devices, as the sketch below shows.
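To make the alt-text-versus-caption distinction concrete, here is a small example using standard HTML elements (the content itself is invented). The caption is visible to everyone on the page, while the alt text lives only in the markup:

    <figure>
      <!-- Alt text: announced by screen readers, never rendered on screen -->
      <img src="lake-sunset.jpg" alt="Orange and pink sunset reflected on a calm lake">
      <!-- Caption: displayed below the image for sighted users -->
      <figcaption>Sunset at the lake, July 2022</figcaption>
    </figure>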

Alt text should also convey the same information as the visual content it describes, while being mindful of the context in which the image appears. It should avoid being overly long or using promotional language unrelated to the image's content. Again, all of this is provided by the user who puts the image out into the world.

As folks consume and interact with artificial imagery, I can't help but wonder how that is conveyed to a non-sighted user, and what the cues are that an image is indeed generated.

We can of course rely on alt text to convey the intended purpose of an image, but that relies on one core principle that we know gets broken way too often on the internet: trust and honesty. For example, I do not imagine most people writing "AI-generated image of…" in their descriptions. Some may, but think for a moment about how many will not. This is the first piece of deception: if the content on the page around the image isn't talking about how the image was created, or giving cues to that, how will I know?
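For contrast, here is a hypothetical sketch of what an honest description could look like in the markup (the file name and wording are my own invention, not a convention anyone is required to follow):

    <!-- Disclosing the image's origin directly in the alt text -->
    <img src="portrait.png" alt="AI-generated image of a woman in a Victorian dress">

Nothing in HTML enforces this kind of disclosure; it depends entirely on the author choosing to include it.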

The second piece of deception is the intentional misrepresentation of information. Deepfakes.

Describing Deepfakes

With the rise of deepfakes that AI generators are capable of, I am quite alarmed at the prospect of uncovering the details of a deepfake when we cannot rely on our vision to aid us in that process. Think of the intent behind a deepfake: it is to manipulate, to trick, to deceive. If that is your goal, are you going to take the time to write alt text that gives any indication that this is not a real image?

We could consider that missing alt text, alt text riddled with typos, and the like could signal that the information should be treated with suspicion. However, we find websites all the time that have quite valuable information and good intentions, but do a poor job of providing alt text accurately, or at all. Missing alt text is arguably the lesser problem: no alt text means no description, so I have no idea what the image is of, and at least I can't be misled by a false account of it.

This circles back to the question: how does one identify a deepfake if they cannot see the visual cues that would help them?

Will screen readers begin to incorporate AI to help detect AI-generated images, and flag that by default before reading the alt text?

What other methods can we think of to ensure folks aren't targeted and dished fake content with no means of verification?

I don't have all the answers here, but I want us, as the leaders in this next chapter of technology, to think about it and to think about ways to stop the spread of misinformation.

Of course, one encounter with a false narrative can be followed by heavy research to prove otherwise, but how many times do folks read an article or headline and not go any further than that one encounter? How many times do folks read the heading of an article, think that's enough, and not even read the piece itself?

We can't rely on folks to do more homework. Although it is always strongly advised never to rely on one singular source of truth, there are times when we all do.

If you have thoughts on this, I want to hear from you. DM me on Twitter, email me, whatever you're cozy with. Let's keep this discussion alive.