Primary Auditory Disability
As in the previous blog post on visual disabilities, auditory disabilities can be separated into primary (persistent) and secondary (situational) issues. And as with visual loss or limitation, hearing loss or limitation has various stages and causes. When I consider hearing loss, I try to break it into ranges:
- Mild hearing loss: Difficulty hearing sounds lower than about 30 decibels. If there is background noise, it may be hard to understand someone speaking.
- Moderate hearing loss: Difficulty hearing sounds lower than 50 decibels. To put this in perspective, that’s approximately the volume of a refrigerator running. A hearing aid is likely to be helpful in this kind of situation.
- Severe hearing loss: Difficulty hearing sounds lower than 80 decibels. To put this into perspective, that’s roughly the volume of a running washing machine or an average alarm clock. At this level, hearing aids may not be helpful.
- Profound hearing loss: Difficulty hearing sounds lower than 95 decibels. That’s about the volume of a passing subway train.
In addition, there are a variety of auditory situational disabilities that many of us may deal with:
- If you are in a noisy restaurant or bar and someone calls you on the phone, you may not be able to hear the phone ringing even with normal hearing, much less hold a conversation, given the noise level in the room.
- Ear infections can happen to any of us as a result of sickness, and they often have a profound effect on hearing.
These situations, again, are temporary or situational, but they still require that users have a way to perceive audible information in an alternative form. Below are a few considerations to help testers verify that people with hearing challenges can perceive audible content, many of which do not require assistive technology.
Are there text alternatives where possible?
If a video is being displayed, the application should provide closed captions. If presenting a podcast, there should be a text transcript of the content. Additionally, aim to keep the message straightforward, and don’t veer into inside jokes or figures of speech that may not come across as intended in written form. This is good advice for audio content in general.
Is the page design smooth and simple?
If there are places in the application where the content is meant to be heard, that should be clear to the user. Follow the suggestion above to make sure there is a way to provide text content for the audio.
Are audio signals used alone as cues?
If an alert relies on a sound, the application should also display a visible message or, on a mobile device, vibrate to alert the user.
Are there a variety of ways to communicate?
In an age where texting seems to be the most common way to communicate, many organizations still default to the phone for handling issues. Allow this communication to happen through other channels as well.
Is the content structured clearly?
Too much badly formatted information can be overwhelming for many people, and this is also true for auditory impairment. Well-structured information, with clear headers, bulleted or numbered lists, separation of content and a lack of clutter, can help make pages and applications easier to deal with.
Tests We Can Automate
Automating accessibility tests around auditory issues is less straightforward than it is for visual issues. But there are aspects that, once we are aware of them, we can look for in pages and markup.
The alt attribute isn’t just for screen readers
The alt attribute is the simplest and most straightforward option for images: it describes the image so that screen readers can announce it. HTML’s audio element does not support alt, but a short audio clip can be labeled in the same spirit with an aria-label attribute or with fallback text placed inside the element. There are limitations to this approach, however, as a one-line label would not be appropriate for a long presentation.
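As a sketch of how this could be checked automatically (the class name and sample markup are illustrative, not from any particular tool), a small parser built on Python’s standard library can flag audio elements that carry neither an aria-label nor fallback text:

```python
from html.parser import HTMLParser

# Illustrative sketch: flag <audio> elements that have neither an
# aria-label attribute nor fallback text inside the element.
class AudioLabelChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_audio = False
        self.current_labeled = False
        self.unlabeled = 0

    def handle_starttag(self, tag, attrs):
        if tag == "audio":
            self.in_audio = True
            self.current_labeled = dict(attrs).get("aria-label") is not None

    def handle_data(self, data):
        # Non-whitespace fallback text inside <audio> counts as a label.
        if self.in_audio and data.strip():
            self.current_labeled = True

    def handle_endtag(self, tag):
        if tag == "audio":
            if not self.current_labeled:
                self.unlabeled += 1
            self.in_audio = False

checker = AudioLabelChecker()
checker.feed('<audio src="a.mp3"></audio>'
             '<audio src="b.mp3" aria-label="Interview excerpt"></audio>')
print(checker.unlabeled)  # 1
```

A real test would feed in the rendered page source rather than a hard-coded string, but the assertion stays the same: no unlabeled audio elements.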
Use track and VTT files for longer text
If we take advantage of the video and audio elements inside of pages, we can also leverage the track element to provide a text equivalent of the audio being presented. The track element takes a variety of attributes. Common examples include kind (what the track represents, such as captions or subtitles), src (the file that contains the content, saved as a WebVTT .vtt file), and srclang (which defines the language to use, such as “en”, “de”, or “ja”). If track files are used with audible content, this makes for an easy test: confirm that they exist and verify that the full range of languages we offer is represented.
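That test can be sketched in a few lines. The snippet below (the file names, sample markup, and expected-language set are all illustrative) collects the srclang of every track element in a page and reports any supported language that has no caption track:

```python
from html.parser import HTMLParser

# Illustrative sketch: gather the srclang value of every <track>
# element, then compare against the languages the site claims to support.
class TrackCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.langs = set()

    def handle_starttag(self, tag, attrs):
        if tag == "track":
            lang = dict(attrs).get("srclang")
            if lang:
                self.langs.add(lang)

    def handle_startendtag(self, tag, attrs):
        # <track/> written as a self-closing tag lands here instead.
        self.handle_starttag(tag, attrs)

page = """
<video src="talk.mp4">
  <track kind="captions" src="talk.en.vtt" srclang="en">
  <track kind="captions" src="talk.de.vtt" srclang="de">
</video>
"""

expected = {"en", "de", "ja"}  # hypothetical supported-language list
collector = TrackCollector()
collector.feed(page)
missing = expected - collector.langs
print(sorted(missing))  # ['ja']
```

In a suite, the assertion would simply be that `missing` is empty for every page that embeds audio or video.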
Linking transcripts to audio and video
For truly long content, a full transcript is still one of the best options. HTML does not provide a dedicated transcript element, however, so the ability to associate a transcript with an audio or video file depends on how the pages and content are structured. The common approach is to place a visible link to the transcript next to the audio or video element; the transcript can live in the same page or in an external document if desired. Once that convention is in place, a test can look for the transcript link alongside every embedded video file to verify that it has been included.

If the goal is to make sure that the content you have worked so hard to create can be shared with (and purchased by, in many cases) the broadest number of people, it makes business sense to allow for multiple ways to perceive that information. Just because your application’s users can’t hear the audio content doesn’t mean they would not benefit from it. By giving an alternative and, hopefully, comparable experience, your sites and applications can be useful to more people.
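Since HTML has no native transcript element, one common convention is to mark the transcript link with a class or similar hook. Assuming that convention (the class name "transcript" and the sample markup are hypothetical), a check could count videos and transcript links and fail when they don’t match:

```python
from html.parser import HTMLParser

# Illustrative sketch: count <video> elements and links marked as
# transcripts; the "transcript" class is an assumed site convention.
class TranscriptLinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.videos = 0
        self.transcript_links = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "video":
            self.videos += 1
        elif tag == "a" and "transcript" in attrs.get("class", ""):
            self.transcript_links += 1

page = ('<video src="keynote.mp4"></video>'
        '<a class="transcript" href="keynote-transcript.html">'
        'Read the transcript</a>')

checker = TranscriptLinkChecker()
checker.feed(page)
print(checker.videos, checker.transcript_links)  # 1 1
```

A simple count works for a one-video page; a page with several videos would need to match each link to its video, for example by checking proximity in the document tree.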