WCAG2.0 Level A automated checks unveiled

There are many software products that claim to check the WCAG2.0 guidelines. Here we look at which checks can be handled by software and which will need manual checks by people. For more details about what these checks are for, take a look at the previous article, WCAG2.0 Level A, your guide part 1.

Perceivable

1.1.1 Non-text Content: It is possible to have software check things like whether images within the HTML content have an alt attribute and whether its value is not empty. It may even check for common mistakes like alt text that starts with "image of", "photo of" or "picture of", or for the presence of single-character alt text like "*" or "-". However, it can’t check that the alt text relates to the image’s purpose or that the inline placement of the image makes sense and doesn’t cause any confusion. These checks require people to make a judgement.
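
To give a feel for what this kind of automated check looks like, here is a minimal sketch using only Python's standard library. The list of suspicious prefixes and values is an illustrative assumption; real tools are far more thorough.

```python
from html.parser import HTMLParser

# Illustrative assumptions: prefixes and values that often signal poor alt text.
SUSPECT_PREFIXES = ("image of", "photo of", "picture of")
SUSPECT_VALUES = {"*", "-"}

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        alt = dict(attrs).get("alt")
        if alt is None:
            self.issues.append("missing alt attribute")
        elif not alt.strip():
            self.issues.append("empty alt text")
        elif alt.strip().lower() in SUSPECT_VALUES:
            self.issues.append(f"suspicious alt text: {alt!r}")
        elif alt.strip().lower().startswith(SUSPECT_PREFIXES):
            self.issues.append(f"redundant alt prefix: {alt!r}")

def check_alt_text(html):
    checker = AltChecker()
    checker.feed(html)
    return checker.issues
```

Everything this sketch flags is mechanical; deciding whether the alt text actually describes the image's purpose is the part software cannot do.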

1.2.1 Audio-only and Video-only (Pre-recorded): It may be possible to check if a video has an audio track. However, it isn’t possible to check for relevance or that the same information is presented in an alternative format. This will need people to check manually and may require some isolated testing of each format for comparison.

1.2.2 Captions (Pre-recorded): Again, it may be possible to check if captions exist but not to ensure that the captions are relevant, complete and usable. It will take people manually checking the captions against the media content to ensure they are a suitable alternative.

1.2.3 Audio Description or Media Alternative (Pre-recorded): In some cases, it may be possible to check if a separate audio description track is present, but it will need people to ensure that the audio descriptions are accurate, provided clearly and delivered in a timely, usable way.

1.3.1 Info and Relationships: It is possible to check if specific HTML tags are used and if HTML headings follow a hierarchical structure. However, it cannot check if the content is making the best use of HTML semantics or if the text used makes sense and is relevant.
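
The heading-structure part of this check can be sketched quite simply: walk the headings in document order and flag any skipped level (for example, an h3 directly after an h1). This is a minimal standard-library sketch, not a full semantic check.

```python
from html.parser import HTMLParser

class HeadingChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []
        self.last_level = 0  # level of the most recent heading seen

    def handle_starttag(self, tag, attrs):
        # Match h1..h9 tags only.
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if level > self.last_level + 1:
                self.issues.append(f"h{level} follows h{self.last_level}: level skipped")
            self.last_level = level

def check_headings(html):
    checker = HeadingChecker()
    checker.feed(html)
    return checker.issues
```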

1.3.2 Meaningful Sequence: It is possible to check for the use of the tabindex attribute but it will take manual checks to see if the ordering is logical and makes sense.
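
The automatable part here is small: positive tabindex values override the natural document order and are a common source of illogical sequences, so flagging them is a reasonable machine check. A minimal sketch:

```python
from html.parser import HTMLParser

class TabindexChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        value = dict(attrs).get("tabindex")
        # Flag only explicit positive values; 0 and -1 are usually fine.
        if value is not None and value.lstrip("-").isdigit() and int(value) > 0:
            self.issues.append(f"<{tag}> uses positive tabindex={value}")

def check_tabindex(html):
    checker = TabindexChecker()
    checker.feed(html)
    return checker.issues
```

Whether the resulting order is logical still needs a person tabbing through the page.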

1.3.3 Sensory Characteristics: It may be possible to check for commonly used phrases that may break this guideline but it will need manual checks by people to see if the guideline is actually broken or if the text instructions will make sense to all people.
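
A phrase scan like the one described could be as simple as the sketch below. The phrase list is purely an illustrative assumption; a hit only means "a human should look at this sentence", never an automatic failure.

```python
import re

# Illustrative phrases that often signal sensory-only instructions.
SENSORY = re.compile(r"\b(on the (left|right)|round button|square icon|above|below)\b", re.I)

def flag_sensory_phrases(text):
    """Return candidate sensory-only phrases, in document order, for human review."""
    return [m.group(0) for m in SENSORY.finditer(text)]
```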

1.4.1 Use of Colour: It may be possible to check the colours used or for any common colour names used in text. However, it will take people checking manually to see if colour alone is being used to identify content.

1.4.2 Audio Control: It may be possible to check whether any audio on a page starts automatically and lasts longer than three seconds. However, it will take manual checks to see if the controls are accessible to everyone and to any assistive technology.
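
Static analysis can at least spot the autoplay attribute; whether the sound actually stops within three seconds, or the controls work with assistive technology, needs a human (or a real browser). A minimal standard-library sketch:

```python
from html.parser import HTMLParser

class AutoplayChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("audio", "video") and "autoplay" in attrs:
            if "controls" in attrs:
                self.issues.append(f"<{tag}> autoplays; verify a pause/stop control works")
            else:
                self.issues.append(f"<{tag}> autoplays with no visible controls")

def check_autoplay(html):
    checker = AutoplayChecker()
    checker.feed(html)
    return checker.issues
```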

Operable

2.1.1 Keyboard: It may be possible to check that, where mouse events are used, keyboard or focus events are also provided. However, it is not possible to automatically check that all controls can be used via a keyboard, or that anything controlled by a mouse can equally be controlled by a keyboard without any loss of functionality or information.
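
The paired-event check described above might look like this sketch, which flags inline mouse handlers with no keyboard or focus counterpart. The handler pairs are illustrative assumptions, not an exhaustive mapping, and script-attached handlers would need a different approach entirely.

```python
from html.parser import HTMLParser

# Assumed pairings: each mouse handler and the keyboard/focus handler
# that commonly accompanies it.
MOUSE_TO_KEYBOARD = {
    "onmousedown": "onkeydown",
    "onmouseup": "onkeyup",
    "onmouseover": "onfocus",
    "onmouseout": "onblur",
}

class EventChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        for mouse, key in MOUSE_TO_KEYBOARD.items():
            if mouse in attrs and key not in attrs:
                self.issues.append(f"<{tag}> has {mouse} but no {key}")

def check_events(html):
    checker = EventChecker()
    checker.feed(html)
    return checker.issues
```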

2.1.2 No Keyboard Trap: It is possible to check for potential cases of keyboard traps but not check for the effects or if it will cause people any difficulties. This will take manual testing by people.

2.2.1 Timing Adjustable: It may be possible to check if any timing events are being used but it will take manual checks to measure the effects and decide if any exceptions can be made.

2.2.2 Pause, Stop, Hide: It may be possible to identify some instances of blinking or animation but it will take manual checks by people to see if they break this guideline or if any exceptions can be made.

2.3.1 Three Flashes or Below Threshold: It may be possible to check for the existence of potential high risk content but ultimately it will take people to make a judgement on how to approach the content and to manually check all media and dynamic content.

2.4.1 Bypass Blocks: It is possible to check if any internal page links exist but it takes manual checks to see if these links work effectively and serve a useful purpose.
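
Finding internal page links, and confirming their targets exist, is the machine-checkable half of this. A minimal sketch, assuming targets are id attributes on the same page:

```python
from html.parser import HTMLParser

class SkipLinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.targets_wanted = []  # fragment ids referenced by in-page links
        self.ids = set()          # ids actually present in the document

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href", "").startswith("#"):
            self.targets_wanted.append(attrs["href"][1:])
        if attrs.get("id"):
            self.ids.add(attrs["id"])

def check_skip_links(html):
    checker = SkipLinkChecker()
    checker.feed(html)
    if not checker.targets_wanted:
        return ["no internal page links found"]
    return [f"link target #{t} does not exist"
            for t in checker.targets_wanted if t not in checker.ids]
```

Whether a "skip to content" link actually lands somewhere useful is still a manual check.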

2.4.2 Page Titled: It is possible to check if page titles exist and are not empty. However, it takes manual checks by people to ensure the titles are relevant and make sense.
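
The automatable part is only "does a non-empty title element exist?". A minimal standard-library sketch:

```python
from html.parser import HTMLParser

class TitleChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = None  # None means no <title> element was seen

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
            self.title = ""

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def check_title(html):
    checker = TitleChecker()
    checker.feed(html)
    if checker.title is None:
        return ["no <title> element"]
    if not checker.title.strip():
        return ["<title> is empty"]
    return []
```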

2.4.3 Focus Order: This cannot be checked automatically; it takes manual checks by people to see if the focus order is logical and makes sense.

2.4.4 Link Purpose (In Context): It is possible to check that links contain text, and even that duplicate link text doesn’t point to different locations. However, it takes manual checks by people to ensure that the link text used is relevant and makes sense both in and out of context.
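
Both machine-checkable parts (links with no text, and identical text pointing at different URLs) fit in one short sketch. This only looks at direct text content; links whose accessible name comes from an image alt or ARIA label would need more work.

```python
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = {}          # normalised link text -> set of hrefs
        self.current_href = None
        self.current_text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href")
            self.current_text = ""

    def handle_data(self, data):
        if self.current_href is not None:
            self.current_text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.current_href is not None:
            text = self.current_text.strip().lower()
            self.links.setdefault(text, set()).add(self.current_href)
            self.current_href = None

def check_links(html):
    checker = LinkChecker()
    checker.feed(html)
    issues = []
    for text, hrefs in checker.links.items():
        if not text:
            issues.append("link with no text")
        if len(hrefs) > 1:
            issues.append(f"duplicate link text {text!r} points to different URLs")
    return issues
```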

Understandable

3.1.1 Language of Page: It is possible to check that a page language has been specified. However, it will take people to ensure that the language set is indeed the default language for the page.
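
The existence check is trivial to automate; verifying the value matches the content is not. A minimal sketch:

```python
from html.parser import HTMLParser

class LangChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.lang = None

    def handle_starttag(self, tag, attrs):
        if tag == "html":
            self.lang = dict(attrs).get("lang")

def check_lang(html):
    checker = LangChecker()
    checker.feed(html)
    if checker.lang is None:
        return ["<html> has no lang attribute"]
    if not checker.lang.strip():
        return ["lang attribute is empty"]
    return []
```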

3.2.1 On Focus: It is possible to check for potential issues by finding any on focus events but it will take manual checks by people to measure the effect and see if it breaks this guideline.

3.2.2 On Input: Again it is possible to highlight potential problems but it will take manual checks by people to ensure that adequate notification is given and no unexpected action will result in confusion.

3.3.1 Error Identification: This will need manual checks by people to ensure that any errors are shown in text, that the text makes sense and that it clearly indicates where the error is.

3.3.2 Labels or Instructions: It is possible to highlight missing form labels although it takes people to ensure that the instructions are suitable, clear and that all labels make sense.
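
A sketch of the missing-label check: pair label-for values with input ids, and flag inputs that have neither an associated label nor an aria-label. The list of input types that don't need a visible label is an illustrative assumption, and whether the label text is clear remains a human judgement.

```python
from html.parser import HTMLParser

class LabelChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.label_for = set()
        self.inputs = []  # (id, has aria-label) per labellable input

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and attrs.get("for"):
            self.label_for.add(attrs["for"])
        elif tag == "input" and attrs.get("type") not in ("hidden", "submit", "button"):
            self.inputs.append((attrs.get("id"), "aria-label" in attrs))

def check_labels(html):
    checker = LabelChecker()
    checker.feed(html)
    issues = []
    for input_id, has_aria in checker.inputs:
        if not has_aria and (input_id is None or input_id not in checker.label_for):
            issues.append(f"input {input_id or '(no id)'} has no label")
    return issues
```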

Robust

4.1.1 Parsing: This can, for the most part, be checked by automated software; the W3C Markup Validation Service is one example. However, it takes people to ensure that the content is marked up with the best use of semantics.

4.1.2 Name, Role, Value: It may be possible to check for the existence of these attributes and that their values are not empty. However, it will take manual checks to see if the values make sense, are relevant and provide useful assistance with navigation or operation.

Conclusion

Even at this lowest level of accessibility, the kinds of things that can be checked effectively with automated software are very limited. On the plus side, if many of these checks are made during template design and content creation, it can save both the time and money of checking and then having to re-engineer or change content after it has already been published.

Remember, for each level up in the WCAG2.0 guidelines, you must first meet all of the previous level. So to be WCAG2.0 Level AA, you must first meet all WCAG2.0 Level A checks and then all WCAG2.0 Level AA checks.

