Web Accessibility for Translators: A Quick Start Guide

Overview

These days, more and more people in localization are starting to pay attention to accessibility, and for good reason. Dubbing often intermingles with audio description. Subtitling goes hand in hand with captioning. Web developers learning how to format text strings for localization are also learning how to add alt text. Olivia Plowman and I decided to do a small project to learn more about accessibility from a localizer’s perspective.

Brief overview of our project

Types of Assistive Technology

There are tons of clever tools people use to navigate the web. Envato Tuts+ has a quick video overview with some examples. A barebones list includes:

  • Screen magnifiers: These make the text and/or other elements of the page larger.
  • Color changers: These tools can change the colors of the page, such as turning black text on a white background into white text on a black background. They might also change the appearance of links. My blog has a magnifier and color changer plugin by WP Accessibility.
  • Alternate input devices: Instead of typing or using a mouse, some people use technology that tracks body motion or eye movement.
  • Screen readers: Screen readers convert the contents of a page into a new format, such as audio narration or a braille display. The blind colleagues in my limited circle prefer Apple’s built-in screen reader, VoiceOver.

Creating Accessible Content

The World Wide Web Consortium (W3C) Web Accessibility Initiative (WAI) is the place to start if you want to learn how to design accessible content. They published a public working draft of the W3C Accessibility Guidelines (WCAG) 3.0 this January. If English isn’t your first language, have no fear! They translate tons of their content.

Visit the W3C’s tips page for web developers, which has brief overviews of key features to include, paired with code samples.

WAI also created a fantastic, annotated demo website to show the importance of accessible design. It presents two versions of the same site that look identical, but one version is a breeze for people with disabilities to navigate, and the other is a nightmare. The demo is a little old (2012) and based on the 2.0 guidelines, but it’s still relevant today. Hopefully a 3.0 version of the demo comes out soon.

Content management systems (CMSs) like WordPress and Drupal have built-in features to make your site more accessible. For WordPress, pick a theme with an “accessibility-ready” tag. You can also add a plugin like WP Accessibility. For Drupal, look for the #D8AX pledge, which stands for Drupal 8 Accessibility eXperience. The MacArthur Foundation has compiled resources about WordPress, Drupal, Joomla, Squarespace, and Wix. They also have info on forms and surveys, as well as accessibility cheat sheets for web content, Microsoft Office, and Adobe.

Identifying Issues

If you want to review a site to identify potential problems, there are loads of free automated tools like Level Access, Tenon, Accessibility Insights, Google Lighthouse, and the Siteimprove Accessibility Checker extension for Chrome.

That’s a lot of links. If I had to recommend just two, these would be my favorites:

  • Instructions for identifying Web Accessibility Issues by the National Center on Disability and Access to Education (NCDAE)
  • The WAI’s running list of Web Accessibility Evaluation Tools on the market worldwide (free and paid), which includes checkers for specific locales and languages
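
To give a sense of what these checkers look for under the hood, here is a minimal Python sketch (using the requests and BeautifulSoup libraries) that flags two of the most common failures: images without alt text and a missing lang attribute. The URL is a placeholder, and a real tool like WAVE or Lighthouse tests far more than this.

```python
# A minimal sketch of two of the most common checks these tools automate:
# images missing alt text and a missing lang attribute on <html>.
# The URL below is a placeholder.
import requests
from bs4 import BeautifulSoup

def quick_accessibility_check(url: str) -> list[str]:
    """Return a list of basic accessibility problems found on a page."""
    issues = []
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    # The <html> element should declare the page language.
    html_tag = soup.find("html")
    if html_tag is None or not html_tag.get("lang"):
        issues.append("Missing lang attribute on <html>")

    # Every <img> should carry an alt attribute (an empty alt is fine for
    # purely decorative images, so only a missing attribute is flagged).
    for img in soup.find_all("img"):
        if img.get("alt") is None:
            issues.append(f"Image missing alt text: {img.get('src', '(no src)')}")

    return issues

if __name__ == "__main__":
    for problem in quick_accessibility_check("https://example.com"):
        print(problem)
```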

Translating Your Site

When localizing a game a few months ago, Olivia and I had trouble making sure our Computer Assisted Translation (CAT) tool had all the relevant text it needed from the source code. We wondered how well CAT tools do at picking up the non-visible text that screen readers rely on, so we ran a few test pages through SDL Trados Studio, memoQ, and Memsource.

We used this checklist to evaluate the pages (see the extraction sketch after the list):

  • Alternative Text for Images: This text describes images embedded in the webpage.
  • Title Attributes: Similarly, this text describes site titles that may be rendered as images.
  • Certain CSS Text for Screen Readers: This text does not appear to the end user and is only used by screen readers to further describe the webpage audibly.
  • Table Summaries: Screen readers can read tables quite literally, which results in a confusing jumble for the user. A table summary helps the user understand what the table shows.
  • Long Descriptions: Known as longdesc, this attribute provides longer descriptions to the screen reader and can be found in the website’s HTML.
  • ARIA-Label Attributes: These label HTML elements that have specific purposes, like buttons.
  • Language Attribute: A label for the page’s language.
  • Sometimes Applicable: Captions
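
For a concrete picture of where this text lives in a page’s markup, here is a rough Python sketch (BeautifulSoup again; the sample HTML and the .sr-only class name are invented for illustration) that pulls out each of these non-visible text types.

```python
# A rough sketch of where each checklist item lives in a page's markup and how
# it could be extracted. The sample HTML and the .sr-only class are invented
# for illustration.
from bs4 import BeautifulSoup

SAMPLE_HTML = """
<html lang="en">
  <body>
    <span class="sr-only">Skip to main content</span>
    <img src="garden.jpg" alt="Volunteers planting a community garden"
         longdesc="garden-history.html">
    <a href="/apply" title="Housing application form">Apply</a>
    <button aria-label="Close the signup dialog">X</button>
    <table summary="Monthly rent by room type and occupancy"></table>
  </body>
</html>
"""

soup = BeautifulSoup(SAMPLE_HTML, "html.parser")

non_visible_text = {
    "alt text": [img["alt"] for img in soup.find_all("img", alt=True)],
    "title attributes": [el["title"] for el in soup.find_all(title=True)],
    "screen-reader-only CSS text": [el.get_text() for el in soup.select(".sr-only")],
    "table summaries": [t["summary"] for t in soup.find_all("table", summary=True)],
    "long descriptions": [img["longdesc"] for img in soup.find_all("img", longdesc=True)],
    "aria-labels": [el["aria-label"] for el in soup.find_all(attrs={"aria-label": True})],
    "language attribute": [soup.html.get("lang")],
}

for category, strings in non_visible_text.items():
    print(f"{category}: {strings}")
```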

Trados did the best job picking up non-visible text, followed by memoQ, then Memsource. Overall, we were surprised at how well they did.

Other Thoughts

This project set a lot of cogs turning for me. I spent a while on Adobe InDesign tutorials and with my own computer’s screen reader, trying to figure out how to make the tables in our grading PDF work. This webpage pops up a couple of errors on WAVE. Accessible design is hard. Accessible design is time-consuming. Done right, though, it has some surprising benefits related to translation.

Automated translation is a lot easier when your web pages are accessible. I’ve had to do research in Indonesian and Bosnian before. Do I know those languages? Nope! I just used Google Translate’s browser extension to get the “gist” of the pages. In my everyday life, I frequently deploy cursor dictionaries to look up new Chinese words. When text is embedded in images, none of these tools can work.

I look forward to seeing more LSPs and clients pay mind to accessibility. Even companies dragging their feet will need to start paying attention. Level Access predicts that there will be over 4,000 web accessibility lawsuits this year. In our increasingly global world, understanding the legal requirements around accessibility isn’t just “nice to have”; it’s a must.

Most importantly, though, my screen-reader-using friends don’t deserve to get caught in a death spiral of garbled, nonsensical image labels.

Selecting The Right TMS

Translation Management Systems are a huge investment for Language Service Providers. What can you do to ensure you choose the right system for your company? How can you justify your choice to senior management?

Xiaoxin Damerow and I worked together to simulate the enterprise software selection process for a hypothetical language service provider that specializes in audiovisual localization. We created a scorecard that breaks down key business requirements based on stakeholders, identifies “Must Have” versus “Nice to Have” features, and weighs total evaluation scores accordingly.

You’re welcome to download our .XLSX scorecard and tweak it for your own project.
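
If you’re curious about the weighting itself, here is a tiny Python sketch of the idea; the features, weights, and scores below are made-up placeholders, and the real criteria live in the scorecard.

```python
# A tiny sketch of the weighting idea behind the scorecard. The features,
# weights, and scores are made up for illustration; the real criteria are
# in the .XLSX file above.
MUST_HAVE_WEIGHT = 3
NICE_TO_HAVE_WEIGHT = 1

# (feature, is it a "Must Have"?, this TMS's score on a 1-5 scale)
requirements = [
    ("Subtitle and timecode file support", True, 4),
    ("CAT tool integration", True, 5),
    ("Vendor management portal", False, 3),
    ("Built-in machine translation", False, 2),
]

def weighted_total(reqs):
    """Multiply each score by its Must Have / Nice to Have weight and sum."""
    return sum(
        score * (MUST_HAVE_WEIGHT if must_have else NICE_TO_HAVE_WEIGHT)
        for _, must_have, score in reqs
    )

print(f"Weighted total: {weighted_total(requirements)}")  # 32 for this example
```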


Video overview of our selection process

NMT Engine: Training a Neural Machine Translation Engine in Microsoft Custom Translator

Engine translation sample

It’s hard to keep up to date with recent Chinese legal texts. My team and I decided to try our hands at customizing a neural machine translation engine (think Google Translate) that specializes in cybersecurity law, using a free trial of Microsoft Custom Translator. We fed the engine aligned Chinese and English text segments, and Microsoft Custom Translator did the heavy lifting of identifying patterns.
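
Training happens through the Custom Translator portal rather than in code, but once a custom model is deployed it can be called by passing its category ID to the standard Translator Text API. Here is a rough Python sketch of such a call; the subscription key, region, category ID, and sample sentence are placeholders.

```python
# A rough sketch of calling a deployed custom model through the Microsoft
# Translator Text API (v3). The subscription key, region, and category ID
# are placeholders; the category ID is what routes requests to your
# custom-trained engine instead of the generic model.
import requests

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

params = {
    "api-version": "3.0",
    "from": "zh-Hans",
    "to": "en",
    "category": "YOUR-CUSTOM-CATEGORY-ID",
}
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR-KEY",
    "Ocp-Apim-Subscription-Region": "YOUR-REGION",
    "Content-Type": "application/json",
}
body = [{"Text": "数据安全是网络安全的重要组成部分。"}]  # a sample source segment

response = requests.post(ENDPOINT, params=params, headers=headers, json=body)
print(response.json()[0]["translations"][0]["text"])
```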


ProjectFluent

Usually, the text translators work with is separated from the code where it’s ultimately published. Let’s be honest: many translators and linguists don’t know how to code. But what if your translation team is tech-savvy? Mozilla developed a localization system, Fluent, that lets translations adapt to the grammar of different languages. The system is now Mozilla’s baseline software for web-based localization projects.

What does Fluent do differently? Traditional localization often assumes a 1-to-1 equivalence between every source and target string. This just isn’t the case. Take, for example, the article “the” in English, which varies in German based on gender: “der,” “die,” or “das.” Chinese has no “the” article at all.

Fluent works well for text that is customized for a user based on permutations like numbers, dates, seasons, or gender. I experimented with Fluent’s “playground” to create messages for blood donors. The messages can be customized based on name, donation type, blood type, and the stage the donation has reached. Here are a few examples of the text in action.

Dex donated red blood cells. He has O+ blood, so he gets a customized message about his special blood-type. His blood is currently in the testing stage.

Dex's customized message code

Billy donated whole blood, with a blood type of A+ (no extra special message for him, but he is complimented for being a hero).

Billy's generic message code

Qian donated platelets. She has O- blood, so she gets a customized message too. Unfortunately, her donation was transported improperly and had to be thrown out. Rather than give too many unhappy details, this message encourages her to donate again with a generic, “Your donation saves lives” message.

Qian's customized message code

In our increasingly digitized world, translators’ roles are quickly adapting. It’s exciting to see ways we can build internationalization into code early on rather than treat it as an afterthought.

Artificial Intelligence

MediaPipe Holistic test
Screenshot selfie on MediaPipe Holistic’s demo page

A couple of days ago, Google blogged about a technology it’s working on: MediaPipe Holistic. It caught my eye because the post featured a GIF of the technology detecting the body, face, and hand motions of a prominent American Sign Language (ASL) instructor, Dr. Bill Vicars (I highly recommend his website, lifeprint.com, to anyone interested in learning more about sign language). Google claims MediaPipe Holistic can detect human poses, facial expressions, and hand motions in real time.

Does this mean we’ll have ASL versions of Google Translate and Google Assistant? Will Dr. Vicars be able to auto-grade his students’ ASL homework assignment videos? Probably not anytime soon.

This isn’t a new technology so much as three older technologies combined. First, it detects your overall body shape and creates a stick-figure pose outline. Next, it identifies where your face and hands are, and creates a skeleton of your hand joint landmarks and a more detailed grid outline of your face. So far, that’s all it does. No translation capabilities. Yet.

Right now, the technology is just a clunky proof of concept. You can try it out on the demo page like I did. What it does show is that computers can do a fairly decent job of detecting what your face and hands are doing, even from different camera angles and perspectives. Somewhere far down the road, we might be able to assign these hand and face shapes meaning values in a database.
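
If you would rather poke at it from code than from the demo page, Google also publishes the models as a Python package. Here is a minimal sketch that runs MediaPipe Holistic on a single image and reports which landmark groups it found; the image path is a placeholder.

```python
# A minimal sketch of running MediaPipe Holistic from the Python package on a
# single image and checking which landmark groups came back. The image path
# is a placeholder.
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

image = cv2.imread("selfie.jpg")  # placeholder path to any photo of a person
with mp_holistic.Holistic(static_image_mode=True) as holistic:
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = holistic.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# Each group is None if nothing was detected, otherwise a set of normalized
# (x, y, z) landmarks: the stick figure, face mesh, and hand skeletons.
print("pose detected:      ", results.pose_landmarks is not None)
print("face mesh detected: ", results.face_landmarks is not None)
print("left hand detected: ", results.left_hand_landmarks is not None)
print("right hand detected:", results.right_hand_landmarks is not None)
```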

Traditionally, translation has focused solely on text, but what about emotion? ASL is a great example because physical details like eyebrow placement are important grammatical components of the language. Spoken languages could benefit from closer attention to emotion, too: there’s a big difference between widening your eyes, waving your hands, and smiling as you say “fantastic!” and heaving your shoulders in a sigh, rolling your eyes, and muttering “fantastic.”

CAT Tools: Translating a Disability Housing App in Trados Studio

My team of tech superstars and I were drawn to a local disability organization called L’Arche Wavecrest, which provides services and housing for individuals with intellectual and developmental disabilities (I/DDs for short). Wavecrest serves a sizable Chinese-speaking population, but the organization’s website only offers information in English and Spanish, and thus our translation journey began!

Our team worked together to produce a Chinese language version of L’Arche Wavecrest’s information pamphlet and housing application. In addition to translating the text itself, we also simulated creating a project proposal/quote (though we donated the translation for free), generated a termbase, and created a translation memory (TM) in a computer assisted translation (CAT) tool called Trados Studio.


Here’s a brief video about our translation process and the lessons we learned along the way