Self-publishing a card game may seem daunting, but have no fear! In this post, I’ll show you how I took my card game Fruit Vendor from concept to game night.
Step 1) Planning
Fruit Vendor first came about when Alaina Brandt envisioned using a game to spark conversations about the localization vendor selection process. Through pre-design conversations, we boiled the design down to the information most needed on each card. Cards are teeny tiny, so try to keep the design simple.
Step 2) Choose Your Platform
There are many ways to self-publish your game. The site I used to publish Fruit Vendor is called Game Crafter. Whichever platform you choose, check their site for templates. All your assets, like card fronts, card backs, box, and instructions, will need to meet strict measurements to ensure they print correctly. You don’t want to make all your cards only to learn at the very end of your project that your assets are too small!
Step 3) Draft and Test
Rather than making all your cards right away, make one or two of each type of card in the deck. Evaluate how this drafting process went. Are your cards easily scalable? Did you hit any snags? My first draft of Fruit Vendor had images of humans for the vendor characters in the game. I realized through drafting that assigning the characters genders, skin colors, ages, and clothing could encourage bias in the hiring process, so I decided to scratch the humans and make the characters fruit instead! This ended up being a great decision as it inspired fun branding for the game. It was at this point that I settled on the name Fruit Vendor for the game.
Step 4) Plan for Your Future Self
My biggest regret for this project is that I made the assets in Photoshop. Vector graphics would have been easier to blow up for posters.
My best decision for this project was adopting puppet templates by Dave Werner to make fruit characters. By using these templates, I saved loads of time, and all of the characters can be automatically animated in Adobe Character Animator for video advertising!
These days, more and more people in localization are starting to pay attention to accessibility, and for good reason. Dubbing often intermingles with visual description. Subtitling collaborates with captioning. Web developers learning how to format text strings for localization are also learning how to add alt-text. Olivia Plowman and I decided to do a small project to learn more about accessibility from a localizer’s perspective.
Brief overview of our project
Types of Assistive Technology
There are tons of clever tools people use to navigate the web. Envato Tuts+ has a quick video overview with some examples. A barebones list includes:
Screen magnifiers: These make the text and/or other elements of the page larger.
Color changers: These tools can change the colors of the page, such as turning black text on a white background into white text on a black background. They might also change the appearance of links. My blog has a magnifier and color changer plugin by WP Accessibility.
Alternate input devices: Instead of typing or using a mouse, some people use technology that tracks body motion or eye movement.
Screen readers: Screen readers convert the contents of pages into a new format, such as sound narration or a braille display. My limited circle of colleagues who are blind prefer Apple’s built-in screen reader, VoiceOver.
Creating Accessible Content
The World Wide Web Consortium (W3C) Web Accessibility Initiative (WAI) is the place to start if you want to learn how to design accessible content. They published the first public working draft of their 3.0 accessibility guidelines this January. If English isn’t your first language, have no fear! They translate tons of their content.
WAI also created a fantastic, annotated demo website to show the importance of accessible design. The two sites look identical, but one version is a breeze for people with disabilities to navigate, and the other is a nightmare. The demo is a little old (2012, based on the 2.0 guidelines) but still relevant today. Hopefully a 3.0-guideline version of the demo comes out soon.
Content management systems (CMSs) like WordPress and Drupal have built-in features to make your site more accessible. For WordPress, pick a theme with an “accessibility-ready” tag. You can also add a plugin like WP Accessibility. For Drupal, look for the #D8AX pledge, which stands for Drupal 8 Accessibility eXperience. The MacArthur Foundation has compiled resources about WordPress, Drupal, Joomla, Squarespace, and Wix. They also have info on forms and surveys, as well as accessibility cheatsheets for web content, Microsoft Office, and Adobe.
Instructions for identifying Web Accessibility Issues by The National Center on Disability and Access to Education (NCDAE)
The WAI has a running list of all the possible Web Accessibility Evaluation Tools on the market (free and paid) globally. The list has checkers for specific locales and languages.
Translating Your Site
When localizing a game a few months ago, Olivia and I had trouble making sure our Computer Assisted Translation (CAT) tool had all the relevant text it needed from the source code. We wondered how well CAT tools do at picking up the non-visible text that screen readers use, so we ran a few test pages through SDL Trados Studio, memoQ, and Memsource.
We used this checklist to evaluate the pages:
Alternative Text for Images: This text is used to describe images embedded in the webpage.
Title Attributes: Similarly, this text describes elements like links and images, and often appears as a tooltip on hover.
Certain CSS Text for Screen Readers: This text does not appear to the end-user and is only used by screen readers to help further audibly describe the webpage.
Table Summaries: Screen readers can read tables quite literally, which results in a confusing jumble for the user. A table summary can help the user understand what the table shows.
Long Descriptions: Known as longdesc in HTML, this provides longer descriptions to the screen reader and can be found in the website’s HTML.
ARIA-Label Attributes: These label elements of the HTML that have specific purposes, like buttons.
Language Attribute: A label for the page’s language.
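To make the checklist above concrete, here’s a minimal Python sketch of the kind of extraction a CAT tool would need to do. It uses only the standard library’s `html.parser`, and the sample page, attribute list, and class name are my own invention for illustration, not taken from any real CAT tool’s parser:

```python
from html.parser import HTMLParser

# Attributes from the checklist that carry screen-reader text
# but never appear visually on the rendered page.
TRANSLATABLE_ATTRS = ("alt", "title", "aria-label", "longdesc", "summary", "lang")

class HiddenTextExtractor(HTMLParser):
    """Collect non-visible text that a CAT tool should surface to translators."""

    def __init__(self):
        super().__init__()
        self.found = []  # list of (tag, attribute, value) tuples

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        for name, value in attrs:
            if name in TRANSLATABLE_ATTRS and value:
                self.found.append((tag, name, value))

page = (
    '<html lang="en">'
    '<img src="fruit.png" alt="A basket of fruit">'
    '<button aria-label="Close menu">X</button>'
    '</html>'
)
extractor = HiddenTextExtractor()
extractor.feed(page)
# extractor.found now holds the lang attribute, the alt text, and the aria-label
```

If a translation tool silently drops any of these tuples, the localized site will read fine to sighted users while screen-reader users get untranslated (or missing) text.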
This project set a lot of cogs turning for me. I spent a while on Adobe InDesign tutorials and my own computer’s screen reader trying to figure out how to make the tables in our grading PDF work. Even this webpage pops up a couple of errors on WAVE. Accessible design is hard. Accessible design is time-consuming. Done right, though, it has some surprising benefits for translation.
Automated translation is a lot easier when your web pages are accessible. I’ve had to do research in Indonesian and Bosnian before. Do I know those languages? Nope! I just used Google Translate’s browser extension to get the “gist” of the pages. In my everyday life, I frequently deploy cursor dictionaries to look up new Chinese words. When text is embedded in images, none of these tools can work.
I look forward to seeing more LSPs and clients pay mind to accessibility. Even companies dragging their feet will need to start paying attention. Level Access predicts that there’ll be over 4,000 web accessibility lawsuits this year. In our increasingly global world, understanding accessibility legal requirements isn’t just “nice to have;” it’s a must.
Most importantly though, my screen-reader-using friends don’t deserve to get caught in a death spiral of garbled nonsense image labels.
Translation Management Systems are a huge investment for Language Service Providers. What can you do to ensure you choose the right system for your company? How can you justify your choice to senior management?
Xiaoxin Damerow and I worked together to simulate the enterprise software selection process for a hypothetical language service provider that specializes in audiovisual localization. We created a scorecard that breaks down key business requirements based on stakeholders, identifies “Must Have” versus “Nice to Have” features, and weighs total evaluation scores accordingly.
You’re welcome to download our .XLSX scorecard and tweak it for your own project.
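To give a flavor of how tier-based weighting works, here’s a small Python sketch. The requirement names, tiers, scores, and weight values are made up for illustration; they aren’t taken from our actual scorecard:

```python
# Hypothetical requirements: (name, tier, stakeholder score out of 5)
requirements = [
    ("Subtitle/caption file support", "must", 5),
    ("CAT-tool integration",          "must", 4),
    ("Vendor portal",                 "nice", 3),
]

# "Must Have" features count double in the total.
TIER_WEIGHTS = {"must": 2.0, "nice": 1.0}

def weighted_total(reqs):
    """Sum stakeholder scores, weighting each score by its tier."""
    return sum(score * TIER_WEIGHTS[tier] for _, tier, score in reqs)

total = weighted_total(requirements)  # 5*2 + 4*2 + 3*1 = 21.0
```

Comparing weighted totals like this across candidate systems makes it much easier to show senior management that a flashy “Nice to Have” feature didn’t outrank a missing “Must Have.”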
A few years ago, my former high school art teacher invited me back to campus to give a brief presentation on Chinese art history. I gushed with the students about the symbolism of jade and clouds and surprised them with fun facts about how the terracotta warriors were originally painted realistic colors, but the most interesting part was when I pulled out my own personal copy of Zhang Zeduan’s Qingming Festival Along the River. I lovingly took it out of its brocade embroidered box, gathered the students in a circle, and walked through the painting segment-by-segment, explaining how scrolls are actually meant to be read in sections like a comic strip.
My art teacher’s eyes widened. “All this time, I thought you were supposed to unroll the whole thing first,” he said.
It’s not his fault he had this misconception. Museums usually display Chinese hand scrolls completely unfurled, under shiny glass cases and big signs that scold, “no flash photography!” Crowds mill by in a clockwise rotation, glancing down at the images here and there. It’s the exact opposite of the intimate way scrolls are meant to be viewed.
How can we teach Western audiences about hand scrolls?
This experience got me thinking: there’s got to be a better way to teach Western audiences about hand scrolls. What if I made a scroll English speakers could actually read?
The New York Times has a fantastic English language demo on how to view a hand scroll.
Process
I chose one of China’s most iconic scrolls, the Admonitions of the Instructress to the Court Ladies, attributed to Gu Kaizhi (ca. 345-406) with text by Zhang Hua (232-300). This version was copied a few hundred years after Gu Kaizhi; sadly, the original is lost. There are multiple copies of this scroll, but I selected the best-known one, which was owned by the Qianlong Emperor (1711-1799), stolen after the Boxer Rebellion, and now lives at the British Museum an ocean away (stolen artwork at museums is a topic for a whole other blog post). The scroll is a political commentary originally meant for the murderous Empress Jia (257-300). It gives instructions on how a palace lady ought to behave, with some hilarious advice that would make One Direction proud, like “The ‘beautiful wife who knew herself to be beautiful’ Was soon hated.”
I carried out localization in Photoshop, making thorough use of its “content aware fill” and “flip horizontally” functions. To flip some of the seals, I used a magic wand to select the right color elements.
The translation is sourced from an official translation by Shane McCausland, which is what the British Museum uses for the scroll’s main section. When no official translations were available, I made my own, such as the frontispiece that reads, “For the ladies of the court.” I considered translating the seals as well, but eventually decided against it because they would mostly be untranslatable names. Plus the Chinese is prettier. Maybe in a future iteration, I’ll experiment and try my hand at stamp effects.
An early experiment with English seals
One of the toughest parts of the project was deciding on fonts. This wasn’t a transcreation project; I’m not trying to fool anyone into thinking this is a European artwork. If you’re interested in China-Europe art mashups, though, check out Emperor Qianlong’s court artist Giuseppe Castiglione (Lang Shining). I went back and forth between fonts that resembled brushwork and medieval block text. For the main text, I chose a font common among Roman scribes of the same time period.
Fun fact: the font for Xiang Yuanbian’s (1525-90) inscription is based on the handwriting of Queen Isabella I of Castile (1451-1504)
Extra Thoughts
I worry some people might look at my localized version, especially my clumsy handwriting at the end, and think, “But…but you ruined it!” And that’s precisely the point. Scrolls are meant to be cut, rebound, scribbled over with drunken ramblings, and stamped with seals that scream, “I was here!” Maybe even totally modernized like Dai Xiang’s reinterpretation of that Qingming along the River scroll I mentioned. They’re a participatory event. If you want to join the conversation, leave a comment below!
Usually, the text translators work with is separated from the code where it’s ultimately published. Let’s be honest: many translators and linguists don’t know how to code. But what if your translation team is tech-savvy? Mozilla developed a localization system that helps create customizable code for the grammar of different languages: Fluent. The system is now Mozilla’s baseline software for web-based localization projects.
What does Fluent do differently? In traditional localization, there is often an expectation of a 1-to-1 equivalency between every source and target language. This just isn’t the case. Take, for example, the article “the” in English, which varies in German based on gender: “der,” “die,” or “das.” Chinese has no article “the” at all.
Fluent works well for text that is customized for a user based on permutations like numbers, dates, seasons, or gender. I experimented with Fluent’s “playground” to create messages for blood donors. The message can be customized based on name, donation type, blood type, and usage stage. Here are a few examples of the text in action.
Dex donated red blood cells. He has O+ blood, so he gets a customized message about his special blood-type. His blood is currently in the testing stage.
Billy donated whole blood, with a blood type of A+ (no extra special message for him, but he is complimented for being a hero).
Qian donated platelets. She has O- blood, so she gets a customized message too. Unfortunately, her donation was transported improperly and had to be thrown out. Rather than give too many unhappy details, this message encourages her to donate again with a generic, “Your donation saves lives” message.
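Fluent’s own .ftl files handle this with select expressions that branch on runtime variables. As a rough stand-in, here’s the same variant logic sketched in plain Python; the message wording, blood-type table, and stage names below are my own inventions for the demo, not Fluent syntax or my actual playground messages:

```python
# Extra lines for blood types with special donation value (assumed wording)
SPECIAL_TYPES = {
    "O+": "Your O+ blood can help any patient with a positive blood type!",
    "O-": "Your O- blood is the universal donor type!",
}

# Stage-specific follow-ups; "discarded" deliberately stays upbeat and generic
STAGE_NOTES = {
    "testing": "Your donation is being tested right now.",
    "discarded": "Your donation saves lives. Please donate again soon!",
}

def donor_message(name, donation_type, blood_type, stage):
    """Assemble a message from variants chosen by runtime variables, Fluent-style."""
    lines = [f"Thank you, {name}, for donating {donation_type}. You're a hero!"]
    if blood_type in SPECIAL_TYPES:
        lines.append(SPECIAL_TYPES[blood_type])
    if stage in STAGE_NOTES:
        lines.append(STAGE_NOTES[stage])
    return " ".join(lines)
```

The key idea Fluent adds on top of this sketch is that *translators*, not developers, control how many variants each language needs, so a language with grammatical gender or complex plurals can branch where English doesn’t.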
In our increasingly digitized world, translators’ roles are quickly adapting. It’s exciting to see ways we can build internationalization into code early on rather than treat it as an afterthought.
Screenshot selfie on MediaPipe Holistic’s demo page
A couple days ago, Google blogged about a technology it’s working on: MediaPipe Holistic. It caught my eye because the post featured a gif of the technology being used to detect the body, face, and hand motions of a prominent American Sign Language (ASL) instructor, Dr. Bill Vicars (I highly recommend his website, lifeprint.com, to anyone interested in learning more about sign). Google claims MediaPipe Holistic can detect human poses, facial expressions, and hand motions in real time.
Does this mean we’ll have ASL versions of Google Translate and Google Assistant? Will Dr. Vicars be able to auto-grade his students’ ASL homework assignment videos? Probably not anytime soon.
This isn’t a new technology, just three old technologies combined. First, it detects your overall body shape and creates a stick-figure pose outline. Next, it identifies where your face and hands are, and creates a skeleton of your hand joint landmarks and a more detailed grid outline of your face. So far, that’s all it does. No translation capabilities. Yet.
Right now, the technology is just a clunky proof of concept. You can try it out on their demo page like I did. What it does do is show that computers can do a fairly decent job of detecting what your face and hands are doing, even from different camera angles and perspectives. Somewhere far down the road, we might be able to assign these hand and face shapes meaning values in a database.
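To make “assigning shapes meaning values in a database” a little more concrete, here’s a toy Python sketch: each known pose is stored as a flat vector of landmark coordinates, and an unknown pose is matched to the nearest labeled entry. Everything here is fabricated for illustration (real MediaPipe hands have 21 3D landmarks, and real sign recognition is vastly harder than nearest-neighbor lookup, since signs involve motion, not static shapes):

```python
import math

# Fabricated "database": label -> flattened (x, y) landmark coordinates.
# Only 3 points per pose, to keep the toy example readable.
POSE_DB = {
    "thumbs-up": [0.1, 0.9, 0.2, 0.8, 0.3, 0.7],
    "open-palm": [0.5, 0.5, 0.6, 0.4, 0.7, 0.3],
}

def nearest_pose(landmarks):
    """Return the database label whose landmark vector is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(POSE_DB, key=lambda label: dist(POSE_DB[label], landmarks))
```

Even this crude lookup hints at why camera angle matters so much: a small shift in every coordinate can push a pose closer to the wrong database entry.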
Traditionally, translation has focused solely on text, but what about emotion? ASL is a great example because it’s a language in which physical details like eyebrow placement are important grammatical components. Spoken languages could benefit from paying closer attention to emotion as well: there’s a big difference between widening your eyes and waving your hands, smiling, “fantastic!” versus heaving your shoulders in a sigh, rolling your eyes, and saying “fantastic.”