Terminology Management: My Experiences

May 16, 2019

Before I began the Spring semester of 2019 at the Middlebury Institute of International Studies (MIIS), I had little to no experience either understanding or managing terminology. In that light, I’d like to present my major takeaways from the Terminology Management course to demonstrate my new understanding of the subject field. The introductory ideas we covered were initially alien to me, because concepts and designations weren’t the types of terms I’d devoted any thought to before taking this course. And for good reason: they’re abstract terms, and regarding them with scrutiny seemed almost absurd. Yet it can’t be overstated how crucial they are to terminology management, since they form the basis of knowledge for any term. Specifically, I’m referring to the so-called “Triangle of Reference”.

Triangle of Reference

To explain, a concept is an abstract idea that is linked to a physical object and labelled with a designation. For example, a physical chair you might be sitting on while reading this would be an object, of which you have an idea in your head. How do you know it’s a chair? Because it matches a list of characteristics that are encapsulated in the concept of a chair as you perceive it. And, of course, the word “chair” itself is the label you apply to it, or the designation.
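For the programmers out there, one way to make the triangle’s three corners concrete is to picture a term entry as a tiny data model. This is purely my own illustration (the type and property names are invented for the example), not something out of a real terminology tool:

```csharp
using System;
using System.Collections.Generic;

// A concept is the abstract idea: a bundle of characteristics.
public record Concept(string Id, List<string> Characteristics);

// A term entry ties the triangle together: the designation (the label),
// the concept (the idea), and a real-world object it can refer to.
public record TermEntry(string Designation, Concept Concept, string Referent);

public static class TriangleDemo
{
    public static void Main()
    {
        var chairConcept = new Concept("C001",
            new List<string> { "has a seat", "has legs", "built for one sitter" });

        var entry = new TermEntry("chair", chairConcept,
            "the physical chair you are sitting on");

        Console.WriteLine($"\"{entry.Designation}\" labels concept {entry.Concept.Id}, " +
                          $"which matches: {string.Join(", ", entry.Concept.Characteristics)}");
    }
}
```

The point is simply that the three corners are distinct pieces of data: lose any one of them, and the “term” stops being fully defined.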

If any of that explanation sounded confusing or required more than one read-through, you are not alone. One of my discoveries in this course was that the field of terminology is, by its nature, self-referential. In a twisted sense, that means when you think about terminology, you’re thinking about thinking. In other words, from the perspective of a terminology manager, terminology deals largely in metadata (data about data). Because of this inherent self-reference, trying to classify or organize terms can feel migraine-inducing.

Mastering the “Triangle of Reference” is the basis of terminology work and, therefore, the figurative gate to heaven. The triangle also serves as a good example of visual organization, a concept heavily relied upon in modern terminology management. In fact, if you don’t organize data visually in this field, it can seem almost impossible to keep track of every aspect of a collection of terms, or term base. I have come to appreciate the value of a “concept map” because of its visual nature, and my experiences in this course have shown me that learning how to organize information visually is an indispensable skill. Here is an example:

Credit: World Intellectual Property Organization (WIPO)

Above is a cutaway from the WIPO Pearl concept map in its photography subdomain. This visualization is notably easier on the eyes than a traditional term base, and its interactivity enhances its usefulness as a supplement to one. Imagine navigating the same information in a term management tool: lots of clicking through dialog boxes and menus, wasting time and mental effort. With a concept map, you can immediately see the relationships between terms and make your own judgments as to how a term fits into a subject field.
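If you think about what a concept map actually encodes, it is a graph: terms as nodes, relationships as labeled edges. Here’s a minimal sketch of that idea (the photography terms and relation labels below are hypothetical stand-ins, not actual WIPO Pearl data):

```csharp
using System;
using System.Collections.Generic;

public static class ConceptMapDemo
{
    // Each edge records: source term, relation label, target term.
    static readonly List<(string From, string Relation, string To)> Edges = new()
    {
        ("camera", "has part",      "lens"),
        ("lens",   "has attribute", "focal length"),
        ("camera", "has attribute", "shutter speed"),
        ("tripod", "used with",     "camera"),
    };

    public static void Main()
    {
        // Walking the edges is the textual equivalent of scanning the map:
        // every relationship a reader would "see" is one line here.
        foreach (var (from, rel, to) in Edges)
            Console.WriteLine($"{from} --[{rel}]--> {to}");
    }
}
```

Spelled out this way, it’s obvious why the visual form wins: the same information is present, but the map lets you absorb every edge at a glance instead of one line at a time.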

And this brings me to my final takeaway from this course: the link between perception and definition. Earlier, in the chair explanation, I mentioned the concept of a chair “as you perceive it”. Perception is the driving force in what concepts mean to us as human beings. And, due to our nature, everyone perceives a concept in their own way, because we each have our own list of characteristics that we assign to it. Terminology, however, seems to contradict this idea, because terminology work is the effort to classify and organize terms so as to give them precise, unequivocal meanings. This is where the importance of context must be emphasized. When defining terms, context is king: it is the ultimate determiner of a complete concept. Without it, anyone encountering a term would have to work out the meaning on their own. Context allows terms to be defined precisely while allowing human perceptions to retain their unique nature.

In conclusion, terminology work is a great balancing act between the three points of the Triangle of Reference, between visual and textual organization, and between human perception and rigid definitions. If you’d like to discuss or share your own thoughts or experiences on terminology, please feel free to get in touch with me via my Contact page.

WIPO. WIPO Pearl Concept Map Search, https://www.wipo.int/wipopearl/search/conceptMapSearch.html. Accessed May 16, 2019.

A Little Experiment in Gaming L10n with Unity

FPS Microgame

During my time discovering the finer nuances of software and games localization at MIIS, I’ve come across practical strategies and techniques that can be leveraged for efficiency and process optimization: namely, baked-in localization support and internationalization practices. Implementing or taking advantage of them requires a technical skillset that is becoming increasingly necessary in a PM’s repertoire, not merely reserved for the engineering team. I have often heard that localization project managers must wear many hats because the localization process encompasses so many different types of tasks, but when software and video games are involved, “computer scientist” seems to be an unavoidable one.

Two of my colleagues, Nathaniel Bybee and Rebecca Guttentag, and I undertook the task of localizing a brief demo of “FPS Microgame”, a game developed in Unity. And, if it weren’t already obvious, the nature of the task required us to wear our own computer science hats. Along with our new roles came unique challenges requiring solutions that took us well out of our comfort zones. What follows are some of the steps we took to overcome those challenges in our attempts to track down and retrieve strings for translation.

From the beginning, we were able to use a pre-existing, baked-in localization method for Unity known as “I2” (read: “eye two”), which significantly aided our efforts to track down strings. In fact, the very first strings, such as the title screen text and opening menu buttons, were easily retrieved for localization this way. I2 works by allowing developers to attach a localization “component” to an object in Unity. This component targets the object’s strings and stores the localized versions, along with the options and attributes needed for smooth localization. It even allows developers to localize images (with DTP, in my case) and associate the localized versions with their respective languages, so that when, say, a language selector is used, the correct image is loaded on switching languages. With source files available for easy access to strings, this entire process could be streamlined and highly efficient. Since I didn’t have access to the source files, however, I had to do the usual work of recreating the text over a mask so that it would be editable and, therefore, detectable by a TMS. You can see my work below.

Original Image
Character styles were rampant in this image. I created upwards of twenty.
The Japanese and German localized versions overlaid. Layout was adjusted for text expansion/contraction.
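Going back to the I2 component itself, here is a rough sketch of how a string fetched through I2 might be applied to a UI label. I’m writing this from my reading of the asset’s documentation, so treat the exact names (and the term key, which I’ve invented) as assumptions to verify against the I2 docs:

```csharp
using UnityEngine;
using UnityEngine.UI;
using I2.Loc;  // the I2 Localization asset's namespace

public class TitleScreenLabel : MonoBehaviour
{
    public Text titleText;                        // the UI element showing the string
    const string Term = "UI/TitleScreen/Title";   // hypothetical term key

    void Start()
    {
        // Look up the stored translation for the currently active language.
        titleText.text = LocalizationManager.GetTranslation(Term);
    }

    public void OnLanguageSelected(string language)
    {
        // Switching languages also swaps any localized images registered
        // with I2, which is how our DTP'd images were handled.
        LocalizationManager.CurrentLanguage = language;
        titleText.text = LocalizationManager.GetTranslation(Term);
    }
}
```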

The major challenge we encountered after the initial strings and images was, unexpectedly, string concatenation. Anyone who has done software or game localization might cringe at that term, and for good reason: it is essentially a nightmare for localization and a hallmark of poor internationalization. If you look at the image below, you should see two strings with a logical coding statement between them. The full string only comes into existence when the code runs, which makes it impossible to translate accurately as is.
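The screenshot isn’t reproduced here, but the pattern looked roughly like this. Note that this is my reconstruction for illustration, not the game’s exact source:

```csharp
using UnityEngine;

public class ObjectiveHud : MonoBehaviour
{
    public int keysFound;

    string BuildObjective()
    {
        // The full sentence only exists once this line runs, so a translator
        // sees two meaningless fragments, and target-language word order
        // (German, Japanese, etc.) can't be respected.
        return "Pick up " + (keysFound > 0 ? "the next key" : "your first key");
    }
}
```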

Fortunately, with a little recoding, I was able to restructure things so that the two potential strings this statement could produce became separate, whole strings that could be wrapped in our I2 localization method. This allowed them to be stored in our list of strings and sent to translation with everything else.
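Concretely, the fix amounted to something like the following, with each branch promoted to a complete sentence under its own I2 term (the term keys are, again, my invention):

```csharp
using UnityEngine;
using I2.Loc;

public class ObjectiveHud : MonoBehaviour
{
    public int keysFound;

    string BuildObjective()
    {
        // Each branch is now a whole sentence stored in the I2 string list,
        // so translators receive complete, reorderable sentences.
        string term = keysFound > 0
            ? "Objectives/PickUpNextKey"    // "Pick up the next key"
            : "Objectives/PickUpFirstKey";  // "Pick up your first key"

        return LocalizationManager.GetTranslation(term);
    }
}
```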

The significance of this issue lies in internationalization best practices. Imagine a situation where the PM doesn’t know how to code and has to ask an engineer to step in and help: that’s wasted time that no one needs. Internationalization was thus a critical issue in this case (and there were certainly far more instances that we didn’t get the opportunity to examine). It’s my hope that issues like these can be evangelized to developers so that software and game localization processes can be optimized in the future.

Game Global Summit 2019

Prior to the Game Global Summit this year, I was ambivalent about specializing in any particular niche of localization. Although the industry itself is defined by the myriad niches carved into it, I had yet to discover my own; my intent was simply to go where the industry took me. But video games have always held a special place in my heart, and for once, my passions in life intersected in a way that felt inevitable. The video game industry has had a, shall we say, shady history with localization, but the twenty-first century has seen some incredible improvements to l10n workflows for game developers, and if Game Global is any sign, the future looks bright for both industries.

So here is a little summary of the Game Global Summit to give you a taste of what I experienced, and perhaps to encourage more professionals to give it the attention it deserves. Although it was dwarfed by the monolithic LocWorld41 conference in the days that followed, it was a clear success even in the wake of some last-minute schedule adjustments. The discussions were insightful and productive, and the presenters shared fascinating bits of info that all deserve mention here.

The first day featured Glory Chan-Yang Choe’s presentation on voice-over (VO) localization. Hearing her recount her tale of virtually single-handedly running a successful VO localization program from start to finish while ensuring top-notch quality was awe-inspiring, to say the least. Then there was George Tong’s insightful presentation on testing for compliance with China’s regulations when localizing games: a helpful reminder of what it means to go global and of the restrictions and laws that companies should be mindful of.

The second day saw Virginia Boyero and Patrick Görtjes’ stellar presentation on the l10n workflow used for Massive Entertainment’s “The Division 2”. The customizations they presented alongside the proprietary software they used made for a truly impressive presentation overall (not to mention the in-game animations and videos!). Lastly came Miguel Bernal-Merino and Teddy Bengtsson’s inclusive discussion about language variance, how it affects l10n projects, and what it means for the industry.

I’d also like to give special mention to the panel speakers during the discussion about culture in the video game and l10n industries. Michaela Bartelt, Kate Edwards and Miguel Bernal-Merino led an incredibly insightful and open discussion about how a multicultural world has impacted the industries and what we might look forward to in the future. It was, in my opinion, the highlight of the conference. Finally, a shoutout to the folks from Keywords Studios for sponsoring the event and to María Ramos Merino for running it. Thank you very much for your efforts in making it a reality. I’m greatly looking forward to returning next year!