When I was in my second year at MIIS, I spent a good amount of time creating one-pagers for startups that hadn’t thought about localization, hoping to convince them of the benefits of setting up a localization program. The research process was very time-consuming, as information on the different aspects of localization is scattered all over the place. This experience gave me the opportunity to look at localization from an outsider’s point of view, and it made me realize that we as localization professionals can contribute to localization evangelization across industries by building a centralized knowledge hub for anyone who wants to learn the basics of localization.
Hence, I teamed up with Megan Ling and In Young Kim and started working on a localization guide for startups. The ultimate goal of this guide is to provide a high-level overview of the essential elements of localization. We want to equip decision makers at startups with the knowledge they need to evaluate whether they need a localization program, and to help them engage in conversations with language service providers.
Project Roadmap
We kicked off the project by identifying the FAQs regarding internationalization and localization. Next, we put those questions into the following categories:
Localization in a Nutshell: This section covers topics such as a brief overview of the current landscape of the localization industry, the important players in localization, and common localization workflows. It will also define terminology such as L10N, I18N, G11N, and T9N (localization, internationalization, globalization, and translation).
Tools and Infrastructure: This section covers the tools and infrastructure that are critical to the success of a localization program, such as CAT tools, TMSs, and MT engines.
Investing in Localization: This section aims to give readers an idea of how much it costs to set up a localization program. It can be further broken down into sub-sections such as investment in infrastructure, investment in staffing, and the cost of purchasing language services.
Quality Management: This section will provide readers with the knowledge they need to manage translation quality proactively. More specifically, readers will learn about setting up metrics for quality evaluation, how to set up an LQA program, how to run root-cause analysis on long-term quality issues, and how to strategize quality improvement plans with language suppliers.
Vendor Management: This section introduces the various localization models (i.e., the in-house model and the outsourcing model), tips and tricks for running an RFP in search of language service providers, how to run QBRs with vendors, and how to implement improvement plans with them.
Localization Evangelization: This section covers localization evangelization among various groups of key stakeholders, such as the product documentation team, the product development team, the marketing team, and executive management.
Topic menu on our website
Our progress
Our team has built the content structure and set up a website to host the content. So far, we’ve drafted, reviewed, and published three articles in the Tools and Infrastructure section. The first draft of the article on localization evangelization is also complete.
Published article: TMS
Future plan
The bulk of this project will be researching online, pulling useful tips and tricks from webinars and workshops, and consolidating the knowledge in a structured way. On top of that, our team believes it would be a great idea to interview localization veterans who have experience setting up localization programs at startups. This will allow us to gain more insight into the common pain points and struggles of program managers and how they handle those challenges. Perhaps we can even collaborate with the podcast team, which is also interviewing localization professionals, to create various types of content for people who want to learn more about localization!
As my friends and I found mobile app UI localization immensely fascinating, we decided to localize an open-source Android app, ShutUp!, from English into Traditional Chinese, Simplified Chinese, and Korean for our final project showcase. We obtained the source code from GitHub and worked on the project mainly in Android Studio.
ShutUp! dashboard in English
The hardest part of this project was internationalizing the app properly so that it was ready for localization. We encountered two internationalization challenges: string externalization and string concatenation.
Challenge #1: string externalization
To externalize all the user-facing strings, we carefully examined the UI and documented the strings in the app. Next, we created an XML file to store all the strings and replaced the hard-coded strings with resource references, such as @string/stringName in XML files and R.string.stringName in Java files. This is where we hit a brick wall: though we were able to externalize the strings in the XML files, we couldn’t do the same for the strings in the Java files. As a result, the app looked fully internationalized in the emulator, but none of the buttons were functional.
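To make the externalization step concrete, here is a minimal sketch of the pattern we were aiming for (the layout, view, and string names are hypothetical, not taken from ShutUp!’s actual code). The English string lives in res/values/strings.xml, its translations live in res/values-zh-rTW, res/values-zh-rCN, and res/values-ko, and the Java code refers to the string by resource ID instead of hard-coding the text:

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.Button;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Button startButton = (Button) findViewById(R.id.start_button);

        // Hard-coded: invisible to the localization workflow.
        // startButton.setText("Start");

        // Externalized: Android resolves R.string.start_button against the
        // strings.xml that matches the device locale (values-zh-rTW, values-ko, ...).
        startButton.setText(R.string.start_button);
    }
}
```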
We tried to tackle this issue with various approaches. We tried to externalize the strings using all kinds of methods mentioned on Stack Overflow, but none of them worked. We even tried to solve the issue with brute force: if an action in a Java file is triggered by a condition containing a hard-coded string, we included the corresponding localized string in the condition, as shown in the image below.
Example of brute force solution
We implemented this approach and successfully localized the app into Traditional Chinese. However, the code would become too messy and difficult to read if we localized the app into more than one language.
Hence, to centralize the localized strings used in the Java files, we imported java.util.Locale so that the program could perform locale-sensitive operations, and we created multiple dictionaries (one per target language) with the localized strings as keys and the original strings as values. Every time the program runs, a dictionary is selected based on the locale, and the localized strings are matched back to the original strings. The program can then execute successfully even though the strings cannot be externalized and localized in the Java files, roughly as in the sketch below.
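The workaround looked roughly like this; the class name and the example strings are hypothetical reconstructions, not the app’s actual code:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class StringBridge {
    // One "dictionary" per target language: localized label -> original English string.
    private static final Map<String, String> ZH_TW = new HashMap<>();
    private static final Map<String, String> KO = new HashMap<>();

    static {
        ZH_TW.put("開始", "Start");
        ZH_TW.put("停止", "Stop");
        KO.put("시작", "Start");
        KO.put("중지", "Stop");
    }

    // Picks a dictionary based on the current locale and maps a localized
    // label back to the English string the hard-coded conditions expect.
    public static String toOriginal(String label) {
        String language = Locale.getDefault().getLanguage();
        Map<String, String> dict =
                "zh".equals(language) ? ZH_TW
                : "ko".equals(language) ? KO
                : null;
        return (dict != null && dict.containsKey(label)) ? dict.get(label) : label;
    }
}
```

With this in place, a condition like if (label.equals("Start")) becomes if (StringBridge.toOriginal(label).equals("Start")), so the hard-coded comparisons keep working in every locale.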
This method also worked, but it is not scalable, for two reasons. First, we would need to add dictionaries to every Java file containing hard-coded strings. Second, whenever we wanted to add a new language, we would have to update all of those Java files with a new dictionary. In short, this approach is counter-intuitive, unscalable, and error-prone.
That is why we decided to go back and try to fix the string externalization issue. Fortunately, we finally managed to externalize the strings in the Java files by wrapping the hard-coded strings with getString().
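Here is a hypothetical before-and-after inside an Activity (again, the string and method names are illustrative, not the app’s real ones):

```java
import android.app.Activity;
import android.widget.Button;

public class BlockerActivity extends Activity {
    private void onToggleClicked(Button button) {
        // Before: this comparison breaks as soon as the label is localized.
        // if (button.getText().toString().equals("Start")) { ... }

        // After: the label and the comparison are resolved from the same
        // resource, so they stay in sync in every language.
        if (button.getText().toString().equals(getString(R.string.start_button))) {
            // start blocking notifications (hypothetical app logic)
        }
    }
}
```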
Challenge #2: string concatenation
Another challenge we encountered was string concatenation. As shown in the following images, some of the strings were concatenated in the source code.
Example of string concatenation
The syntax happens to work in Chinese and Korean, so we didn’t fix the concatenation issue due to time constraints. However, if the app were to be localized into more languages, this is definitely something that would need to be fixed. Localizers can avoid string concatenation by rewording the sentence or by modifying the code to use placeholders that can be moved around within the sentence, as sketched below.
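For illustration, here is how a concatenated message could be rewritten with positional placeholders; the resource name and the message are hypothetical, not taken from ShutUp!:

```java
import android.app.Activity;

public class SummaryActivity extends Activity {
    // Hypothetical resource in res/values/strings.xml:
    //   <string name="blocked_summary">Blocked %1$d notifications from %2$s.</string>
    // A translator can reorder %1$d and %2$s freely in the target sentence.
    private String buildSummary(int count, String appName) {
        // Instead of concatenating fragments:
        //   return "Blocked " + count + " notifications from " + appName + ".";
        // format the whole sentence through the resource:
        return getString(R.string.blocked_summary, count, appName);
    }
}
```

Because the placeholders are numbered, the Chinese or Korean translation can put the app name before the count without any code change.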
Final thoughts
To sum up, after conquering the internationalization issues, we successfully localized ShutUp! into three languages.
ShutUp! in Korean
ShutUp! in Traditional Chinese
ShutUp! in Simplified Chinese
I believe this project can serve as a great example for evangelizing localization within engineering teams. By storing strings in a centralized location and using placeholders properly to avoid string concatenation, developers can streamline the localization workflow tremendously and save the company a lot of time and money.
This blog post explores the feasibility of using Content-Aware Fill in After Effects 2019 to remove on-screen text from videos.
Content-Aware Fill is a new feature in After Effects 2019 that enables users to remove unwanted objects from a video. As stated in Adobe’s User Guide, “this feature is temporally aware, so it automatically removes a selected area and analyzes frames over time to synthesize new pixels from other frames.” This feature has great potential for video localization, especially when the source files are not accessible to the localizers. Therefore, I wanted to look into the viability of using Content-Aware Fill to remove on-screen text from videos.
How to remove on-screen text with Content-Aware Fill
First of all, since I couldn’t find detailed tutorials on how to use Content-Aware Fill on the internet, I would like to start with the steps I took to mask on-screen text and cover it with Content-Aware Fill:
1. Import the video into After Effects.
2. Draw a mask using the Pen Tool.
3. Right-click the mask in the panel and select Track Mask.
4. A Tracker will pop up to track the mask frame by frame automatically (manual adjustments needed).
5. Go to the frame in which the object/mask shows up for the first time and add a keyframe in Position.
6. Move the mask out of the frame.
7. Go to the next frame and add a keyframe in Position.
8. Go to the Animation panel and click Toggle Hold Keyframe (so that the mask will not gradually move out of the frame).
9. Go to the frame in which the object/mask shows up for the last time and repeat steps 5-8.
10. Drag the Work Area bar to cover the part of the video that you want to cover with Content-Aware Fill.
11. Before generating the Fill Layer, you can create a reference frame in the Content-Aware Fill panel (a Photoshop file will pop up, and the reference frame will be added to the composition automatically).
12. Adjust the Alpha Expansion, Fill Method, and Range in the Content-Aware Fill panel.
13. Hit Generate Fill Layer.
14. The Fill Layer will be added to the composition automatically.
I tried to remove the on-screen text from Nike’s commercial “Fastest Ever” using this approach. Here are the original video and the edited version without the on-screen text:
Clip from Nike’s ad “Fastest Ever”
Nike’s ad edited via Content-Aware Fill in After Effects
As demonstrated in the videos, using Content-Aware Fill to remove on-screen text from this video generates subpar results. The footage created by Content-Aware Fill is mostly distorted, so the selected areas could not blend into the video seamlessly.
The potential issues with this approach
By examining the AE projects closely, I identified several potential issues:
The objects I want to mask move too swiftly in the video: When viewing the demo videos of AE Content-Aware Fill that I found online, I noticed that they all have one thing in common: the objects being masked and replaced with Content-Aware Fill tend to move slowly in the footage, and the elements surrounding the objects are relatively still from frame to frame. However, in Nike’s advertisement, both the objects and the surrounding elements move swiftly. This might diminish the program’s ability to analyze the frames and synthesize new pixels from other frames.
On-screen text expansion leads to distortion in the selected areas: Another thing the demo videos have in common is that the objects being masked tend to keep the same shape, so the editors only need to make small adjustments to the masks to cover the objects. Nevertheless, the text expansion in Nike’s ad distorts the newly generated blocks in the video, which is why the masked areas could not blend seamlessly with the surrounding elements.
A helpful trick
Although AE Content-Aware Fill didn’t do well at removing on-screen text from videos, I did find a useful tip that might improve the quality of the fill:
Create a reference frame before generating the Fill Layer: I noticed that the new pixels generated by Content-Aware Fill fit into the background better if a reference frame is created beforehand. The effect is especially significant if the reference frame and the frames containing the selected object are alike. To illustrate the importance of creating a reference frame before generating the Fill Layer, I generated two clips with Content-Aware Fill: one created with a reference frame, the other without. Here are the results of this experiment:
Clip from Mercedes-Benz’s commercial
Mercedes-Benz “Chicken,” edited with Content-Aware Fill in After Effects (without a reference frame)
Mercedes-Benz “Chicken,” edited with Content-Aware Fill in After Effects (with a reference frame)
As you can see, the background of the video with the reference frame is more consistent and less glitchy, which is why you should always generate a reference frame before using Content-Aware Fill in AE.
Why I do not recommend using Content-Aware Fill in video localization
To sum up, after trying to remove on-screen text from these two videos with Content-Aware Fill, I would not recommend using this approach in video localization, for the following reasons:
Content-Aware Fill doesn’t work well with text expansion and moving backgrounds: As mentioned above, the new pixels tend to be distorted if the selected objects expand in the video or the background moves quickly.
This approach is very time-consuming: Removing on-screen text with Content-Aware Fill is a lot of work. One has to mask the object properly, create keyframes to move the mask around, track the mask, and create reference frames before generating the fill. Hence, it is not worth it if this method only works well on videos with simple backgrounds, as localizers can simply create PSDs to mask the on-screen text in those cases.
In short, Content-Aware Fill in After Effects is definitely a tool with great potential for video localization. However, given its limitations and inconsistent performance, it might not be a very useful feature for video localization at the moment.
Throughout the course Advanced Computer-assisted Translation, I had the opportunity to explore a wide range of technologies and tools used in the world of localization, including machine translation, advanced QA settings in Trados, and project management platforms such as monday.com and Podio. In this article, I will illustrate what I learned from building a Statistical Machine Translation (SMT) model with Microsoft Custom Translator, applying regular expressions in QA, and filming an introductory video for a project management tool called Podio.
As machine translation, especially neural machine translation, has become one of the most important trends in the localization industry, it is of paramount importance for localizers to gain more knowledge of the development and application of MT. To get hands-on experience in managing MT projects and building MT models, we teamed up as a group of five and started working on a proposal for a client who wanted to incorporate customized MT into their translation projects. We chose the China Academy of Translation as our client, since we wanted to explore how machine translation performs on political speeches given by Chinese officials. To keep the project in scope, we narrowed it down to the Chinese-English translation of Premier Li Keqiang’s speeches.
At the first stage of the project, we drafted a preliminary proposal for our client, which laid out how we planned to build the customized SMT model with the datasets we found on the China Academy of Translation’s website, the criteria we would use, the budget and timeline for the project, and the projected outcome. You can find our preliminary proposal right here.
After our client approved our initial proposal, we started working on the pilot project by building the SMT model in Microsoft Custom Translator. Basically, we employed the bilingual transcripts of speeches given by Premier Li as our training and tuning data. For the testing data, we used two Chinese transcripts of Premier Li’s speeches. However, we had a hard time putting the training datasets together: while the materials we used were already aligned by paragraph, SMT models perform better when the datasets are aligned by sentence. Hence, we spent the bulk of our time aligning the bilingual materials manually, which was a major pain point for our team. Another problem with our project was that, while SMT models work best when the training data consist of short sentences with simple structures, the sentences in political speeches are lengthy and complicated, which makes it more difficult for the engine to identify and learn sentence patterns. As a result, although the translation produced by the SMT model we built was better than we expected, it didn’t reach the minimum QA standard we had set beforehand. In addition, the training of the customized model was entirely out of scope in terms of budget and time. Hence, in our updated proposal and presentation for the client, we advised against using customized SMT to translate political speeches.
During the second half of the semester, we explored more of the QA functionality in Trados. Using regular expressions, we can create rules that help us identify language-specific formatting or grammatical mistakes. For instance, when performing QA on Chinese translations, we can create a regular expression that captures dates formatted as MM/DD/YYYY and changes them into YYYY/MM/DD, the standard date format in Chinese. The ability to employ regular expressions to create customized QA settings can significantly enhance QA testers’ efficiency. Furthermore, using language-specific rules in the QA process has a massive impact on the overall quality of translation projects, especially when reviewers are proofreading texts written in a language they cannot read. In short, knowledge of regular expressions is definitely a plus for localizers.
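Trados has its own regular expression dialect in its QA settings, so the rule looks slightly different there; the sketch below simply illustrates the capture-group idea behind the date rule in plain Java:

```java
import java.util.regex.Pattern;

public class DateFormatRule {
    // Captures MM/DD/YYYY dates so they can be flagged or rewritten.
    private static final Pattern US_DATE =
            Pattern.compile("\\b(0?[1-9]|1[0-2])/(0?[1-9]|[12]\\d|3[01])/(\\d{4})\\b");

    // Rewrites every MM/DD/YYYY date as YYYY/MM/DD, the standard order in Chinese.
    static String toChineseOrder(String text) {
        return US_DATE.matcher(text).replaceAll("$3/$1/$2");
    }

    public static void main(String[] args) {
        // Prints: Released on 2019/05/31.
        System.out.println(toChineseOrder("Released on 05/31/2019."));
    }
}
```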
At the end of this article, I would like to briefly talk about a great project management tool called Podio. It is very suitable for project management in localization, as it makes it easy for PMs to create projects from templates and to assign tasks with different deadlines to team members. First of all, Podio has lots of built-in templates for different types of projects, and the templates are highly customizable. PMs can quickly modify a basic template and save it for future use, which makes creating a new project extremely easy. Secondly, projects can be broken down into assignable pieces for different team members. This is extremely convenient because team members can easily view the parts of the project they are assigned to and the deadlines for those mini-projects. In addition, team members can also schedule and launch meetings on Podio. If you are interested in learning more about the basic functions of Podio, here is an introductory video I made for this powerful tool.
Project managers shall, in the practice of their profession:
Perform their jobs with a high level of integrity and consistency
Take responsibility for their actions and performance
Be dedicated to recruiting talent that is qualified to work on the project
Respect the specifications set by the clients and the company and be devoted to delivering high-quality products and services to the clients
Respect the project deadlines that have been agreed upon
Commit to protecting the confidentiality of information concerning the clients and the talent
Ensure data security regulations are followed throughout the projects
Project managers shall, in their relations with clients:
Keep the communication channel open, and respond to clients’ inquiries in a timely manner
Be honest with the clients about issues encountered in the projects, such as timeline, budget, and production issues
Make sure the reviewers from the clients are involved in the projects from the beginning so that there will not be a gap in expectations in terms of the final deliverables
Project managers shall, in their relations with their team members:
Treat all project team members and coworkers fairly, regardless of their sex, race, religion, or age
Provide an inclusive and welcoming work environment to all team members
Assist project team members in their daily work and professional development
Accept constructive advice from team members and coworkers and update the best practices in the workflow accordingly
After learning about the best practices for adopting community translation and translation crowdsourcing in localization projects, my teammates and I came up with a community translation proposal for Codecademy. In our proposal, we demonstrate why Codecademy should adopt community translation, how it can prioritize content for the project, how to improve the quantity and quality of the translations, and what costs might be incurred.
Codecademy Community Translation Proposal
As an online interactive platform that offers free coding classes, Codecademy is gaining popularity among programming learners around the world. For Codecademy to seize this opportunity and expand its reach globally, the company has to start considering different localization solutions. Our team believes that community translation would be a great localization solution for Codecademy, for the following reasons:
Demand for localization from the international users
Our research shows that Codecademy has a very active multilingual forum for users who speak languages other than English. Users’ posts suggest that, aside from discussing the problems they encounter in their own languages, they also look forward to taking Codecademy’s online courses in their native languages. Therefore, we are positive that there is user demand for localization.
Existing resources that can be leveraged in community translation
In addition to requesting localized versions of the website, Codecademy’s multilingual users also offer to help with translation on the forum. This is a strong indicator that community translation is a suitable solution for Codecademy. As the success of community translation relies on a devoted crowd base, Codecademy’s passionate users have the potential to become valuable contributors to translation projects.
The nature of Codecademy makes it very appealing to the crowd
Although Codecademy is a for-profit company, it offers free programming education to users around the world. Therefore, we believe the nature of Codecademy makes it appealing to the kind of volunteers who contribute to the work of non-profit organizations, which means more people will be willing to get involved in the community translation project.
Recommendations for prioritization
To identify the most suitable materials for community translation, our team has several recommendations for prioritization.
Prioritization based on visibility
Volunteers are more willing to contribute to a localization project when the content they help localize is visible. Also, Codecademy’s homepage and the starter kit on the website are the first points of contact for users from different locales. Therefore, we believe that prioritizing this content in the community translation project is of great importance to the success of the whole project.
Prioritization based on popularity and the level of difficulty of the courses
We believe that, to maximize the impact of the localization project, we can start with the most popular courses on Codecademy based on site traffic data. In addition, as most of Codecademy’s users are novice programmers, we believe translating the introductory course for each programming language (i.e., Python, JavaScript, C++) should also be a top priority for the localization team.
Identify core languages
We believe the impact of the localization project can be maximized by prioritizing core languages based on user geography. For instance, the French, Spanish, and Portuguese communities are among the most active on Codecademy’s forum. Hence, it would be logical to select French, Spanish, and Portuguese as the primary target languages for localization.
Recommendations to optimize the quantity of translation
To improve the quantity of community translation, our first practice is to bring the right source content to the right community. In our case, Codecademy already has an existing forum, the international block, where many of its users gather to talk about the courses and even offer to help translate, so promotion there will definitely help us find volunteers who know both the content and the language.
Secondly, we want to stay connected and engaged with the community. We are thinking about appointing active users as community managers who can bridge the community and the company. If we are going to use a specific tool, we will make sure all the volunteers know how to use it, which means they will get the training they need. We would also consider regional face-to-face meet-ups.
Another thing we can do is micro-tasking: breaking the work up into small chunks can enhance the confidence and efficiency of the translators.
As for rewards, we want to offer volunteers something they really need or that could benefit them. Volunteers who have finished a certain amount of translation will gain access to courses that are normally limited to pro members. Also, depending on the amount and quality of their work, they will be granted professional certificates or badges, so they can cite their contributions on their resumes.
Recommendations to optimize the quality of translation
In addition to generating a satisfying amount of translation, it is crucial to have translation of satisfying quality, too. Otherwise, poorly translated content might not only fail to achieve our original educational and marketing goals but also do enormous harm to the brand image. To safeguard and boost the quality of the content translated by the non-professional community, we have designed several solutions for Codecademy:
Interactive Training Process
It goes without saying that contributors must develop an understanding of the content and acquire basic translation skills before they can provide quality translation.
For starters, to make sure translators have an understanding of the content, we believe they should be recruited from the existing active user community.
For content like course materials, we advise Codecademy to stipulate that only those who have finished the courses they would like to translate are eligible to translate the course content.
Moving on to building translation skills, we were inspired by the LegoDragon training Google designed for its linguists and came up with the idea of combining the initial training and the screening process in gamified activities.
Our proposal is to create short interactive courses on the elements listed in the style guide (like tone, punctuation, and tags), the glossary (ideally created by Codecademy employees who are already product experts, including course advisors, coaches, and content creators), tools, and workflow (like rating schemes). After contributors finish the short courses, they will have to pass a test before diving into translation. Since our goal is to include every community member who would like to contribute, we recommend that Codecademy allow unlimited attempts at the courses and the test, so that members who fail on the first try can retake it as many times as they want. That way, we kill two birds with one stone: we do not turn any members away, and we make sure our contributors are fully qualified.
Built-in Automatic QA Feature
Integrating the right CAT tool, one with essential QA features (spelling, grammar, punctuation, and tag checks), into the collaborative translation platform saves a lot of time and effort in the editing and proofreading stages. Ideally, the QA checks should run automatically when segments are confirmed, saving contributing translators the extra time of going through the translation again. Based on our research, Lilt (https://lilt.com) seems to be the best choice on the market, because it is one of the most intuitive, user-friendly tools and it also comes with an Auto QA feature.
Three Reviewing Approaches
In addition to the above-mentioned QA in the translation stage, we also suggest that Codecademy incorporate community voting and/or a final review conducted by professional translators and reviewers to make sure the translations align with users’ needs and to catch any remaining errors. Community voting systems have proved successful in many crowdsourcing and community translation cases; the voting and flagging in Translate Facebook (https://www.facebook.com/translations) and the validation in the Google Translate Community are the two most significant examples. Nevertheless, community voting makes the publishing timeframe of the translated output harder for companies to control, so hiring professionals to review is also a good workaround worth considering. If possible, Codecademy might even be able to invite its employees who are already product experts to review the content, too.
Raising Quality Awareness Through Level-Up
Most crowdsourcing and community translation platforms evaluate contribution by amount. We argue that the quality of translation is equally or even more important than the amount, so taking the approved/rejected segment ratio and error statistics into account when building the gamified leveling-up structure might be a great way to stress the importance of quality in the community.
Reliable Community Managers
Existing managers of the Codecademy learner community, as well as contributors with more experience and better performance, can become translator community managers. As reliable points of contact, they can answer questions and make sure contributors abide by the community guidelines.
Estimated costs and how to cover them
Since this is a not-for-profit project, it is crucial to estimate the initial costs and manage the budget efficiently. The first and most urgent consideration is recruiting costs. Although getting talented volunteers involved is important, we also have to minimize the cost of the recruiting campaign. Luckily, Codecademy already has community pages by locale with active users; utilizing these pages for recruiting and marketing will reduce the initial costs.
The next consideration is the cost of building platforms. An effective onboarding system is essential for training non-professional translators, and one answer is to build gamified training tools. The initial costs include building the proper infrastructure, such as data storage; once that is in place, operating the platform and managing the community will consume the budget over the long term. To ensure quality translation, professional reviewers who can work on quality assurance also need to be hired.
We also have to remember that this is a localization project: for localization and internationalization, DTP for layout and formatting will cost money. Lastly, rewards have to be provided to our devoted community translators, such as pro memberships and swag. What results, then, can we expect from this community translation? First of all, all the strings in the user interface and the prioritized materials will be translated into the core languages selected based on site traffic. As a result, a robust and sustainable system will be established, which will facilitate future localization. Last but not least, community translation will help establish an active volunteer translator community and engage users while boosting brand loyalty, raising brand awareness, and growing potential paid membership. All these efforts will lead to market expansion into other locales.
Translation crowdsourcing has become one of the hottest ideas in the localization industry. However, sometimes even professional localizers might not be fully aware of the myths surrounding translation crowdsourcing. For instance, will translation crowdsourcing threaten traditional translation suppliers? Does employing translation crowdsourcing mean that you can have your content translated for free? These are important questions to think about before taking a stance on translation crowdsourcing.
In the course “Social Localization/Translation Crowdsourcing,” we not only tackled the myths surrounding this topic but also learned best practices for improving the quantity and quality of community translation from pioneers in this field, such as Mozilla, Facebook, and Twitter. The infographics below summarize those best practices:
How to increase the quantity of translation via motivation in community translation
How to improve quality of translation in translation crowdsourcing
In the following post, I will go into further detail about how we applied the knowledge and skills we learned in this course in a community translation proposal for a client.
In the course Localization Project Management, I grouped up with five classmates to implement a localization project throughout the semester. We chose Hyperbolic Magnetism, an indie game studio based in the Czech Republic, as our client, and our goal was to localize the press release of one of their games, Beat Saber, into six languages.
Our workflow in this project can be divided into three phases: pre-production, production, and post-production.
The actions in the pre-production stage are listed as follows:
Setting up our LPM office by creating Trello checklists and Dokuwiki pages for our team, the client, and the project
Creating a specification template and drafting the specification for our project
Creating a quote template and a quote for our client
Setting up talent screening standards and searching for talent for our project
Creating translation projects, TM, and TB in Memsource for the translators
Creating WO and PO templates
Drafting the general style guide for six languages
Sending the translator’s kit (including the translation project, TM, TB, WO, PO, and style guide) to translators
The actions involved in the production stage are listed as follows:
Translating: translate the source document into six languages in Memsource and export the deliverables
Editing: run QA check and spell check, ensure terminology matches the term base, check for accuracy and fluency of the translated text
Proofreading: run spell check again, check for correct numbers and formatting
Conducting final verification: confirm the formatting and naming of the files are correct
The actions in the post-production stage are listed as follows:
Updating TM and TB
Handing in the deliverables to the client
Sending invoices to the client and the translators
Having a post-mortem meeting with the team
Among all the things I learned from the project, one really stood out to me: the importance of paying attention to details. I learned this lesson through two challenges we encountered in our project.
First of all, we had to use Top Tracker to track our time while working on the project so that we could distinguish billable from non-billable time. However, when we reexamined our time records, we noticed that sometimes we hadn’t created a new time slot when we switched from one task to another. This made it difficult to determine the exact time we spent on certain tasks, which undermined our credibility when charging the client for billable working hours.
In addition, while we were in the proofreading stage, a segmentation issue popped up. We hadn’t noticed it earlier because the translations seemed fine when viewed as a whole. However, adopting the problematic segmentation would have messed up the translation memory, so we had to fix the segmentation to avoid trouble with the TM in the future. As a result, we did a lot of rework during the TEP stage and spent a decent amount of extra time on the project, which definitely increased our cost.
In hindsight, we should have been more careful with details at every stage of our project. We should have been more mindful of the activity labels in the time tracker, as they affect our ability to back up our invoice to the client with organized time records. And we would not have had to redo our translation if we had examined the translation project more closely while we were working on it. In short, I realized that the best way to carry out a localization project efficiently is to be careful with the details in the first place.
Finally, although there were some small bumps along the way, I truly enjoyed working with my teammates. Even though the workload was formidable and sometimes stressful, our team managed to pull off the project by supporting each other. When everyone is willing to go the extra mile for the team, a strong sense of trust develops, so we can count on others to do their parts and won’t be afraid to ask for help. In short, this is exactly what a dream team is like. I would love to apply what I’ve learned about teamwork in this project to my future work.
For the website localization final project, my teammates and I decided to localize a game called Tower Building from Simplified Chinese into English. We chose this game for the following reasons:
Tower Building is an open-source game on GitHub, which means the source code can be easily accessed.
Tower Building is primarily built with HTML5 and JavaScript. As we had built and localized our own websites using these languages during the class, we felt more confident localizing this game.
When we were assessing the source code of the game, we noticed that most of the strings in the game are embedded in images. Since we hadn’t learned desktop publishing yet, this became the biggest challenge in the localization process.
To solve this problem, we did some research on how to localize strings in images. According to the article What You Need To Know About Graphic Localization, the best practice for localizing graphics containing text is to obtain the original artwork, which contains editable text layers. In our case, however, the source artwork files with editable text layers were not accessible, so we had to photoshop every image containing text.
Hence, we started working on localized versions of the images in Photoshop. Nevertheless, as we weren’t very adept at using Photoshop, it took us a while to figure out how to remove the original text and add the translated text to the images. Meanwhile, Hannah found a quick fix for this problem: if an image has only one background color, we can simply use Preview on Mac to edit it.
After we finished editing the images, we duplicated all the files in the original folder and put them in a new folder (named en-us) to hold the localized content. Then we added hreflang attributes to the HTML files to distinguish the two versions of the website. Next, we translated a few strings and swapped out all the images in the localized HTML file. At this point, both the original and the localized HTML files seemed fine. So far so good.
Nevertheless, we hit a wall when we tried to link the original website to the localized one. We adopted the approach used in our previous localization assignment: a language selector for switching back and forth between the websites. However, we couldn’t get the language selector to appear on the homepage of the game. Then I realized it was because we had forgotten to put the span tag inside a div, so the browser didn’t know where to insert the language selector. After this was fixed, the language selector finally showed up on the webpage.
Shortly after fixing that issue, we faced another problem: the language selector JavaScript file could not function properly. We didn’t know what was wrong with the code, so we adopted an alternative approach: connecting the original webpage and the localized webpage with a hyperlink. It worked, but we weren’t satisfied with this solution, as it would not scale if we were to localize the game into more languages.
Therefore, we were determined to fix the problem with the language selector. After thoroughly examining the code, we realized that the problem was the directories referenced in the JavaScript file. In our previous assignment, the language selector was designed to localize a website from Chinese into English, so the localized folder was named zh-tw instead of en-us. The language selector could not find the correct HTML file because the folder name was wrong. After we fixed the folder name in the script, the language selector finally worked.
In short, I think the greatest lesson I learned from this project is to be perseverant. It is very important to have grit when facing one challenge after another, because the process of troubleshooting and debugging can be very frustrating. We could have taken the easy way out by settling for the hyperlink approach. However, we wanted to figure out why the selector didn’t work, so we got to the bottom of it and resolved the issue. Hence, I believe the lesson here is that if we are determined to pull through a difficult task, we can achieve our goal and really grow from the experience.