Evaluating Translation Management Systems

The race to automate more of the translation workflow has been heating up in recent years. Translation Management Systems (TMS) are making translation projects more efficient, allowing companies (and more specifically, project managers) to more easily oversee multiple large projects. As part of my coursework at MIIS, I participated in a workshop designed to teach not only how to use a TMS, but also how to evaluate and recommend systems to companies based on their unique needs.

Key Business Requirements for BangZoom!

To put our theoretical knowledge to the test after a week of getting to know various TMS (such as WorldServer, GlobalLink and XTM), my team created a pilot project. The goal was to select a fictional client – in this case, BangZoom! – and recommend either GlobalLink or WorldServer based on an evaluation of their key business requirements. To carry out this project, my team created a detailed score sheet in Excel to rank the features of each TMS.

Workflow Comparison Between WorldServer and GlobalLink

To see which TMS came out on top, you can access our presentation HERE.

Once we found our sea legs in the wide ocean of TMS, my class was ready to take a consulting job for a real client. To establish our list of business requirements, we held a meeting with a representative from a large Language Service Provider (LSP). We found ourselves with the following task: help the client supplement their current TMS, Memsource, with additional tools (such as Plunet or XTRF), or recommend a complete overhaul of their current system by adopting a new TMS.

The breakdown of how each TMS was scored.

To take on this large project, our class split into teams. My team handled the comparison of the client’s current TMS to other TMS on the market. Our comparison hinged on the linguistic features of each TMS. Could a rival TMS match Memsource’s features and functionality, or even beat it? To find out, we assigned weights to the linguistic requirements based on client input and tested the functions of each TMS, making sure to document our processes.
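The weighting approach described above can be sketched in a few lines of code. This is a minimal illustration, not our actual score sheet: the feature names, weights, and scores below are hypothetical placeholders, and the real evaluation was done in Excel with far more criteria.

```python
# Hypothetical weighted scoring of TMS linguistic features.
# Weights and scores are illustrative only, not the client's real figures.

def weighted_score(weights, scores):
    """Combine per-feature scores (0-5 scale) into one weighted average."""
    total_weight = sum(weights.values())
    return sum(weights[feature] * scores[feature] for feature in weights) / total_weight

# Client-driven weights: how much each requirement matters (hypothetical).
weights = {"TM leverage": 5, "QA checks": 4, "Terminology": 3}

# Tester-assigned scores for each system (hypothetical).
memsource = {"TM leverage": 4, "QA checks": 4, "Terminology": 5}
rival_tms = {"TM leverage": 5, "QA checks": 3, "Terminology": 3}

print(round(weighted_score(weights, memsource), 2))  # 4.25
print(round(weighted_score(weights, rival_tms), 2))  # 3.83
```

The same arithmetic works in Excel with SUMPRODUCT over the weight and score columns; the code just makes the "weight times score, divided by total weight" logic explicit.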

The linguistic comparison that my team was responsible for.

You can find our finalized presentation HERE and our in-depth score sheet and analysis HERE.

After three weeks of working in teams, coming together for scrum meetings, and creating our final presentation, our class gave our recommendation to the client through a final meeting. The client was pleased with our results, and will take our findings back to her own team for further evaluation.

The ability to quickly become familiar with TMS (and CAT tools in general) and to accurately assess their strengths and weaknesses was my main takeaway from this course. I also gained experience using Excel to put a number on the quality of a given feature, allowing me to definitively show which system came out on top through an objective process. I look forward to building on these skills in future courses and utilizing them in the industry, because choosing the right TMS for your company makes the translation process smoother and more cost-effective for every stakeholder involved.