We conduct periodic Quality Audits and weekly annotations of sampled data, testing hypotheses and running deep analyses wherever we find higher-than-normal error rates in our pipeline. We use the industry-standard metric, MQM (Multidimensional Quality Metrics), to compare our performance objectively with third parties and open-source translation libraries.
Unbabel MQM provides an adapted framework for defining and describing the quality metrics used to assess translated texts and to identify specific issues in them.
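MQM-based scoring is typically computed by weighting each annotated issue by its severity and normalizing the total penalty by the length of the text. The sketch below illustrates that general approach; the severity weights, issue-type names, and 0-100 scale are illustrative assumptions, not Unbabel's actual values.

```python
# Hedged sketch of an MQM-style quality score: each annotated issue
# carries a severity, each severity a penalty weight, and the total
# penalty is normalized by the word count of the translated text.
# The weights below are illustrative, not Unbabel's actual values.

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(issues, word_count):
    """Return a 0-100 score from a list of (issue_type, severity) pairs."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in issues)
    # Normalize penalties per word, then map onto a 0-100 scale,
    # clamping at zero for heavily penalized texts.
    return max(0.0, 100.0 * (1 - penalty / word_count))

issues = [
    ("Accuracy/Mistranslation", "major"),  # hypothetical issue types
    ("Fluency/Spelling", "minor"),
]
print(mqm_score(issues, word_count=120))  # 95.0
```

A score like this lets annotated samples from different vendors or systems be compared on the same numeric scale.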
Our annotation process is conducted by a pool of specialists with backgrounds in Translation Studies and Linguistics, who build a deep store of knowledge within our platform that boosts overall quality and shortens delivery turnaround time.
Unbabel ensures the quality of its translation through two checkpoints:
- Assessment of the quality of the work delivered (fine-grained issue annotation)
- Assessment of the quality of the editors (a 1-5 rating)
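The second checkpoint amounts to tracking a 1-5 quality rating per editor across reviewed jobs. A minimal sketch, assuming a simple running average (the class and identifier names are hypothetical, not Unbabel's API):

```python
# Hedged sketch of per-editor quality tracking: each reviewed job
# contributes a 1-5 rating, and an editor's quality is reported as
# the running average of their ratings.
from collections import defaultdict

class EditorRatings:
    def __init__(self):
        # editor id -> [sum of ratings, number of ratings]
        self._totals = defaultdict(lambda: [0, 0])

    def rate(self, editor, rating):
        if not 1 <= rating <= 5:
            raise ValueError("rating must be between 1 and 5")
        total = self._totals[editor]
        total[0] += rating
        total[1] += 1

    def average(self, editor):
        total, count = self._totals[editor]
        return total / count if count else None

ratings = EditorRatings()
ratings.rate("editor_42", 5)  # "editor_42" is a made-up identifier
ratings.rate("editor_42", 4)
print(ratings.average("editor_42"))  # 4.5
```

In practice such an average would feed routing decisions, e.g. assigning higher-rated editors to more demanding jobs.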
Drawing on the examples of leading Quality Assurance systems and tools, Unbabel MQM provides a comprehensive and extensible list of quality-issue types that can be used in several Quality Assurance tasks.