We provide a range of Quality Features to support translation quality, the most versatile being Linguistic Quality Assurance.
Linguistic Quality Assurance (LQA) is integral to evolving your localization standards. LQA is a process in which human linguists review translations and, using a set methodology or schema, determine whether they contain any objective errors. Errors are grouped into categories.
Examples of objective translation errors could include, but are not limited to:
- Any inaccuracy or inconsistency with your Glossary, Style Guide, or Translation Memory
- A disregard of your Quality Checks
- A disregard of instructions or information shared in an attachment
- Incorrect spelling or grammar
- A translation taken out of context
LQA can be performed by internal Localization teams or outsourced LSPs (Language Service Providers), and it can be applied to any method of translation, including human translation, machine translation, or machine translation with post-edits. The process of performing Linguistic Quality Assurance begins with recording the errors found in a set of translations and ends with reporting and analysis.
Our LQA feature plays an influential role in your localization process. By evaluating translation quality against a scoring schema, you can identify the categories with high error counts, pinpointing the areas that need improvement to raise the standard of your translations.
Evaluating with LQA
To make evaluations as objective as possible, a categorized list of errors is required. A translation quality error is a problem in the translation that most interested parties can objectively agree on. For example, a translation that fails to follow your terminology or contains a grammatical error exhibits an objective translation error.
Each error is assigned a severity level. The severity level can be numeric (e.g. 0-5) or custom (e.g. neutral, minor, major).
The collection of errors and severity levels that translations are evaluated against is called an LQA schema.
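To make the idea of a schema concrete, here is a minimal sketch of how an LQA schema, error recording, and category-level reporting might be represented in code. The category names and severity weights below are hypothetical examples for illustration, not a prescribed standard or any particular product's API.

```python
# Illustrative LQA schema sketch: error categories plus severity levels.
# Category names and severity weights here are hypothetical examples.
from collections import Counter

# The schema: allowed categories and a weight for each severity level
CATEGORIES = {"Terminology", "Style", "Grammar", "Consistency"}
SEVERITY_WEIGHTS = {"neutral": 0, "minor": 1, "major": 5}

def record_error(errors, category, severity):
    """Record one reviewed error, validating it against the schema."""
    if category not in CATEGORIES:
        raise ValueError(f"Unknown category: {category}")
    if severity not in SEVERITY_WEIGHTS:
        raise ValueError(f"Unknown severity: {severity}")
    errors.append((category, severity))

def report(errors):
    """Summarize error counts per category and a weighted penalty total."""
    counts = Counter(category for category, _ in errors)
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return counts, penalty
```

With a structure like this, a review pass records each error against the schema, and the report highlights the categories with the highest counts, which is where the translation quality effort should focus.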