This post started as a comment on Seth’s blog, but the resulting thoughts were too long for a comment 😉
Conclusion: Litmus manual tests are important and hard to automate to replace visual verification. Has everyone played with Litmus?
I agree, Litmus tests are important (and, to be honest, quite boring), but they're not enough. For example, some of the steps we need to take to ensure the quality of our localized builds are:
- Open every menu and look for access key conflicts.
- Open every window and look for cropped dialogs or access key conflicts. Sometimes you don’t even know how to check some strings, for example because it’s impossible to trigger that event manually (it took me weeks to find the “CrashMe” extension to check Breakpad’s localization).
It sounds like a task suitable for automation: create an extension that takes screenshots (in PNG format) of every menu and window and saves them to a folder (I remember something similar from Axel, but it covered only a limited set of windows). Since we have to check all these items anyway, such a tool would greatly simplify our QA work: finding cropped dialogs would become a matter of seconds.
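The access key part of this check could even be automated without screenshots. A minimal sketch in Python, assuming the localized access keys of a single menu have already been parsed into a dict (the entity names below are hypothetical, just for illustration):

```python
# Sketch: detect access key conflicts within one menu, given a mapping of
# entity name -> assigned access key (e.g. parsed from a localized DTD file).
from collections import defaultdict

def find_accesskey_conflicts(accesskeys):
    """Group entities by their (case-insensitive) access key and return
    only the keys claimed by more than one entity in the same menu."""
    by_key = defaultdict(list)
    for entity, key in accesskeys.items():
        by_key[key.lower()].append(entity)
    return {k: v for k, v in by_key.items() if len(v) > 1}

# Hypothetical example: two items in the same menu both use "s"
menu = {
    "fileSaveCmd.accesskey": "s",
    "fileSaveAsCmd.accesskey": "s",
    "fileOpenCmd.accesskey": "o",
}
print(find_accesskey_conflicts(menu))
# → {'s': ['fileSaveCmd.accesskey', 'fileSaveAsCmd.accesskey']}
```

Run per menu over all localized files, this would turn the most tedious part of the manual pass into a report you can scan in seconds.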
Conclusion: We might need to blog to demystify the bug filing process for l10n volunteers. Or, provide easy tools like bug-by-email templates.
Let’s take a step back: is Bugzilla the best tool to discuss and improve localizations? For Italian we use our forum for this purpose (in the past we used a mailing list), and the discussions can reach a considerable length (the thread about Fx3.1 is 25 pages long right now): a single bug would be hard to follow, and multiple bugs would probably make my work as a localizer harder.
While Bugzilla is a valuable tool for released (and string-frozen) versions (you eventually need a patch plus approval to fix the problem), I don’t think it’s the best solution for nightly/trunk localization.
Another problem: as far as I know, we’re not clearly telling people how they can report errors or improve the localization; we treat these like normal bugs. Maybe a localized landing page would be a better choice: each team would be free to choose its own tools – for example an email address, a forum section, or even Bugzilla if they want – and help users find the right communication channel.
Conclusion: We might have to provide a generic blue-print for effective test planning for any locale, indicating what are the key l10n steps in test planning.
This sounds reasonable (as a suggested path, not a mandatory one).
Conclusion: We might need a third-party QA service to help lead volunteer L10N test activities.
Does “a third-party QA service” mean that a third-party entity checks the localization and reports errors? This sounds dangerous and potentially time-wasting: to check a localization you need to know a lot about the underlying decisions (for example, why we chose to translate some words in a specific way, or why verbs address the user in the third person rather than the second) and about the QA criteria. Third-party QA and localization have both been tried in the past, and the results were far from good.
2 responses to “About Localization-QA survey results”
This is a terrific response to my post. Thanks, flod. I am going to go through each point you made and discuss with timr (QA lead at Mozilla).
You mentioned that the trackback is not working on my blog. I haven’t heard that before. Can you give me a bit more detail on the problem you experienced?
The trackback to your blog (sent automatically by WordPress, since there’s a link to it in this post) hasn’t appeared in the comments yet. I can see three possible explanations:
* the theme filters trackbacks out of the comments and doesn’t display them
* the trackback has been marked as spam
* the trackback got lost somewhere along the way
In the third case, there’s something blocking trackbacks directed to your blog 😉