

Google’s aggressive push into artificial intelligence continues in 2026 with the release of TranslateGemma, a new set of multilingual AI models focused on translation. Aimed at the open community, these models support translation across a wide range of languages from both text and image inputs.
In a blog post, Google announced three variants of TranslateGemma, with 4B, 12B, and 27B parameters, now available on Hugging Face, Kaggle, and Vertex AI. Released under a permissive license, the models can be used for both academic and commercial purposes. The 4B model is optimized for mobile and edge devices, the 12B version targets consumer laptops, and the 27B model offers the highest accuracy while remaining capable of running locally on a single Nvidia H100 GPU or a TPU.
Built on Gemma 3, TranslateGemma was trained using supervised fine-tuning on diverse datasets to ensure strong performance even in low-resource languages, and further improved through reinforcement learning. Google claims the 12B model outperforms Gemma 3 27B on the WMT24++ benchmark, delivering similar quality with fewer than half the parameters. According to Google, TranslateGemma has been trained and evaluated on 55 language pairs, covering languages such as Spanish, French, Chinese, and Hindi, with exposure to nearly 500 additional language pairs. Beyond text translation, the models can also detect and translate text within images, expanding their real-world usability.











