| Key Aspect | Details |
|---|---|
| Event | Google launched TranslateGemma, an open suite of translation-focused AI models. |
| Date of Announcement | January 15, 2026. |
| Based on Architecture | Gemma 3 architecture. |
| Purpose | Break language barriers worldwide through efficient, high-quality translation. |
| Supported Languages | 55 global languages, with training extended to nearly 500 additional language pairs. |
| Model Sizes | Available in 4B, 12B, and 27B parameter sizes for varying hardware capacities. |
| Performance Highlights | The 12B model outperforms the 27B Gemma 3 baseline on the WMT24++ benchmark. |
| Efficiency | Faster inference and lower latency at reduced computational cost, while preserving high translation fidelity. |
| Training Methodology | Two-stage fine-tuning: Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL). |
| Multimodal Capabilities | Improved performance in translating text within images without dedicated multimodal fine-tuning. |
| Platform Compatibility | The 4B and 12B models run on mobile devices, edge hardware, and laptops; the 27B model targets cloud deployment (see the usage sketch below the table). |
| Question | Q. TranslateGemma, recently seen in the news, is associated with which field? |
| Answer | C. Artificial intelligence-based language translation |
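
As a rough illustration of how such open-weight models are typically run locally, below is a minimal Python sketch using the Hugging Face transformers library. The model ID `google/translategemma-4b-it` and the plain-text prompt format are assumptions for illustration only, not confirmed details of the release; check the official model card for actual checkpoint names.

```python
# Minimal sketch: running a translation prompt through an open-weight
# Gemma-family model with Hugging Face transformers.
# NOTE: the model ID below is a hypothetical placeholder; consult the
# official TranslateGemma release for the real checkpoint names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/translategemma-4b-it"  # assumption, not confirmed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 4B model laptop-friendly
    device_map="auto",           # place weights on GPU/CPU automatically
)

prompt = (
    "Translate the following sentence from English to French:\n"
    "The library opens at nine."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding is sufficient for a short translation demo.
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

For cloud deployment, the same code would load the larger 27B checkpoint simply by swapping the model ID, at the cost of needing correspondingly more accelerator memory.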


