

| Key Aspect | Details |
| --- | --- |
| Event | Google launched TranslateGemma, an open suite of translation-focused AI models. |
| Date of Announcement | January 15, 2026. |
| Based on Architecture | Gemma 3 architecture. |
| Purpose | Break language barriers worldwide through efficient, high-quality translation. |
| Supported Languages | 55 global languages, with training extended to nearly 500 additional language pairs. |
| Model Sizes | Available in 4B, 12B, and 27B parameter sizes for varying hardware capacities. |
| Performance Highlights | The 12B model outperforms the 27B Gemma 3 baseline on the WMT24++ benchmark. |
| Efficiency | Faster inference, lower latency, reduced computational cost, and high translation fidelity. |
| Training Methodology | Two-stage fine-tuning: Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL). |
| Multimodal Capabilities | Improved performance in translating text within images without dedicated multimodal fine-tuning. |
| Platform Compatibility | Runs on mobile and edge devices, laptops (4B, 12B), and cloud deployments (27B); a usage sketch follows this table. |
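Because the models are released as open weights at sizes small enough for laptops, translation can in principle be run locally with standard open-source tooling. The sketch below uses the Hugging Face `transformers` text-generation pipeline; the model identifier `google/translategemma-4b-it` and the prompt wording are assumptions for illustration only and are not confirmed by the announcement, so consult the official model card for the actual repository name and recommended prompt format.

```python
# Minimal sketch: running translation inference with a TranslateGemma-style model
# through the Hugging Face `transformers` text-generation pipeline.
# NOTE: the model id "google/translategemma-4b-it" and the prompt wording are
# assumptions for illustration; check the official model card for the real
# identifier and prompt template.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/translategemma-4b-it",  # hypothetical id; the 4B size targets laptops/edge devices
    device_map="auto",                     # use a GPU if one is available, otherwise CPU
)

prompt = "Translate the following English sentence into Hindi:\nHello, how are you?"
output = generator(prompt, max_new_tokens=64, do_sample=False)
print(output[0]["generated_text"])
```

Greedy decoding (`do_sample=False`) is used here because translation generally favours faithful, deterministic output over creative variation.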
Q. TranslateGemma, recently seen in the news, is associated with which field?
Answer: C. Artificial intelligence-based language translation
