Gemma 3n 2B

Google

Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to run at an effective parameter size of 2B while drawing on a larger 6B parameter set. Built on MatFormer, it supports nested submodels and modular composition via the Mix-and-Match framework. Gemma 3n models are optimized for low-resource deployment and support context lengths of up to 32K at the model level (the serving window listed below is 8,192 tokens), with strong multilingual and reasoning performance on common benchmarks. This variant is trained on a diverse corpus including code, math, web, and multimodal data.
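The nested-submodel idea behind MatFormer can be illustrated with a toy sketch: a smaller "effective" model reuses a prefix slice of the full model's feed-forward weights, so one set of parameters serves several model sizes. This is an illustrative approximation, not Google's implementation; the function name and the ReLU/fraction choices here are hypothetical.

```python
import numpy as np

def matformer_ffn(x, w_in, w_out, frac=1.0):
    # Toy MatFormer-style FFN: use only the first `frac` of the
    # hidden dimension, so smaller submodels are nested inside the
    # full weights (illustrative sketch, not the real architecture).
    h = int(w_in.shape[1] * frac)
    a = np.maximum(x @ w_in[:, :h], 0.0)  # ReLU over the sliced hidden units
    return a @ w_out[:h, :]

rng = np.random.default_rng(0)
d, ff = 8, 32
w_in = rng.normal(size=(d, ff))
w_out = rng.normal(size=(ff, d))
x = rng.normal(size=(1, d))

full = matformer_ffn(x, w_in, w_out, frac=1.0)   # full-width path
small = matformer_ffn(x, w_in, w_out, frac=0.5)  # nested smaller submodel
```

Both calls share the same parameter tensors; the smaller path simply activates a prefix of the hidden units, which is the intuition behind extracting an E2B-sized model from the larger parameter set.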

Technical Specifications

Context Window

8,192 tokens

Max Output

2,048 tokens

Pricing

Token Costs (per 1M tokens)

Cache Miss Input

$0

Non-Reasoning Output

$0

Legacy

Made legacy on

Reason

2B edge model; too small for reliable chat

Recommended Replacement

Gemma 4 26B A4B