LFM2 2.6B

Liquid

LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard for quality, speed, and memory efficiency.


Example Use Cases

Ultra-lightweight edge deployment

On-device AI inference

Minimal-resource text generation

Technical Specifications

Context Window: 32,768 tokens

Max Output: 32,768 tokens
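A practical consequence of these limits is prompt budgeting: input and requested output typically share the context window, so a request must keep their sum under 32,768 tokens. A minimal sketch, assuming the shared-window convention (the helper name and token counts are illustrative, not part of any official API):

```python
# Sketch: budgeting a request against the 32,768-token context window.
# Assumes input tokens and requested output tokens share the window,
# as is common for model serving APIs. A real deployment would count
# tokens with the model's own tokenizer.
CONTEXT_WINDOW = 32_768

def fits_in_context(prompt_tokens: int, max_output_tokens: int,
                    context_window: int = CONTEXT_WINDOW) -> bool:
    """True if the prompt plus the requested output fits in the window."""
    return prompt_tokens + max_output_tokens <= context_window

print(fits_in_context(30_000, 2_000))  # 32,000 <= 32,768 -> True
print(fits_in_context(30_000, 4_000))  # 34,000 > 32,768 -> False
```

If a prompt fails this check, the usual remedies are truncating older context or lowering the requested `max_output_tokens`.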

Pricing

Token Costs (per 1M tokens)

Cache Miss Input: $0.01

Non-Reasoning Output: $0.02

Retired

Reason: Untested

Recommended Replacement: Qwen3.6 Plus