LFM2 2.6B

Liquid AI

LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.

Example Use Cases

- Ultra-lightweight edge deployment
- On-device AI inference
- Minimal-resource text generation

Technical Specifications

- Context Window: 32,768 tokens
- Max Output: 32,768 tokens
- Cache Miss Cost: $0.01 per 1M tokens
- Non-Reasoning Cost: $0.02 per 1M tokens
- Web Search Cost: $15 per 1K calls
- Code Execution Cost: $0.19 per 1K calls
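The per-token and per-call prices above combine straightforwardly into a usage estimate. A minimal sketch, using the listed rates and purely hypothetical usage figures:

```python
# Rates taken from the specifications above (USD).
PRICE_PER_M_CACHE_MISS_TOKENS = 0.01   # per 1M cache-miss tokens
PRICE_PER_M_OUTPUT_TOKENS = 0.02       # per 1M non-reasoning tokens
PRICE_PER_K_WEB_SEARCHES = 15.00       # per 1K web search calls
PRICE_PER_K_CODE_EXECS = 0.19          # per 1K code execution calls

def estimate_cost(cache_miss_tokens: int, output_tokens: int,
                  web_searches: int, code_execs: int) -> float:
    """Estimated cost in dollars for the given usage."""
    return (
        cache_miss_tokens / 1_000_000 * PRICE_PER_M_CACHE_MISS_TOKENS
        + output_tokens / 1_000_000 * PRICE_PER_M_OUTPUT_TOKENS
        + web_searches / 1_000 * PRICE_PER_K_WEB_SEARCHES
        + code_execs / 1_000 * PRICE_PER_K_CODE_EXECS
    )

# Hypothetical month: 50M cache-miss tokens, 10M output tokens,
# 200 web searches, 1,000 code executions.
print(f"${estimate_cost(50_000_000, 10_000_000, 200, 1_000):.2f}")  # → $3.89
```

At these rates the fixed-price tool calls (web search in particular) dominate the bill long before token costs become significant.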

⚠️ Legacy

- Made legacy on:
- Reason: Untested
- Recommended Replacement: Qwen3.5 Plus