LFM 2.5 1.2B Instruct

Liquid

LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast on-device AI. It delivers strong chat quality in a 1.2B-parameter footprint, with efficient edge inference and broad runtime support.

Technical Specifications

Context Window

32,768 tokens

Max Output

32,768 tokens
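Since both limits above are 32,768 tokens, a practical question is how many output tokens remain once a prompt is in the window. The sketch below assumes (the spec does not confirm this) that prompt and completion share a single context window, so prompt + output must stay within 32,768 tokens; `output_budget` is a hypothetical helper, not part of any official SDK.

```python
# Token budgeting sketch under the shared-window assumption:
# prompt_tokens + completion_tokens <= CONTEXT_WINDOW.

CONTEXT_WINDOW = 32_768  # context window from the spec above
MAX_OUTPUT = 32_768      # max output from the spec above

def output_budget(prompt_tokens: int) -> int:
    """Largest completion that still fits alongside the prompt."""
    if prompt_tokens >= CONTEXT_WINDOW:
        raise ValueError("prompt alone exceeds the context window")
    # The completion is capped both by the model's max output and by
    # whatever room the prompt leaves in the window.
    return min(MAX_OUTPUT, CONTEXT_WINDOW - prompt_tokens)

print(output_budget(2_000))  # 30768 under the shared-window assumption
```

If the model instead allows a full 32,768-token completion regardless of prompt length (separate input and output windows), the `min` clamp is unnecessary; check the runtime's documentation before relying on either behavior.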

Pricing

Token Costs (per 1M tokens)

Cache Miss Input

$0

Non-Reasoning Output

$0

Legacy

Made legacy on

Reason

1.2B model; too small for production

Recommended Replacement

Qwen3.6 Plus