LFM 2 8B A1B

Liquid

LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI's LFM2 family, built for fast, high-quality inference on edge hardware. It combines 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low, making it ideal for phones, tablets, and laptops.


Technical Specifications

Context Window

32,768 tokens

Max Output

32,768 tokens

Pricing

Token Costs (per 1M tokens)

Cache Miss Input

$0.01

Non-Reasoning Output

$0.02
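Since both rates are quoted per 1M tokens, the cost of a single request can be estimated by scaling each token count accordingly. A minimal sketch, using the rates listed above; the request sizes in the example are hypothetical:

```python
# Per-1M-token rates from the pricing table above (USD).
CACHE_MISS_INPUT_PER_M = 0.01   # cache-miss input tokens
OUTPUT_PER_M = 0.02             # non-reasoning output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens * CACHE_MISS_INPUT_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Hypothetical request: 10,000-token prompt, 2,000-token completion.
print(f"${request_cost(10_000, 2_000):.6f}")  # → $0.000140
```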

Retired

Made legacy on

Retired on

Reason

8B MoE model; superseded by LFM 2 24B

Recommended Replacement

Qwen3.6 Plus