LFM2 is a new generation of hybrid models developed by Liquid AI, designed specifically for edge AI and on-device deployment. It sets a new standard for quality, speed, and memory efficiency.
- Ultra-lightweight edge deployment
- On-device AI inference
- Minimal-resource text generation
Context window: 32,768 tokens
Maximum output: 32,768 tokens
Input: $0.01 per 1M tokens
Output: $0.02 per 1M tokens
Per-call pricing: $15 per 1K calls / $0.19 per 1K calls
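Given the per-token rates above, the cost of a request can be estimated with simple arithmetic. A minimal sketch, assuming the $0.01/1M input and $0.02/1M output rates from the listing (the function name and parameters are illustrative, not part of any official SDK):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.01,
                  output_rate: float = 0.02) -> float:
    """Estimate request cost in USD, with rates given per 1M tokens."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: 500K input tokens and 100K output tokens
# 0.5 * $0.01 + 0.1 * $0.02 = $0.007
cost = estimate_cost(500_000, 100_000)
```

For production billing code, a fixed-point type such as `decimal.Decimal` avoids floating-point rounding in accumulated totals.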