LiquidAI/LFM2-2.6B is a hybrid AI model from Liquid AI, engineered for edge AI and on-device deployment where resources are constrained but quality, speed, and memory efficiency are critical. With a 32,768-token context window and a 4,096-token maximum output, LFM2-2.6B is well suited to chat applications, and it supports streaming for responsive interactions. Pricing is competitive at $0.01 per 1M input tokens and $0.02 per 1M output tokens, with free access available for exploration on Multi AI.
Quality: 65%
Speed: 70%
Context Window: 32K tokens
Category: Economy
API access
Unified context
RAG + Knowledge Base
24/7 Support
✅ Best For
Chat Applications
Edge Deployment
Memory Efficiency
🚀 Capabilities
Streaming
❌ Limitations
No image generation
Specifications
| Specification | Value |
|---|---|
| Provider | liquid |
| Context Window | 32,768 tokens |
| Max Output | 4,096 tokens |
| Minimum Plan | Economy |
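Since streaming is listed among the capabilities, here is a minimal sketch of consuming a streamed chat response, assuming Multi AI exposes an OpenAI-style server-sent-events stream (the `data:` line format, `[DONE]` sentinel, and `choices[0].delta.content` response shape are assumptions, not documented on this page):

```python
import json

def parse_sse_chunks(lines):
    """Yield text deltas from OpenAI-style SSE lines ("data: {...}")."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        event = json.loads(payload)
        delta = event["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Example with canned SSE lines (a real stream would come from the API):
sample = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world"}}]}',
    "data: [DONE]",
]
print("".join(parse_sse_chunks(sample)))  # prints "Hello, world"
```

In a real client, the lines would come from iterating over the HTTP response body rather than a list.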
Pricing
| Pricing | Rate |
|---|---|
| Input Price | $0.0100 / 1M tokens |
| Output Price | $0.0200 / 1M tokens |
💡 With PRO subscription, cost is reduced by 20%
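The rates above make per-request costs easy to estimate. A quick sketch (applying the PRO discount as a flat 20% multiplier, which is my reading of the note above):

```python
def estimate_cost(input_tokens: int, output_tokens: int, pro: bool = False) -> float:
    """Estimate request cost in USD from the listed per-1M-token rates."""
    INPUT_RATE = 0.01 / 1_000_000   # $0.01 per 1M input tokens
    OUTPUT_RATE = 0.02 / 1_000_000  # $0.02 per 1M output tokens
    cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    if pro:
        cost *= 0.80  # PRO subscription: 20% discount
    return cost

# A full 32K-token context plus a maximum 4K-token reply:
print(f"${estimate_cost(32_768, 4_096):.6f}")  # prints "$0.000410"
```

Even a maximally full request costs well under a tenth of a cent, which is consistent with the Economy category.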