Qwen3-30B-A3B-Instruct-2507 is a 30.5B-parameter mixture-of-experts (MoE) language model from Qwen, with 3.3B parameters active per inference. It runs in non-thinking mode (no intermediate reasoning blocks) and is optimized for instruction following, multilingual understanding, and agentic tool use. Post-trained on extensive instruction data, it delivers competitive results in reasoning (AIME, ZebraLogic), coding (MultiPL-E, LiveCodeBench), and alignment (IFEval, WritingBench), and significantly outperforms its non-instruct variant on subjective and open-ended tasks while retaining strong factual accuracy and coding proficiency. It supports a 262,144-token context window with a maximum output of 4,096 tokens, priced at $0.08 per 1M input tokens and $0.33 per 1M output tokens, with free access available on Multi AI.
✅ Best For
- Instruction following and open-ended, subjective tasks
- Multilingual understanding
- Agentic tool use
🚀 Capabilities
- Reasoning (AIME, ZebraLogic)
- Coding (MultiPL-E, LiveCodeBench)
- Alignment and writing (IFEval, WritingBench)
❌ Limitations
- Non-thinking mode only: no intermediate reasoning ("thinking") blocks
- Output capped at 4,096 tokens per response
Specifications
| Specification | Value |
| --- | --- |
| Provider | Qwen |
| Context Window | 262,144 tokens |
| Max Output | 4,096 tokens |
| Minimum Plan | Economy |
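A minimal sketch of budgeting against these limits. The 262,144-token window and 4,096-token output cap come from the table above; the helper function itself is illustrative, not part of any official SDK:

```python
# Specs from the table above (not an official API).
CONTEXT_WINDOW = 262_144  # total tokens the model can attend to
MAX_OUTPUT = 4_096        # hard cap on generated tokens per response

def fits_context(prompt_tokens: int, output_tokens: int = MAX_OUTPUT) -> bool:
    """Return True if the prompt plus the requested output fits in the window."""
    if output_tokens > MAX_OUTPUT:
        raise ValueError(f"output_tokens exceeds the {MAX_OUTPUT}-token cap")
    return prompt_tokens + output_tokens <= CONTEXT_WINDOW

# 258,048 prompt tokens + the full 4,096-token output exactly fills the window.
print(fits_context(258_048))  # True
print(fits_context(260_000))  # False: would overflow the window
```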
Pricing
| Type | Price |
| --- | --- |
| Input | $0.08 / 1M tokens |
| Output | $0.33 / 1M tokens |
💡 With PRO subscription, cost is reduced by 20%
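A worked example of the pricing arithmetic, using the listed rates and the 20% PRO discount; the function name and call shape are illustrative:

```python
# Listed prices, converted to dollars per token.
INPUT_PRICE = 0.08 / 1_000_000   # $0.08 per 1M input tokens
OUTPUT_PRICE = 0.33 / 1_000_000  # $0.33 per 1M output tokens
PRO_DISCOUNT = 0.20              # PRO subscription reduces cost by 20%

def estimate_cost(input_tokens: int, output_tokens: int, pro: bool = False) -> float:
    """Estimate the dollar cost of one request at the listed rates."""
    cost = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
    return cost * (1 - PRO_DISCOUNT) if pro else cost

# 100k input + a full 4,096... rounded here to 4k output tokens:
print(round(estimate_cost(100_000, 4_000), 6))            # 0.00932
print(round(estimate_cost(100_000, 4_000, pro=True), 6))  # 0.007456
```

Even a request that nearly fills the context window costs on the order of a few cents at these rates.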