Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned Mixture-of-Experts (MoE) language model built on the Qwen3-235B architecture. It activates 22 billion parameters per forward pass, keeping inference efficient across a wide range of tasks. The model excels at general-purpose text generation, including complex instruction following, logical reasoning, mathematical problem solving, code generation, and tool use. It supports a native 262,144-token context window and does not use "thinking mode" (<think> blocks).

Compared with its base variant, this version improves knowledge coverage, long-context reasoning, and coding-benchmark performance. It is particularly strong in multilingual understanding, advanced math reasoning (e.g., AIME, HMMT), open-ended tasks, and alignment evaluations such as Arena-Hard and WritingBench. With a maximum output of 4,096 tokens and competitive per-token pricing (see the Pricing table below), it supports function calling, code, and streaming, and is available for free access on Multi AI.
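The capabilities above (function calling, streaming, and the 4,096-token output cap) can be sketched as a request payload. This is a minimal sketch assuming an OpenAI-compatible chat-completions shape; the exact Multi AI API surface, endpoint, and parameter names are assumptions, not documented fact:

```python
# Sketch of a streaming chat request for this model, assuming an
# OpenAI-compatible chat-completions payload (the provider's actual
# API shape is an assumption here, not documented fact).

def build_request(messages, stream=True, max_tokens=4096):
    """Build a chat-completions payload for Qwen3-235B-A22B-Instruct-2507."""
    if max_tokens > 4096:
        raise ValueError("model's maximum output is 4,096 tokens")
    return {
        "model": "Qwen3-235B-A22B-Instruct-2507",
        "messages": messages,
        "stream": stream,          # streaming is supported
        "max_tokens": max_tokens,  # capped at the model's 4K output limit
    }
```

The cap check mirrors the 4,096-token maximum output stated above, so oversized requests fail locally instead of at the API.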
Specifications

| Spec | Value |
| --- | --- |
| Provider | qwen |
| Context Window | 262,144 tokens |
| Max Output | 4,096 tokens |
| Minimum Plan | Economy |
Pricing

| Type | Price |
| --- | --- |
| Input | $0.0710 / 1M tokens |
| Output | $0.1000 / 1M tokens |
💡 With PRO subscription, cost is reduced by 20%
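To make these figures concrete, a small helper can estimate a request's cost from the table's per-million-token prices and the 20% PRO discount. The helper itself is illustrative, not part of any SDK:

```python
# Illustrative cost estimator using the pricing table above:
# $0.0710 per 1M input tokens, $0.1000 per 1M output tokens,
# with a 20% discount applied on a PRO subscription.

INPUT_PRICE_PER_M = 0.0710
OUTPUT_PRICE_PER_M = 0.1000
PRO_DISCOUNT = 0.20

def estimate_cost(input_tokens, output_tokens, pro=False):
    """Return the estimated cost in dollars for one request."""
    cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
    if pro:
        cost *= 1 - PRO_DISCOUNT
    return cost
```

For example, a full 262,144-token prompt with a 4,096-token reply comes to roughly $0.019, or about $0.015 with the PRO discount.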