Qwen3-235B-A22B-Thinking-2507 is a cutting-edge, open-weight Mixture-of-Experts (MoE) language model engineered for demanding reasoning tasks. Activating 22B of its 235B parameters per forward pass, this variant is designed for structured logical reasoning, mathematics, science, and long-form content generation. It posts strong benchmark results on AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux, and natively supports a context window of up to 262,144 tokens. As a "thinking-only" model, it always reasons before answering: the chat template opens the reasoning block automatically, so responses contain only the closing </think> tag. The model itself is tuned for long outputs (up to 81,920 tokens in challenging domains), though this deployment caps output at 4,096 tokens. It is instruction-tuned, making it highly effective for step-by-step reasoning, tool use, and agentic workflows, including multilingual tasks. At $0.11/$0.60 per 1M input/output tokens, it is the most capable open-source variant in the Qwen3-235B series, often surpassing many closed models in structured reasoning use cases.
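Because the reasoning block is opened implicitly by the chat template, raw responses contain only the closing </think> tag, and client code typically splits on it to separate the reasoning trace from the final answer. A minimal sketch (the helper name and sample text are illustrative, not part of any official SDK):

```python
def split_thinking(response_text: str) -> tuple[str, str]:
    """Split a thinking-only response into (reasoning, answer).

    The chat template opens the reasoning block implicitly, so the raw
    output contains only the closing </think> tag.
    """
    marker = "</think>"
    if marker in response_text:
        reasoning, _, answer = response_text.partition(marker)
        return reasoning.strip(), answer.strip()
    # No closing tag (e.g. output truncated): treat everything as the answer.
    return "", response_text.strip()


raw = "First, factor the quadratic...\n</think>\nThe roots are x = 2 and x = 3."
reasoning, answer = split_thinking(raw)
```

If the output is truncated before the closing tag is emitted, the sketch above falls back to returning the whole text as the answer; production code may instead want to flag that case.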
Specifications
| Specification | Value |
| --- | --- |
| Provider | qwen |
| Context Window | 262,144 tokens |
| Max Output | 4,096 tokens |
| Minimum Plan | Balance |
Pricing
| Direction | Rate |
| --- | --- |
| Input Price | $0.1100 / 1M tokens |
| Output Price | $0.6000 / 1M tokens |
💡 With PRO subscription, cost is reduced by 20%
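Per-request cost follows directly from the token counts and the rates above. A quick sketch, applying the PRO discount as a flat 20% off the total (the function name is illustrative):

```python
INPUT_RATE = 0.11    # USD per 1M input tokens
OUTPUT_RATE = 0.60   # USD per 1M output tokens
PRO_DISCOUNT = 0.20  # 20% off with a PRO subscription

def request_cost(input_tokens: int, output_tokens: int, pro: bool = False) -> float:
    """Estimated USD cost of one request at the listed rates."""
    cost = input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE
    if pro:
        cost *= 1 - PRO_DISCOUNT
    return cost

# e.g. a 100K-token prompt with the full 4,096-token output:
base = request_cost(100_000, 4_096)                 # about $0.0135
with_pro = request_cost(100_000, 4_096, pro=True)   # about $0.0108
```

Note that with long-context prompts the input side dominates: at 100K input tokens, the prompt accounts for roughly 80% of the request cost even at the maximum 4,096-token output.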