
Qwen: Qwen3 235B A22B Thinking 2507

by qwen

Qwen3-235B-A22B-Thinking-2507 is a cutting-edge, open-weight Mixture-of-Experts (MoE) language model engineered for demanding reasoning tasks. Activating 22B of its 235B parameters per forward pass, this variant is designed to strengthen structured logical reasoning, mathematics, science, and long-form content generation. It posts strong benchmark results on AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux, and natively supports a context window of up to 262,144 tokens. This "thinking-only" model always emits an explicit reasoning trace (terminated by a closing </think> tag) and is optimized for long completions: the underlying model supports outputs of up to 81,920 tokens in challenging domains, although this deployment caps output at 4,096 tokens. It is instruction-tuned, making it highly effective for step-by-step reasoning, tool use, and agentic workflows, including multilingual tasks. At $0.11/$0.60 per 1M input/output tokens, it is the most capable open-source variant in the Qwen3-235B series, often surpassing closed models in structured reasoning use cases.
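Because the model always emits its reasoning trace before the final answer, client code usually needs to separate the two. A minimal sketch of that parsing step, assuming the raw completion text contains the chain of thought terminated by a </think> tag (the helper name `split_thinking_output` is illustrative, not part of any official SDK):

```python
def split_thinking_output(text: str, marker: str = "</think>"):
    """Split a raw completion from a thinking-only model into
    (reasoning, answer).

    The 2507 thinking variant terminates its chain of thought with a
    closing </think> tag; everything after the tag is the final answer.
    If no marker is present, treat the whole text as the answer.
    """
    reasoning, sep, answer = text.partition(marker)
    if not sep:  # no </think> found
        return "", text.strip()
    return reasoning.strip(), answer.strip()
```

Usage: `split_thinking_output("2+2 is basic arithmetic.</think>The answer is 4.")` returns the reasoning and the answer as separate strings, so only the answer needs to be shown to end users.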

Tags: Reasoning · MoE · Code · Math · Long-form

Quality: 75%
Context Window: 262K
Speed: 70%
Category: Economy
API access
Unified context
RAG + Knowledge Base
24/7 Support

Best For

Chat
Code Generation
Mathematics

🚀 Capabilities

Functions
Code
Streaming

Limitations

No image generation
No internet access

Specifications

Provider: qwen
Context Window: 262,144 tokens
Max Output: 4,096 tokens
Minimum Plan: Balance
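With a 262,144-token context window and a 4,096-token output cap, the prompt budget is simply the window minus the reserved completion length. A small sketch of that bookkeeping (the constants come from the specifications above; the function name is illustrative):

```python
CONTEXT_WINDOW = 262_144  # total tokens per request (spec above)
MAX_OUTPUT = 4_096        # deployment's output cap (spec above)

def max_input_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Tokens left for the prompt once the output budget is reserved."""
    if reserved_output > CONTEXT_WINDOW:
        raise ValueError("reserved output exceeds the context window")
    return CONTEXT_WINDOW - reserved_output
```

For example, reserving the full 4,096-token output leaves 258,048 tokens for the prompt, which is why very long documents still fit comfortably.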

Pricing

Input Price: $0.11 / 1M tokens
Output Price: $0.60 / 1M tokens

💡 With PRO subscription, cost is reduced by 20%
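Per-request cost follows directly from the per-million-token rates above, with PRO applying a flat 20% reduction. A minimal sketch of that arithmetic (prices from the table above; the helper name is illustrative):

```python
INPUT_PRICE = 0.11   # USD per 1M input tokens
OUTPUT_PRICE = 0.60  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int, pro: bool = False) -> float:
    """Estimated USD cost of one request at the listed rates."""
    cost = input_tokens / 1e6 * INPUT_PRICE + output_tokens / 1e6 * OUTPUT_PRICE
    if pro:
        cost *= 0.80  # PRO subscription: 20% discount
    return round(cost, 6)
```

For instance, a request with 100K input tokens and a full 4,096-token completion costs about $0.0135, or about $0.0108 on PRO.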

Ready to try Qwen: Qwen3 235B A22B Thinking 2507?

Get 1,000 tokens free on signup
