
Qwen: Qwen3 235B A22B Instruct 2507

by qwen

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned Mixture-of-Experts (MoE) language model built on the Qwen3-235B architecture. Only 22 billion parameters are active per forward pass, which keeps inference efficient despite the large total parameter count. The model excels at general-purpose text generation, including complex instruction following, logical reasoning, mathematical problem solving, code generation, and tool usage. It has a native 262K-token context window and does not use "thinking mode" (<think> blocks). Compared with its base variant, this release improves knowledge coverage, long-context reasoning, and coding-benchmark performance. It shows strong alignment on open-ended tasks and is particularly capable in multilingual understanding, advanced math reasoning (e.g., AIME, HMMT), and alignment evaluations such as Arena-Hard and WritingBench. With a 4K-token maximum output and competitive per-token pricing (see Pricing below), it is a versatile choice. The model supports function calling, code, and streaming, and is available for free access on Multi AI.

Multilingual AI · Instruction Following · MoE Model · Text Generation · Coding AI
Quality: 75%
Context Window: 262K
Speed: 70%
Category: Economy
API access
Unified context
RAG + Knowledge Base
24/7 Support

Best For

Chat
Code Generation
Math Reasoning

🚀 Capabilities

Long context
Structured output
JSON mode
Functions
Code
Streaming
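The listed capabilities (functions, JSON mode, streaming, a 4K output cap) map onto the common OpenAI-compatible chat-completions request shape that most aggregators expose. The sketch below only assembles such a payload; the model identifier string and the `get_weather` tool schema are illustrative assumptions, not details confirmed by this page.

```python
import json

MODEL_ID = "qwen/qwen3-235b-a22b-instruct-2507"  # assumed identifier

def build_chat_request(prompt, stream=True, json_mode=False, tools=None):
    """Assemble a chat-completions payload exercising the listed
    capabilities: streaming, JSON mode, and function (tool) calling."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 4096,  # this model's maximum output
        "stream": stream,    # token-by-token streaming
    }
    if json_mode:
        # Structured output / JSON mode
        payload["response_format"] = {"type": "json_object"}
    if tools:
        # Function-calling schemas
        payload["tools"] = tools
    return payload

# Hypothetical tool schema to demonstrate function calling
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

req = build_chat_request("What's the weather in Oslo?", tools=[weather_tool])
print(json.dumps(req, indent=2))
```

The payload would then be POSTed to the provider's chat-completions endpoint with your API key; with `stream=True`, responses arrive as incremental chunks rather than one final message.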

Limitations

No image generation
No internet access

Specifications

Provider: qwen
Context Window: 262,144 tokens
Max Output: 4,096 tokens
Minimum Plan: Economy

Pricing

Input Price: $0.0710 / 1M tokens
Output Price: $0.1000 / 1M tokens

💡 With PRO subscription, cost is reduced by 20%
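Given the listed per-token rates and the 20% PRO discount, per-request cost is easy to estimate. A minimal sketch (rates taken from the Pricing table above; the helper name and the worst-case split of the context window are ours):

```python
INPUT_RATE = 0.0710 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.1000 / 1_000_000  # USD per output token

def estimate_cost(input_tokens, output_tokens, pro=False):
    """Estimate USD cost for one request; the PRO plan takes 20% off."""
    cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    return cost * 0.8 if pro else cost

# Worst-case single request: the full 262,144-token context window,
# split as maximum input plus the 4,096-token output cap.
max_input = 262_144 - 4_096
print(f"standard: ${estimate_cost(max_input, 4_096):.4f}")
print(f"PRO:      ${estimate_cost(max_input, 4_096, pro=True):.4f}")
```

Even a request that fills the entire context window costs under two cents at these rates, which is what places the model in the Economy tier.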

Ready to try Qwen: Qwen3 235B A22B Instruct 2507?

Get 1,000 tokens free on signup

Start for free