MiniMax M2.7 Review — A Self-Evolving AI 10x Cheaper Than GPT and Claude
MiniMax M2.7 launched March 18, 2026. It's a self-evolving AI model that matches GPT-5-level coding benchmarks at a fraction of the cost. Full review covering performance, pricing comparison, and how to use it.
On March 18, 2026, Chinese AI startup MiniMax released M2.7 — a model that calls itself "self-evolving" and undercuts GPT-4o and Claude on output price by roughly 8 to 20 times. It's making waves in the AI world for good reason.
What Is MiniMax M2.7?
M2.7 is the latest in MiniMax's M2 model series. Its defining feature is the ability to participate in its own training process.
"M2.7 is the first model to participate in its own training. It ran over 100 autonomous optimization rounds, improving performance by 30%." — MiniMax official announcement
Key Specs
| Item | Details |
|---|---|
| Release date | March 18, 2026 |
| Context window | 204,800 tokens |
| Max output | 131,072 tokens |
| Input price | $0.30 / 1M tokens |
| Output price | $1.20 / 1M tokens |
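To put the specs and prices together: assuming the figures in the table above, even a worst-case request that fills the entire context window and produces the maximum output stays well under a dollar:

```python
# Worst-case cost of one request at the listed limits and prices
CONTEXT_TOKENS = 204_800   # max input (full context window)
OUTPUT_TOKENS = 131_072    # max output
INPUT_PRICE = 0.30         # $ per 1M input tokens
OUTPUT_PRICE = 1.20        # $ per 1M output tokens

cost = (CONTEXT_TOKENS / 1_000_000) * INPUT_PRICE \
     + (OUTPUT_TOKENS / 1_000_000) * OUTPUT_PRICE
print(f"${cost:.3f}")  # about $0.219
```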
What Does "Self-Evolving" Mean?
M2.7's most distinctive trait is that it autonomously improves its own training process using the OpenClaw agentic framework, cycling through:
- Analyze failure trajectories — identify patterns in wrong answers
- Plan changes — determine how to improve
- Modify scaffold code — automatically update its own code
- Run evaluations — test performance after changes
- Compare results — keep improvements, revert regressions
Three core modules power this:
- Short-term Memory — generates a markdown memo after each round
- Self-Feedback — evaluates its own outputs
- Self-Optimization — autonomously improves based on feedback
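MiniMax hasn't published the OpenClaw internals, but the keep-improvements/revert-regressions cycle above amounts to a simple hill-climbing loop. A minimal sketch (all function names here are illustrative stand-ins, not the real framework):

```python
import random

def run_eval(scaffold: int) -> float:
    """Stand-in for a benchmark run; deterministic score per scaffold variant."""
    return random.Random(scaffold).random()

def propose_change(step: int) -> int:
    """Stand-in for 'analyze failures + plan changes': try a new variant."""
    return step

def self_optimize(rounds: int = 10):
    best_scaffold, best_score = 0, run_eval(0)
    history = [best_score]
    for step in range(1, rounds + 1):
        candidate = propose_change(step)
        score = run_eval(candidate)     # run evaluations
        if score > best_score:          # compare results: keep improvements...
            best_scaffold, best_score = candidate, score
        history.append(best_score)      # ...otherwise the old scaffold stands
    return best_scaffold, history

scaffold, history = self_optimize()
print(history[-1] >= history[0])  # True: the kept score never regresses
```

The key design point is the last comparison: regressions are discarded rather than accumulated, so the evaluated score is monotonically non-decreasing across rounds.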
Performance Benchmarks
M2.7 Key Benchmarks
| Benchmark | M2.7 Score | Notes |
|---|---|---|
| SWE-Pro | 56.22% | On par with GPT-5.3-Codex |
| Multi-SWE-Bench | 52.7% | Multi-repo coding |
| SWE Multilingual | 76.5% | Multilingual coding |
| VIBE-Pro | 55.6% | Near Claude Opus 4.6 level |
| Terminal Bench 2 | 57.0% | Terminal tasks |
| Hallucination index | +1 | Dramatic improvement over M2.5's -40 |
M2.5 vs M2.7 Comparison
| Item | M2.5 (February) | M2.7 (March) |
|---|---|---|
| Self-evolution | ✗ | ✓ |
| SWE-Pro | ~55% | 56.22% |
| MMLU | 88.4% | — |
| Hallucination index | -40 | +1 |
| Output price | $0.60/1M | $1.20/1M |
| Primary strength | Coding, office tasks | Agentic tasks, self-improvement |
Price Comparison — Major LLM Output Pricing ($/1M tokens)
Output Price Comparison (USD per 1M tokens, 2 blocks ≈ $1)
MiniMax M2.5   █ $0.60
MiniMax M2.7   ██ $1.20
DeepSeek V3.2  ███ $1.40
Llama 4 Mav.   █ $0.60
GPT-4o         ████████████████████ $10.00
Gemini 3.1 Pro ████████████████████████ $12.00
Claude Sonnet  ██████████████████████████████ $15.00
GPT-5.4        ██████████████████████████████ $15.00
Claude Opus    ██████████████████████████████████████████████████ $25.00
| Model | Input ($/1M) | Output ($/1M) |
|---|---|---|
| MiniMax M2.5 | $0.15 | $0.60 |
| MiniMax M2.7 | $0.30 | $1.20 |
| DeepSeek V3.2 | $0.28 | $1.40 |
| Llama 4 Maverick | $0.15 | $0.60 |
| GPT-4o | $2.50 | $10.00 |
| Gemini 3.1 Pro | $2.00 | $12.00 |
| Claude Sonnet | $3.00 | $15.00 |
| Claude Opus | $5.00 | $25.00 |
Key takeaways:
- M2.7's output price ($1.20) is ~21x cheaper than Claude Opus ($25.00)
- ~8x cheaper than GPT-4o ($10.00)
- ~12x cheaper than Claude Sonnet ($15.00)
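These multipliers follow directly from the output prices in the table:

```python
# Output prices in $ per 1M tokens, taken from the table above
output_price = {
    "MiniMax M2.7": 1.20,
    "GPT-4o": 10.00,
    "Claude Sonnet": 15.00,
    "Claude Opus": 25.00,
}

m27 = output_price["MiniMax M2.7"]
for model, price in output_price.items():
    if model != "MiniMax M2.7":
        print(f"{model}: ~{price / m27:.1f}x M2.7's output price")
```

This prints ~8.3x for GPT-4o, ~12.5x for Claude Sonnet, and ~20.8x for Claude Opus, matching the rounded figures above.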
How to Use MiniMax M2.7 Right Now
Option 1 — OpenRouter (Easiest)
The fastest way for developers to get started.
- Sign up at openrouter.ai (Google/GitHub login supported)
- Add credits (start small)
- Use model ID: `minimax/minimax-m2-7`

```python
import openai

client = openai.OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="minimax/minimax-m2-7",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Option 2 — MiniMax Native API
- Platform: platform.minimax.io
- Get your own API key and full API access
- Detailed docs available on the platform
Option 3 — MiniMax Web Interface
- Visit minimax.io
- Chat directly, like ChatGPT or Claude.ai
- No coding required
Option 4 — Ollama (Local)
- Run entirely on your own machine, no internet needed
- Command:
ollama run minimax-m2.7
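Once the model is pulled, you can also call it programmatically through Ollama's local REST API. A minimal sketch, assuming Ollama is serving on its default port and using the model tag from the command above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt: str, model: str = "minimax-m2.7") -> dict:
    # "stream": False returns a single JSON object instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str) -> str:
    """Send a prompt to the locally running Ollama server and return the reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running server, e.g.:
# print(ask_local("Explain self-evolving models in one sentence."))
```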
Should You Use MiniMax M2.7?
Recommended When
- Heavy coding workloads — SWE benchmarks match GPT-5 tier
- Cost is a priority — dramatically cheaper than comparable models
- Agentic tasks — self-evolution makes it strong for complex autonomous workflows
Things to Watch
- Non-English language quality may be lower than top-tier models
- Relatively new model with a smaller community and fewer tutorials
- Costs 2x M2.5, but the hallucination improvements and self-evolution are meaningful upgrades
As LLM pricing competition heats up, MiniMax M2.7 is a serious challenger to the idea that "expensive equals better."