MiniMax M2.7 Review — A Self-Evolving AI 10x Cheaper Than GPT and Claude

MiniMax M2.7 launched March 18, 2026. It's a self-evolving AI model that matches GPT-5-level coding benchmarks at a fraction of the cost. Full review covering performance, pricing comparison, and how to use it.

Mar 29, 2026 · 5 min read


On March 18, 2026, Chinese AI startup MiniMax released M2.7 — a model that calls itself "self-evolving" and undercuts GPT-4o and Claude on output pricing by roughly 8 to 21 times. It's making waves in the AI world for good reason.


What Is MiniMax M2.7?

M2.7 is the latest in MiniMax's M2 model series. Its defining feature is the ability to participate in its own training process.

"M2.7 is the first model to participate in its own training. It ran over 100 autonomous optimization rounds, improving performance by 30%." — MiniMax official announcement

Key Specs

Item              Details
Release date      March 18, 2026
Context window    204,800 tokens
Max output        131,072 tokens
Input price       $0.30 / 1M tokens
Output price      $1.20 / 1M tokens

What Does "Self-Evolving" Mean?


M2.7's most distinctive trait is that it autonomously improves its own training process using the OpenClaw agentic framework, cycling through:

  1. Analyze failure trajectories — identify patterns in wrong answers
  2. Plan changes — determine how to improve
  3. Modify scaffold code — automatically update its own code
  4. Run evaluations — test performance after changes
  5. Compare results — keep improvements, revert regressions

Three core modules power this:

  • Short-term Memory — generates a markdown memo after each round
  • Self-Feedback — evaluates its own outputs
  • Self-Optimization — autonomously improves based on feedback
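The five-step loop above can be sketched in a few lines of Python. This is an illustrative toy, not MiniMax's actual training code: `run_evals` stands in for a real benchmark run, and "modifying the scaffold" is reduced to trying a new candidate each round.

```python
import random

def run_evals(scaffold_seed):
    """Stand-in for a real benchmark run; returns a score for a scaffold."""
    random.seed(scaffold_seed)
    return random.random()

def self_evolve(rounds=100):
    """Keep the best-scoring scaffold; revert any change that regresses."""
    best_seed, best_score = 0, run_evals(0)
    for candidate in range(1, rounds + 1):
        # Steps 1-3: analyze failures, plan a change, modify the scaffold
        # (here, simply propose a new candidate scaffold)
        score = run_evals(candidate)       # Step 4: run evaluations
        if score > best_score:             # Step 5: compare results
            best_seed, best_score = candidate, score   # keep the improvement
        # otherwise: revert (best_seed stays unchanged)
    return best_seed, best_score
```

The key property is that regressions never survive a round — the loop is monotonic in the evaluation score, which is why MiniMax can run 100+ rounds unattended.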

Performance Benchmarks


M2.7 Key Benchmarks

Benchmark           M2.7 Score   Notes
SWE-Pro             56.22%       On par with GPT-5.3-Codex
Multi-SWE-Bench     52.7%        Multi-repo coding
SWE Multilingual    76.5%        Multilingual coding
VIBE-Pro            55.6%        Near Claude Opus 4.6 level
Terminal Bench 2    57.0%        Terminal tasks
Hallucination       +1           Dramatic improvement from M2.5's -40

M2.5 vs M2.7 Comparison

Item                 M2.5 (February)        M2.7 (March)
Self-evolution       No                     Yes
SWE-Pro              ~55%                   56.22%
MMLU                 88.4%                  —
Hallucination index  -40                    +1
Output price         $0.60/1M               $1.20/1M
Primary strength     Coding, office tasks   Agentic tasks, self-improvement

Price Comparison — Major LLM Output Pricing ($/1M tokens)

Output Price Comparison (USD per 1M tokens)

MiniMax M2.5    █ $0.60
MiniMax M2.7    ██ $1.20
DeepSeek V3.2   ███ $1.40
Llama 4 Mav.    █ $0.60
GPT-4o          ████████████████████ $10.00
Gemini 3.1 Pro  ████████████████████████ $12.00
Claude Sonnet   ██████████████████████████████ $15.00
GPT-5.4         ██████████████████████████████ $15.00
Claude Opus     ██████████████████████████████████████████████████ $25.00
Model              Input ($/1M)   Output ($/1M)
MiniMax M2.5       $0.15          $0.60
MiniMax M2.7       $0.30          $1.20
DeepSeek V3.2      $0.28          $1.40
Llama 4 Maverick   $0.15          $0.60
GPT-4o             $2.50          $10.00
Gemini 3.1 Pro     $2.00          $12.00
Claude Sonnet      $3.00          $15.00
Claude Opus        $5.00          $25.00

Key takeaways:

  • M2.7's output price ($1.20) is ~21x cheaper than Claude Opus ($25.00)
  • ~8x cheaper than GPT-4o ($10.00)
  • ~12x cheaper than Claude Sonnet ($15.00)
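To see how those ratios translate into a real bill, here is a small cost estimator using the per-1M-token prices from the table above (the dictionary keys are labels for this sketch, not API model IDs):

```python
# Per-1M-token prices (input, output) taken from the comparison table.
PRICES = {
    "minimax-m2.7": (0.30, 1.20),
    "gpt-4o":       (2.50, 10.00),
    "claude-opus":  (5.00, 25.00),
}

def monthly_cost(model, input_tokens, output_tokens):
    """Dollar cost for a given token volume at a model's listed prices."""
    inp, out = PRICES[model]
    return input_tokens / 1e6 * inp + output_tokens / 1e6 * out

# Example workload: 50M input + 10M output tokens per month
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 50e6, 10e6):,.2f}")
```

At that volume, M2.7 comes to $27/month versus $500/month for Claude Opus — an ~18.5x gap once input tokens are included, close to the ~21x headline ratio on output alone.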

How to Use MiniMax M2.7 Right Now


Option 1 — OpenRouter (Easiest)

The fastest way for developers to get started.

  1. Sign up at openrouter.ai (Google/GitHub login supported)
  2. Add credits (start small)
  3. Use model ID: minimax/minimax-m2-7
```python
import openai

# Point the OpenAI SDK at OpenRouter's OpenAI-compatible endpoint
client = openai.OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="minimax/minimax-m2-7",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Option 2 — MiniMax Native API

  • Platform: platform.minimax.io
  • Get your own API key and full API access
  • Detailed docs available on the platform
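For the native API, check the platform docs for the exact endpoint, auth header, and model identifier — all three are assumptions in the sketch below, which only shows how a chat-style request payload would be assembled:

```python
import json

# Hypothetical values -- verify against the docs on platform.minimax.io.
BASE_URL = "https://platform.minimax.io"   # assumed base URL
MODEL_ID = "minimax-m2-7"                  # assumed model identifier

def build_chat_request(prompt, api_key):
    """Assemble headers and a JSON body for a chat-style API call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body
```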

Option 3 — MiniMax Web Interface

  • Visit minimax.io
  • Chat directly, like ChatGPT or Claude.ai
  • No coding required

Option 4 — Ollama (Local)

  • Run entirely on your own machine, no internet needed
  • Command: ollama run minimax-m2.7
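Beyond the interactive command, Ollama also exposes a local REST API on port 11434, so you can call the model programmatically. The `minimax-m2.7` tag comes from the command above — whether that model is actually published in the Ollama library is an assumption worth verifying before relying on this sketch:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_payload(prompt, model="minimax-m2.7"):
    # stream=False returns a single JSON object instead of chunked lines
    return json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()

def generate(prompt):
    """Send a prompt to the locally running Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires the Ollama daemon running and the model pulled):
#     generate("Hello!")
```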

Should You Use MiniMax M2.7?

  • Heavy coding workloads — SWE benchmarks match GPT-5 tier
  • Cost is a priority — dramatically cheaper than comparable models
  • Agentic tasks — self-evolution makes it strong for complex autonomous workflows

Things to Watch

  • Non-English language quality may be lower than top-tier models
  • Relatively new model with a smaller community and fewer tutorials
  • Costs 2x M2.5, but the hallucination improvements and self-evolution are meaningful upgrades

As LLM pricing competition heats up, MiniMax M2.7 is a serious challenger to the idea that "expensive equals better."
