MiniMax
minimax-m2
MIT
Benchmark Average (0–100)
56.4%
Average of non-null scores across the primary benchmarks listed below (a worked check follows the score list).
BENCHMARK SCORES
GPQA
78.0%
Graduate-Level Google-Proof Q&A. PhD-level multiple-choice questions in chemistry, biology, and physics. Scored by accuracy.
AIME 2025
78.0%
AIME 2025 draws from the 2025 American Invitational Mathematics Examination, a competition benchmark testing advanced mathematical problem-solving.
SWE-bench Verified
69.4%
SWE-bench Verified evaluates models on real-world software engineering tasks drawn from GitHub issues; the Verified split is a human-validated subset of SWE-bench.
BrowseComp
44.0%
BrowseComp tests web browsing comprehension, measuring ability to find and synthesize information from websites.
HLE
12.5%
Humanity's Last Exam (HLE) is a multi-modal benchmark testing frontier knowledge across mathematics, humanities, and natural sciences with 2,500 expert-level questions.
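For reference, the headline 56.4% can be reproduced as the plain mean of the five primary scores above. A minimal sketch in Python, with the scores hard-coded from this card (the averaging rule over non-null scores is as the card describes; everything else is illustrative):

```python
# Reproduce the headline "Benchmark Average" from the primary scores
# listed on this card. Scores are hard-coded from the section above.
scores = {
    "GPQA": 78.0,
    "AIME 2025": 78.0,
    "SWE-bench Verified": 69.4,
    "BrowseComp": 44.0,
    "HLE": 12.5,
}

# Average only the non-null scores, per the card's definition.
valid = [v for v in scores.values() if v is not None]
average = sum(valid) / len(valid)
print(f"Benchmark average: {average:.1f}%")  # -> Benchmark average: 56.4%
```

Note that Terminal Bench (46.3%, under Other Benchmarks) does not enter this figure; including it would give 54.7%.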
OTHER BENCHMARKS
Terminal Bench
46.3%
Terminal-Bench measures an agent's ability to complete tasks in a command-line terminal environment.
MODEL INFO
Organization
MiniMax
Context Window
1,000,000 tokens
Release Date
2025-10-27
Knowledge Cutoff
—
Parameters
230B
License
MIT
Input Price ($/M)
$0.30
Output Price ($/M)
$1.20
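As a usage illustration, cost at these rates scales linearly with token counts. A minimal sketch in Python; the prices come from this card, while the function name and token counts are purely hypothetical:

```python
# Estimate per-request cost at this card's listed rates:
# $0.30 per million input tokens, $1.20 per million output tokens.
INPUT_PRICE_PER_M_TOKENS = 0.30
OUTPUT_PRICE_PER_M_TOKENS = 1.20

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-million-token rates."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M_TOKENS
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M_TOKENS
    )

# Hypothetical request: 20,000 input tokens, 2,000 output tokens.
print(f"${request_cost_usd(20_000, 2_000):.4f}")  # -> $0.0084
```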
METADATA
Announcement Date
2025-10-27
Organization Country
CN (China)
Canonical ID
—