OpenAI
o4-mini
Proprietary · Multimodal
Benchmark Average (0–100)
61.7%
Average of non-null benchmark scores across all evaluated tasks; a short computation sketch follows the benchmark scores below.
BENCHMARK SCORES
GPQA
81.4%
Graduate-Level Google-Proof Q&A. PhD-level multiple-choice questions in chemistry, biology, and physics. Scored by accuracy.
AIME 2025
92.7%
AIME 2025 is a mathematics competition benchmark based on the 2025 American Invitational Mathematics Examination, testing advanced problem-solving.
SWE-bench Verified
68.1%
SWE-bench Verified evaluates models on real-world software engineering tasks from GitHub issues.
BrowseComp
51.5%
BrowseComp tests web browsing comprehension, measuring the ability to find and synthesize information from websites.
HLE
14.7%
Humanity's Last Exam (HLE) is a multi-modal benchmark testing frontier knowledge across mathematics, humanities, and natural sciences with 2,500 expert-level questions.
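As a minimal sketch (not the site's actual code), the headline Benchmark Average can be reproduced as the mean of the non-null headline scores listed above; assuming the 61.7% covers these five benchmarks, the values above give exactly that figure. The dictionary and function names below are illustrative, not the site's real schema.

```python
# Minimal sketch: reproduce the headline "Benchmark Average" as the mean of
# non-null scores. Assumption: the 61.7% above averages the five headline
# benchmarks; this dict is illustrative, not the site's actual data schema.
headline_scores = {
    "GPQA": 81.4,
    "AIME 2025": 92.7,
    "SWE-bench Verified": 68.1,
    "BrowseComp": 51.5,
    "HLE": 14.7,
}

def benchmark_average(scores: dict[str, float | None]) -> float | None:
    """Mean of the non-null scores, rounded to one decimal place."""
    values = [v for v in scores.values() if v is not None]
    if not values:
        return None
    return round(sum(values) / len(values), 1)

print(benchmark_average(headline_scores))  # 61.7
```

With these five values the mean is 61.68, which rounds to the 61.7% shown in the header; the scores under "Other Benchmarks" below do not appear to be included in that average.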
OTHER BENCHMARKS
MMMU
81.6%
TAU-bench Retail
71.8%
CharXIV-R
72.0%
MODEL INFO
Organization
OpenAI
Context Window
200,000 tokens
Release Date
2025-04-16
Knowledge Cutoff
2024-05-31
Parameters
—
License
proprietary
Input Price ($ per 1M tokens)
$1.10
Output Price ($ per 1M tokens)
$4.40
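For a rough sense of what the listed prices mean in practice, here is a small sketch, assuming the prices are USD per one million tokens; the token counts in the example are hypothetical.

```python
# Sketch: estimate the USD cost of one request at the listed o4-mini rates.
# Assumption: prices are per 1M tokens; the example token counts are hypothetical.
INPUT_PRICE_PER_M = 1.10   # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 4.40  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 20,000-token prompt with a 2,000-token response.
print(f"${request_cost(20_000, 2_000):.4f}")  # $0.0308
```

At these rates, output tokens cost four times as much as input tokens, so long responses dominate the bill for short prompts.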
METADATA
Announcement Date
2025-04-16
Organization Country
US
Canonical ID
—