gemini-3-pro-preview
proprietary · Multimodal
Benchmark Average (0–100)
81.1%
Unweighted average of the non-null scores on the primary benchmarks listed below.
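As a check, the reported 81.1% matches the unweighted mean of the six primary benchmark scores below. A minimal Python sketch of that computation (the dict structure is illustrative, not the site's actual data model):

    # Average of non-null benchmark scores; 81.1% matches the mean
    # of the six primary scores listed under BENCHMARK SCORES.
    scores = {
        "GPQA": 91.9,
        "AIME 2025": 100.0,
        "SWE-bench Verified": 76.2,
        "MMMLU": 91.8,
        "HLE": 45.8,
        "MMMU-Pro": 81.0,
    }

    non_null = [s for s in scores.values() if s is not None]
    average = sum(non_null) / len(non_null)
    print(f"{average:.1f}%")  # 81.1%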
BENCHMARK SCORES
GPQA: 91.9%
Graduate-Level Google-Proof Q&A. PhD-level multiple-choice questions in chemistry, biology, and physics. Scored by accuracy.
AIME 2025: 100.0%
The American Invitational Mathematics Examination (AIME) 2025 is a mathematics competition benchmark testing advanced problem-solving.
SWE-bench Verified: 76.2%
SWE-bench Verified evaluates models on real-world software engineering tasks drawn from GitHub issues; the Verified subset is human-validated.
MMMLU: 91.8%
Multilingual MMLU tests knowledge across many languages and subject areas.
HLE: 45.8%
Humanity's Last Exam (HLE) is a multi-modal benchmark testing frontier knowledge across mathematics, humanities, and natural sciences with 2,500 expert-level questions.
MMMU-Pro: 81.0%
MMMU-Pro is a more challenging variant of the MMMU multimodal understanding benchmark.
OTHER BENCHMARKS
SimpleQA: 72.1%
ARC-AGI v2: 31.1%
CharXIV-R: 81.4%
ScreenSpot Pro: 72.7%
MODEL INFO
Organization
Google
Context Window
1,048,576 tokens
Release Date
2025-11-18
Knowledge Cutoff
2025-01-31
Parameters
—
License
proprietary
Input Price ($ per 1M tokens)
$2.00
Output Price ($ per 1M tokens)
$12.00
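For scale, a hedged cost sketch in Python using the listed per-million-token rates; estimate_cost is a hypothetical helper, not part of any official SDK:

    INPUT_PRICE_PER_M = 2.00    # $ per 1M input tokens (listed rate)
    OUTPUT_PRICE_PER_M = 12.00  # $ per 1M output tokens (listed rate)

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        """Return the dollar cost of one request at the listed rates."""
        return (input_tokens * INPUT_PRICE_PER_M
                + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

    # Example: 100k input tokens + 5k output tokens
    print(f"${estimate_cost(100_000, 5_000):.2f}")  # $0.26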
METADATA
Announcement Date
2025-11-18
Organization Country
US
Canonical ID
—