How long until one of Gemini, Claude, etc... match the capabilities of O1?
2026: 2%
Oct 12th 2024: 5%
Dec 12th 2024: 45%
April 12th 2025: 36%
September 12th 2025: 5%
April 12th 2026: 5%
OpenAI's o1 model represents a new paradigm of LLMs. How long until a competitor catches up?
"Catches up" / "matches capabilities" is defined as matching or exceeding o1's pass@1 benchmark scores on AIME, Codeforces, and GPQA as published at the time of o1's release:
74.4% accuracy on AIME
89th percentile on Codeforces
78% accuracy on GPQA
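The resolution rule above is conjunctive: a competitor model must meet or exceed all three thresholds at once, not just one. A minimal sketch of that rule, assuming scores are reported on the same scales as above; `matches_o1` and the threshold table are illustrative helpers, not Manifold's actual resolution mechanism:

```python
# o1's published pass@1 scores, per the resolution criteria above.
O1_THRESHOLDS = {
    "AIME": 74.4,        # % accuracy
    "Codeforces": 89.0,  # percentile
    "GPQA": 78.0,        # % accuracy
}

def matches_o1(scores: dict) -> bool:
    """True only if the model meets or exceeds o1 on every benchmark.

    A missing benchmark counts as a score of 0, i.e. a failure to match.
    """
    return all(scores.get(name, 0.0) >= threshold
               for name, threshold in O1_THRESHOLDS.items())
```

Note that ties count as a match ("matching or exceeding"), so a model scoring exactly 74.4 / 89 / 78 would resolve the question.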
This question is managed and resolved by Manifold.
It's soooooo slow though. And in real-world day-to-day usage its results are rarely better than Sonnet 3.5 (new), which is nearly instantaneous. (I ask both the same questions, and use both on a nearly daily basis.)
Related questions
Before February 2025, will a Gemini model exceed Claude 3.5 Sonnet 10/22's Global Average score on Simple Bench?
85% chance
What will be true of Gemini 2?
Will ANY Gemini or Apollo astronaut become centenarian?
48% chance
Will Gemini 1.5 Pro seem to be as good as Gemini 1.0 Ultra for common use cases? [Poll]
70% chance
Will Gemini exceed the performance of GPT-4 on the 2022 AMC 10 and AMC 12 exams?
72% chance
Will "Gemini [Ultra, 1.0] smash GPT-4 by 5x"?
18% chance
Will Gemini 2 be released before EOY 2025?
97% chance