AI resolves at least X% on SWE-bench without any assistance, by 2028?
Option    Probability
X = 16    99%
X = 32    96%
X = 40    95%
X = 50    95%
X = 60    97%
X = 70    94%
X = 75    95%
X = 80    93%
X = 85    95%
X = 90    94%
X = 95    73%

Currently the SOTA resolves 1.96% of issues "unassisted".
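For concreteness, here is a minimal sketch of this market's resolution rule (a Python illustration, not Manifold's actual resolution code): the option "X = n" resolves YES once some system scores at least n% on SWE-bench unassisted.

```python
# Hypothetical illustration of the resolution rule: "X = n" resolves YES
# once a system scores >= n% on SWE-bench without assistance.

def options_resolving_yes(sota_percent, thresholds):
    """Return the X thresholds that would resolve YES at the given SOTA score."""
    return [x for x in thresholds if sota_percent >= x]

thresholds = [16, 32, 40, 50, 60, 70, 75, 80, 85, 90, 95]
print(options_resolving_yes(1.96, thresholds))   # [] -- no listed option yet
print(options_resolving_yes(13.86, thresholds))  # [] -- still below X = 16
```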

For the % resolves where assistance is provided, please refer to the following market:

Leaderboard (Scroll a bit)


https://www.swebench.com/

@mods can you resolve the relevant options?

@HansPeter The website features multiple leaderboards; I assume the "full" one is closest to the original?

And there's no distinction between assisted and unassisted anymore, so is it all unassisted now?


It appears that while Devin gets really good scores on SWE-bench (14%), it's misleading. They don't test on SWE-bench; they test on a small subset of SWE-bench that contains only pull requests.

@firstuserhere SWE-bench consists only of pull requests:

SWE-bench is a dataset that tests systems' ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.

See swebench.com
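As a hedged sketch of what those task instances look like, the dataset can be inspected via the Hugging Face hub (assuming the official ID princeton-nlp/SWE-bench; field names follow the published schema):

```python
# Sketch of inspecting the SWE-bench dataset via Hugging Face datasets.
from datasets import load_dataset

swebench = load_dataset("princeton-nlp/SWE-bench", split="test")
print(len(swebench))  # 2294 issue-PR task instances

example = swebench[0]
# Each instance pairs a GitHub issue with the pull request that fixed it;
# evaluation replays the repo's unit tests against a candidate patch.
print(example["repo"])               # e.g. "astropy/astropy"
print(example["problem_statement"])  # the issue text the system must resolve
print(example["patch"])              # the reference (gold) PR diff
```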

I'll resolve YES for X = 4 and X = 8 after a few days' wait, just to make sure it's all legit.


@firstuserhere Hey, could you please resolve these?

I resolved those

From https://www.cognition-labs.com/blog

We evaluated Devin on SWE-bench, a challenging benchmark that asks agents to resolve real-world GitHub issues found in open source projects like Django and scikit-learn.

Devin correctly resolves 13.86%* of the issues end-to-end, far exceeding the previous state-of-the-art of 1.96%. Even when given the exact files to edit, the best previous models can only resolve 4.80% of issues.

We plan to publish a more detailed technical report soon—stay tuned for more details.
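As a rough sanity check on those percentages, here is the implied issue count, assuming (perhaps incorrectly, given the asterisk in the blog) that the rates are taken over the full 2,294-instance test set:

```python
# Back-of-the-envelope conversion of quoted resolve rates into issue counts;
# the counts are approximations, not figures from the Cognition report.
total = 2294
for label, pct in [("Devin", 13.86),
                   ("prior SOTA (unassisted)", 1.96),
                   ("prior SOTA (assisted)", 4.80)]:
    resolved = round(total * pct / 100)
    print(f"{label}: ~{resolved} of {total} issues")
# Devin: ~318 of 2294 issues
# prior SOTA (unassisted): ~45 of 2294 issues
# prior SOTA (assisted): ~110 of 2294 issues
```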
