Will Manifold think "it would be safer if all AI was open source" when:
It's 2025 Jan: 8%
It's 2026 Jan: 10%
GPT 5 comes out: 6%
Llama 4 comes out: 10%
It's 2030: 30%

At each of the dates in this market I'll run a poll asking:

"Would it be safer for humanity if all AI was open source?"

First poll:

2nd poll - Jan 2025


I think the problem with this question is its wording. "Safer for humanity" makes me think about existential risks. I think economic risks are MUCH higher, and it would be better if more models were open source, but that doesn't translate to "safer for humanity" because I don't think unemployment and inequality will lead to extinction.
In my opinion, existential risk is very low at the moment: from what I've seen, all current models completely fail to display agentic behavior, and architecturally they are not optimisers, which is what most "AI kills everyone" theories assume.

This interview with Zuck was my inspiration for this market.

At 38 minutes they discuss the dangers of open source, and at 48:50 Zuck makes an interesting point that maybe the biggest harms aren't the existential ones but the real ones that exist today.
