Will manifold think "it would be safer if all AI was open source" when:
It's 2025 Jan: 8%
It's 2026 Jan: 10%
GPT 5 comes out: 6%
Llama 4 comes out: 10%
It's 2030: 30%
This question is managed and resolved by Manifold.
I think the problem with this question is the wording. "Safer for humanity" makes me think of existential risks. I think economic risks are MUCH higher and it would be better if more models were open source, but that doesn't translate to "safer for humanity", because I don't think unemployment and inequality will lead to extinction.
In my opinion, existential risks are very low at the moment: from what I've seen, all current models completely fail at displaying agentic behavior. They are also, architecturally, not optimisers, which is what most "AI kills everyone" theories assume.
Related questions
Will Manifold stop using AI to make my questions worse by the end of 2025?
59% chance
Will open-source AI win (through 2025)?
25% chance
An AI is trustworthy-ish on Manifold by 2030?
46% chance
Will there be a disaster caused by open source developers doing unsafe things with AI by 2028?
61% chance
At the beginning of 2026, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
66% chance
Will open-source AI win? (through 2028)
35% chance
When will manifold users think we have AGI? [Resolves to a majority yes in poll]
At the beginning of 2027, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
69% chance
Does open sourcing LLMs/AI models (a la meta) increase risk of AI catastrophe?
61% chance
At the beginning of 2028, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
67% chance