What will be the median p(doom) of AI researchers after AGI is reached?
Above 5%: 83%
Above 10%: 73%
Above 20%: 62%
Above 50%: 19%
Above 80%: 10%
AGI defined as an AI that is better at AI research than the average human AI researcher not using AI.
p(doom) defined as human extinction or outcomes that are similarly bad.
In Katja Grace's 2022 survey, median values were 5% for "extremely bad outcome (e.g., human extinction)" and 5-10% for human extinction.
All answers that are true resolve Yes.
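For illustration, a minimal sketch of the resolution rule above, using hypothetical survey responses (not real data): take the median of researchers' p(doom) estimates, then every "Above X%" answer whose threshold the median exceeds resolves Yes.

```python
from statistics import median

def resolve(p_dooms, thresholds=(0.05, 0.10, 0.20, 0.50, 0.80)):
    """Resolve each 'Above X%' answer from individual p(doom) estimates.

    Returns a dict mapping each answer to True (Yes) or False (No),
    based on whether the median estimate exceeds that threshold.
    """
    m = median(p_dooms)
    return {f"Above {int(t * 100)}%": m > t for t in thresholds}

# Hypothetical post-AGI survey responses (fractions, not percentages).
sample = [0.02, 0.05, 0.15, 0.30, 0.90]
print(resolve(sample))  # median is 0.15, so Above 5% and Above 10% resolve Yes
```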
This question is managed and resolved by Manifold.
Related questions
Will we reach "weak AGI" by the end of 2025? (30% chance)
What will be the average P(doom) of AI researchers in 2025? (20% chance)
Will we have at least one more AI winter before AGI is realized? (25% chance)
Doom if AGI by 2040? (35% chance)
Will OpenAI be in the lead in the AGI race end of 2026? (53% chance)
Will Paul Christiano publicly announce a greater than 10% increase in his p(doom | AGI before 2100) within the next 5 years? (44% chance)
If there exists a super-intelligent AI, would a majority of AI researchers answer Yes to "Have we reached AGI?" (50% chance)
In which year will a majority of AI researchers concur that a superintelligent, fairly general AI has been realized?
ML researchers' median probability of existential risk from AI (30)
Will we get AGI before 2037? (76% chance)