If Artificial General Intelligence has a poor outcome, what will be the reason?
75%: Something from Eliezer's list of lethalities occurs.
67%: Someone finds a solution to alignment, but fails to communicate it before dangerous AI gains control.
55%: Alignment is impossible.
37%: Someone successfully aligns AI to cause a poor outcome.
Inverse of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=S3JhbnR6.

This market will not resolve. It exists primarily for users to explore particular lethalities. Please add responses.

"Poor" here means human extinction or mass human suffering.
This question is managed and resolved by Manifold.