I make a contribution to AI safety that is endorsed by at least one high profile AI alignment researcher by the end of 2026
59% chance
For the purposes of this question, any AI alignment researcher who has written a sequence in either the Alignment Forum library or the LessWrong library counts as "high profile".
This question is managed and resolved by Manifold.
Related questions
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Will non-profit funding for AI safety reach 100 billion US dollars in a year before 2030?
38% chance
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community?
52% chance
Will someone commit terrorism against an AI lab by the end of 2025 for AI-safety related reasons?
23% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
43% chance
Will there be a coherent AI safety movement with leaders and an agenda in May 2029?
77% chance
Will there be serious AI safety drama at Meta AI before 2026?
58% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
51% chance