Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
2030 · 39% chance

As of now, people are debating existential risk due to misalignment, technological unemployment, lack of security in critical applications, and fairness/equity/inclusion issues, among others. Will something completely different and very important be generally considered the main risk of AI in 2030? Resolves on Dec 31, 2030, based on the consensus of researchers in 2030.

  • Update 2024-12-25 (PST): Lack of security in critical applications includes risks such as AI-enabled bioterrorism. (AI summary of creator comment)


I just realized that the description does not specify what I mean by “today”. It means “at market opening”. If you are unsatisfied with this, I can refund your bet.

"completely different" from what? e.g., if people in 2030 think the most important thing is something there have been three LessWrong posts about as of 2023, does that count as completely different?

@StevenK In hindsight it is relatively easy to find an obscure precursor to most ideas. Let’s draw the line at topics that, as of now, have at least half as many preprints on arXiv (or an equivalent repository) as the least popular of the examples I listed above.
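A rough sketch of how that counting rule could be checked, assuming the public arXiv API is used as the preprint source; the search phrases and helper names below are illustrative assumptions, not part of the market's resolution criteria:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"
TOTAL_RESULTS = "{http://a9.com/-/spec/opensearch/1.1/}totalResults"

def arxiv_count(query: str) -> int:
    """Count arXiv preprints matching a full-text phrase query."""
    params = urllib.parse.urlencode(
        {"search_query": f'all:"{query}"', "max_results": 1}
    )
    with urllib.request.urlopen(f"{ARXIV_API}?{params}") as resp:
        feed = ET.fromstring(resp.read())
    return int(feed.findtext(TOTAL_RESULTS, default="0"))

# Illustrative stand-ins for the example risks in the market description.
baseline_topics = [
    "AI existential risk misalignment",
    "technological unemployment",
    "AI security critical systems",
    "fairness accountability machine learning",
]
least_popular = min(arxiv_count(t) for t in baseline_topics)

def already_established(candidate: str) -> bool:
    # A candidate 2030 risk would NOT count as "completely different"
    # if it already has at least half as many preprints as the least
    # popular of the example topics.
    return arxiv_count(candidate) >= least_popular / 2
```

For example, `already_established("AI-enabled bioterrorism")` would return True under this sketch if that phrase already matches enough preprints today.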

@mariopasquato Does using AI in molecular biology/chemistry count?

@Shalun You mean the idea that an AI could leverage this to manufacture biological nanobots?

@mariopasquato I mean something simpler than the destruction of life on Earth: for example, the creation of new deadly viruses or prions.

@Shalun I would file it under “lack of security in critical applications”. People have already discussed the risk that AI can enable bioterrorism.
