"Massive anti-AI preference cascade based on xrisk" by end of 2025?
97 traders · Ṁ43k volume · closes Dec 31 · 11% chance

@CateHall writes:

Will I agree that she was right? I'll resolve to YES if I think this has happened before the end of 2025; NO if obviously not; and I'm open to partial resolutions.

Warning: this market may inherently be kind of fuzzy/vibes-based; if anyone has ideas for more objective criteria that would answer the core question rather than getting into technicalities, please speak up!

Sources I will likely consult for a resolution, if they're open to advising:

  • My own information diet (friends, substack/blogs, LW, X/twitter)

  • Cate Hall herself

  • Manifold moderators

  • Some kind of LLM judge


Are we counting mass unemployment as "catastrophic"? It's a much more obvious danger, and more directly salient to people's lives, so any public AI Danger movement will prob focus on that rather than extinction. But mass unemployment would in fact be catastrophic! So arguably it should count here

@AhronMaline I disagree very much. Not because it would not be bad, but because it is a different issue, treated separately in discourse and requiring different measures. Also I think "xrisk" is a pretty clear label.

Just like drones: there's an overwhelming MIC and LEO vested interest that says otherwise.

It would be nice if you could give some examples of things which would solidly count, just barely count, and just barely not count.

Some things which I think should be pretty far from counting:

  • OpenAI and xAI talk much more about x-risk and Bannon talks much more about this too. Employees at AI companies seem much more sold. (But no other substantial changes.)

  • Talking about x-risk from AI becomes substantially more common on X/twitter, but there aren't other substantial changes.

Things which seem more borderline and I'm unsure about (probably shouldn't count?):

  • There is a big leftist-only preference cascade to start caring a bunch more about x-risk and generally being more anti-AI. It's one of the main things that Sanders and AOC talk about for some period. It also becomes very popular on Bluesky. But it doesn't spread beyond this.

  • David Sacks and the AI-related parts of the Trump admin start saying things about x-risk which are much more sympathetic than they currently are, and it becomes much more accepted on right-wing tech twitter. AI companies piggyback on this to say much more about x-risk. There is substantially more discussion of AI being bad and an x-risk from the populist right. The left doesn't really change much. (Wouldn't qualify as massive?)

  • A clearly massive preference cascade toward caring much more about x-risk but which mostly doesn't manifest as being anti-AI.

huh, I'm a bit surprised that so much of the Manifold community thinks this is unlikely; anyone want to make a quick case for why?

I think I'm 50%+ on this, based on things like: the recent Congress hearings, the MIRI book, the fact that my parents are now like "oh I'm glad you're working on safety", people saying their Uber drivers have takes on xrisk...

I think, as with James, it feels kinda inevitable but I'm least sure about the timeline on this - scary demos or an "inconvenient truth" moment would push for sooner, and distracting things in the world (eg Trump antics) would push for later

@Austin Nothing ever happens, AI is low salience, I expect the MIRI messaging to only work somewhat narrowly (but lower confidence about this last one).

@Austin Timeline is way too soon for AI-specific x-risk to become a high priority issue even among informed elites. Prob should be under 5%.

@Austin That said, your res sources seem to be centered on people who already are the likeliest people in the entire world to already think AI X-risk is a big deal, so... maybe I'm being foolish here? Or maybe them moving significantly further on AI X-risk is thereby less possible because they're already informed?


@Austin Most of the mainstream anti-AI sources I've seen just say that it's not even that smart, which doesn't mesh very well with concerns about x-risk.

@JamesGrugett The claim that xrisk would be top of mind is spicy. Here's the operationalization I most like for that subclaim https://manifold.markets/AdamK/in-january-2026-how-publicly-salien

yeah basically certain to me that labour is a bigger consideration than xrisk, for basically ever


I think this is a great prediction and I mostly agree with it, though still seems more unlikely than not in this time frame.

© Manifold Markets, Inc.