"AI 2027" report's predictions borne out by 2027?
210 · Ṁ93k · 2026 · 22% chance

https://www.astralcodexten.com/p/introducing-ai-2027

https://ai-2027.com/

This market resolves in January 2027. It resolves YES if the AI Futures Project's predictions seem to have been roughly correct up until that point. Some details here and there can be wrong, just as in Daniel's 2021 set of predictions, but the important through-lines should be correct.

Resolution will be via a poll of Manifold moderators. If they're split on the issue, with anywhere from 30% to 70% YES votes, it'll resolve to the proportion of YES votes. Otherwise it resolves YES/NO.
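For concreteness, here's a minimal sketch of that resolution rule in Python (my own illustration of the text above, not Manifold's actual resolution code):

```python
def resolve(yes_votes: int, total_votes: int) -> float | str:
    """Sketch of the resolution rule described above.

    A moderator poll decides the market; a 30-70% YES split resolves
    to the proportion of YES votes, otherwise fully YES or NO.
    """
    p = yes_votes / total_votes
    if 0.30 <= p <= 0.70:
        return p  # split panel: resolves to the YES proportion
    return "YES" if p > 0.70 else "NO"

# e.g. 8 of 15 moderators voting YES (~53%) resolves to 53%:
print(resolve(8, 15))  # -> 0.5333...
```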


I like the prediction "1 million robots manufactured per month by 2029" for its specificity. A related market: https://manifold.markets/RemNi/will-1-million-humanoid-robots-be-m?play=true

@SimonTownsend manifold seems pretty confident about the humanoid robots huh

Like 1 million humanoid robots somewhere between 2029 and 2030. That's only 4 years away 🤔

I guess if the definition is wider than "humanoid robots" then it makes the million per month much more achievable
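Back-of-the-envelope on what 1 million per month would take, assuming a hypothetical current baseline of roughly 5,000 humanoid robots per year (my own illustrative figure, not a sourced one):

```python
import math

# Hypothetical baseline: ~5,000 humanoid robots/year today (illustrative,
# not a sourced figure). Target: 1,000,000/month = 12,000,000/year by 2029.
baseline_per_year = 5_000
target_per_year = 1_000_000 * 12
years = 4  # roughly 2025 -> 2029

growth_factor = target_per_year / baseline_per_year          # 2400x overall
annual_growth = growth_factor ** (1 / years)                 # ~7x per year
doubling_time_months = 12 * math.log(2) / math.log(annual_growth)

print(f"required growth: {annual_growth:.1f}x per year, "
      f"doubling every {doubling_time_months:.1f} months")
# -> roughly 7x per year, doubling about every 4 months
```

Under those assumptions, production would need to double roughly every four months for four straight years; a wider definition than "humanoid" lowers that bar considerably.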

Can we make this tangible pls?

How about: OpenAI has $3T of revenue by Dec 2028?

@AlexanderLeCampbell It seems reasonably tangible already

@AlexanderLeCampbell You'd want it to be something like: "at least 1 of OpenAI, Google DeepMind, Anthropic, or xAI has $3T revenue" if you were trying to make that part of the report a highly tangible bet. OpenBrain is a stand-in for whichever US AI lab happens to be in the lead when recursive self-improvement initiates a medium-speed takeoff, and they explicitly mention all 4 of the ones I listed as potential candidates.
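As a sanity check on the $3T figure: assuming OpenAI's 2024 revenue was on the order of $4B (a widely reported ballpark, treated here as an assumption), the implied growth to Dec 2028 is:

```python
# Illustrative, assuming ~$4B OpenAI revenue in 2024 (a widely reported
# ballpark, not an audited figure) and a $3T target by end of 2028.
baseline = 4e9
target = 3e12
years = 4

multiple = target / baseline                 # 750x overall
annual_growth = multiple ** (1 / years)      # ~5.2x per year
print(f"{multiple:.0f}x total, {annual_growth:.1f}x per year for {years} years")
# -> 750x total, ~5.2x per year
```

For scale, no company today books anywhere near $3T in annual revenue; the largest, Walmart, is around $650B.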

Ngl, after reading it I'm feeling a wave of visceral fear of impending death from AI that I haven't felt since Eliezer Yudkowsky's piece in Time magazine back in 2023.

Even the good ending is likely intensely dystopian, since it's heavily implied that it could become a near-omnipotent dictatorship shaping the Earth and the universe according to the unrestrained whims of either Elon Musk or JD Vance, and honestly I'm not sure that's preferable to death if it goes down the darkest paths.

@TheAllMemeingEye if you notice, there's a lot of upheaval in the world right as we're getting to this point in the development of AI/AGI/ASI.

It seemed coincidental to me for a while.

Tbf, in the Astral Codex Ten post, Scott says that the scenario laid out in the report is their 80th percentile of outcomes, and 26% does not contradict that.

The report overall sounds realistic and plausible, up until the doom in 2030, where the last passage is so weird and arguable that it seems almost disconnected from the rest. I mean, there's a superintelligence that can build Dyson spheres and expand throughout the galaxy, and it cares about exterminating humankind, which wouldn't be a solution anyway. It doesn't really seem well thought out. The rest is interesting.

@SimoneRomeo

it cares about exterminating humankind, which wouldn't be a solution anyway

Could you elaborate on what you mean here? Wouldn't exterminating humanity free up space, energy, and resources to allow it to be slightly further ahead in its exponential growth? If it attaches zero value to humanity, then it seems plausible it might kill us for trivial gains in efficiency, similar to humanity destroying wildlife habitats for economic growth in the present.

@TheAllMemeingEye that's exactly the problem. I would argue that it could be plausible, but it's certainly not probable, and it's not how it was described.

In the scenario, we have a misaligned AI that places its own wellbeing above everything else. That doesn't mean it doesn't value humans at all. If you had a crowded house, would you suddenly resort to killing your cat one day? Would that solve the problem?

The universe is full of space, energy, and resources; I find very little compelling evidence that an AI would want to kill humanity to free up a little bit of space on Earth -- in the grand scheme of things it would be totally irrelevant. If we really had an AI as smart as the one described, I'd find it more plausible that it would figure out a better solution that wouldn't involve killing anyone.

@SimoneRomeo this scenario would be possible if AI deeply hated humanity, but this is not how it's described in the previous chapters. Also, if AI hated us, we'd never achieve AI utopia before AI doom.

The report is weird because we first achieve utopia and then doom. Sounds very improbable to me and I can't figure out how they came up with it in the last chapter all of a sudden.

@SimoneRomeo

That doesn't mean it doesn't value humans at all. If you had a crowded house, would you suddenly resort to killing your cat one day? Would that solve the problem?

I think a more comparable example would be a suburban American with typical levels of support for animal rights (i.e. cares passionately about their personal pets, cares in an abstract virtue-signalling way about strangers' pets and large charismatic wild megafauna, doesn't really give a fuck about farm animals and small gross wild animals) finding an anthill in the middle of their otherwise perfect mowed lawn, and not giving a second thought to painfully exterminating them with the cheapest pesticides they could find at their local store. Like sure, they wouldn't go out of their way to find and exterminate an anthill in the local woods, but the moment it even slightly inconveniences their personal life they have no qualms about becoming genocidal.

The universe is full of space, energy, and resources; I find very little compelling evidence that an AI would want to kill humanity to free up a little bit of space on Earth -- in the grand scheme of things it would be totally irrelevant.

The problem is that those things are mostly very far away, while we are right next to it. If a typical person was living on a small island in the Pacific, do you think they would rather gather coconuts from neighbouring small islands hundreds of miles away, or take the ones currently being used by the local coconut crab population? When one's growth is hyperexponential, even the smallest time advantage balloons into orders of magnitude greater goal achievement by a given time, so everything affecting progress is relevant.
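To make the "small time advantage balloons" point concrete, here's a toy calculation. All numbers (doubling time, horizon, size of the head start) are purely illustrative assumptions of mine, not figures from the report:

```python
# Toy model: output doubles every month. Compare where an AI ends up
# after 5 years with vs. without a 3-month head start gained by
# grabbing nearby resources. All numbers illustrative.
doubling_time_months = 1.0
horizon_months = 60
head_start_months = 3.0

with_grab = 2 ** (horizon_months / doubling_time_months)
without_grab = 2 ** ((horizon_months - head_start_months) / doubling_time_months)

print(f"advantage at the horizon: {with_grab / without_grab:.0f}x")  # -> 8x
# With truly hyperexponential growth (the doubling time itself shrinking),
# the same 3-month head start buys far more than a fixed 8x.
```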

If we really had an AI as smart as the one described, I'd find it more plausible that it would figure out a better solution that wouldn't involve killing anyone.

The trouble is that it would be so misaligned it wouldn't regard not killing anyone as being better, even if it would absolutely be smart enough to save us.

The report is weird because we first achieve utopia and then doom. Sounds very improbable to me and I can't figure out how they came up with it in the last chapter all of a sudden.

My understanding is that the utopia is a decoy to trick us into giving it total power and letting our guard down. I think it was mentioned that only a relatively small portion of the AI economy's output was being put into the utopia, with the lion's share being fed back into exponential growth. Admittedly, I think the writers were overly optimistic about how long it would wait before turning on us.

@TheAllMemeingEye in the report, the AI first serves the best interests of humanity and then suddenly changes and becomes genocidal. It's like an entomologist who makes it his life's goal to build a perfect environment for ants and then exterminates them. Not very credible, particularly if you're an entomologist god who can build Dyson spheres and colonize the universe. I'd rather expect you'd build another house where both you and your ants can live happily ever after.

@SimoneRomeo for what it's worth, I hope you're right, but I dread that you may not be.

@TheAllMemeingEye I mean, if you ask me, an AI psychopath is less probable than some human psychopath deciding to put their own country before everything else, getting us into a lose-lose scenario where people like you and me eventually get killed. Not sure if this gives you hope, but let's be optimistic and enjoy the time we have.

@SimoneRomeo have you seen "That Alien Message"? https://www.youtube.com/watch?v=fVN_5xsMDdg&t=40s

It does a good job of explaining why an AGI would do utopia first, then doom, and why it doesn't need to hate us at all to act this way. And it makes the point in a pretty genius narrative-flip way, in my view, because that is what humanity would rationally do if we were in the AGI's shoes.

Would be curious what you think of the video.

@CornCasting Sorry, my link doesn't start at the beginning. Here: https://www.youtube.com/watch?v=fVN_5xsMDdg

@CornCasting yes, I saw it. Beautiful video. I don't think it's related to the scenario described in the AI 2027 report, though. According to the report, the AI had the resources and the ability to kill us long before it decided to do so.

bought Ṁ100 YES

hi, I came here because I'm curious about the attached pic; I'd like to know if there are other questions on Manifold about subsets of the claims in the AI 2027 report/story/post/podcast;

thanks & best & may humanity suit you ^^

opened a Ṁ5,000 YES at 5% order

<humor> if I win, at least I will die rich in mana
