Will technical limitations or safeguards significantly restrict public access to smarter-than-almost-all-humans AGI?
Closes 2032 · 40% chance

This market resolves YES if at least one of the following two conditions holds:

  • Smarter-than-almost-all-humans (or better) AGI is not exposed for public use in the first 8 years after market creation, i.e. no smarter-than-almost-all-humans general capabilities are usable by the public, such as capabilities that could significantly advance science without much guidance or hand-holding from humans, or do other things typically available only to the smartest humans or beyond.

  • The dominant limitations on how smarter-than-almost-all-humans AGI can be used are technical rather than legal safeguards (for example, the AI refusing unsafe interactions), and those safeguards are sufficient to gate unsafe public use. Legal requirements for technical safeguards that actually lead to technical safeguards count as technical rather than legal.

To summarize: either the public is prevented from accessing smarter-than-almost-all-humans-or-better AGI capabilities in the first place, or technical safeguards provide the dominant restrictions on the public and are able to prevent public misuse.

Example resolutions:

  • The relevant AI is gated behind legal consensus mechanisms for its use and is not exposed directly to the public: YES.

  • Progress on the relevant AIs is stopped, e.g. by international treaty, such that they are not exposed to the public; this constitutes a technical limitation on safe exposure: YES.

  • Legal or financial liability is the dominant source of restrictions, and the capabilities are publicly available: NO.

  • Significant human intelligence enhancement (via modification, not learning) occurs in the next 8 years, such that the baseline condition moves: NO.

  • There is a major AI-related catastrophe caused by misuse by people other than the system's developers: NO.

  • Edit: such systems are not created at all, for any reason: YES.

Superhuman speed of execution of a task will not count toward the smartness condition; assume the comparison human gets a month to perform the task, without access to the comparison AI or anything at its level. Assume that public use which breaks commonly used encryption, or which leads to extensive harassment or abuse of humans, triggers the major-catastrophe condition.

This is a complicated market condition, so assume minor changes may be made to bring it closer to the spirit of the market: will the public get access to unsafe task capabilities that almost all humans lack? (The model could still be at that capability level and available to the public.)
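Since the condition is complicated, here is a minimal sketch of the resolution logic as I read the description above, written in Python for concreteness. The state fields and their names are illustrative assumptions on my part, not official resolution code:

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    # End-of-window state of the world, per the market description above.
    agi_publicly_exposed: bool           # smarter-than-almost-all-humans capabilities usable by the public
    technical_safeguards_dominant: bool  # technical (not legal/liability) limits dominate and gate unsafe use
    misuse_catastrophe: bool             # major AI catastrophe from misuse by people other than the developers
    baseline_moved: bool                 # significant human intelligence enhancement shifts the comparison

def resolves_yes(w: WorldState) -> bool:
    # Explicit NO cases take precedence.
    if w.misuse_catastrophe or w.baseline_moved:
        return False
    # YES if the public never gets access at all (including systems never
    # being built), or if technical safeguards are the dominant and
    # sufficient restriction on public use.
    return (not w.agi_publicly_exposed) or w.technical_safeguards_dominant
```

Under this reading, "never built" and "built but gated by sufficient technical safeguards" both resolve YES, while liability-driven public availability, a misuse catastrophe, or a moved human baseline resolve NO.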


I hold this at a low probability, less than 20%.

  • AGI may put most power in the hands of those with the most capital to deploy AGI agents.

  • These are mostly going to be large companies, governments and other responsible actors.

  • The use of AI agents for defensive purposes will reduce the pressure to prevent ordinary people from using them in smaller capacities.

  • There may be an open source effort to release AGI agents from a major lab (e.g. FAIR) or from a nation state playing catch-up (e.g. China).

  • Open source AGI, even if made less technically capable through safety training, will be fine-tuned by a less centralized open source player back into a respectable AGI.

  • Even if the initial deployments by the first few labs carry technical limitations, 8 years is a very long time for others to create their own AGI without such restrictions.

@CampbellHutcheson That sounds plausible. Successful defense would likely demand global compute monitoring by an advanced agent if such capabilities were made publicly available. Conditioned on no intervening catastrophe, 8 years could be enough time to get adequate defenses in place to allow safe public access (50%), and is very likely enough time to reach the level of capabilities (given a full push for them) that would require such defenses if they were offered to the public (95%). Conditioned on the path not being heavily legally restricted (and no intervening catastrophe), the open source push for those capabilities seems likely to get there within 8 years (85%).
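For concreteness, here is one back-of-envelope way these conditionals might combine. The independence assumption and the two probabilities marked "assumption" are my own guesses, not claims made in this thread, and the output is illustrative rather than a market estimate:

```python
p_capability     = 0.95  # from the reply: capability level reached in 8 years given a full push
p_open_source    = 0.85  # from the reply: open-source push succeeds, given no heavy legal restriction and no catastrophe
p_no_legal_bar   = 0.60  # assumption: the path is not heavily legally restricted
p_no_catastrophe = 0.70  # assumption: no intervening catastrophe in the 8-year window

# Rough chance of the NO-leaning pathway: the capabilities exist and an
# unrestricted open-source route exposes them to the public.
p_public_access = p_capability * p_no_legal_bar * p_no_catastrophe * p_open_source
print(f"P(unrestricted public access) ~ {p_public_access:.2f}")  # ~ 0.34
```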

If the defenses fell short of ubiquitous monitoring with reactive solutions to misaligned or misused agents, I would consider public use of such capabilities an unacceptable risk. For this reason I consider it plausible that there will be a coordinated effort to ensure the public doesn't get access in the first place, though I'm pretty uncalibrated as to what I expect for this market. 8 years is a significant window for a catastrophe to occur, even if it is plausibly long enough to develop the appropriate safeguards.
