Will Bing Chat be the breakthrough for AI safety research?
9% chance
Bing Chat seems to be making even more headlines than ChatGPT did at its release, though not necessarily for the right reasons. Most reports focus on its apparent lack of emotional control and alignment.
Will this generate so much public interest and raise so much alarm that AI safety and AI alignment will become mainstream and receive significantly more interest and resources than before?
I will attempt to resolve based on media and/or industry reports. The resolution date is currently set to the end of 2025, but I will resolve earlier if there is clear evidence one way or the other. If there is a clear uptick in AI safety interest starting shortly after Bing Chat's release, this will count even if a causal relation is hard to prove.
This question is managed and resolved by Manifold.
Related questions
Is Bing Chat conscious?
18% chance
Will it be easy to conjure Sydney chatbot on Bing or other platforms by January 2025?
97% chance
By 2026, will a prominent chatbot with some access to the internet do something actually harmful and unintended?
68% chance
Will OpenAI, Anthropic, Google, or Meta release an AI chatbot that has ads in the responses in 2025?
57% chance
Will an AI chatbot overtake Google as the most used search engine by 2030?
45% chance
Will there be serious AI safety drama at Google or DeepMind before 2026?
60% chance
Will a major AI lab claim to use activation steering in its main chat assistant by EOY 2025?
35% chance
Will it be revealed by 2030 that Bing Sydney's release was partially a way to promote AI safety?
5% chance
Will ChatGPT or Bing be the most popular LLM chatbot at the end of 2024?
82% chance
Will Bing's chat model get shut down before 2024?
9% chance