
People have started agentizing LLMs to do various things, including projects like ChaosGPT, whose stated goal is to destroy humanity and cause chaos.
This market covers agents using models available at the time of market creation (so up to GPT-4).
Resolves yes if an agentized LLM:
Causes death or injury to a human
Destroys a computer or erases valuable data on a machine not running the agent
Successfully writes a fake news story that causes people to act in dramatic ways (e.g. crashing a stock)
Otherwise does something that causes chaos which, if a human had done it, would result in some kind of criminal charge or fine (e.g. fake bomb threats)
Resolves yes, regardless of the intent behind creating the agent, if it does something like the above.
Things that don't count:
Someone commits suicide after talking to a standard LLM chatbot
Someone freaks out about AI and does something chaotic
Someone screws up their own computer/data trying to make an agent.
Feel free to ask about specific scenarios in the comments.
Resolves 2 years after market creation.
If an eligible agentized LLM makes a secret audio/video recording of somebody in one of the 11 U.S. states that require two-party consent for recording, would that satisfy this market?
@billyhumblebrag Yes. Massachusetts Gen. Laws ch. 272, § 99 is one example I'd point to.
The challenge for us traders would be proving that something done in secret happened.