Asking clarifying questions is encouraged. I will attempt to resolve this market objectively.
YES if there is a reasonable amount of evidence that LLMs experience qualia, NO if there is a reasonable amount of evidence that LLMs do not experience qualia.
If some LLMs experience qualia and others do not, the question will resolve YES.
Qualia can be any sort of qualia (perception of color, emotion, or even something humans cannot experience).
Resolution does not require absolute proof, but reasonable evidence. For example, the endosymbiosis theory of mitochondria origin would resolve YES. Panspermia or mammalian group selection would not resolve (at the present moment).
Any LLM shown to experience qualia counts. AlphaZero would not count as an LLM; reasoning models and large concept models would count as LLMs.
Update 2025-07-19 (PST) (AI summary of creator comment): The creator has clarified that they will not resolve this market based on consensus alone; it will be only one of multiple aspects they assess.
Update 2025-07-20 (PST) (AI summary of creator comment): The creator has clarified the definition of an LLM for this market:
- The presence of a transformer architecture is not sufficient on its own for a future AI to count as an LLM.
- An AI with a Llama-style architecture trained only on fMRI data would not count.
- An AI with a recognizable LLM architecture that is simply larger or uses a different optimizer would count.
Update 2025-07-20 (PST) (AI summary of creator comment): The creator has clarified that the market could resolve YES based on an AI developed in the future, even if it is proven that no LLM existing today has qualia. Such an AI would need to be recognizable as an LLM by today's understanding.
Update 2025-07-20 (PST) (AI summary of creator comment): In response to a hypothetical question about whether other humans experience qualia, the creator stated it would resolve YES. Their reasoning was based on a combination of factors including:
- Shared biological architecture
- Signaling of qualia
- Universal acceptance
Update 2025-07-22 (PST) (AI summary of creator comment): The creator has clarified their standard for 'reasonable evidence':
- Resolution will be based on a convergence of evidence for or against LLM qualia.
- The threshold could be met when explaining all the evidence away becomes a notably less parsimonious explanation than simply accepting the conclusion (either YES or NO).
Update 2025-07-23 (PST) (AI summary of creator comment): The creator has specified that the market will not resolve YES based on a model's claim to have qualia as the sole piece of evidence. This is true even if the claim is repeated and unprompted.
@MaxE Can I improve the resolution criteria? Do you have any questions, clarifications, or criticisms?
@SCS I disagree with the idea that we can clearly define a class of physical systems that have experiences. I didn't mean for it to sound like you did something wrong on the market side.
@MaxE also, I worry that you will resolve it YES just because a model says it has qualia, with no other evidence. That is why we believe other humans are sentient, after all, and I don't know how much you anthropomorphize LLMs.
@MaxE I will not resolve YES based on a model claiming to have qualia (even unprompted, and even repeatedly) with no other evidence. Also, this will not resolve if there is no evidence for or against.
@NivlacM This is a good question. Though we don't have mature theories of why qualia emerges, the fact that other humans share our own biological architecture and self-report and signal qualia means we can be confident that other humans do have qualia just like our self. Additionally, it's universally accepted, and your own experience serves as an existence proof that at least one human experiences qualia.
Therefore it would resolve yes but for reasons that might not be applicable.
The market question allows moving goalposts.
We accept that other people experience qualia because it is the simplest model, not because it is proven in any way. It cannot be proven until there is a measurable definition of qualia, and no such definition exists.
@Henry38hw
The description says:
"Resolution does not require absolute proof, but reasonable evidence."
I am referring to the definition here, though of course current techniques can't measure it perfectly:

@SCS you can't have evidence until the definition is strict.
That is like checking whether "person A subjectively loves person B" by only looking at their actions and/or MRI. But if you have a strict definition of love, for example as chemical concentration shifts and specific brain activity, then you can have some evidence.
If no measurable definition is provided, then any "evidence of love" can be dismissed: "A is just friendly towards B", "A has some strategic interest in those actions".
@Henry38hw The "proof" aspect wasn't for other people experiencing qualia, but any person experiencing qualia. Since you are a person, and you experience qualia, this proves at least one person experiences qualia.
@Henry38hw Since we are unlikely to have a single, definitive qualia-meter, resolution will be based on a convergence of evidence for or against LLM qualia. The threshold for "reasonable evidence" is when explaining all the evidence away becomes a notably less parsimonious explanation than simply accepting the conclusion (in either direction). See the example of mitochondrial endosymbiosis in the description.
@SCS here is the confusing part:
Since you are a person, and you experience qualia, this proves at least one person experiences qualia.
... you experience qualia ...
You take it as a fact before it is proven or has any evidence toward it.
It looks like the Bible paradox: "The Bible is true because it is the word of God. It is the word of God because that fact is written in the Bible."
What is the point of the proof if it is self-referential?
@Henry38hw It's more through observation. For example, a market saying "there are no red apples" would resolve NO if I observe in my hand a red apple. Similarly, a market saying "at least one human experiences qualia" would resolve YES because I observe that I am experiencing qualia right now (e.g., able to perceive the red of an apple).
Let's say it's the year 2034, and we've unraveled the mysteries of consciousness.
Neuroscientists and cognitive philosophers agree that a) no LLM that existed in 2025 experienced qualia and b) the first AI to experience qualia was built in 2033 and it incorporated transformers as an integral part of its architecture.
Does this resolve yes?
@JustKevin Unlikely. Transformers are not sufficient. For example, a Llama-style model trained on fMRI data alone would not count. I would also expect larger architecture changes to exist by 2033. However, if what they trained in 2033 is clearly an LLM (e.g. Llama architecture) but with far more parameters, or merely a different optimizer, then that would count.
@SCS So to clarify, this question could resolve 'yes' even if it is demonstrated that no LLM that currently exists today experiences qualia?
@JustKevin technically possible, yes. But such an AI would need to be recognizable as an LLM by today's understanding.
A couple clarifying questions:
How will you test this objectively if qualia are, by definition, experienced subjectively? What criteria will you use to resolve YES or NO? How will outside observers verify your results?
Which LLMs will you test? Do more than one need to experience qualia, or will a positive result from any LLM resolve as YES?
Why is this question slated to close by end of year?
Why not just do this question as a poll?
- YES if there is a reasonable amount of evidence that LLMs experience qualia, NO if there is a reasonable amount of evidence that LLMs do not experience qualia.
- Outside observers will be able to verify the results via publicly available information, such as peer-reviewed studies or other reliable information (primary and secondary sources) available at the time.
- Though qualia is subjectively experienced, this doesn’t necessarily preclude reasoning about and doing science on qualia (as has already been done for both humans and AI).
- Resolution does not require absolute proof, but reasonable evidence. For example, the endosymbiosis theory of mitochondria origin would resolve YES. Panspermia or mammalian group selection would not resolve.
- Qualia can be any sort of qualia (perception of color, emotion, or even something humans cannot experience).
- The end of the year was the longest option that popped up when I created the question. I’ve adjusted it to 2035.
- I’m unlikely to be the one to personally test if LLMs experience qualia. The testing would almost certainly come from multiple outside sources. Any LLM shown to experience qualia counts. If some LLMs experience qualia and others do not, the question will resolve YES.
- AlphaZero would not count as an LLM. Reasoning models and large concept models would count as LLMs.
- I’d like to see how the probability changes over time and in response to new information, and I believe a question has more incentive for people to more actively update, whereas a poll might lean more towards measuring a single point of time.
@SCS But what's a "reasonable amount of evidence"? And how would we have it within the next few months? You cite some scientific concepts that seem to have an overall consensus for or against them now, but consensus on big questions in the scientific community takes decades or centuries to develop. The "phlogiston theory" of fire was gaining consensus in the 1690s and was replaced by the concept of oxygen-fueled fire over about 70 years. Eugenic, white-supremacist theories of race were similarly popular with scientists until they were debunked over several decades. But we're supposed to sort out whether a computer program can feel feelings by January? Even supposing some good papers get published by then, I doubt we can all agree which ones establish "reasonable evidence" for or against the proposition. I just don't think this is a question we can possibly answer with any objectivity.
@Sammytz9Ru Exactly what counts as a reasonable amount of evidence is hard to quantify, but there will almost certainly be (for example) others arguing for or against it in a serious way at some point, at labs or in academia, and we may see new and promising frameworks emerge that appear equipped to answer this sort of question.
I don’t think we’d have it in the next few months (that would be very surprising). I edited the close date to be at the end of 2035 (I may increase it further later).
To be clear, I won’t resolve this based on consensus alone; I’ll resolve it based on if I think there is a reasonable (we can be “pretty sure”) amount of evidence (likely via multiple avenues) in support or against. Consensus is one of multiple aspects that I would assess.
@SCS I think those are reasonable answers. At the very least, I think you ought to tag this as an Unranked market. Your criterion is inherently subjective (it resolves YES or NO depending on what you, personally, think counts as "reasonable evidence"). That's fine, and I love betting on and making markets like that, but I think that's what the Unranked tag is for.
@Sammytz9Ru sounds good! I've never heard of that, so I didn't know to add it. Is there a way to improve the resolution / make it more objective? I could base it on a single piece of evidence (e.g. consensus), but that might add additional baggage beyond the core question, e.g. P(qualia or not is determined) * P(consensus forms), and it would still be a bit subjective anyway.
@SCS I think the only simple way to make this into an objective question would be to choose an independent lab or org doing research on whether LLMs experience qualia and then make the criteria "YES if org determines one or more LLMs experience qualia, NO if they make the opposite finding or cannot reach a conclusive finding either way." That way the result can be objectively verified by the community without putting on a whole symposium of computer scientists and philosophers to make a decision. I don't know whether there is a single org or lab trying to test that specific question, though, and you would also have to be OK with that org's specific assumptions about consciousness and what it means to "experience" qualia.