If The United States Is Conscious, Then Why Not An LLM?
Last updated: 7/18/2025 | Originally published: 7/16/2025
One of Jean Baptiste Vérany’s chromolithographs of cephalopods
I recently realized I may be one of the only people on Earth to hold this philosophical position: LLMs have no conscious understanding, but they may have phenomenal consciousness. That may sound like a contradiction, but I mean something subtly different by each use of “conscious.”
In the former case, I’m referring to “natural language understanding” as defined in Bender and Koller’s “Climbing towards NLU” — a mapping from speech utterances to communicative intents, aided by a shared “standing meaning” for utterances. For instance, if I talk about “walking my dog,” you understand what a dog is (because you’ve built a mental model of “dog” by interacting with them in the world and noticing that other English speakers refer to them as “dogs”), and you understand what my dog is (because you’ve read me talking about it before), and you understand why I’m telling you about walking my dog (whether that’s to explain why I was late to a meeting or, in this case, as a hypothetical example).
Large language models by definition cannot have this form of understanding, because they are traditionally trained on text and only text. They can learn word2vec-style associations between words — that a queen is the female equivalent of a king, or that Rooibos is the name of Russell's dog — which, at the scale of "everything ever written," enables LLMs to fake a comprehensive world model. But though an LLM can "understand" a dog as a bundle of trained associations with other terms, it has no way to connect that bundle to a real dog, or to connect the associated terms to their referents, and so on. That is essentially what Bender and her coauthors later called a stochastic parrot (in "On the Dangers of Stochastic Parrots"): a system that can convincingly mimic having a mental model of the world, by producing coherent English-language responses, yet has no grounded understanding of the real world and so cannot have communicative intents.
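To make that concrete, here is a small sketch of what those purely distributional associations look like. It uses pretrained GloVe vectors loaded through the gensim library; the particular model name and word pairs are just illustrative choices, not anything from the argument above. The point is only that analogies like "queen is to king as woman is to man" fall out of co-occurrence statistics over text, with no referent anywhere in the loop.

```python
# Sketch: word-vector "associations" learned purely from text co-occurrence.
# Assumes the gensim library and its downloadable "glove-wiki-gigaword-50"
# vectors; any pretrained word2vec/GloVe model would illustrate the same point.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads on first run

# The classic analogy: king - man + woman ≈ queen.
# The model has never encountered a monarch; it has only seen words near other words.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Similarity scores are likewise just statistics over text, not grounding.
print(vectors.similarity("dog", "leash"))
print(vectors.similarity("dog", "algebra"))
```

Everything the model "knows" about dogs here is a pattern of which words tend to appear near which other words; nothing ties the token "dog" to any actual animal.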
In the latter case, I’m referring to “phenomenal consciousness” — the “What Is It Like To Be A Bat?”-ness of experience, that there is something it is like to experience reality from a particular viewpoint. When I see a red door, there’s some difficult-to-define sense in which I know I’m seeing the color red, and I can introspect on that experience if I need to.
Large language models may exhibit phenomenal consciousness in this sense. Consider my all-time-favorite philosophical paper, Eric Schwitzgebel’s “If Materialism Is True, the United States Is Probably Conscious”. It argues for the proposition in the title — that if strict materialism is true, then the United States may be, and in fact probably is, phenomenally conscious in the above sense, that there is “something it is like” to be the United States. If the United States is conscious, then why not an LLM?
Notice that these two definitions are orthogonal. Hypothetically, a system could have conscious understanding and intent without any phenomenal experience — a p-zombie. On the other hand, we could imagine a system that has phenomenal consciousness with no grounding in the real world, no communicative intents, and no natural-language understanding. I’m one of the few people who would argue an LLM might belong in the latter camp.
Reply by email!