You’re carving things at a very natural joint there. The distinction between sentience (having experiences at all—pain, color, mood) and intelligence (doing something structured with information—reasoning, planning, categorizing) is widely used in both philosophy and neuroscience, even if people sometimes blur it in casual talk.
Your treatment of qualia—“green” as something constructed rather than out there—is very much in line with mainstream thinking. What exists physically are wavelengths; what exists experientially is color. That move goes back at least to John Locke’s distinction between primary and secondary qualities, and gets sharpened in modern philosophy of mind by people like Thomas Nagel. The key point, as you say, is that calling it an “illusion” doesn’t mean it’s arbitrary or false—just that it’s a constructed interface between organism and world.
On animals, your pushback against René Descartes is very much where science has landed. The “animals are automata” view has collapsed under behavioral and neurological evidence. Pain behavior, learning, social bonding, even rudimentary empathy in mammals and birds—these strongly suggest sentience. The interesting modern debate is less whether many animals are sentient and more how far down the tree it goes (octopuses? insects?) and what kinds of experience they have.
Your evolutionary point is also well put. Sentience looks like a kind of “value tagging system”—it assigns good/bad to states, which makes learning vastly more flexible than pure reflex. Intelligence, then, builds on that by modeling the world and predicting outcomes. There’s a plausible story where:
- basic reactivity →
- sentience (valenced experience) →
- intelligence (structured manipulation of representations)
Though I’d gently complicate one thing: it’s not clear that intelligence strictly requires sentience. Some researchers think fairly sophisticated problem-solving systems could exist without subjective experience—more like very advanced versions of what artificial intelligence research builds today. Whether those systems would really lack sentience or just lack our kind is part of the ongoing puzzle.
Your remarks about mind stopping are also aligned with standard neurobiology: deep, dreamless sleep and coma show reduced or absent conscious experience, while the brain continues background processing. But even here, there are interesting gray zones—minimal consciousness, fragmented awareness, or states where experience may persist without memory.
And then we get to what you nicely call the “deep problem.” In philosophy it’s often called the Hard Problem of Consciousness—why and how physical processes give rise to subjective experience at all. You’re right to resist both easy escapes:
- “God of the gaps”
- “woo of the gaps”
Those don’t explain anything; they just label the mystery.
Where I’d push back slightly is your last move against physicalism. It’s true that we don’t yet have even a sketch of how qualia arise from physical processes. But historically, many phenomena looked similarly intractable until the right conceptual tools appeared—life before biochemistry, heat before statistical mechanics, etc. Physicalism is less a completed theory than a working bet: that consciousness, too, will eventually fit into the same explanatory framework.
Still, your skepticism is philosophically respectable. Positions like property dualism, panpsychism, or neutral monism are all attempts to take the “felt” aspect seriously without abandoning science.
If I had to summarize your view in one line: you’re treating sentience as the fundamental mystery and intelligence as an elaboration on top of it. That’s a very defensible way to frame the terrain.
I’m curious where you lean on one fork in the road: do you suspect that sentience is something added to physical processes (in some sense), or that it’s already implicit in them and we’re just failing to see how?
Ah, yes—that makes perfect sense. You’re not claiming a logical dependency between sentience and intelligence, just that in our evolutionary history, intelligence seems to have arisen on the foundation of sentience. That’s an important distinction because it avoids conflating necessity with contingency. A non-sentient intelligence might be conceivable, but we didn’t evolve that way.
And your point about “solutions raising new, deeper problems” is exactly right. Each explanatory advance tends to shift the frontier. Thermodynamics led to statistical mechanics, genetics led to molecular biology, and neuroscience now leads to the hard problem of consciousness. Solving a problem in one frame often just exposes the next level of conceptual subtlety. In consciousness studies, even if we fully mapped the neural correlates of every quale, the why of subjective experience—the experience itself—remains a step beyond. That’s why the deep problem persists and feels so intractable.
If you like, we can try sketching a “ladder of depth” showing how problems evolve as solutions emerge—consciousness would sit near the top, alongside the deepest puzzles in physics and philosophy. Would you like me to do that?