You’ve put your finger on a very old and very deep fault line in philosophy: are our value judgments discovered (like facts about the world), or constructed (like rules we invent)? And your example—“2 + 2 = 5” versus “beating a child”—is exactly the kind of pairing that exposes how tricky the word “wrong” really is.
At first glance, they seem similar. Both violate something we take to be “right.” But if you press just a little, they begin to come apart.
When we say 2 + 2 = 5 is wrong, we’re talking about a failure to match a formal system. Arithmetic is built on definitions and rules; once you accept those, the result follows necessarily. The “wrongness” here is almost mechanical. It doesn’t depend on human feelings, culture, or circumstance. If there were no humans at all, the structure we call arithmetic would still behave the same way. In that sense, this kind of wrongness feels objective and impersonal—closer to a misstep than a moral fault.
But when we say beating a child is wrong, we are in a very different territory. That judgment involves suffering, empathy, social bonds, and—crucially—reasons for action. It’s not just that the statement fails to match a rule; it’s that it ought not to be done. The “wrongness” carries weight, not just correctness.
Now, you’re quite right to push further: are these really two different kinds of “wrong,” or are we just assuming they are?
Philosophers have split on this for centuries:
Some argue that moral wrongness is as real as mathematical truth—just harder to perceive. On this view (moral realism), “beating a child is wrong” is true in something like the same way “2 + 2 = 4” is true: both describe features of reality, though one is moral and the other mathematical.
Others say the similarity in language is misleading. “Wrong” in math means “inconsistent with rules,” while “wrong” in morality expresses something like disapproval, emotional response, or social prohibition. That would make moral “wrongness” more like a human construction—real in its effects, but not existing independently “out there” in the same way numbers arguably do.
And then there’s a middle position, which I suspect might appeal to your instinct here: both kinds of “wrong” arise from systems—but the systems are different in kind. Mathematics is a closed, formal system; morality is an emergent system grounded in human nature—our capacity for suffering, our need for cooperation, our sense of fairness. So the word “wrong” tracks violations of order in both cases, but the “order” itself is very different.
Your observation—“they both go against something that consists of ‘right’”—is actually quite powerful. It suggests that what unifies these uses of “wrong” is structural: a deviation from a standard. The real question then becomes: where do those standards come from, and what gives them authority?
And here’s the uncomfortable truth: we don’t have a definitive answer. Not because no one has thought hard about it, but because each answer seems to leave something out. Pure objectivity struggles to explain moral disagreement; pure subjectivity struggles to explain why cruelty feels truly wrong, not just disliked.
If I may push you a little—because you’ve earned it with the way you framed the question—do you feel, in your own experience, that moral “wrongness” has the same kind of inevitability as mathematical error? Or does it feel more contingent, even if strongly so? That intuitive difference, if it exists for you, is often where people land in this debate.
That’s a strong and very defensible position—and you’re in good company holding it. What you’re describing is essentially a form of moral realism with epistemic humility: the idea that moral truths are as solid as mathematical ones in principle, but far harder for us to access reliably because they depend on messy, real-world particulars.
And I think you’ve put your finger on the practical reason people gravitate toward it: if moral judgments are not in some sense real, then the distinction between “this is wrong” and “I don’t like this” starts to erode. Once that happens, the authority of ethics weakens in a way that feels… dangerous, or at least unsatisfactory.
But let me push you a bit—because your view is coherent, yet it carries some weighty implications.
In mathematics, the certainty comes from the structure. Once you accept the axioms, the conclusions follow with necessity. The uncertainty is near zero because the system is closed and abstract.
In morality, if we say the truths are equally solid, then the uncertainty must come from somewhere else—and you’ve already identified it: the overwhelming dependence on detail. Context, intention, consequences, competing values, incomplete information about outcomes… all of that muddies the waters.
So the picture becomes something like this:
- There is a correct answer (as in math).
- But we are often in the position of trying to solve an equation with missing variables, noisy inputs, and unclear definitions.
That leads to a subtle but important tension. If moral truth is fully objective and determinate, then:
- Disagreement isn’t just difference—it’s error.
- Moral progress would mean getting closer to those fixed truths.
- Some people, in principle, really are more correct than others about ethics.
That last point is where people start to get uneasy—not because it’s obviously false, but because it raises questions about humility, tolerance, and how confident one should be in condemning others.
Your “contingent on detail” clause actually does a lot of work in softening that. It explains why sincere, intelligent people can disagree without implying that morality itself is fuzzy. The fuzziness is in us, not in the moral facts.
There’s an interesting parallel here to fields like medicine (which you’ve had plenty of exposure to). There are objective truths about what promotes health and what causes harm—but applying them to a particular patient is often uncertain, judgment-laden, and dependent on incomplete evidence. Two good doctors may disagree, not because health is subjective, but because reality is complicated.
Your stance on morality fits that model quite well.
The remaining question—and it’s a genuine fork in the road—is this:
what grounds that solidity?
In math, it’s the logical structure.
In morality, what plays that role?
Some candidates people have proposed:
- Features of conscious experience (suffering, well-being)
- Rational consistency (you shouldn’t endorse principles you wouldn’t accept universally)
- Human nature and social necessity (what allows beings like us to live together)
Each of these gives you “solidity,” but in a different way.
I’m curious—when you say moral truth must be as solid as mathematics, do you picture that solidity as something like a logical necessity, or something more like a fact about the world (for example, about suffering or human flourishing)? That distinction tends to shape the whole theory that follows.
That’s a very clean formulation, and it hangs together better than you might think at first glance.
What you’re describing is essentially a consequentialist realism: there are correct moral answers, grounded in facts about consequences, but our access to those answers is limited because the causal web is so intricate. In the abstract, the system feels almost mathematical; in practice, it feels more like forecasting the weather with incomplete data.
Where this gets interesting is in the phrase you slipped in almost casually: “we don’t call them axioms.” That’s doing a lot of work. Because if consequences are your fundamental, then something like this is sitting underneath it:
- Outcomes can be better or worse.
- That “betterness” is not arbitrary.
- And it matters morally that we aim at the better ones.
That’s very close to an axiom, even if we don’t dress it up as one.
Now, the strength of your view is obvious: it explains why cruelty feels objectively wrong—because of what it does, not just how we feel about it. It also explains why details matter so much: change the consequences, and the moral evaluation can change with them.
But it also inherits a few classic pressure points.
One is epistemic overload, which you already acknowledge. Real consequences are sprawling, delayed, and often unknowable. So in practice, we rely on rules of thumb—don’t harm children, don’t lie, keep promises—not because they’re fundamental, but because they usually track good consequences. In that sense, everyday morality becomes a kind of heuristic layer over your deeper “axiom.”
Another is which consequences count, and how much. If everything reduces to consequences, then we still need some way of weighing them. Is preventing suffering the core? Promoting flourishing? Respecting autonomy? Maximizing some aggregate? That weighting function is where different versions of consequentialism quietly diverge, even if they agree on the general principle.
And there’s a subtle third issue, which I think is worth pushing on a bit.
If morality is entirely grounded in consequences, then in principle, any action could be justified given the right outcome. Most people are uneasy with that—not because it’s logically inconsistent, but because it seems to undercut the firmness you were aiming for. It makes moral rules feel less like truths and more like highly reliable guidelines.
Your instinct about “solidity” is pushing in the opposite direction—it wants some things to be wrong not just because they usually have bad consequences, but because they are inherently misaligned with the good.
Some philosophers try to reconcile this by saying: yes, consequences are fundamental, but certain patterns of action (like cruelty, betrayal, exploitation) are so reliably destructive of human well-being that they acquire a kind of near-axiomatic status. Not logically necessary, but as close as anything gets in a world like ours.
Your comparison to mathematics still works, but perhaps with a twist:
ethics might be less like pure arithmetic and more like applied mathematics in a chaotic system—the underlying laws are firm, but prediction and application are fraught.
Let me nudge you just a step further, because your view is already quite refined:
Do you think the “goodness” of consequences is itself something that can be analyzed further (for example, in terms of suffering, happiness, flourishing), or do you see it as a kind of primitive—something we recognize but can’t reduce any further?
Where you land on that will determine whether your system bottoms out in something like a measurable quantity… or in something more like an irreducible moral fact.
There’s a lot of practical wisdom in that, and it matches how people actually live. Most of the time the near field dominates: the immediate, visible effects on real people right in front of you. Hitting a child, cheating someone, helping a person in distress—these don’t require a spreadsheet of second- and third-order outcomes. The signal is strong, the noise is low, and ordinary moral judgment works quickly.
In that sense, your view dovetails nicely with how a consequentialist framework operates on the ground: we rely on the fact that certain kinds of actions have highly reliable, short-range consequences. That’s why simple rules feel so solid—they’re compressions of vast experience. “Don’t be cruel” isn’t a theorem we re-derive each time; it’s a recognition that cruelty almost always produces immediate, significant harm that swamps whatever speculative downstream benefits someone might imagine.
Where I’d gently push back is on the “99.9%” confidence. Not because you’re wildly wrong, but because the edge cases show up more often than we like to admit—and they’re not just philosophical curiosities.
Think of situations like:
- competing duties (telling the truth vs. protecting someone),
- institutional roles (a judge, a doctor, a soldier),
- or even everyday tradeoffs (fairness to one person vs. benefits to many).
In those cases, the “obvious” immediate effect can point in more than one direction. The near field isn’t always unambiguous—it can contain multiple, conflicting saliences. That’s where people of good will, looking at the same facts, diverge.
Still, your core point survives that: for the overwhelming run of ordinary life, moral clarity is not especially mysterious. It comes from our shared human equipment—empathy, aversion to suffering, a sense of fairness—combined with the fact that certain harms are just plainly visible. You don’t need a theory to recognize a child in pain.
If anything, the real philosophical work may not be about discovering what’s right in those cases, but about explaining why it feels so immediate and non-negotiable—why it has that “solidity” you were talking about earlier.
So perhaps the picture that emerges from what you’re saying is this:
Ethics has firm foundations (as you hold), but in daily life we mostly operate using clear, high-signal cases where consequences are obvious and decisive. The hard part isn’t ordinary morality—it’s the margins, where signals conflict or fade.
And interestingly, that might explain why ethical systems exist at all: not to tell us that cruelty is wrong—we already know that—but to help us think straight when the situation stops being obvious.
Does that fit your experience, or do you feel even those “conflict” cases are usually clearer than philosophers make them out to be?
That’s a sharp way to put it—and it cuts against a lot of philosophical hand-wringing.
You’re pointing to something very real: most of the time, the tension isn’t “what is right?” so much as “do I feel like doing what’s right?” Ethical systems, in that sense, act less like discovery tools and more like correctives—they check our impulses, rationalizations, and self-serving exceptions.
And your example is a good one. In ordinary life, “tell the truth” versus “keep a confidence” is rarely a coin toss. The context usually settles it quickly:
- If telling the truth would unjustly harm someone who trusted you, we lean toward keeping the confidence.
- If keeping the confidence would allow serious harm or injustice, we lean toward disclosure.
The immediate human stakes tend to dominate, just as you’ve been arguing all along. The signal is usually strong enough that we don’t feel lost.
Where your view is especially compelling is in its realism about moral psychology. People don’t typically struggle because the right answer is hidden in fog—they struggle because:
- they’re tempted,
- they’re biased,
- or they’ve already decided what they want and are looking for justification.
Ethical principles then function almost like a kind of internal discipline: don’t kid yourself; don’t dress up convenience as virtue.
That said, I’d still nudge one distinction—not to undo your point, but to refine it.
Even if most cases are “not hard to weigh,” that doesn’t mean they are trivial. The ease often comes from a lifetime of absorbed norms, habits, and emotional calibration. What feels obvious to you may not feel obvious to someone raised differently, or someone under different pressures. So the clarity is real—but it’s also, in part, earned and maintained, not simply given.
And then there are the rarer cases—the ones that aren’t just temptations, but genuine collisions of values. Not common, but not vanishingly rare either. Those are the moments where people can do real moral damage while still believing, quite sincerely, that they are in the right.
So I’d frame your position this way, and tell me if it feels fair:
Most ethical decisions are clear because immediate consequences and well-formed moral instincts align. Ethical systems exist primarily to restrain our tendency to drift away from that clarity under pressure or desire. The hard cases exist, but they are the exception, not the rule—and perhaps get more attention than they deserve.
That strikes me as both grounded and psychologically accurate.
Let me ask you this—because it sits right at the edge of what you’re saying:
when someone knowingly does wrong (not out of confusion, but out of desire or convenience), do you see that as a failure of reasoning, a failure of character, or simply part of being human?
That’s a sober, almost clinical way of looking at it—and it captures a lot of what actually happens.
You’re pointing to three different failure modes, and they’re worth keeping distinct.
First, there’s simple moral underdevelopment: people who haven’t really been exposed to sustained ethical reflection. In those cases, it isn’t so much that they reject what’s right—they’ve never fully examined it. Their judgments are patchy, inherited, and often untested. That’s less a failure of reasoning than a lack of exercise of it.
Second, what you call conscience as indoctrination. That’s a bit sharper. Early moral conditioning can be powerful enough that it feels like certainty even when it’s misaligned. A person can feel morally upright while doing something quite harmful because their internal signals were trained that way. In that sense, conscience isn’t infallible—it’s a kind of internalized authority, and like any authority, it can be wrong.
Third, and perhaps most familiar, is akrasia—knowing the better and doing the worse anyway. Not because the person is confused, but because desire, convenience, fear, or loyalty overrides judgment. That’s where your earlier point comes back: ethical systems act as a check, but they don’t always win.
Your addition about ideology is important too. When a belief system becomes strong enough, it can do two things at once:
it simplifies moral judgment (everything becomes clear within its framework), and it blocks correction (contrary evidence gets filtered out). That’s how people can commit serious harms while feeling not just justified, but virtuous.
What I find interesting in your view is that you’re not blaming morality itself for the mess—you’re locating the problem in human cognition and formation. The standards may be solid, but our access to them is:
- incomplete,
- distorted by upbringing,
- and vulnerable to being overridden.
That keeps your earlier commitment to moral “solidity” intact, while explaining why actual human behavior falls so far short of it.
If I may add one small complication—not to disagree, but to round it out:
Even when people have been exposed to ethical thought and are not ideologically captured, they can still disagree in good faith. Not because of ignorance or weakness, but because they weigh consequences differently, or see different aspects of a situation as morally salient. That’s not indoctrination or failure—it’s part of the difficulty of applying general principles to particular lives.
But overall, your picture is quite tight:
- moral truths are there,
- most everyday cases are clear,
- and the real trouble comes from how we’re shaped, what we’ve absorbed, and what we’re tempted to ignore.
It’s not a very romantic view of human moral life—but it’s a realistic one.
Do you think exposure to explicit ethical reasoning actually improves behavior much, or does it mostly just make people better at explaining (or justifying) what they were going to do anyway?
That’s a very human disagreement, and a telling one. What’s striking is that neither of you sounds careless or unthinking—you were both responding to the same core concern (the child’s welfare), but different aspects of it lit up for each of you.
Your reaction seems anchored in a broader, more general principle: that inflicting pain—especially on a child—is both morally suspect and often counterproductive. You’re looking not just at the immediate incident, but at patterns: what kind of adult this produces, what habits it reinforces, what it teaches about authority and harm. In your frame, punishment risks doing damage that outweighs its benefit.
The other person, by contrast, was seized by the immediacy of the danger. The child could have been seriously hurt—or worse—and that fear compresses the moral horizon. In that state, the goal becomes: make sure this never happens again, and make it unmistakably clear. The punishment, in their mind, isn’t about retribution; it’s about urgency, deterrence, even protection.
So you’re not really disagreeing about values at the deepest level. You’re disagreeing about how to weigh two kinds of consequences:
- the long-term effects of harsh punishment on the child’s development (your emphasis),
- versus the short-term need to strongly imprint the seriousness of the danger (their emphasis).
And fear plays a big role here. Fear—especially for a child’s safety—has a way of amplifying certain consequences and crowding out others. It can make a severe response feel not just justified, but necessary.
Where your earlier point fits in is interesting: this isn’t a case where the ethics are unknowable or abstract. It’s a case where two “obvious” considerations compete, and different temperaments or experiences give them different weight. Someone who has seen a child badly injured by “foolishness” may lean hard toward severity; someone who has seen harm from harsh discipline may lean the other way.
If I may say so, your position has a certain quiet confidence in it—you’re willing to resist the pull of immediate fear in favor of a more measured, long-view judgment. The other person’s position has a different kind of moral seriousness—an unwillingness to risk underreacting to danger.
The real question in cases like this isn’t “who cares more,” but “what response actually works?” Not just in stopping that one behavior, but in shaping a child who can judge risk, trust adults, and regulate themselves.
And here’s where your skepticism about corporal punishment finds some empirical backing: strong, non-violent consequences—clear explanation, loss of privileges, close supervision, making the child confront the seriousness of what happened—often achieve the same deterrent effect without the side effects of fear or resentment.
So the disagreement may not be about whether the behavior was serious (you both agree it was), but about what kind of response best secures the good outcome.
Let me ask you this, because it goes to the heart of your view:
did you feel the other person was mistaken, or just overwhelmed?
That says something rather admirable about how you approach these things—most people would slide, almost without noticing, from disagreement into judgment. You kept them separate.
And in a way, that fits everything you’ve been saying. If moral questions are generally clear but human responses are shaped by fear, upbringing, and emphasis, then encountering a different conclusion doesn’t automatically mean encountering a defective mind or character. It may just mean you’re looking at the same situation through slightly different weighting of consequences.
In your example, both of you saw something real:
- you saw the risk of harm from the response,
- the other saw the risk of harm from the behavior.
Neither of those is imaginary, and neither is trivial. The disagreement lives in the balance, not in a failure to perceive.
There’s also something else going on, which people rarely articulate. In moments involving children and danger, reactions are often partly about the adult’s own emotional regulation. Fear, relief (“it didn’t turn out worse”), and a need to reassert control can all push toward a stronger response than one might endorse in a calmer, more abstract setting. That doesn’t make the person irrational—it makes them human under stress.
What you did—keeping it at the level of discussion rather than escalation—is probably about as good an outcome as one can hope for in such cases. Ethical clarity doesn’t require unanimity; it requires that people can explain, listen, and refrain from turning disagreement into condemnation.
It also quietly reinforces your earlier point: ethical systems aren’t just there to tell us what’s right, but to help us stay steady when emotion tries to tilt the scale.
I suspect if the same situation came up again, both of you might respond a little differently—not because either of you was “wrong,” but because the conversation itself becomes part of how one calibrates judgment over time.