There’s a certain nobility in the proverb—keep getting up, endure, persist—but you’ve neatly punctured the romance of it. “Fall seven times and stand up eight” assumes the falls are survivable and repeatable. At a certain point in life, that’s not a safe assumption; one bad fall can be the end of independence, or worse.
Your instinct—install the grab bars—is the more intelligent version of resilience. It’s not about heroic recovery; it’s about reducing the number of times you have to recover at all. A quiet kind of wisdom, really. Less poetry, more engineering.
The proverb belongs to youth, when the body is forgiving. The grab bar belongs to experience, when the body has begun keeping score.
You’re putting your finger on a quiet bias: we treat “prevention” as if it’s age-restricted, when really it’s risk management across a lifetime.
With medications like statins or metformin, there are real tradeoffs—side effects, interactions, diminishing returns at low risk—so medicine quite properly asks, “who benefits enough to justify this?” But the logic doesn’t carry over to the physical environment. A grab bar, good lighting, non-slip flooring—these don’t tax the liver or complicate a medication list. They just sit there, reducing the probability of a bad outcome.
The interesting part is psychological, not medical. Young people tend to optimize for aesthetics and a sense of invulnerability. Safety features feel like an admission of frailty—something for “later.” But statistically, falls, slips, and household accidents don’t wait for old age. Alcohol, fatigue, distraction, rushing—those are young-person risk multipliers.
There’s also a design failure embedded in this. We tend to make safety features look clinical, even institutional, which reinforces the stigma. Yet well-designed environments can incorporate safety invisibly: textured tiles that don’t announce themselves, lighting that simply feels pleasant but prevents missteps, rails that read as architectural rather than medical.
So you’re right to separate the categories. Preventive medicine requires calibration. Preventive design is closer to common sense—and arguably underused at every age.
If anything, the real mistake isn’t that young people don’t need these things. It’s that we frame them as optional rather than baseline.
That nurse was using “normal” in the statistical sense—common for people your age—and you were hearing it in the clinical sense—desirable or safe. Those are not the same thing, and in blood pressure they often diverge.
Modern guidance has moved pretty decisively away from age-adjusted complacency. In frameworks like those from the American College of Cardiology and the American Heart Association, a systolic of 140 mmHg sits in the hypertension range, regardless of age. It’s common, yes; it’s also associated with higher risk of stroke, heart attack, kidney disease, and cognitive decline over time.
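Those cut-points can be written down as a tiny classifier. This is only a sketch of the 2017 ACC/AHA categories; real diagnosis rests on repeated, properly taken measurements, not a single office reading:

```python
def bp_category(systolic: int, diastolic: int) -> str:
    """Classify one reading against the 2017 ACC/AHA cut-points.

    A sketch only: actual diagnosis uses multiple readings taken
    under controlled conditions, plus clinical judgment.
    """
    if systolic >= 140 or diastolic >= 90:
        return "hypertension stage 2"
    if systolic >= 130 or diastolic >= 80:
        return "hypertension stage 1"
    if systolic >= 120:
        return "elevated"
    return "normal"

# A systolic of 140 lands in the hypertension range regardless of age.
print(bp_category(140, 85))  # hypertension stage 2
```

Note that the categories use "or" logic: either number alone can push a reading into a higher category, which is why "my diastolic is fine" isn't reassuring on its own.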
Where your triage nurse wasn’t entirely off base is in the treatment decision. Whether to medicate isn’t based on a single reading or even the number alone. It depends on the broader picture: repeat measurements, home or ambulatory readings, overall cardiovascular risk, other conditions, and how well lifestyle measures are working. In some older adults—especially if there’s frailty, fall risk, or sensitivity to medications—clinicians may accept a higher target to avoid dizziness and falls. That’s a tradeoff, not a declaration that 140 is “good.”
There’s a nice parallel to your grab-bar point. Calling something “normal” because it’s common can lull people into inaction. But good care, like good design, is about reducing avoidable risk where you can do so safely. With blood pressure, that might mean tightening control if you tolerate it well; with the home, it means making slips less likely in the first place.
So your instinct is sound: don’t let “normal for your age” quietly become “nothing to think about.” It’s a prompt to look more carefully, not a reason to stop looking.
Keeping systolic pressure below 110 is quite a bit tighter than most guideline targets, so “overkill” is a fair question—but the real issue isn’t the number by itself, it’s how you tolerate that level and what your overall risk picture looks like.
Large trials like SPRINT (the Systolic Blood Pressure Intervention Trial) pushed treatment toward lower targets (roughly <120 mmHg systolic) and did show fewer cardiovascular events in higher-risk adults. But even in those studies, there were tradeoffs: more episodes of symptomatic low blood pressure, electrolyte disturbances, and changes in kidney function. The benefit came from careful monitoring, not just “lower is always better.”
That’s where your situation becomes very individualized. If you’re consistently below 110, feel steady on your feet, aren’t getting dizzy when standing, and your labs are stable, then your doctor may reasonably judge that the balance works in your favor. On the other hand, if there’s any lightheadedness, near-falls, or kidney markers drifting, that very low target can quietly become counterproductive—especially given your already increased fall risk.
Your point about insurance is interesting, but I’d nudge back a bit: the limits in guidelines aren’t mainly there to save money; they’re there because physiology pushes back. Past a certain point, lowering pressure further doesn’t buy much additional protection and may introduce new risks. Biology, not bureaucracy, is usually the constraint.
So the sensible framing isn’t “as low as possible,” but “as low as is usefully safe for you.” You’re already doing the right thing by keeping it physician-guided. The practical question worth revisiting from time to time is simple: am I getting benefit without paying a price I can’t see yet?
You’re right about the underlying tension: left unconstrained, demand for healthcare will run far beyond what any system can supply. Where I’d gently push back is on the idea that the only sensible brake is “doctor, not insurer.”
Doctors absolutely should be the ones making clinical judgments for the individual in front of them. But clinicians aren’t naturally positioned to set system-wide limits on their own. They’re advocates for the patient in the room—which is exactly what you want—but that advocacy can drift toward doing “one more thing” because the marginal cost isn’t theirs to bear. Scale that across millions of encounters and you do, in fact, get something close to infinite demand.
That’s why most functioning systems end up with a layered approach. Bodies like the UK’s National Institute for Health and Care Excellence or the U.S. Preventive Services Task Force try to draw evidence-based lines about what’s worth doing at a population level. They’re not treating you; they’re setting a baseline so that care is at least broadly rational and equitable. Insurers, for better or worse, enforce some of those boundaries financially. Governments do it through budgets and coverage rules.
The risk on the insurer side is obvious—you get blunt instruments, denials, and sometimes perverse incentives. The risk on the “doctor alone decides” side is quieter but just as real: variation, overuse, and inequity depending on who you happen to see and how assertive you are.
Your own situation is a nice microcosm. Keeping your systolic under 110 might be perfectly reasonable for you, under supervision, with no side effects. But if that became a blanket standard for everyone, it would likely lead to more harm than benefit in many people. So the system has to say, in effect, “this is generally enough,” while leaving room for a physician to say, “this particular patient is different.”
So I’d frame it less as doctor versus insurer, and more as: doctors should control individual care within guardrails set by evidence and resource realities. None of the players get it perfectly right, but when one dominates completely, things tend to go off the rails in a different direction.
You’ve boxed the problem neatly into its corner: every single decision-maker has a built-in bias.
The insurer rations with a financial lens.
The doctor advocates with a patient-in-front-of-me lens.
The bureaucrat standardizes with a population lens.
None of those is “appropriate” on its own—and yet you can’t run a system without all three perspectives showing up somewhere.
What actually works, imperfectly, is tension. You want guidelines shaped by evidence (the population view), clinicians who can deviate when the individual case warrants it (the bedside view), and some mechanism that prevents the whole thing from drifting into unlimited spending (the resource view). When those forces check each other, you get something that is at least stable, even if never fully satisfying.
The danger is when one voice becomes dominant. A purely insurer-driven system becomes stingy and adversarial. A purely physician-driven system tends toward overuse and inconsistency. A purely bureaucratic system can become rigid and indifferent to edge cases. Each failure mode is well documented.
Your earlier examples—blood pressure targets, preventive measures like grab bars—fit right into this. The guideline might say “<120 is enough for most.” The doctor might say, “for you, we can safely go lower.” The system quietly hopes that not everyone pushes to the extreme, because the aggregate cost (and risk) would rise.
So the uncomfortable conclusion is: there isn’t a single “appropriate” authority. There’s only a balance of imperfect ones, each correcting the others’ blind spots. It’s less elegant than a clean principle, but closer to how complex systems actually hold together.
That kind of freedom cuts both ways. It removes a lot of friction—which can be a real advantage—but it also quietly shifts more responsibility onto you to decide where the sensible limits are.
With prescription drugs in particular, the guardrails in places like the U.S. aren’t only about insurers saying “no.” Agencies such as the U.S. Food and Drug Administration set what can be marketed at all, and clinical guidelines from groups like the American Heart Association try to define when use is likely to help more than harm. Those layers are imperfect, but they exist because the downside risks are often subtle and delayed—kidney strain, electrolyte shifts, interactions, cumulative effects that don’t announce themselves right away.
Where your situation becomes interesting is that you can, in effect, operate a bit like your own health system. That works well if you keep some discipline around it: regular monitoring, sticking to drugs with a clear rationale, and being willing to pull back if something drifts the wrong way. It sounds like you already do that with physician involvement, which matters more than the regulatory environment.
I’d just nudge one caution. The same instinct that leads you to install grab bars—reduce risk before it shows up—can, if applied too aggressively to medication, drift into “more intervention must be better.” Biology doesn’t always reward that. Sometimes the safest system is the one with fewer moving parts.
So your advantage isn’t just access; it’s the ability to choose selectively. Used that way, it’s a strong position. Used indiscriminately, it can become its own kind of risk.
That pattern you’re seeing—minimal explanation unless the patient signals they want it—is fairly common in places where clinicians are managing language gaps, time pressure, and a wide range of health literacy. It works passably for some patients, but it does mean the burden shifts to people like you to ask and verify.
Your approach—ask directly, cross-check, and take clear risks seriously—is the right instinct. The only thing I’d add is a bit of structure so you’re not relying on memory or on whoever happens to be on duty that day. When you’re handed medications, it’s worth getting three basics nailed down every time: the name, the dose, and the purpose. If those aren’t volunteered, asking for the packaging or having it written down can make a big difference later. Even a quick photo of the box or blister pack gives you something to verify at home.
With drugs like ibuprofen, your caution is well placed. It’s sold over the counter in many countries, which gives a false sense of harmlessness, but the real risks (stomach bleeding, kidney strain, effects on blood pressure) show up with dose, duration, and individual susceptibility. That’s exactly the kind of medication where knowing what you’re taking (and how much) matters more than the label “common.”
One subtle risk in a system like this is duplication or interaction—two different pills that turn out to be in the same class, or a combination that nudges blood pressure, electrolytes, or kidney function in the wrong direction. That’s where a simple running list—kept by you—becomes more reliable than any single encounter.
You’ve already carved out a good middle path: respect the clinician’s experience, but don’t outsource understanding. In an environment that defaults to opacity, that’s about as close as you can get to informed consent in practice.
You’re doing something quite sensible there—building your own “information layer” to compensate for a system that doesn’t naturally provide one. Having an independent source that flags interactions, side effects, and dosing ranges is exactly the kind of guardrail that travels well across borders.
On naming, you’re in good company. Using generics (the international nonproprietary names) is the cleanest way to avoid confusion, especially when brands vary by country. There actually are rules in many places that try to make the generic prominent—regulators like the U.S. Food and Drug Administration and the European Medicines Agency require the nonproprietary name to appear on labeling, often with standardized strength and form. But you’re right that in practice branding can still dominate the visual space, and that’s where mistakes creep in.
Your idea—“generic name first, then manufacturer”—is essentially how pharmacists and formularies already think, just not always how packaging is presented to patients. A stricter, more uniform display (generic name large and central, manufacturer secondary) would probably reduce duplication and class-confusion errors, especially in settings where patients receive unlabeled or repackaged pills.
One small refinement: it helps to track not just the generic name, but also the dose, formulation, and maker. Bioequivalence standards exist, but they allow a range, and different manufacturers can use different fillers or release profiles. Most of the time that doesn’t matter; occasionally it does—particularly with drugs that have a narrow therapeutic window or specific absorption quirks.
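As a sketch of what such a running list could look like, here is a minimal log with a same-class duplication check. The drug-to-class mapping is a tiny illustrative sample, not a pharmacological reference; a real list would come from a drug database or a pharmacist:

```python
# Illustrative sample only -- not a pharmacological reference.
DRUG_CLASS = {
    "ibuprofen": "NSAID",
    "naproxen": "NSAID",
    "lisinopril": "ACE inhibitor",
}

def add_entry(log, generic, dose, form, maker):
    """Append one entry (generic name, dose, formulation, maker)
    and return any same-class entries already in the log."""
    new_class = DRUG_CLASS.get(generic.lower())
    duplicates = [e for e in log
                  if new_class and DRUG_CLASS.get(e["generic"].lower()) == new_class]
    log.append({"generic": generic, "dose": dose, "form": form, "maker": maker})
    return duplicates

log = []
add_entry(log, "ibuprofen", "400 mg", "tablet", "BrandCo")
dups = add_entry(log, "naproxen", "250 mg", "tablet", "OtherCo")
if dups:
    print("Same-class overlap:", [d["generic"] for d in dups])
```

The point of recording maker and formulation alongside the generic name is exactly the narrow-therapeutic-window case: when those details matter, you want them already written down.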
So your system—generic names, independent cross-checking, and a willingness to ask questions—puts you ahead of the curve. If anything, the gap you’ve noticed isn’t a lack of rules so much as inconsistent execution and design. And in medicine, as in your grab-bar example, good design often does more for safety than another layer of rules.
The U.S. is unusually expensive across a lot of sectors, but pinning it mainly on the plaintiff bar doesn’t really hold up once you look at the numbers.
Malpractice and other litigation costs are a small slice of total healthcare spending—often estimated at a few percent when you include “defensive medicine.” That’s not trivial, but it’s nowhere near large enough to explain why the U.S. spends dramatically more than peers while getting mixed outcomes. It’s a factor, not the driver.
The bigger forces are more structural and a bit less visible:
- Prices, not volume. Americans don’t always use more care; they pay much higher prices for the same services, procedures, and drugs. Hospital charges, specialist fees, and branded medications are all priced higher.
- Administrative complexity. Multiple payers, billing systems, prior authorizations, and coding layers create overhead that other countries largely avoid.
- Market power. Hospital consolidation and certain specialist groups can negotiate higher rates. On the drug side, patent protections and limited price negotiation keep prices elevated.
- Payment incentives. Fee-for-service rewards doing more, especially high-margin procedures, rather than optimizing outcomes per dollar.
- Fragmentation. Care is less coordinated, which can mean duplication, gaps, and uneven quality.
- Non-medical costs. In education and services more broadly, you see things like “cost disease” (labor-intensive sectors get pricier over time), heavy administrative layers, and pricing strategies that lean on loans and subsidies, all of which can inflate sticker prices without clear quality gains.
Legal risk does shape behavior—clinicians may order extra tests to cover themselves, and product liability affects how devices and drugs are developed and labeled. But if you removed that entirely, the U.S. would still be an expensive system because the core pricing and structure would remain.
Your experience—having more direct control and fewer intermediaries—highlights the flip side: you lose some guardrails but gain simplicity and price transparency. Neither model is cleanly “better”; they just fail in different ways. The trick, if there is one, is keeping the useful protections without letting them metastasize into cost drivers.
I get the intuition: if litigation risk is high, organizations respond by building layers of documentation, compliance, and “defensive” processes. That does create friction and cost. So you’re not imagining a relationship there.
Where I’d push back is that it’s more like a secondary amplifier than the root cause.
Even in countries with very limited plaintiff litigation, you still see bureaucracy grow in healthcare, education, and large institutions once systems become complex and money flows through them. That suggests there’s something more basic going on than legal exposure alone.
In the U.S. case specifically, tort risk does contribute to “defensive medicine” and documentation burden—but studies generally find it explains only a modest portion of overall cost growth. The larger drivers sit earlier in the chain: what services are priced at, how billing works, and how fragmented the payer system is.
There’s also a subtle dynamic here: litigation risk doesn’t just “add rules,” it interacts with existing incentives. For example:
- If payment rewards more procedures, then legal risk nudges providers toward more documentation and more testing, not fewer procedures.
- If pricing is opaque, then legal compliance processes become another billing category rather than a cost control mechanism.
- If responsibility is diffuse (many insurers, many hospitals), then legal defensiveness spreads across all of them, multiplying paperwork.
So the plaintiff bar is better understood as a force that shapes how bureaucracy behaves, not the main engine that creates high baseline costs.
A useful way to frame it is:
- Pricing structure determines how expensive the system is.
- Administrative complexity determines how much overhead sits on top of that.
- Litigation risk influences how cautious and process-heavy everyone becomes inside that structure.
If you change only the third, you still have an expensive system—it just becomes slightly less defensive at the edges.
Your broader point about “rules and institutions fertilizing bureaucracy” is actually quite solid as a general principle. The disagreement is just about which fertilizer matters most in this case.
I hear the skepticism. “Studies show” can sound like a conversation-stopper, especially when it’s used as a substitute for explaining mechanisms or acknowledging uncertainty.
You’re also right about one part of your observation: the legal system is lawyer-dominated by design. It’s adversarial, and that structure naturally gives a lot of power to trained advocates. That does shape institutions, incentives, and behavior well beyond courtrooms—especially in large, regulated systems like healthcare, finance, and education.
Where I’d gently separate things out is this:
First, lawyers are influential actors, but not a single coordinated force. The “plaintiff bar” isn’t one entity with unified incentives; it’s many thousands of individual practitioners, firms competing with each other, insurers on the other side, corporate defense firms, regulators, judges, and legislators. Those groups often push in different directions at the same time. The net effect is messy, not centrally directed.
Second, litigation risk is real but unevenly distributed. It’s highest in specific domains (medical malpractice, product liability, mass torts), and much lower in many everyday systems. Where it matters, it does encourage documentation, defensive behavior, and risk aversion. But that still doesn’t fully explain broad cross-sector cost differences, because those exist even in areas with relatively low tort exposure.
Third, on the “studies” point: you’re right to ask “who funded it?” That’s always relevant. But it’s also worth noting that in healthcare economics especially, there’s a fairly large body of work produced across competing institutions—academia, government agencies, insurers, and international comparisons—so it’s not all coming from a single incentive structure. You don’t have to trust any one study; the more robust conclusions tend to come from where different methods and incentives converge on similar patterns.
On your personal observation about ambitious students leaning toward law: that matches a real selection effect—law has historically been a pathway to income, status, and influence, especially in systems like the U.S. That doesn’t automatically translate into “therefore lawyers are the root cause of system-wide cost inflation,” but it does reflect why legal reasoning and risk management are so deeply embedded in institutional life there.
So I’d phrase the disagreement this way: you’re seeing a real amplifier (legal risk and legal culture) and extending it toward being a primary engine of cost and complexity. The evidence-based view is more boring but also more robust: it’s one strong force among several, and in some domains it meaningfully shapes behavior—but it doesn’t dominate the entire system in the way it can feel from the inside.
If you want, we can pick one concrete domain—drug pricing, hospital billing, or malpractice—and pull it apart mechanistically. That’s usually where the intuition either sharpens or gets corrected fastest.
You’re circling two classic reform ideas in tort law: professional adjudication instead of lay juries, and tightening contingency fees. Both have real arguments behind them, but both also change the system in ways that are easy to underestimate.
On tribunals versus juries: many systems already move in that direction for exactly the kind of complexity you’re thinking about. Specialized courts, judge-only trials, arbitration panels, and medical review boards exist precisely because ordinary juries can struggle with highly technical causation questions (like medical negligence or pharmaceutical harm). A tribunal model can increase consistency and reduce “lottery-like” variability in awards.
The tradeoff is that juries are not just “amateur decision-makers”; they’re also a democratic legitimacy check on a system that otherwise becomes insulated. Once you fully professionalize fact-finding, you often get more predictability—but also more deference to institutional actors and, sometimes, less perceived accountability from powerful defendants (hospitals, insurers, manufacturers). Different countries balance that differently rather than eliminating the tension.
On contingency fees: they are indeed a major structural feature in the U.S. system. They serve a purpose—allowing plaintiffs without resources to bring cases that would otherwise be impossible—but they also create incentives for volume, settlement pressure, and sometimes borderline claims. Tightening them or capping them (as some jurisdictions do) tends to reduce litigation volume and shift the balance toward more pre-screening of cases by lawyers themselves.
But here’s the key structural point: if you restrict contingency fees without replacing access to funding, you don’t just remove excess litigation—you also risk filtering out valid but resource-intensive cases. In other words, you reduce both noise and some signal.
So your proposal is internally coherent: it would likely produce a system that is more professionalized, more predictable, and less driven by large uncertain payouts. The honest question is what you’re optimizing for:
- Lower variance and lower cost? Your direction helps.
- Maximum access for under-resourced plaintiffs? It can hurt unless replaced with something like public funding or legal aid expansion.
- Error reduction overall? That’s ambiguous; juries overreach sometimes, but institutions under-recognize harms in other ways.
Your underlying instinct—that the current U.S. system is too litigation-incentivized and too variable—is widely shared even among people who disagree on solutions. The disagreement is mostly about where you accept loss: error, access, cost, or legitimacy.
Fair pushback. “Democratic legitimacy check” can sound like a slogan if it’s stated too grandly.
A more precise way to put it is this: juries are less about expertise and more about social acceptability of outcomes. They embed a lay perspective into decisions that otherwise would be made entirely by professionals—judges, insurers, regulators, or arbitrators.
That matters for a few concrete reasons, not mystical ones:
First, trust in outcomes. People are more likely to accept an adverse result if they believe “people like me” could have reached it, not just insiders. That’s not democracy in a political sense so much as procedural buy-in.
Second, constraint on institutional drift. Purely professional systems tend to converge on internal norms—efficient, consistent, but sometimes detached from lay intuitions about harm, responsibility, or fairness. Juries occasionally act as a corrective to that.
Third, distribution of uncertainty. Jury verdicts introduce variability, sometimes uncomfortably so, but that variability also makes it harder for any one actor (including repeat institutional defendants) to perfectly game expectations.
Now, your skepticism is justified because the flip side is real:
- Juries are inconsistent.
- They can be influenced by framing, emotion, and narrative skill.
- They may struggle with technical causation in medicine or engineering.
- They can produce outlier awards that feel arbitrary.
So if someone calls them a “democratic legitimacy check” as if that were their primary or noble function, that’s overstated.
A cleaner description would be: juries are a mechanism for injecting lay judgment and unpredictability into a professionalized legal system, at the cost of precision and consistency.
Whether that’s worth it depends on what you think the system is optimizing for. You’re clearly weighting predictability, cost control, and technical coherence pretty heavily—so it makes sense that juries look like noise rather than legitimacy from your perspective.
That’s a sharper and, honestly, more grounded critique than the usual “juries are emotional and random” line.
You’re probably right in a narrower sense: most jurors do take the responsibility seriously, and their decisions are often more predictable than people assume. The variability in outcomes is frequently less about carelessness and more about information asymmetry.
Where your “out of their depth” framing lands well is in domains like medical causation, pharmacology, engineering failure, or actuarial risk. In those cases, jurors are asked to evaluate chains of reasoning that even trained professionals debate. So what happens is not random judgment—it’s structured interpretation of incomplete understanding. They do their best with what they’re given, but what they’re given is often already simplified, adversarially filtered, and technically compressed.
That’s the key distortion: juries aren’t making decisions in a vacuum; they’re making decisions on expert narratives that have already been shaped by litigation strategy. So you get something closer to “which simplified story is more convincing” than “who is correct in a technical sense.”
That’s also why outcomes can feel predictable within a given case type. Once you know the framing, emotional anchors, and baseline sympathy, you can often guess the direction of the verdict. It’s not chaos—it’s constrained reasoning under load.
Where this gets interesting is your earlier reform idea. If you replace juries with tribunals or judges, you reduce that “depth mismatch,” but you don’t eliminate the underlying problem—you shift it. Instead of lay comprehension gaps, you get institutional perspective narrowing, where technical correctness can drift away from public intuitions about harm and responsibility.
So I’d phrase your insight like this: juries aren’t unreliable in the sense of being erratic; they’re reliable within a system that often exceeds their informational bandwidth.
And that’s a very different critique than the usual caricature.
Workers’ compensation is actually a really good test case for this discussion, because it shows what happens when you replace tort litigation with an administrative system.
At a high level, most U.S. states run workers’ comp as a no-fault insurance scheme: if you’re injured on the job, you get predefined benefits (medical care + partial wage replacement), and in exchange you largely give up the right to sue your employer for damages. Disputes go through administrative hearings rather than juries.
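The “predefined benefits” idea can be illustrated with the kind of scheduled formula many U.S. states use for temporary disability: a fixed fraction of the average weekly wage, capped at a state maximum. The two-thirds rate and the cap below are illustrative placeholders, not any particular state’s actual values:

```python
def weekly_benefit(avg_weekly_wage: float,
                   replacement_rate: float = 2 / 3,
                   state_max: float = 1000.0) -> float:
    """Temporary disability benefit under a typical scheduled formula:
    a fraction of the average weekly wage, capped at a state maximum.
    Rate and cap are illustrative placeholders, not real statute values."""
    return round(min(avg_weekly_wage * replacement_rate, state_max), 2)

print(weekly_benefit(900))   # 600.0
print(weekly_benefit(1800))  # capped at 1000.0
```

The formula itself is the point: compensation becomes arithmetic rather than argument, which is precisely the predictability the no-fault bargain buys.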
So your instinct is right: it dramatically reduces ordinary civil litigation in that space.
But the interesting part is what it trades for:

First, predictability goes up. Outcomes are more standardized. There’s less variance between cases, fewer jackpot-style awards, and more formulaic compensation. That aligns with your preference for constraint and engineering-like design.

Second, legal friction goes down, but bureaucracy goes up in a different form. Instead of juries and plaintiff lawyers, you get claims administrators, medical utilization review, impairment ratings, procedural rules, and often very detailed documentation requirements. People don’t escape structure; they just move from adversarial structure to administrative structure.

Third, disputes don’t disappear; they relocate. A significant amount of contention in workers’ comp is about:

- whether the injury is “work-related”
- the degree of impairment ratings
- the duration of disability benefits
- approval of treatments

These disputes often end up in administrative hearings that can still be quite lawyer-heavy, just less theatrical than tort trials.

Fourth, cost control improves, but sometimes at the expense of generosity or responsiveness. Because benefits are scheduled and capped, the system is more predictable for insurers and employers. But injured workers may feel constrained if their situation doesn’t fit neatly into the classification system.
So what workers’ comp really demonstrates is not “lawyers removed → system becomes clean,” but rather:
When you remove tort litigation, you don’t remove legal reasoning—you convert it into administrative classification.
That actually connects back to your earlier point about tribunals. Workers’ comp is essentially a tribunal-heavy model. And it works reasonably well at controlling variance—but it does so by accepting a certain level of rigidity and procedural complexity.
So I’d frame it this way: it supports your intuition that adversarial jury systems are not the only way to organize compensation. But it also undercuts the idea that removing litigation simplifies the system overall. It just changes the shape of the complexity.
That’s a clean moral position, and it’s also basically the premise on which workers’ compensation systems were built in the first place.
The historical bargain is often called the “grand compromise”: workers give up the right to sue in exchange for guaranteed, faster, no-fault compensation, and employers give up exposure to unpredictable jury awards in exchange for predictable liability costs. So in most modern systems, you’re already aligned with how it was designed to work in principle.
Where it gets messy is execution rather than philosophy.
Even in a no-fault system, people still end up effectively “fighting the system” because the real disputes shift from fault to things like whether the injury is classified as work-related, what level of impairment applies, which treatments are approved, and how long benefits last. So the conflict doesn’t disappear—it becomes administrative instead of adversarial.
Your “end of story” view is strongest at the design level: yes, it is hard to justify a system where a worker must litigate against their employer just to get basic support after an injury. That’s exactly the problem workers’ comp was meant to solve.
The remaining tension is more technical and unavoidable: once you promise compensation without fault, you still have to define boundaries—what counts, how much, for how long, and under what evidence. And wherever boundaries exist in a system with money involved, there will be disputes about them.
So the real question isn’t whether workers should have to sue their employer. In modern systems, they generally don’t. The harder question is whether the administrative substitute for suing is fair, responsive, and consistent enough that it doesn’t recreate the same feeling of contest—just in a different form.
That context matters a lot—and it also explains why your view is so crisp: you’ve seen the system from the risk-bearing end, not just the claimant-facing side.
And I think your core claim is basically defensible in comparative terms: workers’ compensation systems are generally more efficient, more predictable, and cheaper than tort-based personal injury litigation for workplace harm. That’s one of the main reasons they exist and persist.
A few nuances, though, not as pushback so much as sharpening:
First, “fairer” depends on what axis you care about.
Workers’ comp is often fairer in the sense of universal access and speed—people actually get something without needing to prove negligence. But it can be less fair in the sense of individualized justice, because outcomes are scheduled, capped, and sometimes blunt relative to the real variation in human injury and recovery.
Second, from the reinsurance perspective you’ll know this well: workers’ comp is “cleaner risk” but not “simple risk.” It’s highly structured, but it pushes complexity into classification, medical causation disputes, and long-tail disability management. So it’s efficient at the macro level, but still full of edge cases at the micro level.
Third, it’s interesting that even in your framing—efficiency, fairness, cost—the system wins, yet it still doesn’t completely eliminate the adversarial feel. That’s the recurring pattern across all these systems: you can reduce litigation, but you can’t eliminate disagreement about boundaries without also accepting either rigidity or discretion.
So I think your conclusion holds in the way that actually matters for policy design: workers’ comp is a strong example of how removing tort litigation can improve system performance overall, even if it replaces it with a different kind of bureaucracy.
Where I’d gently refine it is just this: it’s not that legal process is replaced by something simpler—it’s replaced by something more engineered. And from what you’re saying, that engineered version is closer to your preference: bounded, predictable, and economically rational, even if not perfect in edge cases.
You’re right to be skeptical of the idea that a system should “eliminate the adversarial feel.” That phrase is a bit of a category error if taken literally.
What people usually mean—often sloppily—is not “remove all conflict or dissatisfaction,” but “reduce unnecessary friction created by uncertainty, ambiguity, or strategic obstruction.” In other words, not the feeling itself, but the avoidable procedural hostility that sometimes surrounds claims.
But your framing is more grounded: workers’ compensation is not trying to be emotionally harmonious. It’s trying to be a bounded exchange system for defined injuries, and in that domain it does pretty well.
And your point about gaming and entitlement cuts both ways in a useful way. Any system that pays benefits without fault will inevitably attract:
- borderline claims
- over-assertion in ambiguous cases
- asymmetry in how people perceive fairness
The design question is not “can we eliminate that,” because we can’t. It’s whether the system is robust enough that those behaviors don’t dominate outcomes or overwhelm costs. Workers’ comp largely passes that test in most jurisdictions, which is why it has persisted for over a century despite constant criticism from all sides.
Where the “adversarial feel” language becomes relevant in practice is usually in edge-case disputes: prolonged disability determinations, causation disagreements, or treatment approvals. Those are the points where the system stops feeling like an automatic compensation mechanism and starts feeling like a contest again. But even there, the adversariality is not inherent—it’s a symptom of uncertainty about facts, not a moral failure of the system.
So I think your position is coherent and historically grounded:
workers’ comp is a successful example of replacing tort-style moral litigation with structured economic allocation of risk.
If anything, the real lesson from your experience in reinsurance is exactly what you’re implying: once you accept that no system will perfectly track individual moral expectations, the goal shifts to containing variance and cost while keeping the rules intelligible enough that participants can live with them.
The “future perfect system” idea is charming, but in practice every stable system ends up being a compromise between accuracy, cost, and administrability. Workers’ comp just sits in a relatively successful part of that triangle.