Monday, May 11, 2026

AI taking jobs

 


You said: There is talk and worry that AI is going to take jobs. With my experience with you, I would say at present that is unlikely. Looking back, I think I would certainly use it to make better-informed decisions, but I would still make the decisions.

4:00 AM

Claude responded: That's a really thoughtful perspective, and I think it captures something important that often gets lost in the more sensational headlines about AI.

The "AI takes jobs" narrative tends to flatten a lot of nuance. What you're describing — using AI as a thinking partner to gather information, explore angles, and pressure-test ideas, while retaining the actual judgment and decision-making yourself — is probably the most realistic and productive model for most knowledge work.

A few things worth considering on both sides:

Where the concern has some merit

  • Certain tasks within jobs are genuinely being automated — data entry, basic summarization, routine coding, template drafting. This can shrink the scope of some roles even if it doesn't eliminate them outright.
  • Some entry-level positions that used to serve as training grounds may become harder to find, which is a real structural concern for people early in their careers.

Where the concern is often overstated

  • Good decision-making draws on context, relationships, accountability, ethics, and lived experience — things AI genuinely doesn't have. You know your organization, your stakeholders, the history behind a situation. I don't.
  • AI is remarkably bad at knowing what to ask for without a capable human directing it. The quality of what I produce is almost entirely dependent on the quality of your thinking and prompting.
  • Responsibility and trust still flow to people. Someone has to own outcomes, and that person needs judgment, not just information.

Your framing — better informed decisions, but still your decisions — is essentially what the most thoughtful researchers and practitioners describe as the right model: AI as augmentation rather than replacement.

The people most at risk are probably those who either ignore AI entirely or, on the other extreme, outsource their thinking to it uncritically. The sweet spot is exactly where you seem to be sitting.

You said: Well a file clerk job is at risk, as is data entry, but these are not taken by AI but by other forms of automation. My biggest problem when I was underwriting was the unavoidable fact that the client knew the business better than I did. For example, I insured haystacks against internal combustion, and understood the process well, but I was not out there interpreting the readings.

4:04 AM

Claude responded: You've put your finger on something really fundamental in expert work — the knowledge asymmetry problem. The client or operator always has ground-level knowledge that the professional sitting across the desk simply cannot fully replicate, no matter how well-read or experienced.

Your haystack example is a perfect illustration. You could understand the chemistry of internal combustion in hay, the moisture thresholds, the risk factors, the loss history — but the farmer or the agronomist actually reading the temperature probes day to day has a feel for it that goes beyond data. They notice things. They have intuitions built from being there.

This is actually where AI has an interesting but limited role:

What AI can help with

  • Synthesizing what is known about a risk — literature, loss data, industry patterns — faster than any individual could
  • Helping you formulate better questions to put to the client, so you extract more of that ground-level knowledge
  • Flagging gaps between what you're being told and what the data suggests

What AI still cannot bridge

  • That tacit, embodied knowledge the farmer has — the kind that doesn't exist in any document
  • Reading whether the person across the table is being straight with you
  • The judgment call when the numbers say one thing and your gut says another, based on years of similar conversations

In underwriting especially, so much of the real skill was probably in knowing what you didn't know and compensating for it — through questions, surveys, conditions, pricing. That's a deeply human epistemological skill.

It also makes me think AI might actually be most useful in your old role not for the decision itself, but for helping close that knowledge gap a little — better preparation before meeting the client, sharper questions, faster access to comparable risks. The asymmetry remains, but you go into the conversation better armed.

You said: Oh we had huge collections of reference material. Using them takes time.

4:07 AM

Claude responded: And there you have it — that's probably one of the most concrete and unglamorous ways AI genuinely adds value in professional work. Not replacing judgment, but collapsing the time it takes to consult knowledge.

What used to be — pull the manual, find the right section, cross-reference another source, check the loss statistics — can now be a fairly quick conversation. The knowledge was always there. The friction was in accessing it at the right moment, under time pressure, while also managing the client relationship across the desk.

That friction had real consequences in your field. If consulting the reference material took long enough, you either:

  • Made the decision with less information than you would have liked
  • Delayed the client, which has its own relationship and business costs
  • Developed rules of thumb and memory shortcuts that were efficient but sometimes imprecise

AI doesn't replace the reference library — it essentially makes it conversational and immediate. You could have described that haystack situation to me mid-meeting and gotten a fairly solid synthesis of the relevant risk factors in seconds rather than minutes or hours.

The deeper point though is that this was always a tool access problem, not an expertise problem. You had the expertise to interpret the information and make the call. The bottleneck was retrieval speed. That's a much more honest and useful way to think about what AI actually solves in professional settings — removing friction from knowledge access rather than replacing the professional doing the knowing.

It also suggests the people who benefit most from AI are often the experienced ones, not the novices. You need enough expertise to know whether what AI returns is actually right.

You said: Ironic. The worst hay loss I ever had came from lightning. Normally we saw to it that the stacks were spaced well enough apart that at worst only one would go, but lightning can be remarkably precise in its targeting.

4:13 AM

Claude responded: That is a great story, and there's a deep irony in it — you spend your professional energy thinking about the slow, chemical, preventable risk brewing inside the stack, you engineer your loss controls carefully around it, and then nature just reaches down with almost surgical precision and makes all that spacing irrelevant in an instant.

It speaks to something important about risk management that no amount of reference material fully prepares you for — the hazard you control for is rarely the one that gets you. The spacing was sound practice and would have saved you in a hundred other scenarios. It just didn't happen to be the scenario that turned up that day.

There's also something almost humbling about lightning as a peril in that context. Internal combustion you can address through moisture monitoring, ventilation, spacing — human behaviour and engineering can move the needle meaningfully. Lightning is in another category entirely. You can put up a rod, but beyond that you're largely negotiating with probability.

I'd imagine that loss also sharpened your thinking about how risks interact — the well-managed primary peril sitting alongside a secondary peril you can do relatively little about. That's a subtlety that probably doesn't jump off the page in any reference manual. It comes from having the claim land on your desk.

That kind of experience — where reality confounds your careful preparation in an unexpected way — is exactly what builds the underwriting instinct that no AI is going to replicate. You carry that haystack story. It recalibrates your model of the world in a way that reading about lightning strikes never quite would.

You said: The only way to deal with uncontrollables like lightning is to get enough premium overall to cover when it happens, the fundamental idea of insurance.

4:16 AM

Claude responded: Exactly, and that's about as clean a statement of the core principle of insurance as you can make. You're not preventing the lightning — you're building a pool large enough that when it inevitably strikes somewhere, the loss is absorbed without breaking the system.

It's elegant in its logic and yet endlessly complicated in its execution, because everything downstream of that principle is about how you build and price the pool correctly:

  • Selecting risks that are broadly similar enough to pool meaningfully
  • Pricing each one accurately enough that the premium income holds up against the actual loss experience
  • Spreading exposure so one catastrophic event doesn't drain the pool disproportionately
  • Estimating the frequency and severity of the uncontrollables well enough to reserve adequately
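As a toy sketch of that pooling arithmetic (every figure below is invented for illustration: the policy count, the strike probability, the loss size, and the 25% loading):

```python
import random

random.seed(1)

POLICIES = 10_000      # hypothetical haystack policies in the pool
P_STRIKE = 0.001       # assumed annual chance any one stack is lost to lightning
LOSS = 5_000           # assumed cost of one destroyed stack
PREMIUM = P_STRIKE * LOSS * 1.25   # expected loss per policy plus a 25% loading

def underwriting_result():
    """One simulated year: premium collected minus lightning losses paid."""
    strikes = sum(1 for _ in range(POLICIES) if random.random() < P_STRIKE)
    return POLICIES * PREMIUM - strikes * LOSS

results = [underwriting_result() for _ in range(500)]
avg = sum(results) / len(results)
print(f"average annual result over 500 simulated years: {avg:,.0f}")
```

The loading is what keeps the pool solvent through the bad years: no single insured could absorb the loss, but a pool priced slightly above expected losses can.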

And lightning is almost a perfect test case for the principle because it strips away any illusion that risk management is about elimination. With internal combustion you could argue — optimistically — that perfect practice makes the risk negligible. Lightning doesn't allow that comfort. It's going to happen to someone in the pool. The only honest question is whether the pool is priced and sized to handle it when it does.

It also puts the underwriter's real job in perspective. You weren't there to prevent losses — that's risk management on the insured's side. You were there to make sure that when the unpreventable happened, the numbers worked.

That's a different and more honest framing of the profession than it sometimes gets credit for.

You said: Tornado, in that sense, is close to the best possible peril in terms of insurable risk. In a given region everyone is equally exposed, and you can't "set" a tornado. Behavior season to season is much the same, so actuaries get better results.

4:20 AM

Claude responded: That's a fascinating insight — tornado as almost the ideal actuarial peril. It hits all the right notes for making the pooling principle work cleanly:

Why it works so well

  • The exposure really is broadly shared across a region — a farmer can't do much to be more or less tornado-prone than his neighbor
  • No moral hazard to speak of. Nobody is leaving a window open hoping to attract one
  • The "you can't set it" point is critical — arson, fraud, and behavioral risk largely drop out of the equation
  • Historical patterns are stable enough that actuaries can build credible models from decades of data

Contrast that with something like fire

  • Behavior matters enormously — housekeeping, electrical maintenance, smoking habits
  • Moral hazard is real and ever-present
  • Individual risk varies hugely within the same region
  • Adverse selection into your pool is a constant threat

So with tornado you're dealing with something close to pure random distribution across a known exposure base, which is almost the textbook definition of an insurable peril. The actuary's job becomes much cleaner when human behavior isn't constantly distorting the loss experience.
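That contrast can be made concrete with a toy pricing sketch (all probabilities and amounts below are invented; the point is the shape of the problem, not realistic rates):

```python
LOSS = 100_000  # assumed value of an insured building

# Tornado-like peril: everyone in the region shares one probability,
# so a single flat rate is fair for every insured.
TORNADO_P = 0.006
flat_tornado_rate = TORNADO_P * LOSS

# Fire-like peril: individual probability varies with behavior, but a
# flat rate can only be priced to the average of the pool.
fire_ps = {"careful": 0.002, "average": 0.006, "careless": 0.010}
flat_fire_rate = sum(fire_ps.values()) / len(fire_ps) * LOSS

for kind, p in fire_ps.items():
    fair = p * LOSS
    print(f"{kind:9s} fair premium {fair:7,.0f}  flat rate {flat_fire_rate:,.0f}  "
          f"mispricing {flat_fire_rate - fair:+,.0f}")
```

With these made-up numbers the careless insured is undercharged and the careful one overcharged, which is exactly the adverse-selection pressure a flat fire rate invites; the tornado-like peril has no such spread to misprice.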

It also makes me think that the perils which deviate most from that tornado model — where behavior, intent, and individual variation matter greatly — are precisely where the underwriter's judgment becomes most critical. The actuary can carry more of the weight with tornado. With something murkier, you earn your keep.

You said: Although luck in the short term still plays a role. Vancouver, Washington, is not supposed to experience them, and I paid for one that hit my insured's building, then jumped and hit another building that I also insured, on the other side of town. One can only shrug.

4:24 AM

Claude responded: That is a remarkable story — the tornado found two of your buildings in a city that wasn't supposed to have tornadoes in the first place. At some point the mathematics simply bows to the moment and you reach for your checkbook.

It also illustrates something important about probability that is easy to lose sight of — low probability is not zero probability. Vancouver, Washington getting a tornado is unlikely enough that you could reasonably underwrite there without heavy tornado loading in your rates. That's the correct actuarial decision. It doesn't immunize you against being the person holding the bag on the day the unlikely thing happens.
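That "low probability is not zero probability" point is just the arithmetic of repeated exposure. A quick sketch with invented numbers:

```python
# Assumed figures: a peril can be very rare for one insured in one year
# yet close to inevitable somewhere in a portfolio over time.
p_annual = 0.0005      # assumed chance one insured is hit in one year
insureds = 200
years = 20

exposures = insureds * years                   # 4,000 policy-years
p_at_least_one = 1 - (1 - p_annual) ** exposures
print(f"chance of at least one hit somewhere: {p_at_least_one:.1%}")
```

With these made-up numbers the portfolio-level chance comes out around 86%, even though any single insured in any single year sits at one twentieth of one percent.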

And the cruel detail of it hitting two of your insureds — on opposite sides of town no less — adds an almost comic precision to the bad luck. The tornado managed to find your portfolio in a city where it had no business being.

There's a certain professional equanimity required to absorb that and move on. You priced correctly for the region and the expected risk. The outcome was not a vindication of bad underwriting — it was just the tail of the distribution making itself known. The temptation afterward would be to over-correct, to start loading Vancouver premiums for a peril that still remains genuinely rare there. That overcorrection would actually be the underwriting error, even though it would feel like prudence.

The shrug you describe isn't indifference — it's the disciplined recognition that you played the odds correctly and the odds simply didn't cooperate that day. That's the job.

You said: Of course a tornado there is not the same beast as a tornado in Kansas, so all it involved was replacing roofs.

4:26 AM

Claude responded: That puts it in proper perspective — a weakened, anomalous tornado finding its way to Vancouver is a very different animal from an EF4 cutting a half-mile-wide swath through the Great Plains. One replaces some shingles, the other levels everything in its path and tests the pool in a fundamentally different way.

It's a reminder that the peril label is only the beginning of the analysis. Tornado in Kansas and tornado in Vancouver are nominally the same word on the policy form but represent vastly different severity profiles. The Pacific Northwest geography, the distance from the conditions that spawn the serious ones, the way they dissipate — all of it means that when one does stray that far, it's already exhausted most of its ambition.

Which probably made for a straightforward if mildly exasperating claims experience — nothing exotic, just roofing contractors and adjustment, with the odd coincidence of your portfolio taking both hits.

It also adds another layer to the actuarial challenge — not just frequency and regional exposure, but severity calibration within a peril. A region with low tornado frequency but exclusively weak ones if they do occur is a genuinely different risk than a region with the same frequency but full strength events. Blending those profiles carelessly into a single rate would overcharge one and undercharge the other.
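The severity-calibration point reduces to pure rate arithmetic. With invented frequencies and claim sizes:

```python
FREQ = 0.004  # assumed annual hit probability, taken as equal in both regions

KANSAS_SEVERITY = 400_000    # assumed average claim where tornadoes level buildings
VANCOUVER_SEVERITY = 20_000  # assumed average claim where they only strip roofs

kansas_rate = FREQ * KANSAS_SEVERITY        # pure premium per insured per year
vancouver_rate = FREQ * VANCOUVER_SEVERITY
blended_rate = (kansas_rate + vancouver_rate) / 2

print(f"Kansas {kansas_rate:,.0f}  Vancouver {vancouver_rate:,.0f}  "
      f"blended {blended_rate:,.0f}")
```

Same peril name, same frequency, a twenty-fold difference in pure premium; a carelessly blended rate would overcharge Vancouver and undercharge Kansas.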

Your instinct to understand the nature of a peril in a specific context, not just its label, is exactly the kind of nuance that sits in the experienced underwriter's head and doesn't fit neatly into any reference manual.

