That’s a very reasonable instinct—and, in fact, a lot of people (including investors and engineers) have been quietly circling around that same question.
If you strip it down, Tesla, Inc. is trying to be two quite different things at once:
- a manufacturing company (cars, batteries, energy systems), and
- a software/AI company (full self-driving, robotics, autonomy).
Those are not just different—they reward different habits of mind, timelines, and tolerances for risk.
The case for your view—focus on EVs and let humans drive—is actually quite strong. Tesla’s core advantage has been building desirable electric cars efficiently at scale. That’s hard enough. Traditional automakers like Toyota and Volkswagen Group are still catching up in some respects, but they do know how to manufacture reliably, manage supply chains, and deliver consistent quality. If Tesla doubled down there—fit and finish, durability, service networks—it could solidify a long-term, less volatile business.
Autonomy, by contrast, is a long, uncertain road. Despite years of promises, “full self-driving” remains partial. Even leaders like Waymo have achieved real autonomy only in tightly controlled environments. The technical problem is profoundly difficult, and the regulatory and liability issues are, if anything, even harder. One bad failure can outweigh thousands of quiet successes.
There’s also a strategic risk: if autonomy doesn’t arrive soon enough, Tesla may have diverted enormous resources away from improving the things customers experience every day—ride quality, reliability, service—things that build brand loyalty in a more traditional sense.
But there’s a serious argument on the other side too, and it’s why Tesla hasn’t abandoned the effort.
If autonomy does work at scale, it could transform cars from products into platforms—more like smartphones. The company that solves it first could capture not just vehicle sales, but ongoing revenue (robotaxis, software subscriptions, network effects). From that perspective, EV manufacturing becomes almost secondary—a way to deploy the real asset, which is the driving intelligence.
And here’s the uncomfortable middle ground: Tesla may need to pursue both, because if someone else cracks autonomy first, a pure “build good cars and let humans drive” strategy could suddenly look like building excellent horses just as automobiles arrive.
My own read, for what it’s worth: Tesla’s risk isn’t that it’s pursuing autonomy—it’s how much it’s betting on it relative to execution in its core business. The balance matters. Right now, critics would say the balance has tilted too far toward a promised future and not enough toward present-day excellence.
Your instinct reflects a very old business truth: companies often get into trouble when they drift too far from the thing they’re already exceptionally good at.
I’m curious—do you see this more as a technical overreach, or as a kind of managerial distraction?
That’s a sober, engineering-minded view—and it lines up with how a lot of transportation people quietly think about it when they’re not selling something.
What you’re really pointing to is a systems problem, not just a vehicle problem. Full autonomy in a mixed environment—humans, motorcycles, erratic pedestrians, animals, weather, poor road markings—isn’t just hard, it’s open-ended. There’s no clean boundary where you can say “we’ve solved it.”
Your interstate idea is interesting because it flips the problem: instead of making the car handle infinite variability, you reduce the variability of the environment.
There’s precedent for that way of thinking. Limited-access highways already constrain behavior—no crossings, no pedestrians, controlled entry and exit. If you went further—standardized lane markings, embedded sensors, strictly controlled vehicle types—you could create something much closer to a “closed system.” In that kind of environment, autonomy becomes far more tractable.
But as you note, the price is steep:
you’d likely have to exclude or severely restrict human drivers.
That’s where it stops being purely technical and becomes political and social. People tolerate driver assistance—lane keeping, adaptive cruise—but giving up the option to drive entirely is another matter. In the U.S., especially, driving has a cultural dimension that goes well beyond transportation. Even elsewhere, the transition would be slow and uneven.
Companies like Tesla, Inc. are trying to solve the harder version: make autonomy work in the messy, human world. Others, like Waymo, have taken the more constrained route—geofenced areas, mapped roads, predictable conditions. That’s closer to your “controlled interstate,” just applied to cities instead of highways.
If I push back slightly, it’s here: we may not need absolute autonomy to get most of the benefit. There’s a middle zone—call it “highly reliable but not perfect”—where systems handle 95–99% of driving, and humans are fallback. The trouble is that this middle ground is psychologically and legally awkward. Humans are notoriously bad at staying alert when they’re mostly not needed, and liability becomes murky.
Your framing, though, has a kind of clarity that the industry sometimes avoids:
either simplify the environment, or accept that full autonomy is a very long way off.
Do you think society would ever accept something like “autonomous-only lanes” as a compromise? That feels like one of the few politically plausible bridges between the two worlds.
That’s a sharp observation—and you’re putting your finger on something that engineers often underestimate: the legal system doesn’t think in percentages the way engineers do.
From a purely technical standpoint, a system that is safer than human drivers 95–99% of the time would already be a major public-health win. But liability law isn’t built around aggregate improvement; it’s built around individual harm and responsibility. One bad crash doesn’t get averaged out by a thousand safe trips—it becomes a focal point for blame.
In the U.S. especially, that creates a tough asymmetry. If a human driver makes a mistake, we treat it as unfortunate but ordinary. If a machine makes a mistake, the question immediately becomes: who is at fault—the manufacturer, the software developer, the owner? That invites litigation in a way human error usually doesn’t.
That’s why companies like Tesla, Inc. are very careful (at least formally) to label their systems as “driver assistance,” while firms like Waymo operate in tightly controlled domains where they can manage risk and liability more predictably.
But I’d push back on one part of your conclusion—not to dismiss it, but to refine it.
It’s not that anything less than 100% is technically unviable; it’s that it may be commercially and legally unviable under current norms. Those norms can change, and historically they sometimes do when the benefits are large enough.
We’ve seen this before in other domains:
- Aviation didn’t wait for 100% safety; instead, it built a layered system of regulation, investigation, and shared responsibility.
- Pharmaceuticals accept known risks because the benefits are clear and the framework for liability is structured.
Autonomous driving could follow a similar path—but it would likely require:
- clearer legal standards for when manufacturers are liable versus drivers,
- possibly some form of liability shielding or insurance restructuring,
- and, as you suggested earlier, restricted environments where the risk is more controlled and easier to underwrite.
Without that kind of adaptation, your concern stands: the first high-profile failures could chill adoption, regardless of the statistical safety.
There’s also a subtle psychological layer here. People don’t just want safety—they want agency and someone to blame when things go wrong. Human drivers satisfy both, even if imperfectly. Machines muddy that.
So your argument ends up being less pessimistic than it sounds—it’s not “this can’t work,” but rather:
it won’t work cleanly unless the legal and social framework evolves alongside the technology.
Given your background in risk and underwriting, I’m curious—do you see a plausible way insurers could price this kind of mixed human/machine responsibility, or does it break the model entirely?
Yes—that’s exactly the pressure point, and it’s one of the more underappreciated strategic angles.
Traditional auto insurance is, as you say, built on proxies—age, sex, location, usage. Crude, but statistically serviceable. What Tesla, Inc. has been inching toward is something quite different: continuous, behavior-based underwriting. Not “what kind of driver are you likely to be?” but “how did you actually drive this week?”
That’s a qualitative shift. If you can measure:
- following distance,
- braking smoothness,
- speed relative to conditions,
- attention (hands on wheel, disengagements),
you can price risk far more precisely—and, as you nicely put it, price away bad drivers.
From a pure underwriting standpoint, that’s almost irresistible. It reduces adverse selection and tightens the loss distribution. In theory, the safest drivers get materially lower premiums, and the riskiest either pay dearly or self-select out.
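To make that concrete, here is a minimal sketch of what behavior-based pricing on the metrics listed above might look like. The metric names, weights, thresholds, and premium formula are all illustrative assumptions, not Tesla’s (or any insurer’s) actual model:

```python
# Minimal sketch of behavior-based premium adjustment.
# All weights, thresholds, and the base premium are illustrative
# assumptions -- not Tesla's (or any insurer's) actual model.

from dataclasses import dataclass


@dataclass
class WeeklyTelemetry:
    mean_following_distance_s: float  # average headway, in seconds
    hard_brake_events: int            # decelerations past some threshold
    speeding_ratio: float             # fraction of time above posted limit
    disengagements: int               # attention warnings / hands-off events


def risk_multiplier(t: WeeklyTelemetry) -> float:
    """Map one week of observed behavior to a premium multiplier near 1.0."""
    score = 1.0
    # Short headways get the largest single weight here (assumed, not actuarial).
    if t.mean_following_distance_s < 1.5:
        score += 0.25
    score += 0.03 * t.hard_brake_events
    score += 0.50 * t.speeding_ratio
    score += 0.05 * t.disengagements
    # Bound the multiplier at both ends.
    return max(0.7, min(score, 2.0))


base_premium = 120.0  # monthly, in arbitrary currency units
week = WeeklyTelemetry(1.2, 4, 0.10, 1)
print(f"Adjusted premium: {base_premium * risk_multiplier(week):.2f}")
```

The caps are worth noting: even a pure behavior-based model would likely bound the multiplier, partly for regulatory reasons and partly to keep the product sellable.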
But there are a few frictions that keep it from being a clean win:
First, regulation. Many jurisdictions restrict how granular or dynamic pricing can be, and some of the traditional variables (like sex) are already being phased out in places. Ironically, that should favor behavior-based models—but regulators also worry about opacity and fairness in algorithmic scoring.
Second, customer psychology. People say they want fair pricing, but they don’t always enjoy being continuously judged. There’s a difference between “you’re in a high-risk category” and “we watched you brake too hard three times yesterday.” Some drivers will accept that trade; others will find it intrusive or stressful.
Third—and this connects back to your litigation point—data cuts both ways. The same telemetry that allows precise pricing also creates a detailed record. In the event of a crash, that record becomes evidence. That can help exonerate—but it can also make liability clearer and harder to contest.
And then there’s the strategic twist: Tesla’s advantage here isn’t just pricing—it’s integration. They build the car, collect the data, write the software, and (in some markets) offer the insurance. That’s closer to a closed-loop system than anything traditional insurers have.
If I nudge your idea a bit further, the real long-term play may not be “pricing away bad drivers,” but engineering them out of the pool altogether. If driver-assist systems steadily shape behavior—nudging people into safer patterns—then the variance itself shrinks. At that point, insurance starts to look less like risk pooling and more like a managed service.
But that loops us back to your earlier skepticism: as long as humans are in control, you’ll still have edge cases, lapses, and—crucially—blame.
Your framing holds together quite tightly:
- full autonomy is legally brittle,
- partial autonomy is behaviorally awkward,
- and insurance becomes the battleground where those tensions show up in dollars.
If Tesla succeeds anywhere in this space in the near term, my guess is it’s less about “self-driving” and more about using data to quietly reshape both pricing and behavior.
Do you think regulators will eventually favor that model as “fairer,” or push back against it as too intrusive?
That’s an interesting pivot—and in a way, it’s much more human-scale than full autonomy. Instead of replacing the driver, you’re coaching the driver in real time.
You can already see early versions of this. Tesla, Inc. gives post-drive “safety scores,” and other manufacturers issue alerts for lane departure or forward collision. But what you’re suggesting goes a step further: continuous, explicit feedback, almost like a driving instructor sitting beside you.
The upside is clear. Immediate feedback is how people actually learn. A warning like “too close” at the moment it happens is far more effective than a higher insurance premium three months later. It could:
- reinforce safer habits quickly,
- reduce variability between drivers,
- and, as you implied, serve as a functional substitute for autonomy in many situations.
In other words, instead of solving the hardest problem (replace the human), you improve the human enough that the problem becomes smaller.
But there are a couple of tensions worth noticing.
One is attention and overload. Driving already demands a certain cognitive bandwidth. If the system talks too much, or in the wrong tone, it risks becoming like a backseat driver—ignored at best, actively resented at worst. The design challenge isn’t just what to say, but when and how often.
Another is authority. Humans are surprisingly sensitive to being corrected, especially in real time. A calm, neutral “increase following distance” might be accepted; a sharper “dangerous driving” could trigger irritation or defiance. The system would need to strike a tone that feels more like assistance than judgment.
There’s also a subtle legal angle again: once a system is actively warning, questions arise about duty. If it could have warned but didn’t, does that create liability? So even this “middle path” doesn’t entirely escape the legal web you pointed out earlier.
Still, your idea has a kind of pragmatic elegance. It aligns with how progress often actually happens—not by leapfrogging to perfection, but by tightening feedback loops. Pilots weren’t replaced overnight; they were surrounded by better instruments, warnings, and procedures.
If I extend your thought a bit, the most effective version might not feel like nagging at all. It could be:
- mostly silent,
- intervening only at genuinely risky moments,
- and occasionally offering brief, matter-of-fact corrections.
Almost like a very disciplined instructor who only speaks when it matters.
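As a minimal sketch of that disciplined-instructor policy: silent by default, speaking only when a risk threshold is crossed, and rate-limited so it never turns into a backseat driver. The headway threshold, cooldown, and wording are all illustrative assumptions:

```python
# Sketch of a threshold-gated, rate-limited driver alert policy.
# The headway threshold, cooldown, and phrasing are illustrative
# assumptions, not any production system's parameters.

MIN_HEADWAY_S = 1.0   # speak only below this following distance
COOLDOWN_S = 30.0     # stay silent for this long after any alert


class QuietInstructor:
    def __init__(self) -> None:
        self._last_alert_at = float("-inf")

    def observe(self, headway_s: float, now_s: float) -> str | None:
        """Return a brief correction only at genuinely risky moments."""
        if headway_s >= MIN_HEADWAY_S:
            return None                        # mostly silent
        if now_s - self._last_alert_at < COOLDOWN_S:
            return None                        # never nag twice in a row
        self._last_alert_at = now_s
        return "Increase following distance."  # calm, matter-of-fact


instructor = QuietInstructor()
for t, headway in [(0.0, 2.0), (5.0, 0.8), (10.0, 0.7), (45.0, 0.9)]:
    message = instructor.observe(headway, t)
    print(f"t={t:>4.0f}s  headway={headway:.1f}s  ->  {message or 'silent'}")
```

Run on that toy trace, it alerts at t=5 and t=45 but stays quiet at t=10 even though the headway is still short; the cooldown is what does the “don’t become a backseat driver” work.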
It’s a quieter vision than robotaxis—but arguably a more achievable one in the near term.
Do you imagine drivers would adapt to that over time, the way people have gotten used to GPS directions, or do you think the resistance would remain fairly constant?
That’s a very plausible market segmentation—and it’s exactly how a lot of new driving tech has crept in over time.
Early adopters self-select. The people who like feedback, metrics, and a bit of machine oversight gravitate toward it; those who don’t simply avoid the product. Tesla, Inc. has already leaned into that identity—its customers tend to be more tolerant of software-like behavior in a car than the average driver.
So in the near term, you’re probably right: it doesn’t have to win everyone over, just a sufficient slice of the market.
Where it gets interesting is what happens next. If that feedback actually reduces accidents in a measurable way, two things tend to follow:
First, insurers start to notice. Lower loss rates translate into lower premiums, which creates a quiet but powerful incentive. Even drivers who don’t like being corrected may decide they like paying less.
Second, norms begin to shift. What starts as “that slightly annoying talking car” can, over time, become “basic safety.” Seatbelt reminders were once controversial; now they’re universal and largely unquestioned. The same could happen with more sophisticated feedback, provided it’s done well.
That said, I’d push back gently on one edge of your claim. Some people who would resent it might still buy the car for other reasons—price, performance, availability—and then disable or ignore the feature. So the real question for Tesla isn’t just who buys, but how much control to give the driver over the system.
Too much control, and the worst drivers opt out—undermining the safety and insurance advantage. Too little, and you risk alienating customers or even running into regulatory trouble.
There’s also a subtle strategic fork here. If Tesla treats this as:
- a feature, it remains optional and market-segmented;
- a safety system, it gradually becomes more standardized and harder to turn off.
Your line of thinking fits neatly with a broader pattern: instead of chasing full autonomy (which, as you’ve argued, runs into technical and legal walls), Tesla could build a progressively more disciplined driving environment, where the human is still in charge but increasingly guided—and, in a sense, constrained.
That’s less glamorous than robotaxis, but it might actually be the more durable path.
It has a slightly old-fashioned feel to it, too—not replacing human judgment, but training and shaping it. In your experience, do people generally respond better to that kind of guidance when it’s immediate and specific, or do they tend to dig in their heels regardless?
Exactly—that’s a subtle but important point. Tesla’s marketing strategy has been almost entirely earned media and personality-driven—Elon Musk’s presence on social media, press coverage, word of mouth, YouTube reviews—but it leaves gaps when it comes to practical, everyday considerations for typical drivers.
The issue you’re highlighting is that the EV advantage isn’t self-evident to most consumers when the trips they actually take are mostly short and local. People think: “I mostly drive 20–40 km at a time—why bother paying a premium for an EV?” That’s where advertising could make a difference: framing benefits that matter in realistic, local contexts, not just the futuristic or aspirational ones.
For example, effective messaging could emphasize:
- Lower cost of ownership over the first several years, not just fuel savings but maintenance too (no oil changes, fewer brake replacements); a rough back-of-the-envelope comparison follows this list.
- Instant torque and smooth driving—something tangible in daily commutes.
- Home charging convenience, highlighting that most people don’t actually need public charging for routine trips.
- Environmental and social prestige, but framed in a realistic way, e.g., “clean driving in your city” rather than the global heroic narrative.
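To ground the cost point, here is that back-of-the-envelope arithmetic. Every price and consumption figure is a placeholder assumption; local values will differ:

```python
# Back-of-the-envelope running-cost comparison for short urban trips.
# Every figure below is a placeholder assumption; substitute local
# energy prices and consumption before drawing conclusions.

annual_km = 10_000            # mostly 20-40 km local trips

ev_kwh_per_100km = 16.0       # assumed EV consumption
home_rate_per_kwh = 0.25      # assumed home-charging tariff

ice_l_per_100km = 7.5         # assumed gasoline consumption
fuel_price_per_l = 1.70       # assumed pump price

ev_cost = annual_km / 100 * ev_kwh_per_100km * home_rate_per_kwh
ice_cost = annual_km / 100 * ice_l_per_100km * fuel_price_per_l

print(f"EV energy cost per year:  {ev_cost:.0f}")   # 400
print(f"ICE fuel cost per year:   {ice_cost:.0f}")  # 1275
```

Under these assumptions the EV’s annual energy bill is roughly a third of the fuel bill, before counting the maintenance savings above. That framing matters: it’s a number a short-trip driver can check against their own receipts.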
Right now, Tesla leans heavily on the aspirational “future of transportation” narrative. That works for early adopters and tech enthusiasts, but a huge swath of mainstream buyers will respond more to practical, everyday advantages, especially if long trips aren’t a priority or feasible.
Interestingly, this is where traditional automakers spend heavily on messaging—they map benefits to daily routines and specific use cases. Tesla could gain a lot by doing selective, targeted advertising for short-range, urban-focused EV advantages, without losing the “cool, innovative” brand identity.
If you like, I can sketch out what a concise, practical EV advertising pitch would look like for that exact scenario. It would focus on short trips, cost, convenience, and safety—things the average buyer actually cares about. Do you want me to do that?
Exactly—that’s a huge part of the lived experience that rarely gets highlighted in EV marketing. Most people don’t think about what it actually feels like to drive quietly, without engine vibration or exhaust fumes. It’s not abstract—it’s sensory, immediate, and profoundly different from ICE cars.
If Tesla were to advertise along these lines, it could be framed as part of the everyday comfort and wellbeing: no engine roar, smooth acceleration, cleaner air in your neighborhood, less noise pollution in your commute. For city driving, that’s a daily, tangible advantage—especially for people who spend a lot of time in traffic.
Pair that with your other points—lower maintenance, home charging convenience, and a guided, safer driving experience—and you have a narrative that doesn’t rely on heroic long trips or futuristic promises. It becomes about improving the day-to-day quality of life, which is something most people can immediately relate to.
If you want, I can try to put together a short ad-style narrative that captures all of these practical EV benefits in a way Tesla might actually use—concise, sensory, and persuasive. Would you like me to do that?