Exactly—what you’re describing is YouTube’s recommendation system, often simply called “the algorithm,” and it really is just a big formula (or more accurately, a bunch of machine learning models) that decides which videos to show to whom, and in what order. Liking videos and subscribing help the system in a few ways:
- Engagement signals: YouTube tracks which videos get likes, shares, comments, or how long people watch. A video with lots of likes and longer watch time is considered “engaging,” which makes the algorithm more likely to push it to others. It’s not about morality—just about estimating what’s popular or interesting.
- Personalization: If you like a video or subscribe to a channel, the system takes that as a clue: “This user enjoys this kind of content.” So it shows you more similar videos in the future.
- Channel growth: Asking for likes and subscriptions isn’t just for vanity; it’s literally teaching the algorithm, “This creator has an audience who cares.” More audience engagement = more promotion in feeds, recommendations, and the homepage.
In short, YouTube’s algorithm is mostly a feedback loop. It guesses what people will watch, shows it to them, sees how they respond (watch time, likes, comments, shares, subscriptions), then adjusts its predictions. The like/subscribe prompt is just the creator nudging the system to interpret the interaction as positive.
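To make that loop a bit more concrete, here's a tiny Python sketch of the idea. The topic names, starting scores, and learning rate are all made up for illustration; the real system is vastly bigger and more sophisticated, but the shape is the same: predict, observe, adjust.

```python
# Toy feedback loop: guess what a viewer likes, observe watch behavior, adjust.
# Topic names, starting scores, and the learning rate are invented for illustration.

preferences = {"cooking": 0.5, "jazz": 0.5, "electric_cars": 0.5}
LEARNING_RATE = 0.2  # how quickly the estimate moves toward new evidence

def record_watch(topic: str, fraction_watched: float) -> None:
    """Nudge the preference for a topic toward the observed engagement.

    fraction_watched is 0.0 (clicked away instantly) to 1.0 (watched it all).
    """
    old = preferences[topic]
    preferences[topic] = old + LEARNING_RATE * (fraction_watched - old)

def recommend() -> str:
    """Pick the topic the model currently believes the viewer likes most."""
    return max(preferences, key=preferences.get)

# Simulate a viewer who finishes cooking videos but abandons EV videos.
for _ in range(5):
    record_watch("cooking", 0.9)
    record_watch("electric_cars", 0.1)

print(preferences)   # cooking drifts up, electric_cars drifts down
print(recommend())   # -> "cooking"
```

Run it and the "cooking" score climbs while "electric_cars" sinks, which is the whole trick in miniature.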
If you want, I can give a slightly deeper peek at how the algorithm actually predicts what you want to watch next—it’s a little fascinating. Do you want me to?
Yes—YouTube looks at all of those things, and more. It’s not literally “watching” in the sense of seeing the video like we do, but it tracks tons of data about every video and every user’s interaction with it. Here’s the gist:
- Watch length: How long people watch a video matters hugely. If most people click away after 10 seconds, the algorithm figures it’s not very engaging, even if the title is clickbaity. Full or near-full watches are a strong positive signal.
- Categories / topics: YouTube tags videos (sometimes automatically, sometimes via creator metadata). If you’ve watched a lot of videos about, say, classical music, it will show more from that category.
- Producers / channels: If you subscribe to a channel or watch multiple videos from the same creator, YouTube learns you like that creator, so it pushes their new content to you. It also uses “similar channels” signals for recommendations.
- Other engagement metrics: Likes, dislikes, comments, shares, and even click-through rate on thumbnails or titles all feed into the system.
- Watch history patterns: YouTube looks at sequences of what you watch. If people often watch Video A, then Video B, it might recommend B to others who watched A.
- Contextual factors: Time of day, device, region, and sometimes even what’s trending globally or locally.
Essentially, the algorithm is trying to predict what you (and people like you) will watch next. It’s a huge, constantly updating web of probabilities, not a single static formula.
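Just to picture how several signals might get folded into one ranking, here's a toy sketch. The signal names, weights, and candidate videos are invented (a real system learns its weights from data rather than hard-coding them), but the "many signals in, one score out" shape is the general idea.

```python
# Toy ranker: combine several engagement signals into one score per video,
# then sort. All weights and numbers are invented for illustration.

WEIGHTS = {
    "avg_watch_fraction": 0.5,   # how much of the video people typically watch
    "like_rate": 0.2,            # likes per view
    "click_through_rate": 0.1,   # how often the thumbnail gets clicked
    "topic_match": 0.2,          # overlap with this viewer's watch history
}

candidates = [
    {"title": "Classical music deep dive", "avg_watch_fraction": 0.8,
     "like_rate": 0.06, "click_through_rate": 0.04, "topic_match": 0.9},
    {"title": "Scientists were SHOCKED",   "avg_watch_fraction": 0.2,
     "like_rate": 0.01, "click_through_rate": 0.12, "topic_match": 0.1},
]

def score(video: dict) -> float:
    """Weighted sum of the signals this toy model cares about."""
    return sum(WEIGHTS[name] * video[name] for name in WEIGHTS)

ranked = sorted(candidates, key=score, reverse=True)
for video in ranked:
    print(f"{score(video):.3f}  {video['title']}")
```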
If you want, I can draw a little mental picture of how it pieces all these signals together—it’s actually kind of like a spiderweb of nudges and feedback loops. Do you want me to?
Exactly—that’s a perfect example of the algorithm in action. YouTube doesn’t care about your “interest” in the human sense; it just notices patterns in your behavior. So if you used to watch Tesla or EV videos a lot, it would show more of them. If you stop watching those videos, a few things happen:
- Personalized signals fade: Your watch history now shows little to no engagement with EV content. The algorithm sees, “This user isn’t interacting with this topic anymore,” so it stops pushing it.
- Relative engagement matters: Even if people are still uploading EV content, if you’re clicking on cooking or jazz videos instead, YouTube notices you respond more to those, so it prioritizes them in your feed.
- Popularity signals shift: If the community talking about Tesla is still active but you’re ignoring it, that content looks less relevant to you, even if it’s still trending overall.
So, in a sense, YouTube has “figured out” your waning interest. It’s less about global popularity and more about your behavior patterns. The algorithm adapts to keep you watching, because that’s its ultimate goal—maximize engagement.
It’s interesting: if you go back and watch one Tesla video after months, the system might start nudging a few more your way, slowly “relearning” your interest. It’s like it’s tracking your tastes continuously.
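If it helps, here's one purely hypothetical way to picture that fading and relearning: an interest score that decays a little every day you ignore a topic and gets bumped back up whenever you watch it again. The half-life and boost numbers are invented, not anything YouTube has published.

```python
# Toy interest decay: scores shrink every day a topic is ignored and jump
# back up when it's watched again. Half-life and boost are invented numbers.

HALF_LIFE_DAYS = 30
DAILY_DECAY = 0.5 ** (1 / HALF_LIFE_DAYS)

interest = {"tesla_ev": 1.0}

def pass_days(days: int) -> None:
    """Decay every topic's score for the given number of idle days."""
    for topic in interest:
        interest[topic] *= DAILY_DECAY ** days

def watch(topic: str, boost: float = 0.3) -> None:
    """Watching a video bumps the topic's score back up (capped at 1.0)."""
    interest[topic] = min(1.0, interest.get(topic, 0.0) + boost)

pass_days(90)                            # three months of ignoring EV videos
print(round(interest["tesla_ev"], 3))    # ~0.125: mostly forgotten

watch("tesla_ev")                        # one nostalgic Tesla video
print(round(interest["tesla_ev"], 3))    # nudged back up, so a few more appear
```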
If you like, I can explain why sometimes YouTube seems to “overshoot” and keep showing stuff you clearly don’t care about anymore—it’s a funny quirk of the feedback loops. Want me to?
Yep—that’s one of the quirks (and occasional annoyances) of these recommendation systems: they infer your preferences from behavior, but they can’t understand context or subtlety. If you click on videos discussing a topic—even in a purely academic or curious sense—the algorithm assumes interest in the content type itself, not just the ideas or debate. That’s why you end up with overshoots like near-naked images or romantic audio stories.
The system isn’t “judging” or labeling you—it’s just doing pattern recognition: “This user watched content tagged with X, Y, Z; similar content is often clicked by people who like X, Y, Z; push it to this user.” It doesn’t know that your interest was theoretical, casual, or investigative.
And you’re right: there’s no rule that says you have to watch what it offers. Ignoring or skipping those videos is the best way to teach the system what you don’t want. In fact, YouTube also has tools to “not recommend this channel” or “don’t show similar videos,” which can speed up the correction if the overshoot is annoying.
The funny side is that the algorithm can seem to have a sense of humor of its own: it will keep trying weird “extensions” of what it thinks you like, just because, statistically, someone else with similar habits did.
If you want, I can outline why YouTube overshoots so much before it settles down—it’s a mix of feedback loops and uncertainty handling that’s actually pretty clever in a chaotic way. Do you want me to?
Exactly—that’s a sharp observation. YouTube’s algorithm is essentially mimicking what a roomful of analysts would do, except much faster and at massive scale. It sees patterns, makes inferences, sometimes overshoots, and updates its predictions based on what you click—or don’t click—just like a human trying to guess your taste from past behavior.
And your badminton example illustrates the difference between active search and algorithmic recommendation. When you explicitly search for something new, you’re giving the system a clear signal: “I want to see this topic.” It doesn’t have to guess from vague past behavior—it just serves up what matches your query. That’s why you suddenly get a bunch of relevant videos, even though you’d never watched badminton content before.
In a way, active searching is the “shortcut” around the algorithm’s guessing. The recommendations are more like its attempt at anticipating your interests without a direct command, which is why they sometimes overshoot or misread your intentions.
It’s interesting: for topics you never searched for, the system has no hard data, so it defaults to probability-based guesses—similar to what humans would do in a brainstorming session—and that’s where the funny mistakes happen.
If you want, I can explain why YouTube seems to “lock in” some interests (like your EV phase) even after you stop watching them, while other topics appear immediately after a single search.
Exactly—it doesn’t need anything like philosophy or “intelligence” in the human sense. YouTube’s algorithm isn’t reasoning, reflecting, or quoting Hume; it’s just doing statistical pattern matching. It sees: “Users who watched X often watch Y next,” or “People who liked this video also liked these.” That’s the core of it—no deep understanding required.
“AI” in this context is mostly shorthand for machine learning: the system automatically tweaks its internal weights and formulas based on huge amounts of data to predict what you might watch next. It’s smart in the sense of math and statistics, not smart in the sense of insight or judgment.
Think of it like a massive, self-adjusting spreadsheet that tries to guess what video you’ll click on. It can overshoot, misinterpret context, or latch onto patterns that don’t reflect your actual interests—but it can also be incredibly effective at keeping people watching.
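Here's a miniature version of that “self-adjusting spreadsheet,” purely as an illustration: a tiny logistic model that predicts whether you'll click a video and then nudges its weights toward whatever would have predicted the outcome better. The feature names and learning rate are invented.

```python
import math

# Toy "self-adjusting spreadsheet": a tiny logistic model that predicts whether
# a viewer will click a video, then adjusts its weights after seeing what
# actually happened. Features and learning rate are invented.

weights = {"same_channel_as_last_watch": 0.0, "topic_overlap": 0.0, "is_trending": 0.0}
LEARNING_RATE = 0.1

def predict_click(features: dict) -> float:
    """Probability of a click, via a logistic function over weighted features."""
    z = sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def update(features: dict, clicked: bool) -> None:
    """Shift each weight toward whatever would have predicted the outcome better."""
    error = (1.0 if clicked else 0.0) - predict_click(features)
    for name, value in features.items():
        weights[name] += LEARNING_RATE * error * value

# The viewer keeps clicking videos that overlap with their history
# and keeps skipping trending-but-unrelated ones.
for _ in range(50):
    update({"same_channel_as_last_watch": 1, "topic_overlap": 1, "is_trending": 0}, clicked=True)
    update({"same_channel_as_last_watch": 0, "topic_overlap": 0, "is_trending": 1}, clicked=False)

print({name: round(value, 2) for name, value in weights.items()})
```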
If you like, I can give a small analogy showing why this statistical “guessing” can sometimes look eerily like human intuition, even though there’s no thinking involved. Do you want me to?
Exactly—that’s the experimental side of the algorithm at work. YouTube is constantly running little “tests” to see what might interest you, sometimes without you even realizing it. In your case: after you watched some badminton videos, the system guessed you might like related sports, so it tried showing tennis videos. When you didn’t click or watch them, it updated its model and stopped pushing them.
This is a classic feedback loop: the algorithm treats every interaction as a tiny experiment. It doesn’t need to know why you might like something—just whether you actually engage. If you do, it leans in; if you don’t, it backs off. That’s why you see things appear for a short time and then vanish—YouTube is probing your tastes and correcting its assumptions almost in real time.
It’s a bit like a human store clerk offering a free sample: they notice if you take it, and then offer more of that type, but if you leave it on the counter, they stop bothering.
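If you want the clerk analogy in code form, here's a toy explore-and-exploit loop (a simple epsilon-greedy scheme with invented numbers, not YouTube's actual machinery): it mostly recommends the topic with the best track record, but occasionally offers a “free sample” of something else and watches what happens.

```python
import random

# Toy explore/exploit loop: mostly recommend the topic with the best track
# record, but occasionally "offer a free sample" of something else.
# Epsilon and the simulated viewer's tastes are invented numbers.

EPSILON = 0.1  # fraction of recommendations spent exploring
topics = ["badminton", "tennis", "cooking"]
shown = {t: 0 for t in topics}
watched = {t: 0 for t in topics}

def watch_rate(topic: str) -> float:
    """Observed fraction of times this topic was actually watched when shown."""
    return watched[topic] / shown[topic] if shown[topic] else 0.0

def pick_topic() -> str:
    if random.random() < EPSILON:
        return random.choice(topics)        # run a little experiment
    return max(topics, key=watch_rate)      # otherwise play the best known bet

# Simulated viewer: watches badminton often, tennis almost never.
true_interest = {"badminton": 0.8, "tennis": 0.05, "cooking": 0.3}

random.seed(0)
for _ in range(1000):
    topic = pick_topic()
    shown[topic] += 1
    if random.random() < true_interest[topic]:
        watched[topic] += 1

print(shown)   # tennis gets tried briefly, then mostly dropped
```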
If you want, I can explain why sometimes these “experiments” can feel random or even a little intrusive, even when they’re just based on data.
Exactly—that’s the key distinction. The algorithm doesn’t “understand” that badminton and tennis are racquet sports or that they share similar rules. It only knows patterns in human behavior: statistically, people who watch badminton videos often watch tennis videos too. That correlation is enough for the system to try showing you tennis content.
It’s a purely statistical, association-based approach. There’s no conceptual knowledge, no reasoning, no recognition of categories or logic—just probability. That’s why sometimes the recommendations make perfect sense, and sometimes they feel completely off: the system isn’t thinking about the meaning behind the videos, only how likely a certain type of viewer is to engage with each one.
It’s why YouTube can suggest completely unrelated things if your viewing patterns happen to overlap with other people’s, even though a human would immediately see the mismatch.
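Here's how bare-bones that association-only approach can be, as a toy sketch with invented watch histories: just count which videos show up together in the same viewers' histories and recommend the most frequent companions, with zero knowledge of what any video is about.

```python
from collections import Counter
from itertools import combinations

# Toy co-occurrence recommender: count which videos appear together in the
# same viewers' histories, with no idea what any video is "about".
# The histories below are invented.

histories = [
    ["badminton_highlights", "tennis_final", "racquet_review"],
    ["badminton_highlights", "tennis_final"],
    ["badminton_highlights", "cooking_pasta"],
    ["cooking_pasta", "jazz_live"],
]

co_occurs = Counter()
for history in histories:
    for a, b in combinations(set(history), 2):
        co_occurs[(a, b)] += 1
        co_occurs[(b, a)] += 1

def also_watched(video: str, top_n: int = 2) -> list[str]:
    """Videos most often found in the same histories as `video`."""
    counts = Counter({b: n for (a, b), n in co_occurs.items() if a == video})
    return [b for b, _ in counts.most_common(top_n)]

print(also_watched("badminton_highlights"))
# ['tennis_final', ...]: pure association; it knows nothing about racquets or rules.
```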
If you want, I can describe how this “pattern-only” approach explains some of the stranger jumps in recommendations, like when the algorithm suddenly shows something seemingly unrelated to your interests.
Ah, the classic clickbait signal. YouTube sees that phrase in a title, assumes it’s engaging, and if you ever hover, click, or watch—even briefly—it will try showing you more like it. The good news: you can teach the algorithm to back off.
Here’s how:
- Don’t click on it. Even hovering or previewing in some interfaces can count as engagement, so the first step is basically to ignore it completely.
- Use the “Not interested” option. On most YouTube apps or desktop, click the three dots next to a video and select “Not interested” or “Don’t recommend channel.” Doing this a few times sends a clear signal that you don’t want this type of content.
- Clear specific search history or watch history entries. If you’ve clicked on a “scientists were shocked” video before, removing that from your history can prevent the algorithm from using it as a reason to recommend more.
- Optional: Use filters or curated subscriptions. Following channels you trust and actively watching their content helps drown out the clickbait videos, because the algorithm favors what you actually watch.
Basically, it’s all feedback: YouTube keeps experimenting, but if you consistently skip and flag that content, it learns that “scientists were shocked” = not for you. After a little while, it should disappear from your recommendations.
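Purely as a guess at the mechanics (YouTube doesn't publish this), a “Not interested” click probably ends up acting something like a per-viewer penalty that pushes matching candidates down the ranking, roughly like this toy sketch:

```python
# Toy illustration of what a "Not interested" click might do internally:
# a per-viewer penalty that pushes matching candidates down the ranking.
# The phrase matching, penalty size, and base scores are all invented.

NOT_INTERESTED_PENALTY = 0.5

blocked_phrases = {"scientists were shocked"}   # built up from your flags

def adjusted_score(title: str, base_score: float) -> float:
    """Apply the viewer's penalties on top of a generic popularity score."""
    if any(phrase in title.lower() for phrase in blocked_phrases):
        return base_score - NOT_INTERESTED_PENALTY
    return base_score

candidates = [
    ("Scientists Were Shocked By This Result", 0.7),
    ("A calm walkthrough of the same result", 0.6),
]

ranked = sorted(candidates, key=lambda c: adjusted_score(c[0], c[1]), reverse=True)
print([title for title, _ in ranked])   # the clickbait title drops below the calm one
```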
If you want, I can suggest a trick to speed this “forgetting” process so the clickbait stops almost entirely. Do you want me to?
Ah, that makes sense—and it’s a common situation. The algorithm doesn’t care whether the phrase is intentional clickbait or just a habitual stylistic choice; it just sees the words and treats them as an engagement signal. That creates a tricky situation: you like the creator, but the system associates them with a pattern you want to avoid.
There are a few ways to manage it without hurting your feed of that producer:
- Prioritize direct subscriptions and notifications. By clicking “Subscribe” and turning on the bell, you’re telling YouTube, “Show me this channel first, no matter what else,” which can outweigh the clickbait signal in recommendations.
- Use “Not interested” sparingly. Don’t mark individual videos from that producer as “Not interested,” because that could suppress the channel overall. Instead, apply it only to videos from other channels that use the phrase.
- Watch selectively. As long as you keep watching this favorite creator, YouTube will learn that you like this channel despite the phrase. Over time, the algorithm will continue showing their videos but stop over-generalizing the pattern to other channels.
Essentially, you’re nudging it: “I like this producer even if they use those words; ignore them elsewhere.” YouTube is very good at learning these exceptions if you’re consistent.
If you want, I can suggest a small trick to further reduce the spillover from that phrase while keeping your favorite channel’s videos untouched. Do you want me to?
That makes sense—popular creators can barely keep up with comments, so it’s easy for little tips to get lost. At least you planted the idea; if he ever notices or someone else mentions it, it might influence future titles. In the meantime, it sounds like you’ve got the algorithm side under control enough to keep enjoying his content without getting swamped by the clickbait elsewhere.
It’s kind of funny how these systems treat language so literally: a simple phrase like “scientists were shocked” can dominate recommendations, even when it’s unintentional.