Every few months, another product you rely on ships an AI feature that lands with a thud. An AI-powered inbox sorter. A “smart” meeting summarizer that misses the point of every meeting. A writing assistant that makes your writing sound like everyone else’s. You didn’t ask for any of it, and you won’t use most of it.

This isn’t incompetence. It’s a rational response to a specific set of incentives. Once you see the pattern, you’ll recognize it everywhere.

1. Investor Signaling Is the Real Product Spec

Public tech companies live and die by forward-looking narratives. When a company’s growth is slowing, its stock price isn’t just reflecting current revenue; it’s reflecting what analysts believe the company will be worth in three to five years. Shipping AI features, even bad ones, sends a signal: we are not a legacy company. We are participating in the next cycle.

This is why you saw a wave of AI feature announcements clustered tightly together in 2023 and 2024, regardless of product fit. The audience for those announcements wasn’t users. It was the analyst community and institutional shareholders. Microsoft Copilot is the clearest example of this dynamic, but it’s far from the only one. When investor relations drives the roadmap, you get features that look good in earnings calls and feel hollow in actual use.

2. Competitive Fear Compresses the Time Available for User Research

Product teams at large companies spend significant time on user research before shipping. That timeline collapses under competitive pressure. When one major player ships an AI feature, every competitor’s leadership asks the same question within days: when are we doing this?

The result is a sprint to build something shippable rather than something useful. Teams skip the discovery phase, where you learn whether users actually have the problem you’re solving. They jump straight to delivery. This isn’t laziness. It’s the rational response to an environment where being second feels existentially threatening, even when the first mover built something nobody uses.

You end up with features that solve a problem the company imagined users had, based on a competitor’s bet, rather than any actual signal from the people who pay for the product.

[Illustration: a conveyor belt. The feature factory runs whether or not there is demand at the other end.]

3. Usage Metrics Don’t Catch Bad Features Fast Enough

Software teams measure adoption through usage data. The problem is that a feature can show healthy early adoption simply because users are curious. People try the AI writing assistant once. They try the smart summarizer. The numbers look fine in the first quarter after launch.

By the time data shows that retention on those features is poor, or that users actively disable them, the team that built the feature has often moved on. Quarterly planning cycles have closed. Resources have shifted. The bad feature becomes a permanent fixture, quietly unused, because removing it requires more organizational will than shipping it did.
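The lag described above is easy to see in a toy calculation. All numbers here are invented for illustration, not drawn from any real product:

```python
# Hypothetical monthly active users of a new AI feature, months 1-6
# after launch. Month 1 is inflated by curiosity-driven first tries.
monthly_users = [40_000, 12_000, 5_000, 2_500, 1_800, 1_500]

# The launch-quarter dashboard only sees the first three months,
# where the curiosity spike dominates the average.
launch_quarter_avg = sum(monthly_users[:3]) / 3  # 19,000: looks healthy

# Retention tells the real story: the share of month-1 users
# still active by month 6.
month6_retention = monthly_users[5] / monthly_users[0]  # 0.0375

print(f"Launch-quarter avg MAU: {launch_quarter_avg:,.0f}")
print(f"Month-6 retention of launch cohort: {month6_retention:.1%}")
```

By the time the retention number surfaces, two planning cycles have closed and the team has moved on, which is exactly the window the article describes.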

This is how products accumulate AI features that nobody uses but nobody removes. The cost of building was visible. The cost of keeping something broken is distributed and invisible.

4. Enterprise Sales Teams Have Enormous Influence Over Roadmaps

For companies with significant enterprise revenue, features often exist because a sales team promised them to a prospect during a competitive deal. Not because users across the base requested them. Not because research validated the need. Because one salesperson, trying to close a contract, said “we have that” or “that’s on the roadmap.”

AI features are particularly vulnerable to this dynamic right now because enterprise procurement teams are under pressure from their own leadership to show they are evaluating AI solutions. They ask vendors whether their products include AI capabilities. Sales teams say yes. Product teams get the resulting requirement. The feature gets built for a checklist, not a workflow.

You, as an individual user, never needed this feature. An enterprise buyer needed it to exist so they could check a box on a vendor evaluation form.

5. The Cost of Building Has Dropped, Which Removes a Useful Filter

Historically, the expense of software development acted as a natural forcing function. Teams had to prioritize ruthlessly because engineering time was scarce. If you only had bandwidth to build three features, you spent real energy figuring out which three were worth building.

AI development tooling has lowered the cost of shipping certain kinds of features dramatically. You can integrate a third-party model, wrap a reasonable interface around it, and ship something in weeks rather than months. This is genuinely useful when the underlying feature is valuable. It becomes a problem when the only thing stopping a bad idea was the cost to build it.

Lower costs remove friction from bad decisions as effectively as they remove friction from good ones. The filter is gone. Product teams can now ship AI features that would never have survived a proper cost-benefit analysis, because the cost side of the equation has shrunk without the benefit side growing.

6. Defaults Lock In Behavior Before Anyone Evaluates Value

Many AI features get shipped as opt-out rather than opt-in. They appear in your interface on day one. You have to take deliberate action to disable them. Most users don’t. This inflates engagement numbers, which then justify the feature’s continued existence and future investment.

Software companies understand exactly how powerful defaults are. Setting an AI feature as default isn’t just a UX decision. It’s a strategy for manufacturing apparent demand. If a feature appears in your workflow automatically, and you use it once without thinking about it, you’ve just become part of the data that proves people want it.
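The scale of the default effect is worth sketching with a back-of-the-envelope calculation. The numbers are invented assumptions, not measured figures:

```python
# Hypothetical user base and interest rates, chosen only to
# illustrate how default-on inflates apparent adoption.
users = 1_000_000
organic_interest = 0.05   # assumed share who would seek the feature out
incidental_use = 0.60     # assumed share who trigger a default-on
                          # feature at least once before disabling it

# Opt-in: only genuinely interested users ever touch the feature.
opt_in_adoption = users * organic_interest    # 50,000

# Opt-out: the feature fires automatically in everyone's workflow.
opt_out_adoption = users * incidental_use     # 600,000

print(f"Opt-in 'adoption':  {opt_in_adoption:,.0f}")
print(f"Opt-out 'adoption': {opt_out_adoption:,.0f}")
```

Same underlying demand, a twelvefold difference in the number that reaches the engagement dashboard.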

The practical implication for you: audit the AI features running in your tools right now. Check your settings. Most products that have shipped AI capabilities in the last two years have enabled at least some of them by default. You may be feeding a usage metric for a feature you’ve never consciously chosen to use.

What to Actually Do With This

Understanding the incentives doesn’t mean resigning yourself to cluttered, AI-stuffed software. A few things you can do right now:

First, treat new AI features as opt-in regardless of how they ship. Disable them until you have a specific problem you want them to solve. This protects your workflow and starves bad metrics.

Second, when evaluating new tools, ask whether the AI features are central to the core use case or bolted on. Central means the product was designed with the capability in mind. Bolted on means someone added it to a checklist.

Third, if you’re building software, push back hard on features whose primary audience is investor relations or a sales checklist. That pressure will always exist. Your job is to name it clearly when it’s happening, so the team can make the tradeoff consciously rather than pretending it’s a user need.

The features nobody asked for aren’t accidents. They’re the predictable output of a system where the people shipping the feature and the people using it answer to completely different audiences.