Your Product Does More Than Your Users Think

By Alon Binman

When I was a Solution Architect at Mixpanel, I sat with hundreds of product teams who wanted the same thing: to understand how users were actually moving through their product.

The answer was almost always the same, and almost always uncomfortable.

A small set of features did most of the work. Everything else — the features teams had spent quarters building, launching, and announcing — sat mostly unused. Not because the features were bad. Because users never found them, never understood them, or never had a reason to try.

I'd watch product leaders stare at those dashboards, already doing the math in their heads: we built all of this, and they're using 10% of it.

That pattern isn't unique to Mixpanel customers. It's the default state of B2B SaaS.

The feature adoption ceiling is lower than most teams realize

The data is consistent across every major adoption analytics vendor.

Pendo's 2024 benchmarks show that just 6.4 out of every 100 features account for 80% of click volume across their customer base. The remaining 93.6% of features compete for the other 20%. And Pendo was saying the same thing in 2019 — back then, the finding was that 80% of features were rarely or never used. Five years later, the shape of the curve is unchanged.

Userpilot's 2024 product benchmarks put the mean core feature adoption rate across B2B SaaS at 24.5%. And that's for core features, the ones product teams explicitly identified as central to the product's value. Three out of four users, on average, don't use them.

These numbers aren't describing broken products. They're describing the baseline. If your analytics look like this, you're not failing — you're normal. Which is exactly the problem.

Users renew on the value they experienced, not the value you built

This is where the adoption gap stops being a product metric and starts being a revenue metric.

Amplitude's retention analysis across their customer base found that users who adopt 70% or more of a product's core features are roughly twice as likely to stay as users who don't. That's not a surprising result. It's the mechanical consequence of how software value works: a user who's only using the onboarding flow and one core report is evaluating the smaller version of your product at renewal — because that's the only version they've ever seen.

You built a product that solves ten problems. They're using it to solve two. When the renewal conversation comes, they're weighing the cost against the two. The other eight don't exist to them.

The companies that retain and expand best aren't the ones with the best products. They're the ones that close the gap between the product that was built and the product that gets experienced. That distinction is almost always invisible in engagement metrics. A user who logs in every day and uses 15% of the product looks identical to a user who logs in every day and uses 80% of it — until one renews and the other doesn't.

The gap is widening, and it's not slowing down

Here's what changed in the last eighteen months.

AI-assisted development — Cursor, Claude Code, Copilot, and the ecosystem around them — compressed how long it takes to build and ship a feature. Teams that used to ship monthly ship weekly. Teams that shipped weekly ship in days. The output velocity is genuinely unprecedented.

But shipping velocity and discovery velocity are two different curves. The first one got much steeper in 2024. The second one didn't move.

The in-product guidance layer that was supposed to help users keep up — tours, tooltips, checklists, announcement modals — still relies on someone manually building, targeting, and maintaining every piece. A feature shipped on Tuesday can take weeks to get proper in-product guidance, if it gets any at all. Meanwhile, the product has moved on to the next three features.

The 6.4-out-of-100 pattern didn't appear because users are lazy. It appeared because discovery infrastructure was built for a world where products changed slowly. That world is gone, and the gap it left behind is widening every sprint.

The right question isn't "are users happy"

Most teams evaluate adoption through the proxies they have: NPS, CSAT, weekly active users, time in app. Those tell you whether the users you're retaining are satisfied with what they're doing.

They don't tell you what those users aren't doing.

A user can be satisfied and still using 15% of your product. They can be active every day while working around problems that features they've never discovered already solve. Engagement metrics can't separate "using the product fully" from "using the part they figured out in week one and nothing else."
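If you want to see the difference in your own data, one rough way is to track feature breadth alongside activity. Here's a minimal sketch, assuming a flat event log with user, feature, and day fields; the field names, the toy data, and the notion of ten "core" features are all illustrative, not a standard metric:

```python
from collections import defaultdict
from datetime import date

# Toy event log: (user_id, feature, day). In practice this would come from
# your analytics export (Mixpanel, Amplitude, a warehouse table, etc.).
events = [
    ("u1", "report_basic", date(2024, 5, 1)),
    ("u1", "report_basic", date(2024, 5, 2)),
    ("u1", "onboarding",   date(2024, 5, 2)),
    ("u2", "report_basic", date(2024, 5, 1)),
    ("u2", "alerts",       date(2024, 5, 1)),
    ("u2", "api_keys",     date(2024, 5, 2)),
    ("u2", "exports",      date(2024, 5, 2)),
]

TOTAL_CORE_FEATURES = 10  # however many features you consider "core"

active_days = defaultdict(set)    # user -> days with any activity
features_used = defaultdict(set)  # user -> distinct features touched

for user, feature, day in events:
    active_days[user].add(day)
    features_used[user].add(feature)

for user in sorted(active_days):
    breadth = len(features_used[user]) / TOTAL_CORE_FEATURES
    print(f"{user}: {len(active_days[user])} active days, "
          f"breadth {breadth:.0%}")
```

Both users show two active days. One has touched 20% of core features, the other 40%. Identical engagement, very different adoption, and a DAU chart will never show you the difference.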

The question that actually predicts retention is harder to measure with traditional tools: do users know what they're missing? And the honest answer, most of the time, is no — because nothing in the product is set up to tell them.

Closing that gap at scale isn't a content problem. You can't write your way out of it. It's a systems problem: the product has to recognize, in real time, when a user is working harder than they need to, and intervene with the right guidance at the right moment. For every user. Every session.
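To make "recognize and intervene" concrete, here's a deliberately toy sketch of the kind of rule such a system might start from. This is not Deway's implementation; the event names, the feature mapping, and the threshold are all invented for illustration:

```python
from collections import deque

# Hypothetical rule: if a user repeats the same manual action several times
# in one session, and a feature exists that automates it, surface guidance.
AUTOMATES = {
    "export_csv_manually": "scheduled_exports",
    "filter_rows_by_hand": "saved_segments",
}
REPEAT_THRESHOLD = 3

class SessionWatcher:
    def __init__(self):
        self.recent = deque(maxlen=20)  # rolling window of recent actions

    def observe(self, action: str) -> str | None:
        """Return a feature worth suggesting, or None."""
        self.recent.append(action)
        feature = AUTOMATES.get(action)
        if feature and self.recent.count(action) >= REPEAT_THRESHOLD:
            return feature
        return None

watcher = SessionWatcher()
for action in ["export_csv_manually"] * 3:
    suggestion = watcher.observe(action)
if suggestion:
    print(f"Nudge: did you know about {suggestion}?")
```

Even the toy version makes the scaling problem visible: someone has to hand-author and maintain that mapping for every feature and every workaround, which is exactly the work that can't keep up with weekly shipping.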

That's the shift we're building Deway around — and it's the reason we think the next generation of adoption tools isn't a better tour builder, but a different category of software entirely.

Your product does more than your users think it does. The job is to close that gap — before they renew on the smaller version of it.


Alon Binman is the co-founder of Deway (deway.ai), an AI-native autonomous adoption layer for SaaS products. Before Deway, Alon spent 15+ years at the intersection of product and customer success, including roles as a Product Manager, founder, data and product strategy consultant, and Senior Solution Architect at Mixpanel. You can reach Alon on LinkedIn.