Finding (and Fixing) Your Funnel Bottlenecks: A Revenue Leader’s Guide

Chris Calkin
5 March 2025

Over my career, I've kept a running list of the metrics that have mattered. By "mattered," I mean that at some point, the crux of a big issue or success (pipeline generated, revenue earned, CAC, etc.) came down to that metric. This tracking helps ensure we're looking at the right data when we need to diagnose problems, rather than wasting time on metrics that don't impact outcomes.

The challenge is that each business leader tends to focus on metrics they understand. Marketing looks at traffic, SEO, ad spend, backlinks, and conversion events. Sales watches activity metrics and win rates. Customer Success tracks time to value and renewals. Everyone says they care about the full funnel, but most people stick to what they know.

A successful CRO needs to be that full-funnel leader. That means breaking out of these silos to find the true bottlenecks in your funnel. Here's my approach:

The Metrics That Actually Matter

The metrics conversation can escalate quickly, with engaged contacts, MQLs, SQLs, win rates, and other acronyms swirling around. The most important step is to filter out the noise and focus on metrics that truly matter. Ultimately, these are the ones I’ve personally seen drive the greatest impact.

1. Total accounts engaged (either visited the website or talked to a salesperson)

Clear, unbiased indication of how many companies are engaging with you.

2. Quality pipeline (opportunities that advance to 30%+ likelihood to close)

Reliable pipeline number that is universally agreed upon and can't be inflated or deflated.

3. Total closed won

The good stuff. Some will care more about $$, others about quantity, depending on business stage.

4. Net Revenue Retention

Are your customers net expanding or contracting?

5. CSAT/NPS

Are your customers getting what they want/willing to advocate for you?

6. Magic Number ((Current ARR - Prior Period ARR) / Prior Period S&M Spend)

How efficiently are you converting sales and marketing spend into revenue?

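
To make the two formula-driven metrics above concrete, here is a small sketch with hypothetical figures (all numbers invented for illustration):

```python
# Hypothetical figures for illustration only.
prior_arr = 8_000_000       # ARR at the start of the period
current_arr = 10_000_000    # ARR at the end of the period
prior_sm_spend = 2_500_000  # S&M spend in the prior period

# Magic Number: new ARR generated per dollar of prior-period S&M spend.
magic_number = (current_arr - prior_arr) / prior_sm_spend
print(f"Magic Number: {magic_number:.2f}")  # 0.80

# Net Revenue Retention: revenue movement within the existing customer base,
# (starting ARR + expansion - contraction - churn) / starting ARR.
starting_cohort_arr = 8_000_000
expansion, contraction, churn = 1_200_000, 300_000, 500_000
nrr = (starting_cohort_arr + expansion - contraction - churn) / starting_cohort_arr
print(f"NRR: {nrr:.0%}")  # 105%
```

With these made-up inputs, $1 of prior-period S&M spend produced $0.80 of new ARR, and the existing base is net expanding at 105%.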
What's useful about these metrics is they tend not to be biased and give you a complete sense of the GTM funnel. I can mess with my win rate as a sales leader by adjusting qualification standards. I once watched a sales leader celebrate a win rate that shot from 22% to 41% in a single month. The champagne came out until we discovered they'd simply instructed the team to mark any challenging deals as "unqualified" rather than "closed lost." Same sales performance, totally different metric.
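
The qualification trick is easy to reproduce with a toy example (counts invented for illustration): moving tough deals from "closed lost" to "unqualified" shrinks the denominator without a single extra win.

```python
won, lost = 11, 39  # invented deal counts

def win_rate(won, lost):
    """Win rate over qualified, closed opportunities."""
    return won / (won + lost)

print(f"Before: {win_rate(won, lost):.0%}")  # 22%

# Reclassify 23 tough deals as "unqualified" -- they vanish from the
# denominator, even though sales performance is completely unchanged.
reclassified = 23
print(f"After:  {win_rate(won, lost - reclassified):.0%}")  # 41%
```

Same 11 wins either way; only the bookkeeping moved.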

Finding Real Bottlenecks in Four Steps

1. Data Source Validation

The first thing I ask is: where is this data from and is it consistent with how we've historically measured it?

If I'm looking at pipeline generation week over week and notice a 30% drop, the issue might not be performance—it could simply be a reporting hiccup. Someone changed a filter, and suddenly we're comparing apples to orangutans.

Example: CSAT Survey Changes. A client's CSAT score suddenly fell from 4.8 to 3.6. The support team was getting grilled until someone realized they had recently changed their survey targeting from "users who completed onboarding" to "all users, including those who abandoned."
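
The drop falls straight out of the survey mix, with no support regression required. A sketch with assumed sub-scores (the 4.8 for onboarded users matches the story above; the 2.0 for abandoners and both respondent counts are invented):

```python
# Assumed sub-populations (illustrative numbers only).
onboarded_avg, onboarded_n = 4.8, 400   # old survey audience
abandoned_avg, abandoned_n = 2.0, 300   # newly included audience

blended = (onboarded_avg * onboarded_n + abandoned_avg * abandoned_n) / (
    onboarded_n + abandoned_n
)
print(f"Blended CSAT: {blended:.1f}")  # 3.6
```

The onboarded users are still just as happy; the average simply includes a new, unhappier population.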

File:Paris Tuileries Garden Facepalm statue.jpg - Wikipedia

This is why it's also important to have a source of truth. When I look at LinkedIn conversions in their UI, my ROAS is going to look vastly stronger than if I look at an unbiased platform where I've defined attribution consistently. In my experience, the data warehouse (Snowflake, Databricks, BigQuery, etc.) is the best source of truth, but the data still needs to be made available in the systems of action (HubSpot, Salesforce, Zendesk, etc.).

Quick Hits: Data Source Validation Checklist

  • Document exactly how each metric is calculated and from what source
  • Verify tracking code implementations regularly (monthly at minimum)
  • Do spot-checks comparing system-generated numbers with manual calculations
  • When a metric changes dramatically, first check if any reporting parameters changed
  • Create a change log for any modifications to report definitions or data collection
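
The spot-check bullet can be as simple as recomputing a metric from raw rows and comparing it to the dashboard value within a tolerance (all names and numbers below are placeholders):

```python
# Pretend rows pulled directly from the warehouse.
raw_opportunities = [
    {"stage": "closed_won", "amount": 50_000},
    {"stage": "closed_won", "amount": 30_000},
    {"stage": "closed_lost", "amount": 20_000},
]

manual_closed_won = sum(
    o["amount"] for o in raw_opportunities if o["stage"] == "closed_won"
)
dashboard_closed_won = 80_000  # value shown in the BI tool

# Flag drift beyond 1% -- usually a filter or definition changed somewhere.
drift = abs(manual_closed_won - dashboard_closed_won) / dashboard_closed_won
assert drift <= 0.01, f"Dashboard drift {drift:.1%}: check report filters"
```

When the assertion fires, the first suspect is a changed reporting parameter, not sales performance.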

2. Root Cause Investigation

Once you’ve verified your data is accurate, it’s time to find out why something is happening. Start with a systematic breakdown and don't forget human error.

Example: The European "Growth" Mirage. Earlier in the summer, we'd seen European signups jump 40%. It was initially exciting, until we noticed an unusual spike in Gmail addresses rather than business domains. The cause? Summer school programs having students create free accounts. Without investigation, we might have poured resources into what wasn't actual market growth.
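
A first-pass filter for this kind of mirage is to split signups into business versus free-mail domains and watch the share (addresses below are made up):

```python
FREE_MAIL = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

# Invented signup emails for illustration.
signups = ["ana@gmail.com", "ben@acme.io", "cai@gmail.com",
           "di@globex.com", "ed@gmail.com"]

free_share = sum(
    1 for s in signups if s.split("@")[1] in FREE_MAIL
) / len(signups)
print(f"Free-mail share: {free_share:.0%}")  # 60%
```

A sudden jump in free-mail share is a cheap early warning that the "growth" isn't coming from buyers.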

To find root causes effectively:

  1. Break down the metric by every possible dimension (region, segment, channel, etc.)
  2. Look for patterns or anomalies that stand out
  3. Compare to historical trends and identify exactly when changes occurred
  4. Examine what else changed around that time (processes, tools, team members)
  5. Get subjective context from the team - even though it can't be fully relied on, it can point you in the right direction
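
Step 1 is mostly mechanical: group the metric by each dimension and compare periods side by side. A minimal sketch with invented records:

```python
from collections import defaultdict

# Invented signup records with a couple of dimensions.
signups = [
    {"region": "EU", "channel": "organic", "period": "this_week"},
    {"region": "EU", "channel": "organic", "period": "this_week"},
    {"region": "US", "channel": "paid", "period": "this_week"},
    {"region": "EU", "channel": "organic", "period": "last_week"},
    {"region": "US", "channel": "paid", "period": "last_week"},
]

def breakdown(rows, dimension):
    """Count rows per dimension value, split by period, for comparison."""
    counts = defaultdict(lambda: {"this_week": 0, "last_week": 0})
    for r in rows:
        counts[r[dimension]][r["period"]] += 1
    return dict(counts)

for dim in ("region", "channel"):
    print(dim, breakdown(signups, dim))
```

In practice you'd run this across every dimension you track; the segment whose period-over-period delta is out of line with the rest is where to dig.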

3. Business Impact Assessment

Usually, if you're investigating a metric, you already think it's having a negative impact. But this step is about double-checking whether the situation is actually better than it appeared, has no real impact, or is much worse than you thought.

Strategic Win Rate Targets: I purposely target new business win rates between 20 and 30 percent. That band gives me the best balance between engaging buyers early in the funnel and pursuing deals that are likely to close. If the win rate drops or rises, I check whether we're simply engaging earlier (generally good) or there's a real problem with sales effectiveness (bad).

Example: Role Transition Impact. I recently transitioned our CSMs to AMs and gave them a net revenue retention target. If I look at average performance per quota-carrying rep now, it looks worse than before. But those roles were previously pure overhead; now they're contributing to revenue at the same cost. What initially looks like a performance decline is actually increased efficiency.
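
The denominator effect is worth working through with hypothetical headcounts (all figures invented):

```python
# Hypothetical before/after figures.
ae_revenue, ae_count = 10_000_000, 10  # AEs before the change
am_revenue, am_count = 2_000_000, 5    # former CSMs, now quota-carrying AMs

before_avg = ae_revenue / ae_count                              # $1.0M per rep
after_avg = (ae_revenue + am_revenue) / (ae_count + am_count)   # $0.8M per rep

# Per-rep average drops 20%, yet total revenue is up $2M on the same
# payroll -- the 5 AMs were previously cost-only headcount.
print(f"{before_avg:,.0f} -> {after_avg:,.0f}")
```

Judging the change by per-rep average alone would punish a move that improved overall efficiency.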

Quick Hits: Business Impact Assessment Framework

  • Always trace the financial impact of any metric change (revenue, cost, margin)
  • Create “so what” analyses that connect operational metrics to business outcomes
  • Consider when something requires action, further monitoring, or can be ignored
  • Distinguish between leading indicators (require action) and lagging indicators (require analysis)
  • Check if organizational changes have altered how metrics should be interpreted
  • Compare the impact across customer segments to identify specific problem areas

4. Action Planning with Accountability

The most important step: what are we going to do about it and who is going to do this work?

Example: Website Traffic Recovery. When we identified tracking issues affecting our website metrics, we assigned our marketing ops lead to implement weekly automated checks with specific deadlines. They created alerts for unusual patterns and a dashboard showing device-specific trends, preventing similar issues in the following quarters.
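
Those weekly automated checks don't need anything fancy; a simple week-over-week threshold alert catches most tracking breaks. A sketch (thresholds and numbers are illustrative, not what the team above used):

```python
def check_metric(name, history, drop_threshold=0.3):
    """Alert if the latest value fell more than drop_threshold vs the trailing average."""
    *past, latest = history
    baseline = sum(past) / len(past)
    drop = (baseline - latest) / baseline
    if drop > drop_threshold:
        return f"ALERT {name}: down {drop:.0%} vs trailing avg"
    return f"OK {name}"

weekly_visits = [10_400, 9_900, 10_100, 6_200]  # invented weekly totals
print(check_metric("website_visits", weekly_visits))
```

Run against each critical metric on a schedule, this turns "we noticed weeks later" into a same-day ping.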

The work needs a deadline, but most success comes from following up on that assignment personally. I use Linear for tracking these initiatives because it forces clarity on deliverables and makes follow-up concrete.

Quick Hits: Effective Action Planning

  • Assign a single owner (never a team) with clear decision-making authority
  • Set specific, measurable outcomes, not just activities
  • Establish firm deadlines with interim checkpoints
  • Use a clear, repeatable format for solving common issues; it improves operational efficiency
  • Schedule personal follow-ups rather than relying on status reports
  • Document the change and when necessary, clearly inform all impacted stakeholders

The Right Cadence Matters

One of the worst habits is using data sporadically. Executives tend to fall into one of two camps: dashboard addicts or quarterly tourists.

I do daily checks of the big metrics (15 minutes tops) and dig into any that look funky. Then I hold weekly reviews of all metrics. This rhythm helped us catch that new trials weren't routing correctly—we fixed it in hours instead of discovering it weeks later when our pipeline was empty.

Quick Hits: Establishing an Effective Review Cadence

  • Block 15 minutes every morning for critical metric scanning
  • Create a rotation schedule for deep-dive topics (don't always focus on the same area)
  • Set clear agendas and pre-read requirements for cross-functional reviews
  • Distinguish between operational reviews (tactical) and strategic reviews (directional)
  • Implement a “metrics moment” at the start of every team meeting
  • Create a decision log that connects metric insights to actions taken

Make Data Accessible

McKinsey research shows we spend about a fifth of our time just searching for information. That's insane.

At Census, we’ve always had data accessible to everyone in their system, but we’ve gone beyond this with proactive notifications so teams have data not just where they need it but when they need it.

At a previous company, we connected our usage data in Looker with critical CRM information. This seemingly simple integration improved our self-serve to sales revenue growth by about 25%.

Quick Hits: Making Data Truly Accessible

  • Invest in a self-service analytics layer (like Hex, Mode, Looker) that doesn’t require SQL knowledge
  • Create a metrics glossary with clear definitions everyone agrees on
  • Build templated dashboards that answer the most common business questions
  • Ensure your team has invested in ETL and Reverse ETL technology
  • Make sure everyone who needs data understands how to access and filter it without help
  • Make your data warehouse the single source of truth, then sync to tools

Build a Metrics Culture

Your team will naturally focus on the metrics they’re compensated for while ignoring others. I’ve found it effective to start with their core purpose, then have them map all the factors that impact their success and all the areas they influence.

This exercise helps them see the interconnections. A BDR who understands how their qualification process affects not just their MQL-to-SQL conversion but also downstream win rates and even customer activation times will make better decisions than one who only watches their meeting booking numbers.

Quick Hits: Developing a Metrics-Driven Culture

  • Ensure compensation is team-specific but always aligned with larger business metrics
  • Consistently share metrics with the entire team and get their input on how they relate
  • Have your teams always present with metrics, whether they are ICs or managers

The Bottom Line

I've found that businesses that win are the ones that can quickly identify bottlenecks, understand what's causing them, and take action to fix them. This isn’t just about having fancy tools—it’s about starting with metrics that actually matter, reviewing them consistently, and building a culture where your team acts on data instead of just looking at it.

Quick Hits: Getting Started Today

  • Map out your core metrics AND all adjacent metrics, doing an inventory of what you have, where it’s found, and how it’s defined
  • Schedule your first 15-minute daily metrics scan for tomorrow morning
  • Make sure you have a system for logging and acting on errors, inconsistencies, or issues with a reasonable SLA
  • Document any metrics inconsistencies you discover between teams or tools
  • Create a simple dashboard with just your core metrics to share with leadership