Drowning In Tickets? It's time to talk about the last mile transform

Ellen Perfect
3 March 2025

Data teams are drowning

Every week, I see a new Reddit thread from a data engineer asking: is this normal? Does anybody else feel this way? They talk about feeling overwhelmed, with too much to do across too broad a mandate. And overwhelmingly, they get told that they need to say no more often and manage stakeholders better.

And that’s sound advice for somebody who wants to make it in business. But there’s a deeper problem here.

  • A 2020 survey found that 97% of data teams are strapped for capacity, with many taking on responsibilities outside of their job descriptions
  • A 2024 survey found that 68% of data workers are overwhelmed by the sheer friction of accessing and managing data platforms
  • That same 2024 survey found that data workers are spending over 60% of their working time dealing with data requests

Data teams are systemically overwhelmed, and a lot of that burden is coming from too many data requests that are too high-friction to fulfill. 

Data is a product manager now - and people want end-to-end products

The role of the data team has evolved in recent years. Many companies are adopting the mentality that the data team functions as an internal product manager, delivering solutions (data) at scale to stakeholders.

But if this is the case, they face a problem that’s very familiar to any PM at a growing company: the tradeoff between customization and capacity. Data work takes a whole lot of effort, but product managers only get credit for tangible outputs they ship that solve a problem:

  • Nobody really cares what’s going on in the DWH - they only care about how much effort they have to put in to use data
  • Your executives don’t care about the models you write or the time you spend perfecting them for stakeholders - they care about the revenue attached to the campaigns your data feeds

Usable data means something different to every person and platform

“Wouldn’t it be cool if…”, “How difficult would it be to…”. These are common questions for anybody dealing with data tickets. If you work at a company with a strong data culture, your stakeholders are always going to be looking to push the limits of what’s possible with data. And that’s a good thing.

But the tradeoff is: everyone’s innovative idea will be different. A column in the right format, a cleanup of data in the CRM, adding a rollup calculation to feed marketing automations. All relatively small changes, but all unique tickets that can add up.

People and platforms each have specific needs related to the actual columns that get activated, and the case structures and formats that they require. And when the specific need isn’t met, you haven’t delivered a complete product.
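As an illustration of what "specific needs" looks like in practice, here is a minimal Python sketch of the same customer record being reshaped for two destinations. The function and field names are hypothetical, chosen only to show how case structures and formats diverge per platform:

```python
# Illustrative sketch: the same customer record often needs
# platform-specific last-mile formatting. Names are hypothetical.

def format_for_crm(record: dict) -> dict:
    """Suppose the CRM wants Title Case names and digits-only phones."""
    return {
        "name": record["name"].title(),
        "phone": "".join(ch for ch in record["phone"] if ch.isdigit()),
    }

def format_for_email_tool(record: dict) -> dict:
    """Suppose the email platform wants a first name, a +-prefixed
    phone number, and an uppercase country code."""
    digits = "".join(ch for ch in record["phone"] if ch.isdigit())
    return {
        "first_name": record["name"].split()[0].title(),
        "phone": f"+{digits}",
        "country": record["country"].upper(),
    }

record = {"name": "ada lovelace", "phone": "44 (20) 7946-0958", "country": "gb"}
print(format_for_crm(record))         # {'name': 'Ada Lovelace', 'phone': '442079460958'}
print(format_for_email_tool(record))  # {'first_name': 'Ada', 'phone': '+442079460958', 'country': 'GB'}
```

Each function is trivial on its own; the burden is that every new platform or stakeholder adds another one, and each arrives as its own ticket.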

Data teams lose time throughout the data lifecycle - but rarely look at the full picture

To talk about how to ease the burden on data teams, we need to talk about what’s slowing them down. The common ones we see are:

  • Overly complex data stacks: Having too many tools to integrate and maintain takes a lot of time and a lot of people. Navigating them to find good data and port it to places where it’s usable is a major drain on time. Gartner even estimates that 60% of a data team’s time is wasted looking for data.
  • Lack of data quality: Even if all the data is easily accessible, if it isn’t unified into trusted golden records, teams lose time cleaning and reconciling it before anyone can trust it.
  • The friction in executing: Even with golden records in place, the need to customize and experiment with data will remain. If the only options for working with data are the large-scale platforms where golden records are built, with their strict access limitations and long testing and verification cycles, then the only way to execute on small data requests is through big processes.
  • Sheer volume of requests: As long as data teams are the only access point for executing data tasks, a company’s ability to innovate will only be able to scale linearly with the team’s time.

The go-to solutions ignore real-life friction

This isn’t a new problem. Data teams have been overwhelmed for a while, and the back and forth about which tickets really matter is a huge point of friction between teams. But I see a number of common solutions get recommended that don’t quite meet the mark. 

  • Platform Consolidation is never simple: The frequent argument is that if you can get one tool for everything so that everything is perfectly interoperable, you won’t have to chase data around. The problem is that that’s not how large organizations work. Every platform transition leaves some tech debt behind, every big platform takes work to maintain, and you can’t SaaS your way out of ad hoc needs.
  • Top-down solutions solve strategic problems, not tactical ones: Having a strong data leader and a healthy layer of senior data staff to set priorities and work alongside their stakeholders to plan their data needs is absolutely essential for scaling a data organization. But tickets often come from small requests. I’ve never seen an organization so aligned that it doesn’t have any ad hoc asks or little priority discrepancies. So the solution needs to go deeper than stronger leadership.
  • The human urge to tinker will break process rules: If you’re a well-aligned organization with all the right SaaS tools and all the best governance policies, you still can’t control for all the “quick little experiments” your stakeholders want to explore. Nor should you. Doing cool, experimental work is awesome, and it’s what makes working at a company with a strong data culture great. But a little calculation here and a sync there all add up. Which leads us to:
  • Self-serve strategies degrade your source of truth: The increasingly popular solution is the slew of AI-powered, no-code tools for working with data. These tools are aimed at empowering business users to execute their own data needs. They can be a powerful part of a tech stack, but if they're not all connected and governed, varying enrichments or column additions fracture your view of the customer and cause data disagreement.

Investing in Agility

The modern data stack is designed around the premise that companies need large tech stacks of best-of-breed tools to execute wide-reaching strategic data projects. And it supports that. But big, complex tech stacks built for big, complex projects don’t sustainably support small, low-stakes changes.

But what if not all data work has to be so dense?

We work with a lot of organizations that run lean teams, and that care a lot about efficiency. But they also care about fostering that environment of innovation that powers their competitive edge.

Increasingly, there are large enterprise platforms selling large-scale data unification solutions to solve problems upstream. And more of the marketing automation platforms are offering self-serve tools for business users to segment and enrich downstream.

But in the middle, there's still the barrier of ad hoc work. Here's what we've seen help get companies out of their ticket backlog:

  • AI can help - but it can also distract: AI commands that replace complex SQL or regex can be a lifesaver. But falling for the trap of AI platforms that do everything in a black box can be costly. Stick to tools that allow you to process data via carefully controlled prompts.
  • Use agile formats: We use a Liquid template system for cleaning and formatting data that would otherwise take time. We pass customer events and other larger datasets through our AI as JSON so that we can analyze trends at scale.
  • Keep platforms connected: When you make changes or allow self-service access, sync everything back to the warehouse. Connecting every change to a source of truth and keeping self-service activity within your realm of governance can prevent costly silos and keep you from losing time to lost or rogue data.
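To make the "agile formats" point concrete, here is a minimal Python sketch of the idea behind a Liquid-style filter chain: field cleanups declared as small, pipe-separated templates and applied to JSON events. The filter names and the `apply_template` helper are hypothetical, shown only to illustrate the pattern, not any particular product's implementation:

```python
import json

# Illustrative sketch of a Liquid-style filter chain for declarative
# field cleanup. Filter names are hypothetical.
FILTERS = {
    "strip": str.strip,
    "downcase": str.lower,
    "titlecase": str.title,
    "digits_only": lambda s: "".join(c for c in s if c.isdigit()),
}

def apply_template(value: str, spec: str) -> str:
    """Apply a pipe-separated filter chain, e.g. 'strip | downcase'."""
    for name in (f.strip() for f in spec.split("|")):
        value = FILTERS[name](value)
    return value

# Customer events arrive as JSON and get cleaned field by field.
event = json.loads('{"email": "  Ada@Example.COM ", "phone": "(555) 010-2030"}')
template = {"email": "strip | downcase", "phone": "digits_only"}
clean = {k: apply_template(event[k], spec) for k, spec in template.items()}
print(clean)  # {'email': 'ada@example.com', 'phone': '5550102030'}
```

The appeal of the declarative form is that a one-line template change is a far smaller ticket than a new transformation model, which is exactly the ad hoc middle ground this section describes.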