I recently did something a lot of RevOps leaders are doing right now.

I connected ChatGPT to HubSpot and prompted it to analyze our closed-lost deals from the past 12 months. I was looking for patterns: common objections, blind spots, anything I could use to keep improving our sales motion.

The output was impressive.

Clear themes. Confident conclusions. Exactly the kind of analysis that would normally take hours of spreadsheet work or a half-day deep dive in reporting.

I almost immediately started putting together a shortlist of recommendations to tee up for action.

The Insights Looked Legit

The AI surfaced trends around common loss reasons, deal velocity and where prospects were stalling. Nothing jumped out as obviously wrong – in fact, most of it felt right.

This is where things get dangerous.

When AI gives us nonsense, we question it. When it gives us something plausible, we tend to trust it.

But in my case, one takeaway didn’t sit quite right. As I thought more about it, I realized it lacked context I knew should have been there. So I started poking.

The Data Wasn’t What I Thought It Was

On paper, I had asked AI to analyze “the last 12 months of closed-lost deals.”

In reality, that 12-month window spanned multiple operational changes:

  • We split our pipeline to separate longer-term developing opps
  • Those deals had different stages and sales cycles
  • We integrated an AI notetaker into HubSpot
  • We evolved how loss reasons were captured

None of those changes were wrong, but they meant that a deal closed 11 months ago was captured very differently than a deal closed last month.

To me, that context was obvious. But it was nowhere in the trends the AI produced.
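
To see just how uneven that window was, it helps to tag each deal with the process era it closed under before asking AI anything. Here’s a minimal sketch in Python/pandas, assuming the closed-lost deals have been exported to a CSV; the file name, column names, and breakpoint dates are all hypothetical, not HubSpot’s actual properties:

```python
import pandas as pd

# Hypothetical export of closed-lost deals; column names are illustrative,
# not HubSpot's actual property names.
deals = pd.read_csv("closed_lost_deals.csv", parse_dates=["close_date"])

# Assumed dates for the process changes listed above.
BREAKPOINTS = {
    "pipeline_split": pd.Timestamp("2024-03-01"),
    "ai_notetaker_live": pd.Timestamp("2024-07-15"),
    "loss_reason_revamp": pd.Timestamp("2024-10-01"),
}

def process_era(close_date: pd.Timestamp) -> str:
    """Label a deal with the most recent process change preceding its close."""
    label = "original_process"
    for name, start in sorted(BREAKPOINTS.items(), key=lambda kv: kv[1]):
        if close_date >= start:
            label = name
    return label

deals["process_era"] = deals["close_date"].apply(process_era)

# A naive "last 12 months" pull pools all of these eras together;
# counting by era shows how mixed the sample really is.
print(deals.groupby("process_era").size())
```

Even a simple count by era makes it clear you’re not looking at one homogeneous dataset.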

Confident Output, Inconsistent Inputs

Here’s what actually happened:

Older deals had sparse or unstructured notes. Newer deals had richer, AI-generated summaries. Fields technically existed across the entire dataset — but their meaning had evolved.

So the AI did exactly what it was supposed to do: It found patterns across the data it was given.

The problem was that I had handed it multiple versions of the truth and asked it to treat them as one system. As a result, I got an impressive-looking but ultimately misleading analysis.

Some insights were missing critical context, while others overweighted newer data without saying so. A few would have pushed us toward changes that felt data-backed but would have been strategically wrong.

This Is the Real Risk for RevOps

The danger of running AI on inconsistent data is that it will give you a reasonable answer that’s wrong in ways that are hard to detect unless you know your data history cold.

RevOps leaders live in directional insight. We don’t need perfection to make decisions. We need enough signal to act.

AI accelerates that process. But when your data foundation isn’t consistent, it also accelerates bad decisions.

In my case, it could have meant:

  • Fixing the wrong sales problem
  • Misdiagnosing why deals were lost
  • Reinforcing assumptions instead of challenging them

This Wasn’t an AI Problem

It’s tempting to blame the tool, but as we’ve said before, chances are your AI isn’t hallucinating. The truth is simpler and more uncomfortable:

  • The AI didn’t know our process had changed
  • I hadn’t enforced consistency across time
  • And I hadn’t defined which data was actually “analysis-ready” in my prompt
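
Concretely, that last point now means spelling the history out in the prompt itself, something like (illustrative wording only): “Only analyze deals closed after the loss-reason revamp; earlier deals used a different taxonomy and should be excluded or reported separately.”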

What I Changed After This

A few practical shifts came out of this experience:

  1. I stopped treating time ranges as neutral
    Any meaningful AI analysis now starts with “what changed during this period?”
  2. I defined data breakpoints
    Major process changes are now treated as analytical fault lines that need to be accounted for (see the sketch after this list).
  3. I’m stricter about field meaning
    If a field’s purpose evolves, the historical data gets normalized or excluded.
  4. AI insights are hypotheses, not answers
    If the data foundation isn’t solid, the output doesn’t get operationalized.
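
To make the second and third points concrete, here’s what “normalize or exclude” can look like in practice, continuing the sketch above. The field name, legacy mapping, and breakpoint date are hypothetical:

```python
import pandas as pd

LOSS_REASON_REVAMP = pd.Timestamp("2024-10-01")  # assumed breakpoint date

# Hypothetical mapping from legacy free-text loss reasons to the new picklist.
LEGACY_MAP = {"price": "Budget", "timing": "Timing", "went dark": "No Decision"}

def analysis_ready(deals: pd.DataFrame) -> pd.DataFrame:
    """Keep only deals whose loss_reason values share one definition.

    Deals captured before the revamp are mapped to the new taxonomy
    where possible and dropped where they can't be, so two meanings
    of the same field never masquerade as one.
    """
    deals = deals.copy()
    legacy = deals["close_date"] < LOSS_REASON_REVAMP
    deals.loc[legacy, "loss_reason"] = (
        deals.loc[legacy, "loss_reason"].str.lower().map(LEGACY_MAP)
    )
    return deals[deals["loss_reason"].notna()]
```

The design choice that matters is the last line: anything that can’t be normalized is excluded from that analysis rather than left in to quietly skew it.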

I know – this isn’t glamorous stuff. Way more fun to talk about all the crazy new AI features out there. Nevertheless, all of it matters more now than it did before AI.

AI Raises the Cost of Bad Data

Messy data used to slow reporting. Now it produces fast, convincing answers that feel strategic and sometimes aren’t. We all expect AI to make EVERYTHING fast, so it almost feels unnatural to slow down for manual analysis.

If you’re a RevOps leader using AI inside HubSpot or other CRMs, the question isn’t whether the tools work but whether they deserve to be trusted.

If they can’t be trusted, that’s broken Data Trust.