The CMO-CFO Gap: Building a Business Case for AI Search

Author Perspective

“In my recent product development work interviewing organisations, a widening gap keeps appearing in boardrooms: marketing reports high activity, but finance sees soft outcomes. In an AI-first world, traditional attribution is broken – it fails to ‘see’ the research happening before a site visit. To close this credibility gap, we must stop reporting vanity metrics and start measuring the authority indicators that CFOs will actually fund.”

Key Takeaway

A winning business case reframes performance away from vanity metrics (rankings, impressions) and towards authority indicators that can be defended as pipeline-leading signals.

  • Rankings can hold steady while influence erodes
  • AI summaries reduce clicks and visibility
  • Last-click attribution mislabels AI influence
  • CFOs fund measurable, leading indicators
  • Use CAC, velocity, and quality levers
  • Track authority indicators across the journey
  • Pilot quickly with agreed decision gates
  • Report outcomes in finance language

Outline

  • Why rankings stopped predicting pipeline
  • What “attribution blindness” looks like now
  • The new ROI logic CFOs accept
  • Authority indicators for board reporting
  • A practical 90-day pilot plan
  • Signals, thresholds, and decision gates
  • Common pitfalls that weaken the case
  • How to present this in one page

Introduction

Most marketing teams can explain activity, but fewer can defend impact in terms a CFO will accept. In an AI-influenced discovery environment, this gap widens because traditional attribution often cannot “see” the research that happens before a buyer ever visits your website.

The internal conversation needs to shift from “How are we ranking?” to “Where are we being trusted and cited when buyers ask high-intent questions?” That is the heart of bridging the CMO-CFO gap: replacing vanity metrics with authority indicators that correlate with downstream pipeline health.


Why the CFO stopped believing “rankings”

For years, the implicit deal was simple: invest in SEO, earn rankings, receive clicks, and convert demand. That causal chain is now materially weaker.

Large-scale behavioural research shows that a substantial share of Google searches end without a click to the open web. SparkToro’s 2024 study estimates that, per 1,000 Google searches, only a minority of clicks go to the open web. (SparkToro) Search Engine Land’s coverage explains the practical implication: the search experience increasingly resolves intent on the results page, not on your website. (Search Engine Land)

This shift is amplified by AI-generated summaries. Pew Research Center analysis found that when an AI summary appears, users are less likely to click traditional links than when no AI summary appears. (Pew Research Center) Even where your rankings remain strong, your opportunity to influence a buyer via a site visit can decline.

From a CFO’s perspective, this creates a rational objection: “If rankings do not reliably produce measurable demand, why should we keep funding them as if they do?”


The rep-free reality makes the measurement gap worse

Independently of search changes, the buying process itself has continued moving away from sales-led discovery. Gartner has forecast that the majority of B2B sales interactions will occur in digital channels and has also reported a growing preference for seller-free experiences. (Gartner)

In practical terms, buyers self-educate longer, consult peers and third parties more, and approach suppliers later with firmer preferences. The result is a “dark funnel” problem: critical influence happens before your analytics can see it.

This is why marketing and finance teams increasingly talk past each other:

  • Marketing reports strong activity (impressions, keyword positions, content output).
  • Finance sees softer pipeline and cannot connect spend to outcomes.

Attribution blindness: how AI-influenced demand gets mislabelled

Most organisations still rely on a combination of last-click attribution, channel groupings, and platform-specific reporting. That stack was never perfect, but it becomes actively misleading when buyers get answers inside AI interfaces.

Common symptoms:

  1. More “Direct” traffic that does not behave like Direct
    Spikes in Direct sessions alongside falling Search CTR often indicate untracked influence (for example, AI summaries, private sharing, dark social, internal forwarding).
  2. Assisted conversions grow, but the story is unclear
    Multi-touch reports show “something happened,” but cannot attribute it to specific high-intent questions answered by AI systems or third-party sources.
  3. Brand search stagnation even when awareness is increasing
    Buyers do not always need to search your brand if AI already provides the shortlist.

To a CFO, this looks like measurement weakness, not market evolution. So the business case must start by naming the problem in finance terms: the organisation is flying blind on a growing portion of the journey, which increases forecast risk and waste.


The new ROI logic: three levers CFOs already accept

Instead of trying to “prove” AI influence with a single perfect metric, build the case on levers finance recognises and can model with internal data:

1) Reduce CAC drag

When organic influence declines, organisations typically compensate with paid spend, SDR effort, and longer nurture, which increases blended CAC.

How to model it (simple, defensible):

  • Establish baseline blended CAC (Sales and Marketing cost / new customers).
  • Identify current substitution costs (incremental paid spend, SDR headcount, agency fees).
  • Create a scenario range: “If we restore early-stage influence, what portion of substitution cost can we avoid?”

The CFO does not need certainty. They need a sensible model with transparent assumptions and decision gates.
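The three modelling steps above can be sketched in a few lines. All figures, and the recovery-rate assumptions, are illustrative placeholders for your own internal data, not benchmarks:

```python
def blended_cac(sales_marketing_cost: float, new_customers: int) -> float:
    """Baseline blended CAC: total Sales and Marketing cost / new customers won."""
    return sales_marketing_cost / new_customers

def cac_scenarios(total_cost, new_customers, substitution_cost, recovery_rates):
    """For each assumed recovery rate, estimate the blended CAC if that share
    of substitution cost (incremental paid spend, SDR effort, agency fees)
    could be avoided by restoring early-stage organic influence."""
    return {
        rate: blended_cac(total_cost - substitution_cost * rate, new_customers)
        for rate in recovery_rates
    }

# Illustrative numbers only: £1.2m total S&M cost, of which £300k is
# substitution spend, winning 60 new customers per year.
scenarios = cac_scenarios(1_200_000, 60, 300_000, recovery_rates=[0.1, 0.25, 0.5])
for rate, cac in scenarios.items():
    print(f"recover {rate:.0%} of substitution cost -> blended CAC £{cac:,.0f}")
```

Presenting a range (10%, 25%, 50% recovery) rather than a single number is exactly the "transparent assumptions" framing the CFO is asking for.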

2) Improve sales cycle velocity

When buyers arrive better educated and more confident, cycles tend to compress. Gartner’s research directionally supports the broader shift to digital-first interactions. (Gartner)

How to model it:

  • Baseline average cycle length by segment.
  • Identify the stages most affected by self-education (early discovery, shortlist formation).
  • Measure whether better “pre-sold” inbound improves stage-to-stage conversion and time-in-stage.
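A minimal sketch of that cohort comparison, assuming inbound can be flagged as "pre-sold" in a CRM export (the flag, stage names, and figures here are hypothetical):

```python
from statistics import mean

def stage_metrics(deals):
    """deals: list of dicts with per-stage days and a won flag, e.g.
    {"stage_days": {"discovery": 30, "shortlist": 20}, "won": True}.
    Assumes every deal records the same stages. Returns average
    time-in-stage and the overall win rate for the cohort."""
    avg_days = {stage: mean(d["stage_days"][stage] for d in deals)
                for stage in deals[0]["stage_days"]}
    win_rate = sum(d["won"] for d in deals) / len(deals)
    return avg_days, win_rate

# Hypothetical cohorts: inbound flagged "pre-sold" (arrived educated)
# versus everything else.
pre_sold = [{"stage_days": {"discovery": 18, "shortlist": 12}, "won": True},
            {"stage_days": {"discovery": 22, "shortlist": 15}, "won": False}]
other    = [{"stage_days": {"discovery": 35, "shortlist": 25}, "won": True},
            {"stage_days": {"discovery": 40, "shortlist": 30}, "won": False}]

for name, cohort in [("pre-sold", pre_sold), ("other", other)]:
    days, win = stage_metrics(cohort)
    print(name, days, f"win rate {win:.0%}")
```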

3) Increase conversion quality, not volume

In a zero-click world, raw session counts matter less than intent quality. Your business case should explicitly prioritise:

  • Higher opportunity-to-close rates
  • Higher average deal size (where relevant)
  • Lower sales time per closed deal

This is also where marketing earns credibility: you are not promising “more traffic.” You are committing to “more qualified demand and lower waste.”


Authority indicators: what to measure instead of vanity metrics

Your goal is to define a small set of authority indicators that (1) plausibly lead pipeline and (2) can be monitored consistently.

Use a balanced set that covers discovery, trust, and journey coverage (not just Google):

1) High-intent question coverage

A defined set of high-intent questions (by stage) that you expect buyers to ask.

  • Example: Stage 2 questions often include “cost of inaction”, “business case template”, “ROI model”, “risk and compliance implications”.

Measure: how often your point of view appears in answers (and whether it is cited).

2) Citation presence and quality

Track whether your owned assets (and credible third-party references about you) are used as sources in AI-generated answers.

Measure: citation frequency, consistency, and the credibility tier of co-cited sources (for example, standards bodies, regulators, analyst research, reputable industry publications).

3) Share of voice in AI answers (category-level)

This is not about name-dropping competitors in your content; internally, however, you still need a benchmark: “Of the answers produced for our category questions, how often do we appear?”

Measure: presence rate by question set and stage.
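One way to turn a logged question set into a presence rate by stage. The brand name `AcmeCo`, the stage labels, and the log format are illustrative assumptions:

```python
from collections import defaultdict

def presence_rate(answer_log, brand="AcmeCo"):
    """answer_log: list of dicts like
    {"stage": "business_case", "question": "...", "cited": ["AcmeCo", "Gartner"]}.
    Returns, per funnel stage, the share of logged answers citing the brand."""
    seen, hits = defaultdict(int), defaultdict(int)
    for entry in answer_log:
        seen[entry["stage"]] += 1
        hits[entry["stage"]] += brand in entry["cited"]
    return {stage: hits[stage] / seen[stage] for stage in seen}

# Hypothetical log from running the question set across AI experiences.
log = [
    {"stage": "business_case", "question": "ROI model for X?", "cited": ["AcmeCo"]},
    {"stage": "business_case", "question": "Cost of inaction?", "cited": ["RivalCo"]},
    {"stage": "selection", "question": "Top vendors for X?", "cited": ["AcmeCo", "RivalCo"]},
]
print(presence_rate(log))  # {'business_case': 0.5, 'selection': 1.0}
```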

4) Funnel-stage visibility

At which stage do you disappear: Problem, Business Case, Selection, Implementation, or Optimisation?

Measure: stage coverage and “drop-off” points.

5) Content extractability readiness (technical)

This is not “more content.” It is whether your best content is structured so systems can reliably extract and cite it.

Measure: structured headings, clear definitions, FAQ coverage, schema usage, canonical consistency, and entity consistency.
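These readiness signals can be approximated with a rough page audit. This is a sketch only: a production audit would use a real HTML parser and a schema validator, and the regexes here are deliberately naive:

```python
import re

def extractability_checks(html: str) -> dict:
    """Rough readiness signals for one page: does it declare FAQPage
    structured data, use structured headings, and set a canonical URL?"""
    checks = {}
    # FAQPage structured data inside a JSON-LD script block
    ld_blocks = re.findall(
        r'<script[^>]*application/ld\+json[^>]*>(.*?)</script>', html, re.S)
    checks["faq_schema"] = any('"FAQPage"' in block for block in ld_blocks)
    # At least two section-level headings
    checks["h2_headings"] = len(re.findall(r"<h2[ >]", html)) >= 2
    # Canonical link present
    checks["canonical"] = 'rel="canonical"' in html
    return checks

page = """<link rel="canonical" href="https://example.com/roi-guide">
<h2>What is the cost of inaction?</h2><h2>ROI model</h2>
<script type="application/ld+json">{"@type": "FAQPage"}</script>"""
print(extractability_checks(page))
```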

Together, these indicators deliver the core shift this article argues for: from rankings to board-defensible authority metrics.


A lightweight 90-day pilot plan CFOs will fund

CFOs are more likely to approve a controlled pilot than an open-ended “AI optimisation programme.”

Step 1: Scope (Week 1)

  • Choose one offer (or one product line) and one ICP segment.
  • Select 10-15 buyer questions that map to Stage 2 concerns (business case and ROI).

Step 2: Baseline (Weeks 1-2)

  • Capture current Search Console CTR trends, non-brand organic sessions, assisted conversions, and Direct traffic quality.
  • Run the question set across the major AI experiences relevant to your buyers and document which sources are cited.

Step 3: Define signals and thresholds (Week 2)

Agree in advance what “improvement” means. Examples:

  • Increase presence on the question set from X to Y
  • Improve stage coverage (reduce Stage 2 “blackout”)
  • Improve opportunity conversion rate for AI-influenced inbound (proxy measures are acceptable if consistent)

Step 4: Execute targeted improvements (Weeks 3-10)

  • Restructure priority assets for extractability (clear headings, definitions, FAQ coverage, schema).
  • Close Stage 2 gaps in the question set with direct, citable answers.

Step 5: Decision gates (Weeks 11-12)

Hold a joint review with Finance and Revenue leadership:

  • What moved (authority indicators)?
  • What downstream changes appeared (pipeline quality, conversion rate, velocity proxies)?
  • Do we scale, refine, or stop?

This is the “defensible ROI” bridge: a bounded investment with pre-agreed decision gates.
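The scale/refine/stop review works best when the gates are encoded before the pilot starts. A minimal sketch; the metric names and threshold values below are illustrative, not recommendations:

```python
def evaluate_gates(baseline, current, thresholds):
    """Compare pilot results against pre-agreed thresholds.
    thresholds: metric -> minimum required absolute improvement.
    Returns per-metric pass/fail plus an overall recommendation."""
    results = {m: (current[m] - baseline[m]) >= thresholds[m] for m in thresholds}
    passed = sum(results.values())
    if passed == len(results):
        verdict = "scale"
    elif passed > 0:
        verdict = "refine"
    else:
        verdict = "stop"
    return results, verdict

# Illustrative values: presence rate improved enough; Stage 2 coverage did not.
baseline   = {"presence_rate": 0.20, "stage2_coverage": 0.40}
current    = {"presence_rate": 0.35, "stage2_coverage": 0.45}
thresholds = {"presence_rate": 0.10, "stage2_coverage": 0.10}
print(evaluate_gates(baseline, current, thresholds))
```

A mixed result maps to "refine", which keeps the conversation with Finance about the next bounded investment rather than an open-ended programme.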


Common pitfalls that make the CFO say “no”

  1. Vanity dashboards that report activity without a causal argument.
  2. Tool-first conversations before agreeing what to measure.
  3. Over-claiming outcomes without modelling assumptions and ranges.
  4. Ignoring the buyer journey and measuring only top-of-funnel.

A CFO will fund clarity. They will not fund hype.


Next Steps

  • Draft a one-page internal brief that defines: the problem, the economic downside of inaction, and 3-5 authority indicators you will track for 90 days.
  • Align with revenue leadership on what would count as success (for example: improvement in sales cycle velocity, higher intent inbound mix, reduced CAC pressure).
  • Continue to Topic 3 in this Blog Series to evaluate what “good” solutions must be able to measure and operationalise.


About the author

Doug Johnstone helps marketing and commercial leaders build credible business cases for AI search visibility that stand up in the boardroom. In this article, Doug tackles a familiar tension: marketing can report activity, while finance sees soft outcomes. His approach reframes performance away from vanity metrics and toward authority indicators that connect to the levers CFOs will fund – CAC efficiency, sales cycle velocity, pipeline quality, and risk reduction. Doug’s work sits at the intersection of go-to-market strategy, measurement, and buyer behaviour change, with a specific focus on how AI-first discovery breaks traditional attribution. If you are trying to secure investment for GEO and AEO initiatives, Doug’s perspective is designed to help you align stakeholders, define leading indicators, and run a practical 90-day pilot with decision gates that drive confident next steps.