The Rule of Three for Data & AI Investments

The Rule of Three offers a sharper filter for AI investment: quantify the value, define the time-to-impact, and prove how it will be used. CFOs are funding more AI than ever, yet most initiatives still fail to deliver measurable returns because business cases, timelines and adoption plans lack rigour.

Chief Financial Officers are drowning in AI proposals right now. Everyone’s got a pitch deck promising transformative technology, game-changing insights, and revolutionary efficiency. But only 14% of CFOs report seeing a clear, measurable impact from their AI investments so far (RGP, 2026 CFO Research Report: The AI Foundational Divide).

The gap between AI hype and AI reality is staggering. And it's frustrating finance teams. We need to act now, before AI becomes a toxic term at funding time.

The AI ROI Problem

MIT’s Project NANDA research (The GenAI Divide: State of AI in Business 2025) suggests that 95% of organisations are getting zero return from GenAI efforts so far. BCG’s research finds that only 22% of companies have advanced beyond the proof-of-concept stage to generate some value, and only 4% are creating substantial value.

These aren’t marginal failures. They’re complete misfires. The culprit usually isn’t the technology. It’s fundamentals. Projects launch without clear business cases. Adoption plans don’t exist. Nobody knows who is responsible for outcomes, because the outcomes were never defined or tracked. CFOs end up funding expensive science experiments.

Nearly half of CFOs (48%) say they’re ultimately responsible for ensuring AI delivers measurable value. That’s more than any other C-suite role (RGP, 2026 CFO Research Report). Finance chiefs need a better filter for separating real opportunities from well-meaning nonsense.

Part of the problem is that AI initiatives often get a free pass that other investments never would. IBM’s 2025 CEO Study found only 25% of AI initiatives have delivered expected ROI over the last few years, but investment keeps accelerating anyway. Why? FOMO. In the same study, 64% of CEOs acknowledge that the risk of falling behind drives investment in some technologies before they have a clear understanding of the value they bring to the organisation.

The era of buying AI for AI’s sake must end.

But for many companies, it’s not over yet. They’re still approving AI projects with business cases that would never fly for conventional IT investments. The usual rigour goes out the window because nobody wants to be the executive who missed the AI revolution. Which makes the need for a simple, hard-nosed investment test even more critical.

The Rule of Three

The solution? Demand focus. Every AI or data investment should distil down to a one-page business case with three key concepts. These concepts force alignment with actual business priorities and expose whether an idea is genuinely ready for investment.

If a proposal can’t articulate its value lever, time-to-impact, and adoption plan in quantitative terms, hit pause. It’s not ready.
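To make the one-page test concrete, it could be sketched as a simple record with a readiness check. This is a hypothetical sketch: the field names, the 12-month cut-off, and the `is_investable` logic are illustrative, not a prescribed template.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BusinessCase:
    value_gbp: Optional[float]       # Rule One: quantified annual benefit, in pounds
    months_to_impact: Optional[int]  # Rule Two: when first measurable results land
    adoption_kpi: Optional[str]      # Rule Three: e.g. "75% weekly active users by Q1"
    adoption_owner: Optional[str]    # named business owner accountable post go-live

    def is_investable(self) -> bool:
        """All three rules must be answered in concrete terms; otherwise, hit pause."""
        return (
            self.value_gbp is not None and self.value_gbp > 0
            and self.months_to_impact is not None and self.months_to_impact <= 12
            and bool(self.adoption_kpi)
            and bool(self.adoption_owner)
        )

proposal = BusinessCase(value_gbp=450_000, months_to_impact=6,
                        adoption_kpi="75% of advisers using the tool weekly by Q2",
                        adoption_owner="Head of Advice")
print(proposal.is_investable())  # True: all three rules are quantified and owned
```

Leave any field blank and the check fails, which is exactly the point: a missing number is a missing answer.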

Rule One: Value (in Pounds and Pence)

What measurable business benefit will this deliver?

Every investment should tie to a clear outcome: increasing revenue, improving margin, or mitigating risk. “Better insights” or “enhanced customer experience” don’t cut it without quantification. CFOs want hard financial metrics, not speculation and soft targets.

The value proposition needs to answer: what and how much do we expect to save or gain from this effort?

  • Will this AI automation cut process time and save labour hours? By how much? (Cost reduction)
  • Will this ML model improve cross-sell or customer retention? How much extra revenue will that generate? (Revenue growth)
  • Will this system catch fraudulent transactions or ensure compliance? How much will that save? (Risk mitigation)

Spell it out in plain pound terms.

For example, Aveni reports that one 200-adviser network reduced average report creation time from 105 minutes to 15 minutes using automated suitability report writing, saving 15,000 hours annually (around £450,000 worth of advisers’ time).

But the real finance question is what happens to those hours: are you banking them to the bottom line, redeploying capacity into revenue work, using it to reduce risk/backlog, or avoiding future hiring?
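The arithmetic behind a figure like this can be sanity-checked in a few lines. Neither the annual report volume nor the hourly rate is stated in the source; both are assumptions backed out of the published totals.

```python
# Sanity-checking the reported figures. Assumed: ~10,000 reports/year across
# the 200-adviser network and a £30/hour fully loaded adviser cost -- neither
# is stated in the source; both are inferred from the published totals.
minutes_saved_per_report = 105 - 15           # 90 minutes per report
reports_per_year = 10_000                     # assumed volume
hours_saved = minutes_saved_per_report * reports_per_year / 60
value_gbp = hours_saved * 30                  # assumed hourly rate

print(hours_saved)  # 15000.0 -- matches the 15,000 hours cited
print(value_gbp)    # 450000.0 -- roughly the £450,000 cited
```

Forcing a business case through this kind of back-of-envelope check quickly exposes which inputs are solid and which are guesses.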

That kind of concrete metric resonates. Technical descriptions do not.

If the business value can’t be articulated in one sentence with a number, keep building your case until it can.

Rule Two: Time-to-Impact

When will we see results, and when do we break even?

CFOs manage cash flows. They need to know when an investment pays back, not just if it might someday. For AI projects, this is critical. As we’ve already seen, the temptation to embark on open-ended explorations is strong, but to make AI work for you, you need a clear horizon for initial and full impact.

Spell out how success will be measured over 6–12 months. Include incremental targets: “10% usage by Q2, full deployment by Q4, with £X benefits realised in the first year.”

90 days is a realistic benchmark for initial returns in successful AI projects. Don’t promise miracles in month one, but don’t accept vague timelines stretching to year three either.

The timeline should account for training and change management, not just tech build. If time-to-impact isn’t quantified, you can’t manage the investment with proper rigour.
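A minimal payback sketch makes the break-even question concrete. All numbers here are hypothetical: a £300,000 investment, a £40,000 steady-state monthly benefit, and a three-month linear ramp to account for training and change management.

```python
# Hypothetical payback model: benefit ramps linearly from zero over
# ramp_months (training, change management), then holds steady.
def payback_month(investment: float, monthly_benefit: float, ramp_months: int) -> int:
    """Return the month in which cumulative benefit first covers the investment."""
    cumulative, month = 0.0, 0
    while cumulative < investment:
        month += 1
        ramp = min(month / ramp_months, 1.0)   # fraction of full benefit realised
        cumulative += monthly_benefit * ramp
    return month

print(payback_month(300_000, 40_000, ramp_months=3))  # 9: break-even in month nine
```

With these inputs the investment pays back in month nine, which is exactly the kind of dated, checkable claim a CFO can manage against.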

Rule Three: Adoption Plan

The adoption plan is often the most overlooked part. This answers: who will actually use this AI, what is the expected usage, and who is responsible for monitoring ongoing usage and value once it’s live?

Many AI business cases fail not because the technology doesn’t work, but because nobody put a plan in place to change workflows and behaviours. The hard part is redesigning how work gets done and supporting people through the change – not just shipping a model or a tool.

A CFO-oriented case for funding must include a credible adoption strategy with at least one key performance indicator for uptake. It should also spell out the expected usage pattern (who, how often, for what tasks) and name the accountable owner responsible for monitoring ongoing usage and value realisation post go-live (and intervening if adoption stalls).

  • “75% of sales reps actively using the tool weekly by end of Q1”
  • “AI recommendations applied in 50% of customer calls within 6 months”
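Uptake KPIs like these are cheap to track once usage is logged. A hypothetical sketch (the team, the usage data and the 75% threshold are all illustrative):

```python
# Illustrative weekly adoption check against a KPI such as
# "75% of reps actively using the tool weekly". Names are made up.
def weekly_adoption_rate(active_users: set, all_users: set) -> float:
    """Fraction of the target population that used the tool this week."""
    return len(active_users & all_users) / len(all_users)

team = {"amy", "ben", "cara", "dev"}
used_this_week = {"amy", "ben", "cara"}

rate = weekly_adoption_rate(used_this_week, team)
print(f"{rate:.0%}")      # 75%
print(rate >= 0.75)       # True: KPI met this week
```

Tracked weekly, the same number tells the accountable owner when to intervene: a falling rate is the early-warning signal, long before value realisation stalls.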

Some financial firms report achieving 80% user adoption within 30 days, but only with deliberate effort. Training programmes. Incentives. Executive ownership. Process integration. All this should be part of the plan.

The adoption plan should name who’s responsible for value delivery once the tool is live. Because every AI project should be tied to real business value, every AI project (even the mostly technical ones) needs a business owner accountable for outcomes and an executive sponsor to champion the change. Without clear ownership, follow-up becomes a box-ticking exercise.

Ask “What’s the target adoption rate, and what’s being done to ensure we hit it?”

Look for specifics: new workflow designs, user training sessions, updated KPIs to encourage use, timeline for scaling from pilot to full deployment. If the adoption plan amounts to “hoping people use it,” that’s a red flag.

A Word on Innovation

The Rule of Three is meant to address AI initiatives writ large, but what about discovery (R&D) innovation, where the primary output is learning, and pilots, which are experiments in a defined workflow? Both can be worth funding, but they should be governed and measured differently.

The fix isn’t to ban pilots or discovery. It’s to fund them as decision points with deadlines, not as open-ended projects. And here, the Rule of Three still works: in pilots it’s an ROI filter; in discovery it becomes a timeboxed learning plan with a clear go/no-go decision at the end.

Hardening the Value Target

In discovery, you’re proving whether value is plausible and where it could come from. Allow a hypothesis (“Could reduce suitability report drafting time by 30–50%” or “Could cut false positives in AML triage”), but demand clarity on what will be validated in discovery: feasibility, data suitability, model quality, workflow fit, or control constraints.

In a pilot, you’re not proving the full business case; you’re proving whether the value is real. So the first rule can, in these cases, accept a range (e.g., “Potential to remove 20–40% of manual effort in KYC file reviews”), as long as the pilot specifies what metric will tighten that range and how. A good pilot reduces uncertainty: it turns “maybe” into “likely / unlikely” quickly.

Timebox pilots in weeks, not quarters

Discovery should be timeboxed even more aggressively than pilots because the deliverable is a decision, not a deployment. Two to four weeks is often enough to answer the first-order question (“Is the data usable?”, “Can we hit baseline quality?”, “Are controls functional?”).

If you can’t produce a go/no-go signal in 4–6 weeks, you either don’t have access to what you need (data, approvals, SMEs) or the question needs rephrasing.

Likewise, if a pilot can’t show a measurable signal in 4–8 weeks (sometimes 10–12 for heavier data access and controls), it’s not a pilot. It’s a build. The goal of a pilot is not perfection. It’s to answer one question quickly: is this worth scaling?

In financial services, the fastest pilots sit on top of already-governed data and target narrow workflows, exploring specific value cases such as case triage and queue routing, document classification and summarisation, first-draft report generation with human review, or next-best-action prompts for relationship managers.

Require proof of use (or learning), not applause

In discovery, the trap is mistaking clever results for progress. The required bar for discovery is proof of learning. Define a minimum evidence bar that forces discovery to yield quantifiable understanding: a baseline benchmark, an eval set, a red-team check, a workflow walk-through with SMEs, or a compliance feasibility review.

Most pilots live or die on the demo, but the best pilots succeed because real users changed behaviour. Define a minimum usage/adoption threshold: 20 advisers use it effectively weekly; 60% of cases run successfully through the new workflow; median handling time drops by X% for a named team.

If you can’t get usage in a pilot, scaling won’t fix it. It will amplify it.

From Hype to Results

AI initiatives hold enormous promise. But CFOs have the unenviable task of separating true opportunities from hype.

The Rule of Three (value, time-to-impact, adoption) helps quickly triage proposals and focus on those that deserve investment. It brings discipline to a space notorious for lofty promises and vague deliverables.

This framework also signals to data and AI teams what matters most: business value delivered, sooner rather than later, via solutions people will actually use.

The small minority of companies succeeding with AI are those pairing technology with human readiness and strategic clarity. They’re focusing on skills, trust, and execution. Not just algorithms.

In practical terms, leading boldly means not approving projects that can’t articulate value according to The Rule of Three.

Make your AI initiative investable, or don’t invest at all.

Sources referenced in this article include: RGP (2026 CFO Research Report, The AI Foundational Divide), MIT Project NANDA (The GenAI Divide: State of AI in Business 2025), Boston Consulting Group (Where’s the Value in AI?, October 24, 2024), IBM (CEO Study, May 6, 2025), and Aveni (How to Automate Suitability Report Writing for Financial Advisers, December 2025). The fundamental message remains constant: clarity drives value, and rigour beats hype.

Get in touch to talk about how to make the most of your data & AI investments. Contact us or reach out below.


Sean Russell

Managing Principal & Head of AI Enablement at Ortecha


Stephen Gatchell

Partner & Head of AI Strategy at Ortecha
