AI strategy & audit

Clear ROI picture; team aligned on where AI drives most value

Claude · Notion AI · Perplexity · Custom evals

Why most companies get AI adoption wrong

The typical pattern looks like this: a team lead hears about ChatGPT, starts experimenting, gets excited, and signs the company up for a handful of AI tools. Six months later, the company is paying for 8 different AI subscriptions, nobody can point to measurable ROI, and the CEO is asking whether AI is “actually useful or just hype.”

This isn’t a technology problem. It’s a strategy problem.

According to Boston Consulting Group’s 2025 AI report, companies that deploy AI with a clear strategy see 1.5× more revenue impact than those that experiment without one. Yet 74% of companies surveyed described their AI adoption as “ad hoc” or “experimental” rather than “strategic.”

The problem isn’t that AI tools don’t work. It’s that most companies adopt them bottom-up, without a clear picture of where AI adds the most value, which tools are worth paying for, and how to measure success.

An AI strategy and audit engagement solves this by giving leadership a clear, data-driven view of their AI opportunity — and a prioritised plan to capture it.

What an AI audit covers

Tool and subscription inventory

The first step is understanding what you already have. Most organisations are surprised by how many AI tools their teams are already using and paying for.

A typical audit uncovers:

  • Official subscriptions — Tools the company pays for centrally (Microsoft Copilot, Notion AI, etc.)
  • Shadow AI — Tools individual employees are paying for personally or using on free tiers (ChatGPT, Claude.ai, Midjourney, etc.)
  • Embedded AI — AI features built into tools you already use (Grammarly, HubSpot AI, Figma AI, etc.)
  • Overlap and redundancy — Multiple tools doing essentially the same thing across different teams

A 2024 Gartner survey found that the average enterprise has 3.8 AI tools per team, with significant overlap. One company discovered they were paying for ChatGPT Team, Claude.ai Team, Microsoft Copilot, and Notion AI — all being used primarily for the same task (drafting emails and documents). Consolidating to a single tool with enterprise-wide access saved them $45,000 per year.
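To see how overlapping per-seat subscriptions add up, here is a back-of-the-envelope sketch. The seat count and prices below are hypothetical illustrations, not the figures from the case above or any vendor's actual pricing:

```python
# Hypothetical per-seat monthly prices, for illustration only --
# real pricing varies by vendor, tier, and contract.
seats = 40
overlapping_tools = {
    "writing_assistant_a": 28,  # $/seat/month
    "writing_assistant_b": 24,
    "writing_assistant_c": 20,
}
consolidated_price = 28  # keep only the best-fit tool

current_annual = sum(overlapping_tools.values()) * seats * 12
consolidated_annual = consolidated_price * seats * 12
savings = current_annual - consolidated_annual

print(f"Current spend:       ${current_annual:,}/yr")
print(f"After consolidation: ${consolidated_annual:,}/yr")
print(f"Savings:             ${savings:,}/yr")
```

Even at modest per-seat prices, three tools doing the same job across 40 seats waste five figures a year — which is why the inventory step usually pays for the audit on its own.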

Usage and adoption analysis

Having a tool is different from using a tool effectively. The audit measures actual adoption:

  • How many employees actively use each tool, and how frequently?
  • What tasks are they using it for?
  • Are they using it well, or are they getting poor results because they don’t know how?
  • Which teams have adopted AI and which are resistant, and why?

This analysis often reveals that tool selection isn’t the problem — training is. A team might be paying for GitHub Copilot but only 30% of developers are using it regularly because the others were never shown how to integrate it into their workflow.

Opportunity mapping

This is the most valuable part of the audit. The strategist works with each department to identify where AI could have the biggest impact, scored by:

  • Potential time savings — How many hours per week could this process save?
  • Revenue impact — Could AI improve conversion rates, reduce churn, or enable new offerings?
  • Feasibility — How complex is the implementation? Does it require custom development or can it be done with off-the-shelf tools?
  • Risk — What happens if the AI makes a mistake? Is there a human in the loop?

The output is a prioritised list of opportunities ranked by expected ROI relative to implementation effort. Most companies find that 3–5 opportunities account for the majority of potential value.
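One way to make that ranking concrete is a simple weighted score per opportunity. The following sketch is illustrative only — the criteria weights, the 1–5 scales, and the example opportunities are assumptions, not a prescribed methodology:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    time_savings: int    # 1-5: weekly hours saved, scaled
    revenue_impact: int  # 1-5: conversion, churn, new offerings
    feasibility: int     # 1-5: higher = doable with off-the-shelf tools
    risk: int            # 1-5: higher = costlier mistakes, needs human review
    effort: int          # 1-5: implementation effort

    def score(self) -> float:
        # Weight value toward time and revenue, discount by risk,
        # then divide by effort to approximate ROI per unit of work.
        value = 2 * self.time_savings + 2 * self.revenue_impact + self.feasibility
        return value * (1 - 0.1 * self.risk) / self.effort

opportunities = [
    Opportunity("Meeting-notes automation", 4, 1, 5, 1, 1),
    Opportunity("Support triage",           3, 3, 3, 3, 3),
    Opportunity("RAG over internal docs",   4, 2, 2, 2, 4),
]

for opp in sorted(opportunities, key=lambda o: o.score(), reverse=True):
    print(f"{opp.score():5.2f}  {opp.name}")
```

With these example weights, low-effort quick wins naturally float to the top — which is exactly the shape of the phased roadmap described below.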

Competitive benchmarking

Where does your AI adoption stand relative to your industry peers? The strategist assesses:

  • What AI capabilities are your competitors deploying?
  • Which capabilities are becoming table stakes (you’ll fall behind without them)?
  • Where could AI give you a genuine competitive advantage?

According to McKinsey’s 2025 Global Survey on AI, the gap between AI leaders and laggards is widening. Companies in the top quartile of AI adoption report 20% higher revenue growth than their peers. This isn’t because AI is magic — it’s because these companies identified the right use cases and invested deliberately.

The strategy output

A prioritised roadmap

The engagement produces a concrete, actionable roadmap — typically covering 6–12 months — organised into phases:

Phase 1: Quick wins (1–4 weeks)

These are opportunities that use existing tools, require minimal setup, and deliver immediate value. Examples:

  • Roll out Claude.ai or ChatGPT Team across the company with usage guidelines
  • Enable GitHub Copilot for the engineering team with onboarding training
  • Set up AI-powered meeting notes with Otter.ai or Fathom
  • Configure Grammarly enterprise for consistent writing quality

Phase 2: Process automation (1–3 months)

These involve connecting tools and building simple automations:

  • Automate lead enrichment and scoring with Clay
  • Build AI-powered customer support triage
  • Set up content repurposing workflows
  • Implement AI-assisted code review

Phase 3: Custom AI features (3–6 months)

These require development work and specialist expertise:

  • Build a RAG system over internal documentation
  • Add AI-powered search to the product
  • Develop custom AI agents for specific business processes
  • Build evaluation and monitoring infrastructure

Phase 4: Strategic capabilities (6–12 months)

Longer-term investments that create competitive advantage:

  • AI-powered product features that differentiate your offering
  • Predictive analytics and forecasting
  • Custom fine-tuned models for domain-specific tasks
  • AI-native workflows that fundamentally change how teams work

Tool recommendations

The strategy includes specific tool recommendations with rationale:

  • Keep — Tools that are providing clear value and should be maintained
  • Consolidate — Multiple tools that should be replaced by a single, better option
  • Add — New tools that address identified opportunities
  • Remove — Tools that aren’t providing enough value to justify their cost

Each recommendation includes the expected cost, implementation effort, and projected ROI.
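A rough sketch of how the keep/consolidate/remove classification might be encoded from inventory data. The adoption threshold and example tools are illustrative assumptions (the "add" category comes from opportunity mapping rather than the existing inventory, so it is omitted here):

```python
def recommend(tool: dict) -> str:
    """Classify an existing tool as keep / consolidate / remove.
    The 25% adoption threshold is illustrative, not a standard."""
    adoption = tool["active_users"] / tool["licensed_seats"]
    if tool["overlaps_with"]:   # another tool already covers the same job
        return "consolidate"
    if adoption < 0.25:         # paying for seats almost nobody uses
        return "remove"
    return "keep"

inventory = [
    {"name": "Writing assistant A", "active_users": 45,
     "licensed_seats": 50, "overlaps_with": []},
    {"name": "Writing assistant B", "active_users": 12,
     "licensed_seats": 50, "overlaps_with": ["Writing assistant A"]},
    {"name": "Transcription tool",  "active_users": 5,
     "licensed_seats": 40, "overlaps_with": []},
]

for tool in inventory:
    print(f"{tool['name']}: {recommend(tool)}")
```

In practice the rules are a judgment call informed by the usage analysis, but encoding them forces the thresholds to be explicit and debatable.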

Governance and policy

AI adoption raises legitimate concerns around data security, intellectual property, accuracy, and compliance. The strategy includes:

  • Acceptable use policy — What can and can’t be shared with AI tools? Most policies distinguish between public AI (ChatGPT free tier) and enterprise AI (where data stays private).
  • Data classification — Which categories of data can be processed by AI tools, and which are off-limits?
  • Quality controls — Where must a human review AI output before it goes out? (Customer-facing content, financial data, legal documents, etc.)
  • Vendor evaluation criteria — How to assess new AI tools as they emerge

How the engagement works

A typical AI strategy engagement runs 3–6 weeks:

  1. Week 1: Discovery — Stakeholder interviews across departments. The strategist talks to team leads, power users, and skeptics to understand current usage, pain points, and opportunities. They also audit existing tool subscriptions and costs.

  2. Week 2–3: Analysis — The strategist maps opportunities, researches tools, benchmarks against industry peers, and models ROI scenarios. They use tools like Perplexity for competitive research, Notion AI for synthesising interview notes, and custom evaluation frameworks to compare tool options.

  3. Week 4: Roadmap development — The findings are synthesised into a prioritised roadmap with specific tool recommendations, cost projections, and implementation timelines.

  4. Week 5–6: Presentation and enablement — The strategist presents findings to leadership, facilitates a prioritisation workshop, and delivers the final strategy document. Optional: run hands-on workshops to train teams on recommended tools.

What good looks like

Companies that execute a structured AI strategy typically see:

  • 15–30% reduction in AI tool spend from eliminating redundancy and negotiating enterprise agreements
  • 2–3 high-impact use cases identified and prioritised, with clear ROI projections
  • Higher adoption rates because employees understand why specific tools were chosen and how to use them
  • Faster decision-making because leadership has a framework for evaluating new AI tools as they emerge (rather than reacting to every new product launch)

The most important outcome isn’t any single tool or automation — it’s clarity. Teams stop debating whether to use AI and start executing on a shared plan.

Who needs this

  • Companies spending on AI without clear ROI — You’re paying for tools but can’t point to business results
  • Leadership teams asking “What should our AI strategy be?” — You need a structured answer, not more experiments
  • Companies in regulated industries — Healthcare, finance, legal — where AI adoption requires careful governance
  • Companies about to make a major AI investment — Before committing to a platform or hiring an AI team, get the strategy right
  • Companies where AI adoption is uneven — Some teams are using AI well, others aren’t using it at all

Tools referenced in this guide

  • Claude.ai — Enterprise AI assistant for writing, analysis, and research
  • ChatGPT — The most widely adopted general-purpose AI tool
  • Microsoft Copilot — AI assistant embedded in Microsoft 365
  • Notion AI — AI writing and research within Notion workspaces
  • Perplexity — AI-powered research and competitive analysis
  • GitHub Copilot — AI pair programming for developers
  • Grammarly — AI writing and editing assistant
  • Otter.ai — AI meeting transcription and notes
  • Fathom — AI meeting recorder and summariser
  • Clay — AI sales data enrichment and automation

Need help with AI strategy & audit?

Submit a brief and we'll match you with a vetted specialist. No commitment, 30-day guarantee.

Submit a brief — it's free