A few years ago, I was working inside the technology organisation of a global premium audio company, supporting multiple functions including the product group. From the outside, it looked like AI was everywhere.
There were pilots across multiple teams. Data scientists experimenting with product usage data. Engineers building models to improve features. Leadership presentations showing promising results.
If you looked at the volume of activity, you would have assumed AI was already embedded.
It wasn’t.
What we had was energy. What we didn’t have was repeatability.
The Pattern We Kept Seeing
The company had strong engineering capability and serious R&D depth. Talent was not the constraint.
The pattern was this: a team would identify a valuable use case, build a model, demonstrate that it worked in isolation, and then struggle to move it into production at scale.
Questions would surface late in the process:
- What is the value of the model?
- Who owns this model once it’s live?
- How do we maintain it?
- What data standards apply?
- How does this influence the product roadmap?
Because those answers weren’t clear upfront, momentum slowed. Each success was largely self-contained.
AI activity was increasing. Organisational maturity wasn’t.
Reframing the Question
At some point, the discussion shifted from “What should we build next?” to a harder question:
What would it take for AI to be treated as a core product capability rather than a series of projects?
That change in framing mattered.
If AI is a capability, then it needs infrastructure. It needs ownership. It needs standards. It needs alignment with commercial priorities. It needs funding. And it needs to fit into the existing product lifecycle rather than sitting alongside it.
That realisation meant we had to pause some promising initiatives. Not because they were poor ideas, but because adding more isolated models would increase complexity without improving scale.
Slowing down wasn’t popular. But it was necessary.
Building Enablement Instead of Central Control
We established an AI enablement function. It was not a control tower, and it was not there to slow product development. The aim was practical: reduce friction and increase consistency.
The focus areas were straightforward:
- Define reusable AI components so teams weren’t rebuilding the same foundations (see the sketch below).
- Integrate product lifecycle data to avoid parallel pipelines.
- Create pragmatic standards for model development, validation and deployment.
- Clarify ownership once models moved into production.
None of this was headline work. It involved workshops, documentation, design decisions and difficult trade-offs about what “good enough” looked like.
But it created a base layer that teams could rely on.
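To make “reusable components” concrete, here is a minimal sketch in Python of what a shared component contract could look like. Everything in it is an illustrative assumption: the ModelComponent name, its fields and the threshold are placeholders, not the actual interfaces we built.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A hypothetical shared component contract. Field names (owner,
# data_standard, validate) are illustrative, not the real interface.
@dataclass
class ModelComponent:
    name: str
    owner: str                       # team accountable once the model is live
    data_standard: str               # which agreed data contract the inputs follow
    validate: Callable[[Any], bool]  # checkpoint run before deployment
    version: str = "0.1.0"

    def release_ready(self, candidate: Any) -> bool:
        """A model moves towards production only if an owner is named
        and the validation checkpoint passes."""
        return bool(self.owner) and self.validate(candidate)

# Teams register against the shared contract instead of rebuilding
# their own foundations.
usage_forecaster = ModelComponent(
    name="usage-forecaster",
    owner="product-analytics",
    data_standard="product-usage-v2",
    validate=lambda model: getattr(model, "holdout_score", 0.0) >= 0.8,
)

class Candidate:
    holdout_score = 0.85

print(usage_forecaster.release_ready(Candidate()))  # True
```

The point is not the specific fields but that ownership, data standards and validation travel with the component, instead of being rediscovered late in the process.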
Over time, conversations changed. Teams began asking how to reuse components rather than how to start from scratch.
Anchoring AI in Commercial Reality
Another shift involved use case prioritisation.
Early AI initiatives were often driven by technical curiosity. The more sustainable approach was to link them directly to product and revenue outcomes.
We started asking:
- Does this influence feature prioritisation?
- Will consumers pay for a new feature created from an AI-enabled capability?
- Does it change release sequencing?
- Can it improve product quality in a measurable way?
- Is there a clear path to production?
That discipline reduced noise. It also improved credibility with leadership. AI discussions moved from experimentation updates to product impact conversations.
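Purely as an illustration, the questions above can be reduced to a lightweight go/no-go checklist. The pass rule below (a production path plus at least one commercial outcome) is an assumption for the sketch, not our actual policy.

```python
# A hypothetical go/no-go checklist mirroring the questions above.
COMMERCIAL_CRITERIA = [
    "influences feature prioritisation",
    "consumers would pay for the feature",
    "changes release sequencing",
    "improves product quality measurably",
]

def should_fund(answers: dict[str, bool]) -> bool:
    # A use case proceeds only with a production path and at least
    # one clear commercial outcome.
    has_path = answers.get("clear path to production", False)
    has_commercial_case = any(answers.get(c, False) for c in COMMERCIAL_CRITERIA)
    return has_path and has_commercial_case

print(should_fund({
    "influences feature prioritisation": True,
    "clear path to production": True,
}))  # True
```

The exact rule matters less than the effect: use cases with no path to production stopped consuming attention early.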
Responsible AI as a Design Principle
In consumer products, trust is fundamental. Governance could not be an afterthought introduced at the end of development.
Instead, responsible AI principles were embedded into workflows. Documentation, validation checkpoints and ownership were integrated into the standard process rather than layered on top. AI costs and capabilities were clearly aligned to product releases and revenue acceleration.
This avoided the common situation where governance is perceived as a blocker. When expectations are clear from the start, they tend to accelerate decision-making rather than slow it down.
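As one illustration of “embedded rather than layered on top”, a checkpoint of this kind could run as an ordinary step in the release pipeline. The check names and thresholds below are hypothetical, not the actual gate we used.

```python
# An illustrative pre-deployment gate: responsible-AI expectations run
# as ordinary pipeline steps rather than a separate review stage.
REQUIRED_DOCS = {"model_card", "data_lineage", "named_owner"}

def governance_gate(release: dict) -> list[str]:
    """Return blocking issues; an empty list means the release may proceed."""
    issues = []
    missing = REQUIRED_DOCS - set(release.get("docs", ()))
    if missing:
        issues.append(f"missing documentation: {sorted(missing)}")
    if release.get("holdout_score", 0.0) < release.get("min_score", 0.8):
        issues.append("validation score below the agreed threshold")
    if not release.get("owner"):
        issues.append("no accountable owner named for production")
    return issues

# A release either passes cleanly or fails early, with reasons.
candidate = {
    "docs": ["model_card", "named_owner"],
    "holdout_score": 0.83,
    "owner": "product-analytics",
}
for issue in governance_gate(candidate):
    print("BLOCKED:", issue)  # missing documentation: ['data_lineage']
```

When the expectations live in the pipeline, a failing check reads like a build error rather than a governance escalation, which is what kept it from feeling like a blocker.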
What Actually Changed
The most visible change wasn’t a specific model. It was behavioural.
Product teams began to treat AI as part of their normal toolkit. Roadmap decisions were increasingly informed by product usage data. The path from experimentation to production became clearer. Difficult decisions to enable AI and data collection, weighed against component costs, were made with clear impact analysis.
AI stopped being a separate initiative and started being part of how products were built and refined.
Scaling became more predictable because the surrounding structure was predictable.
Reflections
Looking back, a few lessons remain consistent.
First, scaling AI is primarily an operating model challenge. Without clarity on ownership, standards and decision rights, technical success remains isolated.
Second, governance feels heavy only when it is reactive. When it is designed in as part of the system, it enables confidence and reuse.
Third, reusable components create more long-term value than individual, highly optimised models.
If I were approaching this again, I would spend even more time aligning senior stakeholders on what “AI as a product capability” truly means in practical terms. Without a shared definition, organisations default to experimentation.
AI becomes sustainable when it is treated as part of the process.
If you’d like to speak to us about AI Enablement programmes, get in touch using the form below.