AI is moving fast. Most data foundations are not.
As organisations race to adopt AI, the real blocker isn’t models – it’s manual, fragile data management that can’t keep up. Governance slows things down. Metadata is outdated. Trust is patchy. And risk keeps growing.
This webinar cuts through the hype to show how leading enterprises are automating data management at scale — safely, pragmatically and with real ROI.
What you’ll learn:
- Why manual governance is now a bottleneck for enterprise AI
- How automation transforms metadata, data quality and controls
- Where AI should — and shouldn’t — be used in data management
- How to reduce risk while accelerating AI delivery
- What “AI-ready” really looks like in practice
You can’t do that at scale without having a central catalog, an inventory of what exists.
– Justin Heller
Watch the full webinar here:
Justin Heller
SVP, Chief Data Officer at Synchrony Financial and Executive & Global Board Member at CDO Magazine
Stephen Gatchell
Partner & Head of AI Strategy at Ortecha
Opening (00:00–04:05)
Stephen Gatchell (00:04):
Hi everyone. For those who have already joined our webinar, my name is Stephen Gatchell. We’ll give it a couple of minutes for others to join, then we’ll go through some quick ground rules and jump into a fireside chat.
Stephen (00:28):
Justin, why don’t you say hi? You don’t have to introduce yourself yet.
Justin Heller (00:32):
Hello. Good to be here.
Stephen (00:38):
Justin is joining from the US and I’m coming from London. For those of you on the east coast of the United States dealing with snow this week—it’s about 50 degrees and sunny here, so I apologise for rubbing that in a little bit.
Stephen (01:07):
We’re up to about 14 people and it’s growing, so we’ll give it another minute. If anyone on the east coast wants to share how much snow you’ve had, feel free.
Stephen (01:25):
Justin, you got about two feet?
Justin (01:28):
A little more than two feet. And because it’s so cold, it’s really not melting.
Stephen (01:35):
How many meters is that? I’m in London, so I don’t know the conversion.
Justin (01:40):
I don’t have that memorized, but we’re supposed to get more snow today and then again over the weekend.
Stephen (01:56):
That’s why I’m trying to get out of here on Friday.
Stephen (02:05):
To respect everyone’s time, we’ll get started. Thank you all for joining. A few housekeeping notes: this session is being recorded and we’ll share it afterward. Please put questions in the Q&A so I don’t miss them.
Introductions (04:05–05:00)
Stephen Gatchell (04:05):
I’m Stephen Gatchell, Partner and Head of AI Strategy at Ortecha. We’re a London-based boutique consulting firm focused on data, AI, and technology strategy and implementation, with partners across the US and Canada.
My background is as a practitioner—developing data strategies, managing data science teams, and designing governance frameworks at organisations like EMC and Dell.
Justin, over to you.
Justin Heller (04:07):
Thanks, Stephen. My name is Justin Heller. I’m the Chief Data Officer at Synchrony Financial. I’ve been with the firm for nearly 11 years and am its first and only CDO.
I won’t go deep into my background, but I’m excited to share some experiences and perspectives today. Stephen and I always have good conversations, so this should be a good session.
Stephen (04:49):
And as a side note, Justin lives near one of the greatest pizza rows of all time in the United States.
What Is Automated Data Management? (05:00–09:00)
Stephen Gatchell (05:01):
Let’s start with automated data management. This topic is accelerating in the market, largely driven by AI. Organisations want to move faster, drive innovation, and increasingly use AI to support AI. Justin, how do you define automated data management?
Justin Heller (05:35):
I think about this in the context of data democratization. A core objective for any data leader is reducing the amount of friction people have when using data.
That means enabling self-service for analysts and data users. But to do that, people need to understand what data exists, what it means, how safe it is to use, and how it flows.
And you can’t do that at scale without having a central catalog, an inventory of what exists.
Data environments change constantly. We’re always introducing new structures and repositories, and people don’t like spending time documenting things. Automation becomes a necessity.
Metadata as the Foundation (09:00–10:45)
Stephen Gatchell (09:01):
You’ve talked about harvesting and enriching metadata, and the metadata layer itself becoming critical. Can you expand on why metadata is so foundational, especially as organisations try to scale AI?
Justin Heller (09:15):
When I think about automation in data management, I start with the metadata layer. That layer is what unlocks analytics and AI.
If your vision is for analysts—or any data users—to interact with data through a chat-like interface, that interface needs metadata. It needs to know what data exists, what it means, and how it’s commonly used.
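The idea above can be sketched in a few lines: a chat-style interface first pulls matching catalog metadata to ground its answer. This is a minimal illustration; the `CatalogEntry` fields and the sample entries are assumptions, not any specific product's schema.

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    name: str            # what data exists
    definition: str      # what it means
    common_uses: list    # how it's commonly used

# Hypothetical catalog entries for illustration only.
CATALOG = [
    CatalogEntry("credit_limit", "Maximum revolving credit extended to an account.",
                 ["risk reporting", "line management"]),
    CatalogEntry("utilization_rate", "Balance divided by credit limit.",
                 ["risk reporting", "marketing"]),
]

def grounding_context(question: str) -> str:
    """Select catalog entries whose metadata matches terms in the question,
    producing context a chat interface could use to answer it."""
    terms = set(question.lower().split())
    hits = [e for e in CATALOG
            if terms & set(e.name.split("_"))
            or terms & set(e.definition.lower().split())]
    return "\n".join(f"{e.name}: {e.definition} (used in: {', '.join(e.common_uses)})"
                     for e in hits)
```

Without that metadata lookup, the interface has nothing reliable to reason over; with it, every answer can cite what the data element is and how it is typically used.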
Metadata is an enabler for data management automation, and for much more beyond it.
Semantic Processing and Privacy (10:45–15:30)
Stephen Gatchell (10:46):
You mentioned semantic language processing earlier. Can you expand on that a bit? I think people define it differently, so it would be helpful to explain how you see it in relation to metadata.
Justin Heller (11:07):
Let me explain it through a privacy use case. With modern privacy laws, consumers have the right to know what personally identifiable information an organisation has about them.
To respond, organisations need to locate where that PII exists. That becomes extremely difficult at scale without automation.
Semantic processing looks at naming conventions, abbreviations, profiling information, definitions, and rules to assess the probability that a data element contains PII. Over time, as confidence is confirmed, those patterns reinforce each other.
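The scoring Justin describes can be sketched as combining name-based signals with value profiling. The patterns, weights, and thresholds below are illustrative assumptions, not from any specific tool.

```python
import re

# Hypothetical naming-convention signals and weights (assumptions).
PII_NAME_PATTERNS = {
    r"ssn|social_sec": 0.9,
    r"email": 0.8,
    r"phone|mobile": 0.7,
    r"(first|last)_?name": 0.6,
    r"dob|birth": 0.8,
}

def pii_score(column_name: str, sample_values: list) -> float:
    """Estimate the probability that a data element contains PII,
    using naming conventions plus profiling of sample values."""
    score = 0.0
    name = column_name.lower()
    # Naming conventions and abbreviations.
    for pattern, weight in PII_NAME_PATTERNS.items():
        if re.search(pattern, name):
            score = max(score, weight)
    # Profiling: value shapes reinforce (or contradict) the name-based guess.
    if sample_values:
        n = len(sample_values)
        email_like = sum("@" in v for v in sample_values) / n
        ssn_like = sum(bool(re.fullmatch(r"\d{3}-\d{2}-\d{4}", v))
                       for v in sample_values) / n
        score = max(score, email_like * 0.95, ssn_like * 0.95)
    return round(score, 2)
```

In practice, confirmed classifications would feed back into the weights, which is the reinforcement over time Justin mentions.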
Structured and Unstructured Data (18:30–21:30)
Stephen Gatchell (18:31):
Does this conversation change at all when you think about structured versus unstructured data, especially given how much AI is now built on unstructured information?
Justin Heller (19:15):
It’s really the same conversation. With structured data, you work at the data-element level. With unstructured data, you work with information types like files and documents.
The goal is still understanding the knowledge stored within the data, and that’s where metadata provides the context across both.
The Evolving Role of Data Stewards (21:30–26:30)
Stephen Gatchell (21:31):
We’ve talked a lot about automation and scale. How do you see this changing the role of data stewards and subject matter experts?
Justin Heller (21:55):
Historically, stewards manually created definitions and mappings, but that doesn’t scale. With automation, the steward shifts from being the primary contributor to being the human in the loop.
AI can propose definitions and mappings based on patterns and context, and stewards validate and refine them. That’s how you scale quality without overwhelming people.
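The human-in-the-loop shift above can be sketched as a triage step: high-confidence AI proposals pass through, and stewards review only the rest. The confidence threshold and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    element: str
    proposed_definition: str
    confidence: float
    status: str = "pending"

def triage(proposals, auto_approve_at=0.95):
    """Route high-confidence AI proposals straight through; queue the
    rest for steward review instead of asking stewards to author
    definitions from scratch."""
    review_queue = []
    for p in proposals:
        if p.confidence >= auto_approve_at:
            p.status = "approved"
        else:
            review_queue.append(p)  # steward validates and refines
    return review_queue
```

The design point is that stewards spend their time on the ambiguous cases, which is how quality scales without overwhelming people.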
Data Catalogs and Metadata Hubs (30:00–34:30)
Stephen Gatchell (30:01):
How does all of this change the role or scope of the data catalog? Does it become more conceptual, or is inventory still core?
Justin Heller (30:25):
I don’t think it changes the role of the catalog—it reinforces it. The catalog is where business, technical, and operational metadata live.
Semantic layers and language processors depend on that foundation to make data usable and discoverable.
Stephen Gatchell (32:45):
What we’re also seeing is organisations moving toward multiple catalogs connected through a metadata hub, meeting people where they actually work.
Protecting Data in the Age of AI (34:30–39:30)
Stephen Gatchell (34:31):
What’s the best way to protect data from being abused by AI front ends?
Justin Heller (34:48):
I think of AI agents as virtual workers. They should have personas and role-based access controls just like humans.
Techniques like tokenization and encryption are critical, and access should be granted based on role and sensitivity.
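Treating agents as virtual workers can be sketched as a role check plus tokenization before any data reaches the agent. The roles, resources, and the toy tokenizer below are illustrative assumptions, not a production control.

```python
# Hypothetical role grants: which resources each agent persona may read.
ROLE_GRANTS = {
    "marketing_agent": {"campaign_metrics"},
    "fraud_agent": {"transactions", "account_profile"},
}

SENSITIVE_FIELDS = {"account_number", "ssn"}

def fetch(agent_role: str, resource: str, record: dict) -> dict:
    """Enforce role-based access, then tokenize sensitive fields so the
    agent never sees raw identifiers. (Toy tokenizer for illustration;
    a real system would use a vaulted tokenization service.)"""
    if resource not in ROLE_GRANTS.get(agent_role, set()):
        raise PermissionError(f"{agent_role} may not read {resource}")
    return {k: ("tok_" + str(abs(hash(v)) % 10**8) if k in SENSITIVE_FIELDS else v)
            for k, v in record.items()}
```

The agent can still join and analyze on the token, but a leaked prompt or log exposes no raw identifier.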
Stephen Gatchell (37:00):
The starting point should always be the business use case, then choosing the right protection technique based on risk and regulation.
AI Helping Data Management (40:25–46:45)
Stephen Gatchell (40:26):
We’ve talked about AI and LLMs, but let’s get more specific. How can AI help with data management for AI itself? How do you see AI helping AI?
Justin Heller (40:45):
Data management is a broad discipline, so let me use a few examples.
In governance, the goal isn’t to block the business. It’s to put in place an operating model that manages risk while enabling use. A big part of that is storytelling—explaining data quality, risk, and impact to different audiences.
AI can help translate data quality patterns into business impact, speed up root-cause analysis, and improve how we communicate the state of data.
Your AI is only going to be as good as the quality of the data that’s underlying it.
CXO Perspectives: Privacy and Security (46:45–55:30)
Stephen Gatchell (46:46):
We’ve focused a lot on data and AI leaders, but how does automated data management help roles like the Chief Privacy Officer and CISO?
Justin Heller (47:10):
I think of the CDO, Chief Privacy Officer, and CISO as three overlapping circles.
Privacy interprets the laws, data teams know where the data is, and security protects it. None of these roles can succeed in isolation.
Data minimization is a great example—it reduces risk, limits breach impact, and improves defensibility.
Closing Thoughts (56:30–59:50)
Justin Heller:
Data management shouldn’t be the business goal. It’s an enabler.
The real business cases are about reducing cyber risk, demonstrating compliance, improving productivity, and enabling growth. Data management supports those outcomes—it isn’t the outcome itself.
Stephen Gatchell:
Exactly. When data governance becomes data enablement, the conversation changes.
Thank you everyone for joining us today.