Here's a pattern we see often. A business invests in AI — a new tool, a custom feature, a workflow powered by a language model. The demo looks impressive. The promise is real. And then it goes live, and the outputs are inconsistent, unreliable, or just not useful enough to change how people work.
The blame usually lands on the AI. But the AI is almost never the problem.
The problem is what the AI can see
Language models are extraordinarily capable at pattern recognition, synthesis, and generation. What they can't do is work with data that isn't there, isn't clean, or isn't connected to anything.
If you ask an AI to summarise a customer's history and that history is spread across three systems with no common identifier, it can't help you. If you ask it to predict which deals are at risk and your pipeline data hasn't been maintained consistently, the prediction will be consistently wrong. If you ask it to draft a follow-up email and the context it needs lives in a salesperson's head rather than a CRM, it's working blind.
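To make that concrete, here's a minimal sketch of what "no common identifier" looks like in practice. Every system, field, and value below is invented for illustration:

```python
# Hypothetical records for the same customer in three systems.
# None of the identifiers line up, so there is no reliable way
# to assemble a single history for the AI to summarise.
billing = {"account_id": "ACC-4471", "customer": "J. Smith", "plan": "Pro"}
support = {"ticket_owner": "jsmith@example.com", "open_tickets": 3}
crm = {"contact_id": "0031x00000AbCdE", "full_name": "Jane Smith"}

# A naive join needs a shared key, and there isn't one.
shared_keys = set(billing) & set(support) & set(crm)
print(shared_keys)  # set(): nothing to join on

# Even the names differ ("J. Smith" vs "Jane Smith"),
# so fuzzy matching is risky at best.
```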
The AI is only as good as the data it can access. This sounds obvious when you say it directly. It's surprisingly easy to overlook in the excitement of building something new.
What we see in practice
A recent client project is a good example of how this plays out when it's handled well. The client operates a Salesforce managed package product for NDIS and aged care providers. The AI feature we built generates a participant summary for care workers before each visit.
Before a line of code was written, we spent significant time mapping the data model. Which objects held the relevant information? Was the data structured consistently enough to be useful? What should the AI include, and what should it deliberately exclude — particularly given the privacy constraints of the healthcare context?
The data architecture decisions — what gets queried, how it's structured, what never leaves the platform — were what made the feature work. The AI model itself was almost the easy part.
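To give a flavour of those decisions, the heart of it can be as simple as an explicit allowlist of what the model is permitted to see. The object and field names below are illustrative, not the client's actual schema:

```python
# Hypothetical allowlist: only these fields are ever included in the
# prompt context for a participant summary. Anything not listed here
# never leaves the platform.
SUMMARY_FIELDS = {
    "Participant__c": ["PreferredName__c", "CommunicationNeeds__c"],
    "Visit__c": ["ScheduledDate__c", "ServiceType__c", "LastVisitNotes__c"],
}

def build_summary_context(records: dict) -> dict:
    """Keep only allowlisted fields from the queried records."""
    return {
        obj: {field: values[field]
              for field in SUMMARY_FIELDS.get(obj, []) if field in values}
        for obj, values in records.items()
    }
```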
The prerequisite work
Before most AI investments, there's a set of prerequisite questions worth working through honestly.
Is the relevant data being captured at all? In many businesses, the information that would make an AI feature genuinely useful isn't in any system — it's in emails, in spreadsheets, in people's memories. If that's the case, the first project is capturing the data, not building the AI.
Is the data consistent enough to be trusted? Inconsistent data — different formats, missing fields, duplicate records, conflicting values — produces inconsistent AI outputs. Cleaning and structuring data is unglamorous work. It's also often the highest-leverage thing a business can do before an AI investment.
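A first pass at this doesn't require special tooling. Here's a rough sketch, with hypothetical field names, of the kind of audit worth running before trusting any AI output built on the data:

```python
from collections import Counter

def audit(records: list[dict], key: str, required: list[str]) -> dict:
    """Rough data-quality audit: duplicate keys and missing required fields."""
    keys = [r.get(key) for r in records]
    duplicates = [k for k, count in Counter(keys).items()
                  if k is not None and count > 1]
    missing = {field: sum(1 for r in records if not r.get(field))
               for field in required}
    return {"total": len(records), "duplicates": duplicates, "missing": missing}

# e.g. audit(contacts, key="email", required=["email", "account_id", "last_activity"])
```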
Are the systems connected? AI features that synthesise across multiple data sources are often the most valuable. They're also the most likely to fail if those sources can't be queried together. Understanding the integration architecture before you design the AI feature saves significant rework.
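In contrast with the three-systems example earlier, here's a minimal sketch, again with invented names, of how simple the synthesis step becomes once every source shares an identifier:

```python
def gather_context(customer_id: str, sources: dict) -> dict:
    """Pull one customer's records from each connected source via a shared id."""
    return {
        name: [r for r in records if r.get("customer_id") == customer_id]
        for name, records in sources.items()
    }

# sources = {"crm": crm_records, "billing": invoices, "support": tickets}
# context = gather_context("CUST-1042", sources)
```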
What are the privacy and compliance constraints? In regulated industries (healthcare, financial services, legal), the question of what data can be sent to an AI provider, what can be stored, and which users can see the output is a design constraint, not an afterthought. Building privacy in from the start is both the right approach and the practical one; retrofitting it is expensive.
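In practice, building privacy in often starts with a hard filter between your data and the provider. A minimal sketch, with illustrative field names:

```python
# Hypothetical denylist: fields that must never reach an external AI provider.
NEVER_SEND = {"medicare_number", "date_of_birth", "diagnosis", "home_address"}

def redact(record: dict) -> dict:
    """Strip restricted fields before any payload leaves the platform."""
    return {k: v for k, v in record.items() if k not in NEVER_SEND}
```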
The honest framing
AI is genuinely powerful. The businesses that get the most from it aren't necessarily the ones that move fastest or invest the most. They're the ones that do the unglamorous work first: cleaning the data, connecting the systems, defining what good output looks like for a specific user in a specific moment.
The AI does the interesting part. But the data is what makes it work.