16/09/2025
Why bad data will derail your AI ambitions
The artificial intelligence revolution is well underway, with UK organisations investing billions in AI capabilities that promise to transform operations and unlock new revenue streams. Yet a worrying trend is emerging from the data: 95% of generative AI pilots are failing, according to a recent MIT report, and nearly half of companies abandoned AI projects altogether in 2025.
The culprit isn’t inadequate technology – it’s the data feeding these systems. While humans can work around incomplete information using context and judgment, AI systems amplify whatever patterns exist in their training data, including errors, biases, and gaps. Poor data doesn’t just limit AI performance; it actively undermines it, creating systems that deliver unreliable insights and erode trust in data-driven decision making.
The UK data crisis
The data quality challenge facing UK organisations is more severe than many leaders realise. Recent research from Experian reveals that 81% of organisations are held back by distributed data spread across multiple systems and locations, while 77% say their current tools can’t handle the volume of data they process.
This fragmentation creates a perfect storm for AI failure. IBM research finds that data quality and availability is the most common obstacle, cited by 51% of European organisations implementing new AI pilot projects. It’s clear that technical sophistication means nothing without reliable foundations.
The financial impact is enormous. Experian estimate that the “cost of bad data is an astonishing 15% to 25% of revenue for most companies”, while the UK Government claim that organisations spend between 10% and 30% of revenue on handling data quality issues.
When AI amplifies the problem
AI systems don’t just inherit data quality issues – they amplify them in ways that can be difficult to detect:
The confidence trap
•Unlike traditional reports that might show obviously incorrect results, AI-generated insights often appear plausible and internally consistent
•Poor decisions based on flawed AI recommendations seem “data-driven” and authoritative
•Teams become overconfident in unreliable outputs, leading to more significant strategic errors
Historical bias
•AI learns from previous patterns, including biases that may no longer be appropriate
•A local authority implementing AI resource allocation discovered their system consistently under-served certain areas – not due to malfunction, but because it accurately reflected historical under-recording of service requests from digitally excluded communities
Feedback loops
•Poor decisions create more poor data, which trains AI systems to make even worse decisions
•Each version compounds the problem, making it harder to identify the original source of errors
Trust erosion
Perhaps the most damaging long-term consequence is organisational trust erosion:
Internal scepticism
•When AI systems produce obviously incorrect results, stakeholders become resistant to all analytical insights
•Fewer than half of AI projects ever reach production, often because teams lose confidence during development
Rebuilding challenges
•Trust is far harder to rebuild than it was to lose in the first place
•Teams revert to manual processes, setting back transformation efforts by years
External reputation damage
•Customer-facing AI errors harm brand credibility beyond the specific system failure
•Social media amplifies negative experiences, creating lasting reputational consequences
UK investment in AI
Forward-thinking UK organisations are recognising this challenge. 76% of UK businesses are planning to invest in data quality and consistency over the next two years, making it their number one priority. Significantly, 74% are turning to AI to support their data quality efforts – using AI to fix the data that will power future AI systems.
As UK Government Technology Secretary Peter Kyle emphasises: “AI has the potential to change all of our lives but for too long, we have been curious and often cautious bystanders to the change unfolding around us. With this plan, we become agents of that change.” This shift from being a bystander to actively participating requires the solid data foundations that make AI transformation possible.
Building AI-ready data foundations
Addressing data quality before implementing AI requires systematic approaches beyond traditional cleansing:
Governance evolution
•Establish clear ownership and accountability
•Address AI-specific issues like bias and model explainability
•Involve legal, compliance and subject matter experts beyond traditional IT teams
Quality monitoring
•Implement AI-specific validation of statistical distributions, accuracy and completeness
•Develop quality metrics aligned with AI performance requirements
•Create automated, continuous monitoring rather than periodic checking
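The monitoring points above can be sketched as simple automated checks that run on a schedule rather than as a one-off audit. This is a minimal illustration in plain Python – the field names, thresholds and `customer_records` data are all hypothetical, not a production monitoring tool:

```python
# Minimal data-quality monitoring sketch: completeness checks with
# alert thresholds, intended to run continuously (e.g. on a schedule)
# rather than as periodic manual checking. All names and thresholds
# below are illustrative assumptions.

def completeness(records, field):
    """Fraction of records where `field` is present and non-empty."""
    if not records:
        return 0.0
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def run_checks(records, rules):
    """Evaluate each rule; return (field, actual, threshold) failures.

    `rules` maps a field name to its minimum acceptable completeness.
    """
    failures = []
    for field, min_ratio in rules.items():
        ratio = completeness(records, field)
        if ratio < min_ratio:
            failures.append((field, ratio, min_ratio))
    return failures

# Hypothetical customer records with gaps in email and postcode.
customer_records = [
    {"id": 1, "email": "a@example.com", "postcode": "SW1A 1AA"},
    {"id": 2, "email": "b@example.com", "postcode": ""},
    {"id": 3, "email": "", "postcode": "EC1A 1BB"},
]

rules = {"email": 0.9, "postcode": 0.9}
for field, ratio, threshold in run_checks(customer_records, rules):
    print(f"ALERT: {field} completeness {ratio:.0%} below {threshold:.0%}")
```

In a real deployment the same idea would be wired into a scheduler or data pipeline, with thresholds agreed between the business owners of each data stream and the teams building the AI systems that consume it.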
Cultural transformation
•Embed data quality into business processes rather than treating it as a technical concern
•Ensure data literacy and training become part of business conversations
•Build collaboration between departments to break down data silos
The competitive reality
While poor data quality creates widespread challenges, it also creates opportunities. In markets where competitors struggle with unreliable data and AI systems, organisations that solve their data quality problems gain a genuine competitive advantage.
The most successful AI implementations share a common characteristic: careful attention to data foundations before pushing into advanced capabilities. These organisations may take longer to deploy initial systems, but they avoid the costly cycles of poor performance, lost confidence and system rebuilding that competitors may suffer.
IBM identified integration with existing systems as an obstacle for 42% of businesses attempting to adopt AI, so organisations with clean, well-integrated data environments start with significant advantages.
The path forward
Success with AI requires treating data quality as a strategic investment rather than a technical prerequisite. This means:
Immediate actions
•Audit current data quality across all systems feeding potential AI applications
•Implement automated monitoring for the data streams most critical to business decisions
•Establish cross-functional teams that combine technical expertise with business domain knowledge
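The audit step above often starts with something as simple as comparing how the same entity is recorded in different systems. The sketch below illustrates this in plain Python – the system names ("crm" and "billing"), fields and records are hypothetical assumptions, chosen only to show the shape of a cross-system audit:

```python
# Sketch of a cross-system data audit: compare the same entities held
# in two hypothetical systems to surface records missing from one side
# or disagreeing on a key field. All names and data are illustrative.

def audit_systems(crm, billing, key="id", field="email"):
    """Return keys missing from either system and field mismatches."""
    crm_by_key = {r[key]: r for r in crm}
    billing_by_key = {r[key]: r for r in billing}

    missing_in_billing = sorted(crm_by_key.keys() - billing_by_key.keys())
    missing_in_crm = sorted(billing_by_key.keys() - crm_by_key.keys())
    mismatches = sorted(
        k for k in crm_by_key.keys() & billing_by_key.keys()
        if crm_by_key[k][field] != billing_by_key[k][field]
    )
    return {
        "missing_in_billing": missing_in_billing,
        "missing_in_crm": missing_in_crm,
        "mismatches": mismatches,
    }

crm = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": "b@example.com"},
    {"id": 3, "email": "c@example.com"},
]
billing = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": "b@old-domain.com"},  # stale address
    {"id": 4, "email": "d@example.com"},     # unknown to the CRM
]

report = audit_systems(crm, billing)
print(report)
```

Even a basic report like this gives a cross-functional team a concrete starting point: every mismatch is a conversation between the people who own each system about which record is authoritative.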
Long-term capabilities
•Develop capabilities that maintain quality at scale
•Build competitive advantages through reliable information assets
•Create data governance frameworks that evolve alongside AI capabilities
The organisations that understand the fundamental relationship between data quality and AI success won’t just avoid the pitfalls that derail competitors – they’ll build capabilities that create lasting competitive advantages. In an increasingly AI-driven environment, that foundation makes the difference between AI systems that transform operations and those that simply create expensive complications.