AI expert Zoë Webster explains why organisational oversight is more important than compute power for successful and responsible AI use.
The EU AI Act reflects a broad shift in how artificial intelligence (AI) is governed, covering not just development, but also use, and it applies whenever AI could impact an EU citizen.
Many systems that influence business operations today were not labelled as AI when they were first adopted, or have since been updated with AI-powered functionality, as many organisations are finding with some cloud-based enterprise software. This means they can often remain out of sight of the teams responsible for compliance.
For businesses, preparing for the AI regulation requires embedding clarity into the everyday use of automation, particularly where this includes AI (or constitutes a form of automated decision-making covered by the GDPR).
This means building an understanding of how any automated systems work, what decisions they influence, and who is responsible for managing, monitoring and maintaining them, whether they’re built in-house or outsourced.
Where regulation stands, and what’s coming
The EU AI Act came into force in August 2024, beginning a phased roll-out that started with prohibitions on AI practices posing unacceptable risk.
A subsequent wave of obligations applies to general-purpose AI models, including foundation models. These early requirements focus on transparency, documentation and responsible model behaviour, especially when such models are integrated into broader applications.
By August 2026, a further phase of requirements will apply to high-risk systems. These will bring more formal obligations around risk management, traceability and model performance.
The aim is to ensure that AI used in areas of material or legal impact, such as healthcare, education or employment, is built and managed with clear responsibility and robust internal processes.
This phased approach gives organisations time to prepare, but it also places the onus on internal teams to identify what AI systems they already rely on, and whether those systems can stand up to increasing scrutiny.
For many, these systems are already influencing outcomes, even if they’re not always recognised internally as AI.
Trace influence, not just inventory
One of the most important early actions businesses can take is to identify where AI is already shaping decisions. This means going beyond formal AI initiatives and examining embedded capabilities in enterprise software, workflow tools or customer platforms.
A software audit may show what tools are available, but not how they influence outcomes. Compliance depends on mapping where systems apply logic, prioritisation or classification, and understanding what happens as a result.
That visibility comes from developing a working understanding of how these tools behave, where their data comes from and who depends on their outputs.
Ask what assumptions sit behind each system. Find out when the model was last updated. Check whether performance is being tracked, whether teams know how to escalate issues, and whether someone is identified to catch those issues when they are raised.
These operational questions can seem rather theoretical, but they become critical for regulatory compliance under the EU AI Act. And are they not simply good practice anyway?
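For teams starting this mapping exercise, the answers to those questions can be captured in something as lightweight as a structured register. The minimal sketch below (in Python, purely for illustration; the field names and example entry are assumptions rather than anything prescribed by the Act) shows one way to record them per system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an illustrative AI system register.

    Field names are assumptions for illustration, not terms mandated by the EU AI Act.
    """
    name: str                                 # the tool or embedded capability
    decisions_influenced: list[str]           # what outcomes it shapes
    data_sources: list[str]                   # where its inputs come from
    assumptions: list[str]                    # assumptions that sit behind the system
    owner: str                                # who is accountable for it
    dependents: list[str] = field(default_factory=list)  # teams relying on its outputs
    last_model_update: date | None = None     # when the model was last updated
    performance_tracked: bool = False         # whether performance is being monitored
    escalation_contact: str | None = None     # who catches issues when they are raised

# A hypothetical example entry
record = AISystemRecord(
    name="CRM lead-scoring module",
    decisions_influenced=["which sales leads are prioritised"],
    data_sources=["CRM contact history"],
    assumptions=["past conversion patterns predict future ones"],
    owner="Sales operations lead",
    dependents=["Sales", "Marketing"],
    last_model_update=date(2025, 3, 1),
    performance_tracked=True,
    escalation_contact="Data governance team",
)
```

Even a simple record like this makes it easier to spot which systems lack an owner, an escalation route or a recent model update.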
McKinsey’s 2025 Global AI Survey found that 78pc of organisations have adopted AI in some form, yet a related survey found that only 1pc of company executives believe they’ve reached AI maturity. That maturity needs to reflect not just the availability of advanced tooling, but the systems of management and regulation around it to ensure accountability, transparency, control and alignment with strategic goals.
Make building trust part of the working structure
Confidence and trust in AI present a mixed picture. Some are confident that AI can quickly be woven into the fabric of an organisation to bring significant productivity benefits, while others remain suspicious or sceptical and keep their distance.
Whatever one’s level of trust in AI itself, it is the governance around it that really needs to be trustworthy.
To help build that trust, the EU AI Act is not just asking businesses to disclose that AI is in use. It is asking them to maintain oversight as those systems evolve: to show that inputs are relevant and representative, that decisions can be explained, that issues can be escalated, and that models can be adjusted with care and clarity.
Trust is also built through the way people work together. Operational leads, data owners and technical teams all hold different parts of the answer. When they’re brought into processes early, they can test assumptions, spot gaps and shape a system that can be explained, challenged and improved over time.
So, what does that mean in practice?
It means that governance can’t be something that’s filed away. It has to live and breathe inside the way teams work.
That includes knowing who is accountable and/or responsible, what information they have access to and what steps are taken when something no longer performs as expected. Those closest to the system need to know how to raise a concern, and those accountable need to be armed with the tools and the mandate to act.
This doesn’t require a complex new operating model. It does, however, require one that’s mature enough to surface problems and structured enough to effectively respond.
What readiness really looks like
This phase of the EU AI Act gives organisations some space to prepare. The systems already shaping decisions today are the ones that matter most. They’re not future risks to prepare for, but active responsibilities to understand and support.
Excellence in AI is not about volume or velocity. It’s about clarity of purpose, care in execution and awareness of how systems behave under a range of conditions.
And just as AI governance requires clarity, so too does capability. Most barriers to responsible deployment do not stem from infrastructure gaps or compute limitations. They stem from confidence issues, whether overly high or low, and from a lack of shared understanding of how to design, test and evolve systems collaboratively.
The skills that matter most here aren’t just technical. They are the skills to solve problems, think critically, test assumptions and work across disciplines. That’s what enables AI to scale not only compliantly, but with clarity, credibility and care.
By Zoë Webster
Dr Zoë Webster advises organisations on AI strategy and practice, having been in the AI space for more than two decades as a practitioner and leader. Until May 2024, she led BT’s AI Centre of Enablement, which she built from scratch to develop and deploy data science and AI at scale across the business. She is also a member of the Advisory Board for the UK’s National AI Awards.