Regulatory Clarity Is Key to Unlocking AI
This blog looks at common questions that arise during the AI deployment lifecycle and explains why accessible answers are paramount if AI is to scale.
Published on 22 September 2025 by Robin Carpenter, Head of AI Governance & Policy at Newton’s Tree
How do we put an AI product on the market? How do we make sure an AI product stays safe?
Questions like these have been common over the past several years whilst I have been advising academics, SMEs and NHS hospital staff on how to navigate the regulation that impacts healthcare AI. Sometimes this advice takes place in lectures, such as for the NHS Fellowship in Clinical Artificial Intelligence; at other times in short consultations or in NHS workshops run by Newton’s Tree.
These questions are often simple until they are complex. They are simple because regulation sets many hard boundaries that can be supplemented with good practice; they are complex because you can often look at a single word in regulation and find several perspectives on it.
However, you cannot run before you walk, and if this country is to achieve its ambitions of improving care with AI, then walking is paramount. Walking here would mean the ecosystem has some confidence in what regulation is asking of it.
The questions that emerge depend on the stage the person asking them has reached, and these stages can be understood through the AI deployment lifecycle.
The AI Deployment Lifecycle
As someone moves through the AI deployment lifecycle, the regulations impacting AI get louder, and so the individual’s demand for regulatory knowledge gets stronger.
- The first step is Pre-deployment. During this stage they are asking: why AI? What problem am I solving? How do I judge whether a product solves that problem? This is when early questions should start to emerge, such as how data security is maintained and what product evaluation should look like.
- It is possible that during the Pre-deployment stage they discover the market is not meeting their needs. If so, they will think about building an in-house model and will ask what they must do to meet the expected standards for doing so.
- Once a product is chosen to address the problem identified, the next step is preparing for deployment into care, and here regulation and standards should be firmly front of mind. They will be asking more questions regarding how to ensure the deployment is technically, legally and clinically safe.
- During post-deployment the primary task becomes ensuring that the AI product which started safe remains safe. There will be clinical governance and post-market surveillance frameworks to work within, which should consider familiar AI issues like automation bias.
- Eventually the product must leave care, and this too must be done safely. There should be a plan to follow, largely informed by decisions made at previous stages.
Throughout this lifecycle, questions will emerge from related areas such as data protection, medical device regulation and clinical safety. It is easy to see how a lack of education on these can stifle the development and deployment of healthcare AI. Standards are a vast, sprawling topic, but one that cannot be ignored if we are to maintain our duties to the patients we help.
The Need for a Growing AI Digital Regulations Service
So, a repository of answers to common questions is fundamental if the national ambition to increase the quantity of quality AI is to be achieved. Thankfully, many common questions can now be answered via the AI Digital Regulations Service (AIDRS). Further, as these resources grow, so can the autonomy of the groups trying to improve care, enabling them to answer many of the questions that emerge throughout the AI deployment lifecycle. However, as the ecosystem has matured, so have the questions, both ethical and legal. To stay ahead of this there must be a wider network of experts, such as CERSI-AI, who examine the grey areas of regulation for clear answers. These are the groups that address the more complex questions, like how to reconcile the various perspectives on a key word in regulation. Their answers can then be fed into the regulatory ecosystem, including the AIDRS, so that builders and users of healthcare AI can focus on the most important task: delivering and improving patient care.
Disclaimer
This blog is intended to provide insights into individual experiences and does not reflect the views or recommendations of the AI and Digital Regulations Service (AIDRS) partners. AIDRS emphasises that users should continue to seek and adhere to the formal statutory guidance and legal requirements applicable to their specific circumstances. It is the responsibility of the legal manufacturer to comply with all applicable statutory regulations.