
This is best practice guidance

Although not legally required, it's an essential activity.

This Guide covers:

  • United Kingdom

From:

  • NHS England

Adopters - Understanding if digital technologies or AI are the right solution to a problem

Reviewed: 23 January 2023

You need to assess whether digital healthcare technologies or AI are the right solution to the problem you are trying to solve. There may be a simpler solution.

Identifying the problem

Starting with the problem you are trying to solve should be the first step of any decision to use a digital or data-driven technology such as AI. You need to map the current care pathway, and use the NHS service standard to help you identify problems and understand end-user needs. This is particularly important for new technologies, whose novelty might make it easy to overlook this critical step. You might then consider, ‘what is it about this technology that makes it a good choice for addressing this problem?’. Ask yourself what you are looking to improve in your service, and what metrics matter in measuring this improvement.

You should describe and quantify the quality improvements, savings and efficiencies that using AI would create for your organisation. If the technology is likely to deliver only small improvements, reconsider whether it is needed at all.

Deciding whether AI is the right solution for the problem

To help you decide whether AI is the right solution, see government guidance on assessing if AI is the right solution and a guide to good practice for the use of digital technology in health and social care. You will need to think about whether:

  • there is enough data for an AI model to learn from, and whether you can test the technology on historical data in your organisation to evaluate its potential impact
  • you can do a pilot study to test whether this impact can be achieved in a live, operational setting
  • the data can be used ethically and safely in a secure data environment
  • the outputs of the model could be tested for accuracy against a ‘ground truth’ (that is, information that is known to be real or true, provided by direct observation and measurement)
  • the model outputs would lead to problem-solving that achieves outcomes in the real world
  • a clear accountability framework could be set up to monitor ongoing safety and effectiveness, including how this varies across demographic groups
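Two of the checks above, testing model outputs against a 'ground truth' and examining how accuracy varies across demographic groups, can be sketched in a few lines of code. This is an illustrative sketch only: the predictions, ground-truth labels and age-band groups below are entirely hypothetical, and a real evaluation would use your organisation's own historical data and clinically appropriate metrics.

```python
# Illustrative sketch: comparing model outputs with a ground truth,
# overall and broken down by demographic group. All data is made up.

from collections import defaultdict

def accuracy(predictions, truths):
    """Fraction of predictions that match the ground truth."""
    correct = sum(p == t for p, t in zip(predictions, truths))
    return correct / len(truths)

def accuracy_by_group(predictions, truths, groups):
    """Accuracy computed separately for each demographic group."""
    buckets = defaultdict(lambda: ([], []))
    for p, t, g in zip(predictions, truths, groups):
        buckets[g][0].append(p)
        buckets[g][1].append(t)
    return {g: accuracy(ps, ts) for g, (ps, ts) in buckets.items()}

# Hypothetical historical data: model outputs, confirmed outcomes,
# and an age-band label for each case.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 0, 1, 1, 0]
group = ["under65", "under65", "under65", "over65",
         "over65", "over65", "under65", "over65"]

print(f"Overall accuracy: {accuracy(preds, truth):.2f}")     # 0.75
for g, acc in sorted(accuracy_by_group(preds, truth, group).items()):
    print(f"  {g}: {acc:.2f}")
```

In this made-up data the model performs noticeably worse for one age band than the other, which is exactly the kind of variation an ongoing accountability framework should surface and monitor.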

Thinking these considerations through may help you identify a simpler solution to the problem. It may also help you meet regulatory requirements if you choose to use AI.


Get more support

To find out how the regulatory organisations can support you, and for their contact details, visit our 'Get Support' page.


Regulations are regularly updated. For the latest information, check the website, as printed documents may be out of date.