With artificial intelligence (AI) becoming an increasingly prominent fixture in healthcare, the British Standards Institution (BSI) has now launched new guidance to help build digital trust in AI-based innovations within the health sector – particularly from an ethical and safety perspective.
The global framework is specifically targeted at products whose primary function is to enable or provide treatment, diagnosis or condition management.
With budgetary constraints rife across the NHS, BSI highlights how the auditable standard can help health leaders decide which AI tools to adopt.
Clinical benefit, performance standards, integration into the working environment, ethical considerations, and socially equitable outcomes are all criteria which health professionals can evaluate using the framework.
The guidance encompasses a wide array of AI products, including:
- Regulated medical devices – such as software as a medical device
- Clinician-facing tools – such as imaging software
- Patient-facing products – such as AI-enabled smartphone chatbots
- Home-based innovations – such as monitoring tools
“This standard is highly relevant to organisations in the healthcare sector and those interacting with it,” said BSI’s global healthcare director, Jeanne Greathouse. “As AI becomes the norm, it has the potential to be transformative for healthcare.”
The framework has been developed in conjunction with health leaders, software engineers, ethicists, and other subject matter experts. BSI says relevant stakeholders can embed its guidance as a mandatory component of their procurement process, giving assurance that any tools or products have met a known standard.
It is also an evolved iteration of similar guidance trialled at Guy’s and St Thomas’ NHS Foundation Trust.
For more information, see Validation framework for the use of AI within healthcare – Specification (BS 30440).