The Recommendations
Version 1.0 published 30th October 2023.
DOI: 10.5281/zenodo.10048356
The potential for artificial intelligence (AI) to benefit our health must be balanced against the risks of algorithmic bias and the harms it can cause. These technologies may work better for some groups and worse for others, causing or worsening health inequalities.
STANDING Together aims to ensure that inclusivity and diversity are considered when developing health datasets and AI health technologies.
We have developed recommendations through an international consensus process. They provide guidance on transparency around 'who' is represented in the data, 'how' people are represented, and how data is used when developing AI technologies for healthcare.
By getting the data foundation right, STANDING Together aims to ensure that 'no-one is left behind' as we seek to unlock the benefits of AI in healthcare.
Scope
We intend for these items to be...
Recommendations for best practice
Of value at every stage in the data life cycle
Principle-based, allowing them to be relevant in any jurisdiction
Of sufficient detail to be implementable, yet flexible enough to be applied in different settings
Used mainly when developing AI health technologies (but they may also have other uses)
An extended scientific paper on these recommendations, including detailed explanatory text giving context and rationale for each item, will be published in due course.
STANDING Together is building STANdards for data Diversity, INclusivity, & Generalisability. Established in 2021 as part of the NHS AI Lab’s AI Ethics initiative, it is a partnership between over 30 academic, regulatory, policy, industry, and charitable organisations worldwide. STANDING Together is funded by the NHS AI Lab at the NHS Transformation Directorate and The Health Foundation and managed by the National Institute for Health and Care Research (AI_HI200014).