FAQs



What is AI?

AI stands for Artificial Intelligence. This is where tasks which humans usually perform are completed by a computer system. An example of this might be using a smartphone app to translate a word in English into French.


AI systems are computer programmes that find patterns in data. AI can learn patterns from different types of data, including information collected by hospitals. Data from hospitals can be combined to make ‘datasets’ - these datasets are then used to ‘train’ and test AI systems.

How is AI used in healthcare?

AI in healthcare can be used to help doctors diagnose diseases, decide on the best treatments, and many other tasks. For instance, an AI system might assess a patient’s chest x-ray image and determine whether it shows cancer or not.

What are some of the benefits of AI in healthcare?

AI might allow diagnosis and treatment to be delivered faster, reducing waiting times and improving outcomes for patients. Additionally, AI might provide insights into new treatments, and existing treatments might be better targeted, helping to preserve resources so that more patients can benefit.

What are the problems with AI in healthcare?

AI is not magic. Just as medications are less effective for certain people, AI systems can also be less effective for some people. Sometimes people may even be harmed by the impact of AI systems - similarly to how drugs may have side effects.


Some healthcare AI systems have already been shown to work less well for certain groups in society, meaning that these groups receive worse healthcare. In many cases the groups harmed are those who already have the worst access to healthcare, so existing disadvantage is worsened. For example, a USA-based AI system which was supposed to estimate who needed healthcare most was shown to discriminate against Black patients.


One of the reasons that AI systems might not work well for everyone relates to the data used in their development. When we don’t have enough good-quality data for certain groups of people, we cannot develop or test AI systems for those people. Data protection is also really important to ensure that people’s data are not abused. Carefully designing AI systems using balanced, good-quality data can reduce the risk that people will come to harm, but it doesn’t totally eliminate this risk.


Just like other healthcare interventions (such as medications, or surgeries), the safety and effectiveness of AI systems must be studied in clinical trials. It’s important that these trials include a diverse group of people, otherwise doctors will be less sure that the AI system works for everyone.

How will the STANDING Together project improve AI in healthcare?

We hope that by producing recommendations to improve the quality and diversity of the data used to develop AI systems, and to improve the way datasets are used, the quality of AI systems themselves will improve. The recommendations should allow us to better understand how AI systems perform for people from different backgrounds, allowing everyone in society to benefit.

What is PPIE?

Patient and public involvement & engagement in research (PPIE) is when researchers form partnerships with patients or members of the public to collectively design and carry out research. By involving patients and members of the public, our research is more likely to be relevant to the needs of society.

What is a Delphi study in the context of the STANDING Together project?

A Delphi study (or simply a ‘Delphi’) is a type of research that uses a series of surveys to gather people's opinions about a topic, then uses these opinions to refine and develop recommendations.


Participants are asked to give their opinions on a series of statements by voting on whether each statement is important and whether it should be changed. Statements which are popular (receive the highest votes) are carried forwards to the next survey; those which are unpopular (receive the lowest votes) are removed. Participants can also leave comments to help the researchers improve some of the statements.


For example, a statement might read “Healthcare should be equally high quality for every person, regardless of their wealth, sex, or ethnicity”. Participants are likely to vote that this statement is important, but may comment that we should add “gender identity, race, and age” to the statement.


When every participant has had the chance to vote, the researchers go through the results and make any changes suggested in the comments. Statements voted as unimportant are removed altogether. The same participants are then asked to vote again on the amended statements, and any amended statements now rated as unimportant are removed. This process continues until all remaining statements are voted as being of ‘high importance’ by most of the participants.
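For readers curious about the mechanics, the round-by-round filtering described above can be sketched as a short program. The 70% threshold, the yes/no vote format, and the example statements below are purely illustrative assumptions, not the actual criteria or statements used by STANDING Together.

```python
# Illustrative sketch of one Delphi voting round (hypothetical threshold and data).
def delphi_round(statements, votes, keep_threshold=0.7):
    """Keep statements rated 'important' by at least keep_threshold of voters.

    statements: list of statement texts
    votes: dict mapping each statement to a list of bools (True = voted important)
    """
    kept = []
    for statement in statements:
        ratings = votes.get(statement, [])
        # Carry the statement forward only if enough voters rated it important.
        if ratings and sum(ratings) / len(ratings) >= keep_threshold:
            kept.append(statement)
    return kept

# Made-up example: two statements, four voters each.
statements = [
    "Datasets should describe who is represented in the data",
    "Datasets should record favourite colours",
]
votes = {
    statements[0]: [True, True, True, False],    # 75% important -> carried forward
    statements[1]: [False, False, True, False],  # 25% important -> removed
}
print(delphi_round(statements, votes))
# Prints only the first statement; the second is dropped from the next survey.
```

In a real Delphi, this filtering repeats over several rounds, with statements reworded between rounds based on participants' comments, until the surviving list is stable.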


At this stage, a final meeting is usually held between participants. At the meeting, each of the successful statements is discussed in more detail, and a panel votes for a final time on whether they should be included. This final meeting achieves ‘consensus’ - an agreement between people on a final list of recommendations.

What was the STANDING Together Delphi study for?

The STANDING Together project used a Delphi study to decide on a list of recommendations aimed at improving how health data are collected and used (particularly when making AI tools for healthcare use).


We reviewed many existing guidelines, standards, and frameworks, and took the most useful parts of each. These were assembled into a list of statements (termed ‘items’ at this stage), and the list was voted on as described above.