What is AI?

AI stands for Artificial Intelligence. This means delegating tasks which humans usually perform to a computer system. An example might be using a smartphone app to translate a word from English into French.

AI systems are computer programs. They are created by allowing computers to ‘learn’ patterns in data. Data collected from hospital records can be combined to make a dataset - these datasets are then used to ‘train’ and test AI systems.

How is AI used in healthcare?

AI in healthcare can help doctors diagnose patients, decide on the best treatments, and perform many other tasks. For instance, an AI system might look at a patient’s chest x-ray image and determine whether or not it shows cancer.

What are some of the benefits of AI in healthcare?

AI might allow treatments to be delivered to patients faster, reducing waiting times. Additionally, new treatments may become available, or existing treatments might be targeted more effectively, preserving resources so that more patients can benefit.

What are the problems with AI in healthcare?

AI is not magic. Just as medications are less effective for certain people, AI systems can also be less effective for some people. Sometimes people may even be harmed by AI systems - much as drugs may have side effects.

Some healthcare AI systems have already been shown to work less well for certain groups in society, meaning that people in these groups receive worse healthcare. In many cases, these groups already have the worst access to healthcare, so existing harms are made worse. For example, a USA-based AI system intended to estimate which patients most needed healthcare was shown to discriminate against Black patients.

Data are critical in the development and testing of AI systems - when we don’t have enough good-quality data for certain groups of people, we cannot develop or test AI systems for those people. Data protection is also vital to ensure that people’s data are not misused. Carefully designing AI systems using balanced, good-quality data can reduce the risk that people will come to harm, but it does not eliminate this risk entirely.

Just like other healthcare interventions (such as medications or surgeries), AI systems must be studied in clinical trials - this gives greater certainty that they are safe and effective. It’s important that these trials include a diverse group of people; otherwise, doctors will be less sure that the AI system works equally well for everyone.

How will the study improve AI in healthcare?

We hope that by setting standards which improve the quality of the data used to develop AI systems, and improve the way datasets are used, the AI systems themselves will be improved. This should mean that they work well for everyone in society.

By participating in our Delphi study, you can help us build standards which work for you, giving you confidence that medical AI systems will help us all lead healthier, happier lives.

What is a Delphi study in the context of the STANDING Together project?

A Delphi study (or simply a ‘Delphi’) is a type of research in which researchers use a series of surveys to gather people's opinions about a topic, then use these opinions to refine and shape existing standards or to develop new ones.

Participants are asked to give their opinions on a series of statements by voting on whether each statement is important and whether it should be changed. Statements which are popular (receive the most votes) are carried forward to the next survey. Those which are unpopular (receive the fewest votes) are removed. Participants can also provide comments to help the researchers improve the statements.

For example - a statement might read “Healthcare should be equally high quality for every person, regardless of their wealth, sex, or ethnicity”. Participants are likely to vote that this statement is important, but may comment that we should add “gender identity, race, and age” to it.

When every participant has had the chance to vote, the researchers go through the results and make any changes suggested in the comments. Statements which were rated as unimportant are removed altogether. The same participants are then asked to vote again on the amended statements and give further comments. Amended statements which participants now rate as unimportant are removed. This process continues until most participants rate all remaining statements as being of ‘high importance’.

At this stage, a final meeting of participants is usually held. At the meeting, each of the successful statements is discussed in more detail, and a panel votes a final time on whether it should still be included. This final meeting achieves ‘consensus’ - an agreement between people on a final list of statements.

How will STANDING Together use the Delphi study?

The STANDING Together project will use a Delphi study to decide on a list of standards aimed at improving how health data are collected and used (particularly when making AI tools for healthcare use).

We have already reviewed many existing guidelines, standards, and frameworks, and have taken the most useful elements of each. These have been assembled into a list of statements (termed ‘items’ at this stage), and the list will be voted on as described above.

What is PPIE?

Patient and public involvement and engagement (PPIE) in research is when researchers form partnerships with patients or members of the public to design and carry out research. By involving patients and the public, our research is more likely to be relevant to the needs of society.