Delphi Privacy Notice



WHAT IS THE PURPOSE OF THIS STUDY?


Artificial Intelligence (AI) provides a platform that can make accessible and personalised medicine a reality. Whilst AI has the potential to improve healthcare, it can also perpetuate and even exacerbate existing societal biases, leading to further health inequalities. A key consideration is the representativeness of the data underpinning AI: if the data is not representative of specific populations, there is a risk that the AI algorithm will underperform for those individuals. Poor data composition and quality in underrepresented groups risk poorer algorithmic performance and the perpetuation of societal biases.

To address this, gathering of health data needs to be designed with inclusivity and diversity in mind. We need standards to guide how AI datasets should be composed (‘who’ is represented in the data) and transparency around the data composition (‘how’ they are represented).

This project aims to develop standards for health data to support the development of AI algorithms that do not disadvantage minoritised population groups. The standards arising from this project will be of importance to policy makers, regulators, developers of AI systems, patients and healthcare professionals.


WHO IS ORGANISING AND FUNDING THE RESEARCH?


This project is being conducted by an international team of researchers from multiple research institutes – the full list of co-investigators can be found at www.datadiversity.org. The study is funded by the National Institute for Health Research (NIHR), NHSX and The Health Foundation as part of an AI and Racial and Ethnic Inequalities in Health and Care award (AI_HI200014).


WHY ARE WE APPROACHING YOU?


We are inviting individuals who have expertise in, or experience of, machine learning, health data science, digital health technologies and health inequalities. Your participation will help us to identify the items that should be incorporated in the standards we are seeking to produce. We are also inviting patients and other members of the public to participate, to ensure that the standards we produce meet the needs of wider society.


WHAT WILL HAPPEN TO ME IF I TAKE PART?


You will be contacted regarding participation in a modified e-Delphi study, which is a two-round questionnaire study. There will be a period of three weeks between rounds to allow for response analysis by the study team. A third round may be required in the event of poor consensus in round two. You do not have to complete all the rounds, but we prefer that you do, to preserve the validity of the study findings and minimise bias. Each round of the modified e-Delphi will take no longer than 40 minutes to complete in total.


DO I HAVE TO TAKE PART?


No, your participation is entirely voluntary. If you do decide to take part, you will be given a copy of this information sheet and will be asked to confirm consent on the first page of the online e-Delphi form.


WHAT WILL HAPPEN IF I NO LONGER WANT TO TAKE PART IN THE STUDY?


You may change your mind at any time (before the start of the study or even after you have commenced the study), for whatever reason, without having to justify your decision. Please contact the research team (Dr Xiaoxuan Liu, x.liu.8@bham.ac.uk) and they will remove your confidential information from the study database. You may additionally ask for the removal of data you have provided up until the point of withdrawal, in which case we will remove any data that is identifiably linked to you. However, please be advised that data will be aggregated and anonymised five days after each round of the Delphi study has closed; if you withdraw after this point, we will no longer be able to remove your data, as we will not be able to identify which responses were specifically collected from you.


WHAT ARE THE POSSIBLE BENEFITS OF TAKING PART?


There will be no direct benefits for you if you decide to take part. However, wider benefits are anticipated for marginalised or disadvantaged groups, who may not previously have been represented in the development of AI in healthcare, helping to ensure that minority groups can derive the same benefit from this technology that other groups currently experience.


WILL MY TAKING PART IN THIS STUDY BE KEPT CONFIDENTIAL?


All information about you which is connected with the research study will be kept strictly confidential. Only the research team will have access to any data generated from this study. The data that we will collect from you include the following: name, contact details (email), age, gender, ethnicity, professional role, organisation you work for and geographical location. When we use your information for research, we rely on Article 6(1)(e) (“processing is necessary for the performance of a task carried out in the public interest”) and Article 9(2)(j) (“processing is necessary for archiving purposes in the public interest, scientific or historical research purposes”) of the General Data Protection Regulation (GDPR), in combination with Schedule 1, Part 1, Paragraph 4 of the Data Protection Act (DPA) 2018. For more information on how we manage your data and what your rights are, our privacy notice can be accessed at https://www.birmingham.ac.uk/privacy/index.aspx

Electronic data will be stored on encrypted, password-protected University of Birmingham computers, whilst any data collected on paper, such as paper consent forms and any paper correspondence, will be stored in locked filing cabinets in the researcher’s office. Data will be kept for 10 years; after this retention period has finished, the study data will be destroyed in line with the University of Birmingham’s standard operating procedures. Any information used in publications or reports will be anonymised so that your identity cannot be known.


WHO HAS REVIEWED THIS STUDY?


This research study has been reviewed and approved by the University of Birmingham ethics committee (ERN_21-1831).

If you have concerns about any aspect of this study, or if you wish to withdraw, please contact Dr Xiaoxuan Liu (x.liu.8@bham.ac.uk) or email contact@datadiversity.org.