AI and Health: what about the protection of personal data?

New technologies and artificial intelligence (AI) have the power to significantly improve the daily lives of doctors and the prognosis of patients when they draw on patients' personal health data. The rise of assisted surgery, companion robots, intelligent prostheses and personalized treatments, made possible by cross-referencing this personal data, testifies to this.

However, the use of AI in healthcare also raises important legal and ethical questions, particularly with regard to the management of patients' personal data, its confidentiality and the transparency of algorithms. The challenge is therefore to combine the use of AI with a responsible and ethical approach.

Artificial intelligence at the service of medical diagnosis

One of the most important applications of AI is to aid in medical diagnosis. AI can indeed be trained to recognize early signs of disease. In medical imaging in particular, health professionals can use machine learning algorithms to analyze medical images, in order to detect anomalies and diagnose pathologies early. 
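By way of illustration only (this sketch is not from the article and does not reflect any real clinical system), anomaly detection in imaging can be reduced to a toy idea: a model learns what "healthy" reference data looks like and flags inputs that deviate from it. All function names, data and thresholds below are hypothetical assumptions.

```python
import statistics

def fit_reference(healthy_scans):
    """Learn the mean and spread of average pixel intensity from healthy scans.

    healthy_scans: list of images, each represented here as a flat list of
    pixel intensities. A hypothetical stand-in for training a real model.
    """
    means = [statistics.fmean(scan) for scan in healthy_scans]
    return statistics.fmean(means), statistics.stdev(means)

def flag_anomaly(scan, ref_mean, ref_std, z_threshold=3.0):
    """Flag a scan whose mean intensity is a z-score outlier vs. the reference."""
    z = abs(statistics.fmean(scan) - ref_mean) / ref_std
    return z > z_threshold

# Toy data: three "healthy" scans and one clear outlier
healthy = [[10, 11, 9, 10], [10, 10, 12, 10], [9, 10, 10, 9]]
mu, sigma = fit_reference(healthy)
print(flag_anomaly([10, 10, 10, 10], mu, sigma))  # prints False (within range)
print(flag_anomaly([60, 58, 62, 61], mu, sigma))  # prints True (far outlier)
```

Real diagnostic systems use trained deep-learning models rather than a single summary statistic, but the underlying logic, learning a reference and flagging deviations, is the same.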

AI can also be used to analyze biological data and medical histories of patients, comparing them to a knowledge base to provide treatment recommendations or suggestions. AI can also analyze large amounts of clinical data and help predict the outcomes of treatments or interventions. 

Finally, it can be used to extract relevant information from electronic medical records in an automated way, thus speeding up the analysis process and allowing healthcare professionals to make informed decisions more quickly.

In this context, several projects have emerged, such as the French project "Automatic Processing of Emergency Room Summaries", known as "TARPON", which aims to analyze the origin of injuries suffered by patients presenting to the emergency room, in order to identify possible risks, such as those associated with taking certain medications. The AI analyzes annotated information in patients' clinical reports to classify trauma-related ER visits and, ultimately, to build a near-exhaustive trauma surveillance system.

In the United States, a predictive model, NYUTron, was developed using millions of medical observations drawn from the files of patients treated in hospitals affiliated with New York University (medical reports, notes on the evolution of patients' condition, radiological images, etc.) between January 2011 and May 2020. NYUTron was able to identify in advance 95% of the patients who died in hospital, as well as 80% of those readmitted less than a month after their discharge.

The legal challenge of artificial intelligence in the field of health: the protection and security of patients' personal data

One of the main challenges of health AI concerns the massive management of the health data it uses. 

According to the General Data Protection Regulation (GDPR)1, personal data concerning health is "all data relating to the state of health of a data subject which reveal information about the past, present or future physical or mental state of the data subject".

Given the mass of health data processed by AI in the various existing projects, it is essential to ensure that the GDPR is applied.

When an AI project involves the collection and use of personal health data, whether for research purposes or to improve algorithms, obtaining patients' informed consent for the use of their data is crucial. As such, in accordance with Article 32 of the GDPR, the data controller (for example, the healthcare establishment) is required to implement, from the design phase of the AI system, all the technical and organizational measures appropriate to guarantee a level of health data security commensurate with the risk. In addition, a data protection impact assessment (DPIA) is mandatory when the processing of personal data is likely to create a high risk for the rights and freedoms of data subjects. This involves studying the risks to data security (confidentiality, integrity and availability) as well as the potential impacts on the data subjects, in order to determine appropriate protection and risk-reduction measures.
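One common technical measure under Article 32 is pseudonymization of direct identifiers before data reaches an AI pipeline. The following is a minimal sketch using a keyed hash (HMAC-SHA256); the field names and key handling are illustrative assumptions, and a real deployment would also address key management, access control and encryption at rest.

```python
import hmac
import hashlib

# Hypothetical key: in production this would live in a key vault,
# never hard-coded in source.
SECRET_KEY = b"stored-separately-in-a-key-vault"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256).

    The same patient always maps to the same pseudonym, so records can
    still be linked for research, but the identity cannot be recovered
    without the secret key.
    """
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record with a direct identifier
record = {"patient_id": "FR-123-456", "diagnosis": "type 2 diabetes"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record["patient_id"][:16])  # pseudonym prefix, not the real identifier
```

Note that under the GDPR pseudonymized data remains personal data; pseudonymization reduces risk but does not exempt the controller from its other obligations.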

The processing of health data by AI involves an indisputably high risk: the data is sensitive, collected on a large scale and used by algorithms whose reliability is not always known. The impact assessment is therefore not only mandatory but essential.

Aware of the importance of these issues, the CNIL, in a communication entitled "AI: how to comply with the GDPR?" of April 5, 20222, recalled the main principles of the French Data Protection Act and the GDPR to be followed, as well as its positions on certain more specific aspects. A joint statement and action plan on generative AI was recently adopted by the data protection authorities of the G7 countries, gathered from June 19 to 21, 2023, in Tokyo, in order to contribute to the development of AI while respecting fundamental rights3.

The French National Assembly also set up a fact-finding mission on AI and the protection of personal data in May 2023, led by the rapporteurs Philippe Pradal and Stéphane Rambaud4.

While AI offers extremely promising prospects for improving healthcare services and the daily lives of patients, it remains crucial to combine its advantages with a responsible and ethical approach, in order to guarantee the protection and security of data, while ensuring the transparency of algorithms and guarding against the discriminatory effects they are likely to generate.


1 - Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)

2 - CNIL, AI: how to comply with the GDPR?, April 5, 2022

3 - CNIL, Generative AI: the G7 of data protection authorities adopts a joint statement, June 23, 2023

4 - National Assembly, Information missions of the committees, Artificial intelligence and data protection, Mr. Philippe Pradal, Mr. Stéphane Rambaud


Nathalie Boudet-Gizardin

Partner

Nathalie Boudet-Gizardin has developed expertise in advising and assisting health professionals (doctors, pharmacists, biologists, veterinarians, dental surgeons, midwives), both in structuring their activity legally and in negotiating their contracts and partnerships with healthcare establishments.


Marine Vanhoucke

Partner

Marine Vanhoucke advises companies on intellectual property matters and supports them on compliance issues.

Head of the Hong Kong office, she assists French companies with their establishment and growth in Asia and has built up expertise in international law, notably where French and Asian interests meet.