EMERGING TRENDS IN DATA PRIVACY FOR LIFE SCIENCES
December 8th, 2023
Privacy concerns in the Life Sciences sector are shaped by the highly sensitive nature of the data involved. The processing of these data is subject to a stringent regime. Under the European General Data Protection Regulation (GDPR), such processing is prohibited unless it falls within one of the exceptions listed in Article 9. The GDPR cites the significant risks to the fundamental rights and freedoms of individuals, including the possibility of discrimination, as the rationale for this approach. In this context, it is crucial that companies keep abreast of technological and regulatory developments to ensure compliance with data protection requirements.
Fast-paced digitalization and AI development in the Life Sciences sector
The COVID-19 pandemic accelerated the transition of clinical research to a more decentralized model. To adapt to the exceptional circumstances arising from the health emergency, numerous countries permitted remote monitoring in clinical trials. In addition, mobile applications, wearables, and other connected devices are now widely used to facilitate remote patient monitoring and data collection. While very promising, this digital and interconnected environment increases the challenges of safeguarding security and data subjects’ fundamental rights to data protection and privacy, as enshrined in the Charter of Fundamental Rights of the European Union (EU).
We are currently experiencing an era of artificial intelligence (AI) growth. The medical and healthcare sector reportedly accounts for the highest investment in AI, reaching 6.1 billion dollars globally in 2022. AI is already being used as a tool in prevention, diagnosis and prognosis (particularly in radiology), and personalized medicine.
AI’s impact on the Life Sciences sector
The impact of AI on drug discovery and development is also expected to grow, resulting in reduced costs and improved outcomes. The United States Food and Drug Administration (FDA) recently granted the first orphan drug designation to a drug discovered and developed through generative AI, which is currently in a Phase II trial. The use of AI extends to all clinical research operations, including study research and design, site selection, patient identification and recruitment, and the collection and analysis of clinical data.
The use of AI in this context puts pressure on several core data protection principles under the GDPR:
- Data Minimization: personal data shall be adequate, relevant, and limited to what is necessary in relation to the specific purposes for which they are processed. This principle may clash with the essence of AI technologies, which typically require access to large datasets (see the illustrative sketch after this list).
- Purpose Limitation: the purpose of processing must be clearly defined before data are collected. AI technologies enable the reuse of data for purposes other than those determined at the time of collection, so it is necessary to assess whether these new purposes are compatible with the original ones.
- Transparency: data subjects must be informed about the processing of their personal data when digital tools, online platforms, or AI-based technologies are used. Establishing the appropriate level of information is challenging, as the way AI technologies work can be difficult to understand and to explain (the “black box” nature of AI, whereby the system’s inputs and operations are invisible to the user). Additionally, the digital literacy of data subjects varies, and a balance must be struck to comply with the transparency principle without increasing data subjects’ reluctance to share their data.
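To make the minimization and purpose limitation points more concrete, the sketch below (in Python) illustrates one common approach: keeping only the fields needed for a stated purpose and replacing direct identifiers with keyed-hash pseudonyms. The field names, the APPROVED_FIELDS set, and the key handling are illustrative assumptions only, not a compliance recipe or a description of any particular system.

```python
# Minimal illustration (assumptions only): retain just the fields needed for a
# defined purpose and replace the direct identifier with a keyed-hash pseudonym.
import hashlib
import hmac

# Hypothetical set of fields deemed necessary for the stated purpose.
APPROVED_FIELDS = {"age_band", "diagnosis_code", "treatment_arm"}

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with an HMAC-SHA256 pseudonym.
    The key must be stored separately under strict access controls."""
    return hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

def minimize(record: dict, secret_key: bytes) -> dict:
    """Drop every field not needed for the purpose and pseudonymize the ID."""
    reduced = {k: v for k, v in record.items() if k in APPROVED_FIELDS}
    reduced["pseudonym"] = pseudonymize(record["patient_id"], secret_key)
    return reduced

if __name__ == "__main__":
    key = b"replace-with-a-securely-managed-key"  # illustrative placeholder
    raw_record = {
        "patient_id": "PAT-00123",
        "full_name": "Jane Doe",          # direct identifier: dropped
        "home_address": "12 Example St",  # not needed for the purpose: dropped
        "age_band": "40-49",
        "diagnosis_code": "C50.9",
        "treatment_arm": "B",
    }
    print(minimize(raw_record, key))
```

A keyed hash is used rather than a plain hash so that pseudonyms cannot be recomputed by anyone who does not hold the key; even so, pseudonymized data remain personal data under the GDPR, so all other safeguards continue to apply.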
It is therefore key to ensure compliance with data protection regulations throughout the lifecycle of the AI system. This is necessary to avoid bias and discrimination, and to enable data subjects to exercise their rights and keep control over their data. Additionally, companies should keep in mind that the use of AI technology will often entail high risks to data subjects’ rights, triggering the mandatory performance of a data protection impact assessment.
Digital technologies and AI trigger a complex regulatory landscape
AI Act
The EU has proposed what would constitute the first comprehensive regulation on AI: the AI Act. The goal of the act is to ensure that the development of innovative AI-based technologies can continue while protecting people’s rights. Under the proposed legislation, AI technologies are classified by risk, from minimal to unacceptable. Importantly, AI systems used in healthcare may be classified as high-risk and would be strictly regulated. To develop and assess high-risk AI systems, actors need access to high-quality datasets. Moreover, the risks that AI may pose to individuals’ privacy must be taken seriously, as AI may reveal personal information about an individual. The GDPR is therefore one of the pillars of the AI legislation. Together, the AI Act and the GDPR aim to provide trustworthy, accountable and non-discriminatory access to high-quality data for the training, validation and testing of AI systems, while protecting individuals’ privacy.
The proposal is currently in trilogue negotiations and is expected to be approved in the coming months. In the meantime, the European Data Protection Board is closely following the negotiations to ensure compatibility with the GDPR.
Data Governance Act
The Data Governance Act seeks to facilitate the voluntary sharing and reuse of data, including health data, which is exchanged in a variety of contexts:
- Within and between hospitals: to improve the quality of medical care and to prescribe optimal treatments, taking into account the individual’s other health conditions.
- For evidence-based medicine: to make decisions about the care of individual patients. This involves data collected from clinical trials and meta-analyses of their results, but also from observational and epidemiological studies.
- Within the academic and pharmaceutical sectors: to develop new medicines and find treatments for rare diseases.
European Health Data Space (EHDS)
The ways in which personal data are shared are shaped by advances in digital technologies. The European Health Data Space (“EHDS”) is intended to address the challenge of handling the vast amounts of data required across healthcare, including clinical research, hospitals, and pharmacies. However, its interplay with the GDPR may cause frictions and uncertainties that remain to be resolved. The EHDS aims to empower individuals to exercise control over their health data and to provide a secure avenue for accessing data, both for healthcare delivery in hospitals and for secondary uses such as clinical research.
US regulatory landscape
While we wait for the approval of the American Data Privacy and Protection Act (ADPPA), several US states are working on their own comprehensive privacy laws. New state laws have entered into force this year (the California Privacy Rights Act (amending the CCPA), the Colorado Privacy Act, the Connecticut Personal Data Privacy and Online Monitoring Act, and the Virginia Consumer Data Protection Act), while others are expected to be approved or become effective in the short term. Additionally, some states have adopted a sectoral approach, the most significant example being Washington’s My Health My Data Act. It is also worth noting the prominent role the Federal Trade Commission (FTC) has taken with regard to health data. The FTC has started enforcing the Health Breach Notification Rule, which applies to entities not covered by HIPAA, and a recently proposed amendment to this rule clarifies that many health applications fall within its scope.
Ensuring data security and respecting individuals’ privacy is more important than ever
The healthcare sector, and hospitals in particular, is a prime target for cyberattacks. Such was the case with the ransomware attack on the Hospital Clínic de Barcelona earlier this year. As a result of the incident, thousands of appointments had to be rescheduled because systems and clinical records could not be accessed, and highly sensitive patient and staff data was leaked on the dark web.
These types of attacks often lead to the disruption of operations, regulatory fines, and reputational damage. According to the latest IBM Cost of a Data Breach Report, data breaches are more costly in healthcare than in any other sector, averaging 10.93 million dollars in 2023.
In conclusion, while new technologies present numerous opportunities for the Life Sciences sector, the risks to data subjects are no less significant. In a complex ecosystem with numerous actors, increased digitalization and connectivity, and escalating security threats, it is crucial for companies to implement a data protection policy that ensures compliance with a complex and evolving regulatory environment.