
Article based on the IAPP Webinar

Let’s do it live: Role-playing a GenAI project risk assessment

April 17th, 2024

From ensuring data protection and security to addressing potential biases, organizations must navigate a complex landscape to deploy chatbot systems effectively. They should be aware of the data protection risks of using AI, verify that the data is representative, and acknowledge the presence of bias and the risk of inaccurate information in both input and output when using these systems.

With the forthcoming requirements under the EU AI Act, which aims to regulate AI systems within the European Union, additional considerations and compliance measures become essential. For example, chatbots should only provide information that serves the purpose(s) for which they were created. Chatbots often collect and process sensitive information, such as personal preferences and behavioral data, and gaps remain to be addressed regarding the reliability and trustworthiness of this information.

It is crucial for organizations to maintain enhanced protection measures, especially when collecting and processing personal data through various GenAI models from individuals located in areas covered by strict data protection regulations. These measures include, but are not limited to, the development and implementation of privacy-preserving algorithms to train these GenAI models, mitigating vulnerabilities and privacy risks. It is also important to introduce mechanisms to ensure that the training data fed to GenAI models does not contain sensitive information, preventing unintended or accidental disclosure.

How can controls mitigate different risks?

Collecting only the bare minimum of data required, following the principle of data minimization, ensures that users are not adding irrelevant information. Only the minimum necessary data should be collected and fed into GenAI training programs, and thorough risk assessments should be conducted at various stages of development.
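The data minimization principle above can be sketched in code: keep only an explicit allowlist of fields from a user record before it enters any GenAI training pipeline, and drop everything else. This is a minimal illustrative sketch; the field names and record structure are assumptions, not part of any real system described in this article.

```python
# Hypothetical sketch of data minimization: only allowlisted fields
# survive; anything else (e.g. email, IP address) is dropped before
# the record can reach a GenAI training pipeline.
ALLOWED_FIELDS = {"question_text", "language", "timestamp"}

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only the allowed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "question_text": "How do I reset my password?",
    "language": "en",
    "timestamp": "2024-04-17T10:00:00Z",
    "email": "user@example.com",   # irrelevant for training -> dropped
    "ip_address": "203.0.113.7",   # irrelevant for training -> dropped
}

print(minimize(raw))
```

An allowlist (rather than a blocklist) is the safer design choice here: any new field added upstream is excluded by default until someone deliberately justifies collecting it.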

Organizations could allocate resources for regular maintenance, troubleshooting, and software updates to ensure the optimal performance of these systems and protect them from fraudulent use. Ethical review procedures are also recommended to evaluate the potential impacts of AI-generated content, with a focus on privacy concerns.

Regarding the transparency principle, organizations are encouraged to conduct regular audits to monitor AI-generated content for privacy risks and to promote the use of transparent and comprehensible GenAI algorithms, enabling the detection of sensitive data in the output text. It is essential to implement robust processes to obtain explicit consent from users before using any of their data for GenAI-related processing activities, and to provide clear privacy notices informing users how their data will be used and processed.
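The explicit-consent requirement above can be illustrated as a simple gate: a user's data is eligible for a given processing activity only if consent for that specific purpose is on record. The ledger structure and purpose names below are hypothetical assumptions for illustration, not a real consent-management API.

```python
# Hypothetical consent ledger mapping user IDs to the purposes they
# have explicitly consented to. In practice this would be a durable,
# auditable store, not an in-memory dict.
consent_ledger = {
    "user-123": {"chat_support", "genai_training"},
    "user-456": {"chat_support"},  # never consented to training
}

def may_process(user_id: str, purpose: str) -> bool:
    """Allow processing only when consent for this purpose is on record."""
    return purpose in consent_ledger.get(user_id, set())

print(may_process("user-123", "genai_training"))  # True
print(may_process("user-456", "genai_training"))  # False: no consent
```

Checking consent per purpose, rather than as a single yes/no flag, mirrors the principle that consent must be specific to each processing activity.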

Also recommended are strong anonymization techniques to eliminate personal identifiers from the data before feeding it to GenAI models, together with encryption and secure data storage methods to safeguard training data and prevent unauthorized access during transfer. Establishing strategies, maintaining records, and monitoring metrics to demonstrate compliance with regulatory standards can support data integrity. In summary, crucial steps in adhering to protection requirements for GenAI include enhancing transparency, implementing appropriate security measures, embedding a strong Privacy by Design and by Default program, clearly informing users about the intended use of their data, and closely controlling and monitoring the training data fed into GenAI models.
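As a minimal sketch of removing personal identifiers before text enters a training set, the snippet below masks e-mail addresses and phone-number-like strings with regular expressions. This assumes regex redaction is acceptable as a first pass only; the patterns are illustrative and real anonymization (names, addresses, quasi-identifiers) requires considerably more than this.

```python
import re

# Illustrative patterns only: a production redaction pass would need
# reviewed, tested patterns plus named-entity detection for names etc.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s\-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +32 471 23 45 67."
print(redact(sample))
```

Note that the name "Jane" survives this pass, which is exactly why regex masking alone does not amount to anonymization in the legal sense.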

Impact of the EU AI Act

The EU AI Act, proposed by the European Commission, aims to establish a comprehensive regulatory framework for AI systems operating within the EU. Its key requirements and implications may impact the rollout of chatbot AI systems. Implementing ethical guidelines, providing comprehensive training programs for developers to reduce bias, and conducting regular audits and validation checks to identify and address potential errors or performance issues are essential. Ongoing monitoring also helps organizations maintain the accountability and transparency a robust framework requires.

In conclusion, the use of chatbot AI systems presents both opportunities and challenges for organizations. By prioritizing data protection, addressing biases, optimizing user experiences, and embracing regulatory compliance, companies can navigate the complexities of AI deployment successfully and contribute to the responsible advancement of AI technology. MyData-Trust provides global data protection assessments to ensure the best protection coverage and offers solutions to the current needs and challenges faced by customers in a continuously changing technological environment.

Patricia Tenorio

Data Protection Manager Associate

Cristina Fiat

Data Protection Manager
