Artificial intelligence (AI) has become an integral part of our lives, embedded in a wide range of technologies. While the concept lacks a universal definition, it is often associated with complex high-tech systems such as Large Language Models (LLMs), like ChatGPT.
And yet, AI is far more omnipresent than we realize.
Consider the systems that suggest sentences as we write emails, the curated rankings of songs and videos, or machine translation.
AI even plays a role in our homes, powering smart speakers or robotic vacuums that may capture sensitive personal data, such as images of the inside of your home.
Whether you are using (or considering using) Artificial Intelligence Systems (AIS) in your private life or in the workplace, it is worth highlighting some key considerations for developing or deploying AIS in an era where AI occupies an increasingly prominent place in our lives.
The General Data Protection Regulation (GDPR) sets out a general framework with key principles and rules for the processing of personal data in the European Union.
Yes, you read that correctly. Because AIS frequently process personal data, the GDPR rules will play a role here as well. And although the principles remain the same, some may be trickier to comply with than you first thought.
Defining the purpose:
The purpose limitation principle requires that your AIS be developed, trained, and deployed with a clearly defined purpose.
In practice, this means that you will need to define both:
- A purpose for the learning phase – that is, clearly defining why you are processing data before developing and training your system – and
- A purpose for the production or deployment phase – which also requires you to take foreseeable future uses into account, including the machine learning capacities of the AIS.
Information and explanation:
This is another critical aspect, which revolves around providing information about the processing of personal data in a concise, transparent, intelligible, and easily accessible form, using clear and plain language.
Although it might seem simple at first sight, applying this to both the learning and the deployment phase may be more challenging than you think, especially when you use data that was collected indirectly. Providing precise explanations for complex and opaque AI systems may pose further difficulties.
It is also worth pointing out that forthcoming EU legislative instruments, such as the Artificial Intelligence Act (AIA) and the Digital Services Act (DSA), contain provisions that further broaden the scope of this requirement.
Exercise of rights:
Respecting individuals’ rights under the GDPR is vital for AI systems that process personal data.
Access, rectification, erasure, restriction, portability, and objection are essential rights that allow individuals to understand and take control of their personal data being processed.
These rights must be upheld throughout the AI system's life cycle, covering both data contained in the learning datasets and data produced during the production or deployment phase.
Therefore, data controllers must incorporate mechanisms and procedures to address data subject requests, keeping in mind that certain exceptions may be applicable for AI used in scientific research.
Furthermore, it is worth considering that AI models themselves often contain personal data. Some algorithms inherently include fragments of training data, while other models may inadvertently capture personal information, necessitating safeguards against potential risks.
And it does not stop here.
We are expecting several new legislative instruments that will bring forward a whole new set of rules complementing the currently existing GDPR framework:
- Data Governance Act (DGA) with rules on data usage and sharing (adopted);
- Digital Services Act (DSA) setting the rules for online platforms and intermediaries – online marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms (adopted);
- Digital Markets Act (DMA) setting obligations and prohibitions for gatekeepers (large digital platforms providing core platform services, such as online search engines, app stores, and messenger services – think of Google or Amazon) – which could have knock-on effects for smaller entities (adopted);
- Data Act with rules on fair use and access of data;
- Artificial Intelligence Act (AIA) setting the rules for AIS based on a risk-based approach;
- Liability Directives, including the amended Product Liability Directive (PLD) and the Artificial Intelligence Liability Directive (AILD).