Automated decisions under the GDPR and the AI Act

The significance of the GDPR

The GDPR places a variety of obligations on businesses to ensure that processing is fair and lawful.

A specific obligation under article 22 of the GDPR governs the circumstances in which decisions may be taken by automated means without human involvement, and the safeguards businesses must put in place to protect individuals. Automated decisions that fall within the rule are prohibited unless certain conditions are satisfied.

In its SCHUFA decision, the Court of Justice of the European Union (CJEU) gave a broad interpretation of ‘decision’. The CJEU found that a probability value provided by a credit reference agency itself constitutes a decision where the third party making a loan decision ‘draws strongly’ on that probability value to establish, implement or terminate a contractual relationship. As a result, a greater number of organisations may fall within the rules. Those organisations will need to consider how to mitigate these impacts through their contractual arrangements, provide understandable explanations to individuals about how decisions are made, and have the processes and resources in place to allow individuals to contest decisions. The CJEU also confirmed that a decision must have a sufficiently significant effect on an individual to come within the rules, for example a strong economic impact.

Evaluating the level of risk under the AI Act

The AI Act will prohibit certain uses of AI and place restrictions on others.

It is worth noting first that while the AI Act is an EU law, companies operating across borders may be impacted because the scope of the AI Act, much like the GDPR, extends not just to organisations in the EEA, but also those outside of it. The maximum fine under the AI Act is €35m or up to 7 percent of total worldwide annual turnover for the preceding financial year, whichever is higher.

The AI Act will prohibit eight specified uses of AI entirely, including evaluating or classifying individuals or groups based on their behaviour or characteristics with a social score that leads to detrimental or unfavourable treatment that is unjustified or disproportionate. It will also place restrictions on ‘high risk’ AI systems. One criterion organisations will need to consider is whether the AI system poses a significant risk of harm; a system that does not materially influence the outcome of decision making may fall outside the high-risk category.

Deciding on risk mitigations

Banks, lenders and service providers will need to establish whether their organisations, products and services fall within the scope of the AI Act and the GDPR before deciding on risk mitigations. Mitigations may include robust documentation, transparent communication, human oversight and staff training.

This is a summary of an article written for Financier Worldwide.

Authored by Robert Fett.
