
Addressing discrimination in data-driven advertising: Regulatory opportunities and failures within the EU

Published on 23 June 2021
Updated on 05 April 2024

The marketing industry has become a data-driven realm. It uses data to predict consumer preferences, anticipate their needs, convince them of the utility of new products, shape their shopping habits, and target advertisements. Some ad technologies, such as collaborative filtering, smart-search systems, audience segmentation, and other forms of consumer targeting used by companies such as Facebook and Google, depend heavily on third-party information. All this shows how important data has become as a driver of the marketing industry.
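To make the mechanics concrete, below is a minimal sketch of the kind of collaborative filtering such platforms rely on: users who behaved similarly in the past are assumed to respond to similar ads in the future. The data, function names, and scoring are hypothetical simplifications for illustration, not any platform's actual implementation.

```python
import numpy as np

# Hypothetical user-item interaction matrix: rows are users, columns are
# products; 1 means the user clicked on or bought the product, 0 means no signal.
interactions = np.array([
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two interaction vectors."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / norm) if norm else 0.0

def recommend(user, k=2):
    """Score products the user has not seen by weighting the choices
    of other users according to how similar their behaviour is."""
    sims = np.array([cosine_similarity(interactions[user], other)
                     for other in interactions])
    sims[user] = 0.0                       # ignore self-similarity
    scores = sims @ interactions           # similarity-weighted popularity
    scores[interactions[user] > 0] = -1.0  # exclude already-seen products
    return np.argsort(scores)[::-1][:k].tolist()

print(recommend(user=0))  # products user 0 is most likely to respond to, e.g. [1, 4]
```

Note that protected attributes never need to appear explicitly in such a system: similarity in behaviour can act as a proxy for gender, ethnicity, or sexual orientation.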

Currently, online targeted advertising, which seeks to find the ‘right person at the right time’, is ubiquitous on social-networking platforms and search engines for two main reasons. First, the abundance of data these services hold about their own users makes it easier to predict consumer preferences. Second, because several platforms are free of charge for end users, they must monetise their services by selling advertising space.

The abundance of demographic and behavioural data is a gold mine for advertisers because it allows them to target potential clients with unprecedented accuracy. Such data includes gender, age, specific geographic location, ethnic origin, and sexual orientation, among countless other attributes.


Unquestionably, targeted ads can improve a consumer’s experience by giving them access to offers that interest them. However, they can also pose a risk of discrimination, for example against women and minorities, when they exclude these groups from receiving ads related to job opportunities, real-estate offers, and even credit.

In 2018, a study found that employment ads for the STEM sector (science, technology, engineering, and mathematics) were shown over 20% more often to men than to women across 191 countries. The campaign was run on Facebook and targeted both men and women over the age of 18. After cross-referencing different data sets, the researchers offered three hypotheses for this discriminatory outcome: (a) the algorithm developed this discriminatory approach from consumer behavioural data; (b) the algorithm developed it from other data sources it had access to, which might in turn reflect patterns of discrimination against women in different countries; and (c) the difference reflected the economics of advertisement delivery.
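Hypothesis (c) is worth unpacking: if women are a more sought-after, and therefore more expensive, audience to reach, a delivery system that simply maximises impressions per euro will show the ad to men more often, even when the advertiser targeted both genders. The sketch below illustrates this with invented figures; the CPM values and the equal-split assumption are hypothetical, not Facebook's actual pricing or delivery logic.

```python
# Hypothesis (c), illustrated: all figures below are invented for the example.
# If reaching women costs more per thousand impressions (CPM), the same spend
# buys fewer impressions among women than among men.
BUDGET = 10_000.0                    # hypothetical campaign budget in EUR
CPM = {"men": 4.0, "women": 5.5}     # hypothetical cost per 1,000 impressions

def impressions_for_equal_spend(budget, cpm):
    """Split the budget equally between groups and count impressions bought."""
    share = budget / len(cpm)
    return {group: int(share / cost * 1000) for group, cost in cpm.items()}

result = impressions_for_equal_spend(BUDGET, CPM)
print(result)  # {'men': 1250000, 'women': 909090}
gap = (result["men"] - result["women"]) / result["women"]
print(f"men see the ad {gap:.0%} more often")  # men see the ad 38% more often
```

An optimiser that maximised total impressions per euro would skew even further, allocating the entire budget to the cheaper group. The point is that a skewed outcome can emerge from the cost structure alone, without any discriminatory intent on the advertiser's part.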

In general, targeted advertising is illegal when it poses risks of discrimination for statutorily protected classes, especially when the targeting relates to certain services or employment offers (see, for example, the EU Equality Directives). In the EU, offers of employment and of several goods and services (e.g. housing) cannot exclude protected classes (including women and minorities) without a genuine occupational requirement or a legitimate aim. Imagine a hotel that wants to advertise its rooms online, a real-estate agency that wants to advertise its brand-new apartments, or a software company that wants to advertise employment offers for its IT department online. Can they decide to display their ads only to internet users of a certain ethnic group or a predetermined gender? Can the real-estate agency target only straight men?

Far from being fictional, these examples of targeting internet users (based on precise demographic data) with ads for housing, credit, and employment offers have been brought before US courts by civil rights lawyers. Plaintiffs have accused Facebook and advertisers of digital redlining: excluding women, Black, Hispanic, and Asian Americans, and older workers from receiving ads related to employment, credit, and housing. Intriguingly, even though Facebook offers the same targeting possibilities in European countries, no case based on equality laws has so far been brought before national courts.

Conversely, targeted ads, despite the risks they pose to the equal treatment of certain protected consumers and internet users, have frequently been litigated in European countries under the data protection legal framework, due to privacy concerns. For example, the French data protection authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), ordered Facebook to pay €150,000 for collecting personal data and displaying targeted ads without a legal basis. The sanction was imposed in 2017, before the GDPR entered into force, and was grounded in a breach of French data protection law. In 2019, the same authority imposed a financial penalty of €50 million on Google for breaching the GDPR: Google had failed to ensure transparency and obtain valid consent for its ad personalisation. In the Netherlands, the Dutch data protection authority, the Autoriteit Persoonsgegevens (DPA), revealed that Facebook had used the personal data of 9.6 million Dutch people for targeted ads without their explicit consent. After investigating, the DPA found that the platform enabled advertisers to select ‘men who are interested in other men’ for targeted advertising purposes, and faulted Facebook for not having required its users’ specific consent to process this sensitive data. While focusing on data protection, none of these cases delved into the issue of discrimination against protected groups.


The EU and the Council of Europe provide a broad legal framework for companies with access to the personal data of EU citizens and residents. In addition, these institutions have recently produced several guidelines and dedicated considerable work to addressing the challenges posed by the wide range of artificial intelligence (AI) systems used in online targeted advertising. This data governance framework aims not only to protect privacy but also to prevent discrimination against individuals whose personal data is subjected to automated systems. It is multilayered and comprises a set of principles to ensure privacy and non-discrimination.

Several EU foundational documents, such as the EU Charter of Fundamental Rights, the Treaty on the Functioning of the European Union (TFEU), and directly applicable legislation such as the GDPR, contain general principles that balance data processing against personal privacy. More specifically, the e-Privacy Directive obliges member states to regulate tracking technologies, complemented by guidelines and reports produced by advisory boards and, in some cases, by the European Commission. This overarching set of rules invites businesses to reflect carefully on the personal data they use and to have a detailed plan for data collection, processing, and use. Moreover, it invites them to be transparent in their data practices, which can help enforcement authorities assess any wrongdoing.

However, the EU data protection legal framework has limitations in addressing discrimination in automated systems, especially in targeted advertising. One of its main limitations is the absence of a definition of ‘discrimination’.

The concept of ‘discrimination’ is used too broadly in foundational data protection texts. The field generally specifies neither the scope of the concept nor the grounds it covers, leaving it without any firm definition. The issue is often referred to in very general terms, such as: ‘personal data processing (…) may give rise to discrimination’, ‘automated systems present risk for discrimination’, or ‘the trouble big data analytics bring about is that of discrimination’.

The term’s generality and lack of scope sound unfamiliar to non-discrimination lawyers and activists, who are used to assessing equality and, ultimately, discrimination, through statute-specific ‘grounds’, ‘classes’, and ‘protected aspects’. For instance, the European equality directives are very specific about the grounds and contexts in which they outlaw discrimination: the Gender Equality Directive covers work-related gender discrimination; the Race Equality Directive addresses racial discrimination in several contexts, including employment and the provision of goods and services; the Employment Equality Directive shields against work-related discrimination based on religion or belief, disability, age, and sexual orientation; and the Goods and Services Directive addresses gender-based discrimination in the access to and supply of goods and services. In this sense, one can legitimately ask against whom, and in which automated contexts, European data protection laws can enforce non-discriminatory measures. The answer can only be provided by non-discrimination statutory law. However, the convergence between the data protection and anti-discrimination fields has been largely neglected in the EU.

References to discrimination in data protection legal texts must necessarily be construed in light of European and national non-discrimination laws, which offer definitions of, and details about, the classes that should be protected against discrimination. This is the only path to challenging the potentially discriminatory practices that companies such as Facebook might be enabling in the context of targeted advertising in EU member states.

Intriguingly, in contrast to their US counterparts, European non-discrimination lawyers and activists have not paid much attention to the issue of discrimination in the online targeted ad industry. Equality bodies, legal experts, and lawyers have addressed the issue only sparsely. Not surprisingly, Facebook’s discriminatory targeting practices have not been assessed by national courts within the EU. The non-discrimination law field offers ample expertise and legal tools to address discrimination in automated systems, and investment in this field should be made as soon as possible.


Ana Maria Corrêa currently lectures in Comparative Law at the Université Libre de Bruxelles. She recently defended her PhD thesis on the challenges of regulating the digital economy and preventing discrimination in the US and European markets. Ana Maria is a curator for the Digital Watch observatory.

An extended version of this text was published in the book ‘AI and Law: A Critical Overview’, Édition Thémis, 2021.
