What about fairness, bias and discrimination?

In detail

  • How do bias and discrimination relate to fairness?
  • How should we address risks of bias and discrimination?
  • Why might an AI system lead to discrimination?
  • Should we just remove all sensitive data?
  • What is the difference between fairness in data protection law and “algorithmic fairness”?
  • What are the technical approaches to mitigate discrimination risk in ML models?
  • Can we process special category data to assess and address discrimination in AI systems?
  • What about special category data, discrimination and automated decision-making?
  • What if we accidentally infer special category data through our use of AI?
  • What can we do to mitigate these risks?
  • Is AI using personal data the best solution to your problem?

How do bias and discrimination relate to fairness?

Fairness in data protection law includes fair treatment and non-discrimination. It is not just about the distribution of benefits and opportunities between members of a group. It is also about how you balance different, competing interests. For example, your own interests and the interests of individuals who are members of that group.

In this guidance we differentiate between bias and discrimination. Bias is an aspect of decision-making, and a trait often found not just in AI systems but also in humans and institutions; for example, a prejudicial approach in favour of one solution over another. We refer to discrimination as the adverse effects that result from such bias.

How should we address risks of bias and discrimination?

As AI systems learn from data which may be unbalanced and/or reflect discrimination, they may produce outputs which have discriminatory effects on people based on their gender, race, age, health, religion, disability, sexual orientation or other characteristics.

The fact that AI systems learn from data does not guarantee that their outputs will be free of discriminatory effects. The data used to train and test AI systems, as well as the way they are designed and used, might lead to AI systems which treat certain groups less favourably without objective justification.

The following sections and Annex A (Fairness in the AI lifecycle) give guidance on interpreting the discrimination-related requirements of data protection law in the context of AI, as well as making some suggestions about best practice.

The following sections do not aim to provide guidance on legal compliance with the UK’s anti-discrimination legal framework, notably the UK Equality Act 2010. This sits alongside data protection law and applies to a wide range of organisations, both as employers and service providers. It gives individuals protection from direct and indirect discrimination, whether generated by a human or an automated decision-making system (or some combination of the two).

Demonstrating that an AI system is not unlawfully discriminatory under the Equality Act 2010 is a complex task, but it is separate and additional to your obligations relating to discrimination under data protection law. Compliance with one will not guarantee compliance with the other.

Data protection law addresses concerns about unjust discrimination in several ways.

First, processing of personal data must be ‘fair’. Fairness means you should handle personal data in ways people reasonably expect and not use it in ways that have unjustified adverse effects on them. Any processing of personal data using AI that leads to unjust discrimination between people will violate the fairness principle.

Second, data protection aims to protect individuals’ rights and freedoms with regard to the processing of their personal data. This includes the right to privacy but also the right to non-discrimination. Specifically, the requirements of data protection by design and by default mean you have to implement appropriate technical and organisational measures to take into account the risks to the rights and freedoms of data subjects and implement the data protection principles effectively. Similarly, a data protection impact assessment should contain measures to address and mitigate those risks, which include the risk of discrimination.

Third, the UK GDPR specifically notes that processing personal data for profiling and automated decision-making may give rise to discrimination, and that you should use appropriate technical and organisational measures to prevent this.

Why might an AI system lead to discrimination?

Before addressing what data protection law requires you to do about the risk of AI and discrimination, and suggesting best practices for compliance, it is helpful to understand how these risks might arise. The following content contains some technical details, so understanding how it may apply to your organisation may require the attention of staff in both compliance and technical roles.

Example

A bank develops an AI system to calculate the credit risk of potential customers. The bank will use the AI system to approve or reject loan applications.

The system is trained on a large dataset containing a range of information about previous borrowers, such as their occupation, income, age, and whether or not they repaid their loan.

During testing, the bank checks for possible gender bias and finds that the AI system tends to give women lower credit scores.

In this case, the AI system puts members of a certain group (women) at a disadvantage, and so would appear to be discriminatory. Note that this may not constitute unlawful discrimination under equalities law, if the deployment of the AI system can be shown to be a proportionate means of achieving a legitimate aim.

There are many different reasons why the system may be giving women lower credit scores.

One is imbalanced training data. The proportion of different genders in the training data may not be balanced. For example, the training data may include a greater proportion of male borrowers because in the past fewer women applied for loans and therefore the bank doesn’t have enough data about women.

Machine learning algorithms used to create an AI system are designed to best fit the data they are trained and tested on. If men are over-represented in the training data, the model will pay more attention to the statistical relationships that predict repayment rates for men, and less to the statistical patterns that predict repayment rates for women, which might be different.

Put another way, because the patterns relating to women are treated as statistically ‘less important’, the model may systematically predict lower loan repayment rates for women, even if women in the training dataset were on average more likely to repay their loans than men.
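As an illustration of how such a disparity might surface during testing, the following sketch compares representation, error rates and approval rates across groups on a held-out test set. It is a minimal example only; the arrays, group labels and figures are hypothetical and not drawn from this guidance.

```python
import numpy as np

# Hypothetical test-set arrays: true repayment outcomes, model predictions,
# and a gender attribute retained solely for bias testing.
y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0, 0, 1])
gender = np.array(["M", "M", "M", "M", "M", "M", "F", "F", "F", "F"])

for g in np.unique(gender):
    mask = gender == g
    share = mask.mean()                      # how well represented the group is
    error_rate = (y_true[mask] != y_pred[mask]).mean()
    approval_rate = y_pred[mask].mean()      # proportion given a positive prediction
    print(f"group={g}: share of test data={share:.0%}, "
          f"error rate={error_rate:.0%}, approval rate={approval_rate:.0%}")
```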

These issues will apply to any population under-represented in the training data. For example, if a facial recognition model is trained on a disproportionate number of faces belonging to a particular ethnicity and gender (eg white men), it will perform better when recognising individuals in that group and worse on others.

Another reason is that the training data may reflect past discrimination. For example, if in the past, loan applications from women were rejected more frequently than those from men due to prejudice, then any model based on such training data is likely to reproduce the same pattern of discrimination.

Domains where discrimination has historically been a significant problem, such as police stop-and-search of young black men or recruitment for traditionally male roles, are likely to experience this issue more acutely.

Should we just remove all sensitive data?

Data protection provides additional protections for special category data, while UK equality law is concerned with protected characteristics. Here we use ‘sensitive data’ as an umbrella term for both groups.

It’s important to note that discrimination issues can occur even if the training data does not contain any protected characteristics like gender or race.

This is because a variety of features in the training data are often closely correlated with protected characteristics in non-obvious ways (eg occupation). These “proxy variables” enable the model to reproduce patterns of discrimination associated with those characteristics, even if the designers did not intend this.

Therefore, removing particular attributes to mitigate the risk of discrimination, an approach sometimes known as “fairness through unawareness”, will not necessarily achieve the intended outcome. Simply removing special category data (or protected characteristics) does not prevent other proxy variables from reproducing previous patterns of discrimination.

For example, even if you remove an attribute about gender from a dataset, it may still be possible to infer gender from other data you retain. If more women traditionally work part-time in a sector, a model using working hours to make recommendations in the context of redundancies may end up discriminating on the basis of gender.
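One practical way to check for this is to test how well the attribute you removed can be predicted from the features you retain: if a simple model recovers it well above chance, proxy variables are likely to be present. The sketch below illustrates this; the feature names, data and model choice are illustrative assumptions only, not a prescribed method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dataset: gender has been removed from the model inputs,
# but 'part_time_hours' may still act as a proxy for it.
n = 500
gender = rng.integers(0, 2, size=n)                        # 0 = man, 1 = woman (held out for testing)
part_time_hours = 10 * gender + rng.normal(0, 3, size=n)   # correlated with gender
occupation_code = rng.integers(0, 5, size=n)               # unrelated feature
X_remaining = np.column_stack([part_time_hours, occupation_code])

# If the remaining features predict gender well above chance (about 0.5),
# removing the gender column has not removed the information.
scores = cross_val_score(LogisticRegression(), X_remaining, gender, cv=5)
print(f"gender recoverable from remaining features: accuracy = {scores.mean():.2f}")
```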

These problems can occur in any statistical model, so the following considerations may apply to you even if you don’t consider your statistical models to be ‘AI’. However, they are more likely to occur in AI systems because they can include a greater number of features and may identify complex combinations of features which are proxies for protected characteristics. Many modern ML methods are more powerful than traditional statistical approaches because they are better at uncovering non-linear patterns in high dimensional data. However, these may also include patterns that reflect discrimination. For example, ML models can pick up redundant encodings in large datasets and replicate any biases associated with them.

Other causes of potentially discriminatory AI systems include:

  • prejudices or bias in the way variables are measured, labelled or aggregated;
  • biased cultural assumptions of developers;
  • inappropriately defined objectives (eg where the ‘best candidate’ for a job embeds assumptions about gender, race or other characteristics); or
  • the way the model is deployed (eg via a user interface which doesn’t meet accessibility requirements).

What is the difference between fairness in data protection law and “algorithmic fairness”?

Computer scientists have been developing mathematical techniques to measure whether AI models treat individuals from different groups in potentially discriminatory ways. This field is referred to as “algorithmic fairness”. It reflects a statistical approach to fairness concerned with the distribution of classifications or predictions leading to the real-world allocation of resources, opportunities or capabilities. This is not the same as fairness in data protection law, which is broader and also considers imbalances between affected groups and those processing their data.

When deciding which algorithmic fairness metrics to use, you must consider the legal frameworks relevant to your context, including equality law.

Statistical approaches can be useful in identifying discriminatory impacts. But they are unlikely to guarantee that your system complies with the fairness principle, to explain why and how any unfairness arises, or to show which mitigation measures are effective. This is because they cannot fully capture the social, historical and political nuances of each use case that relate to how, where and why personal data was processed.

Example

An organisation uses algorithmic fairness metrics to evaluate whether a system has shortlisted a disproportionate number of women compared to men for a specific job. The metrics do not address more substantive elements such as the terms of employment or the suitability of the candidates.
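As a minimal illustration of what such a metric does (and does not) capture, the sketch below computes shortlisting rates by gender and their difference. The outcomes and labels are hypothetical.

```python
# Hypothetical shortlisting outcomes (1 = shortlisted) and applicant gender,
# used only to illustrate what a selection-rate metric does and does not capture.
shortlisted = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
gender      = ["M", "M", "M", "M", "M", "M", "F", "F", "F", "F", "F", "F"]

def selection_rate(group):
    picked = [s for s, g in zip(shortlisted, gender) if g == group]
    return sum(picked) / len(picked)

rate_m, rate_f = selection_rate("M"), selection_rate("F")
print(f"selection rate (men)   = {rate_m:.0%}")
print(f"selection rate (women) = {rate_f:.0%}")
print(f"demographic parity difference = {rate_m - rate_f:.0%}")
# Note: nothing here reflects the terms of employment or candidate suitability;
# those substantive questions sit outside the metric.
```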

As a result, you should view algorithmic fairness metrics as part of a broader non-technical framework that you need to put in place.

Further reading outside this guidance

Fairness definitions explained

Algorithmic fairness metrics and relevant toolkits may assist you in identifying and mitigating risks of unfair outcomes. However, fairness is not a goal that algorithms can achieve alone. Therefore, you should take a holistic approach, thinking about fairness across different dimensions and not just within the bounds of your model or statistical distributions.

You should think about:

  • the power and information imbalance between you and individuals whose personal data you process;
  • the underlying structures and dynamics of the environment your AI will be deployed in;
  • the implications of creating self-reinforcing feedback loops;
  • the nature and scale of any potential harms to individuals resulting from the processing of their data; and
  • how you will make well-informed decisions based on rationality and causality rather than mere correlation.

In general, you should bear in mind the following:

Statistical approaches are just one piece of the puzzle: You need to take a broader approach to fairness. This is because vital elements are not captured by algorithmic fairness metrics, such as governance structures or legal requirements. Additionally, it may be difficult (and in some cases, misguided) to mathematically measure and remove bias that may be encoded in various features of your model.

Context is key: The conditions under which decision-making takes place are as important as the decision-making process itself.

Fairness in terms of data protection in the context of AI is not static: AI-driven or supported decisions can be consequential, changing the world they are applied in and potentially creating risks of cumulative discrimination.

The root causes are important: AI should not distract your decision-makers from addressing the root causes of unfairness that AI systems may detect and replicate.

Patterns are not destiny: AI models do not just memorise data; they seek to replicate the patterns within it. The decisions they give rise to will influence the status quo, which in turn will shape the input data that informs future predictions. Without thoughtful adoption, AI can lead to a vicious cycle in which past patterns of unfairness are replicated and entrenched.

What are the technical approaches to mitigate discrimination risk in ML models?

While discrimination is a broader problem that cannot realistically be ‘fixed’ through technology, various approaches exist which aim to mitigate AI-driven discrimination.

As explained above, some of these involve algorithmic fairness: a field that develops mathematical techniques to measure whether AI models treat individuals from different groups in potentially discriminatory ways, and to reduce those effects.

The techniques it proposes do not necessarily align with relevant non-discrimination law in the UK, and in some cases may contradict it, so should not be relied upon as a means of complying with such obligations. However, depending on your context, some of these approaches may be appropriate technical measures to ensure personal data processing is fair and to minimise the risks of discrimination arising from it.

In cases of imbalanced training data, it may be possible to balance it out by adding or removing data about under- or over-represented subsets of the population (eg adding more data points on loan applications from women). These are known as pre-processing techniques.

In cases where the training data reflects past discrimination, you could either modify the data (pre-processing), change the learning process (in-processing), or modify the model after training (post-processing).
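As a minimal sketch of a pre-processing approach, the example below reweights training records so that each group carries equal overall weight, rather than adding new data; the dataset, feature names and library choice are illustrative assumptions only, not a prescribed method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical imbalanced training set: far more male than female borrowers.
n_men, n_women = 900, 100
gender = np.array([0] * n_men + [1] * n_women)        # 0 = man, 1 = woman
income_k = rng.normal(30, 8, size=n_men + n_women)    # income in £000s (illustrative feature)
repaid = (rng.random(n_men + n_women) < 0.8).astype(int)
X = income_k.reshape(-1, 1)

# Pre-processing by reweighting: give each group equal total weight, so the model
# does not fit the over-represented group at the expense of the under-represented one.
group_counts = np.bincount(gender)
sample_weight = 1.0 / group_counts[gender]

model = LogisticRegression().fit(X, repaid, sample_weight=sample_weight)
print("fitted with group-balanced sample weights:", model.coef_, model.intercept_)
```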

To assess whether these techniques are effective, you can evaluate the results against various mathematical ‘fairness’ measures.

Simply removing any protected characteristics from the inputs the model uses to make a prediction is unlikely to be enough, as there are often variables which are proxies for the protected characteristics. Other measures involve comparing how the AI system distributes positive or negative outcomes (or errors) between protected groups. Some of these measures conflict with each other, meaning you cannot satisfy all of them at the same time. Which of these measures are most appropriate, and in what combinations, if any, will depend on your context, as well as any applicable laws (eg equality law).
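The sketch below illustrates two commonly used measures, a demographic parity difference (comparing positive outcomes) and an equal opportunity difference (comparing true positive rates), on hypothetical data where one is satisfied and the other is not. It is not a recommendation of any particular metric.

```python
import numpy as np

# Hypothetical predictions, true outcomes and group membership for a test set.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def selection_rate(g):
    return y_pred[group == g].mean()

def true_positive_rate(g):
    mask = (group == g) & (y_true == 1)
    return y_pred[mask].mean()

# Demographic parity compares positive outcomes; equal opportunity compares
# true positive rates. Here the selection rates are equal (demographic parity holds)
# while the true positive rates differ sharply, illustrating why such measures
# cannot generally all be satisfied at once.
print("demographic parity difference:",
      round(selection_rate("A") - selection_rate("B"), 2))
print("equal opportunity difference:",
      round(true_positive_rate("A") - true_positive_rate("B"), 2))
```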

You should also consider the impact of these techniques on the statistical accuracy of the AI system’s performance. For example, to reduce the potential for discrimination, you might modify a credit risk model so that the proportion of positive predictions between people with different protected characteristics (eg men and women) is equalised. This may help prevent discriminatory outcomes, but it could also result in a higher number of statistical errors overall, which you will also need to manage.
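As an illustration of that kind of adjustment, the sketch below applies group-specific decision thresholds to hypothetical model scores to bring approval rates closer together, and reports the effect on the overall error rate; all figures and thresholds are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model scores and repayment outcomes for two groups, where the
# score distribution happens to sit lower for group "F".
scores_m = rng.normal(0.60, 0.15, 500)
scores_f = rng.normal(0.50, 0.15, 500)
repaid_m = (rng.random(500) < scores_m.clip(0, 1)).astype(int)
repaid_f = (rng.random(500) < scores_f.clip(0, 1)).astype(int)

def summary(threshold_m, threshold_f):
    approve_m = scores_m >= threshold_m
    approve_f = scores_f >= threshold_f
    # Overall error rate: approvals that did not repay plus rejections that would have repaid.
    error = ((approve_m != repaid_m).mean() + (approve_f != repaid_f).mean()) / 2
    return (f"approval rate men {approve_m.mean():.0%}, women {approve_f.mean():.0%}, "
            f"overall error {error:.0%}")

# A single threshold leaves approval rates unequal between the groups.
print("single threshold   :", summary(0.55, 0.55))
# Group-specific thresholds bring approval rates closer, but may change the
# overall error rate, which also needs to be managed.
print("adjusted thresholds:", summary(0.55, 0.45))
```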

In practice, there may not always be a tension between statistical accuracy and avoiding discrimination. For example, if discriminatory outcomes in the model are driven by a relative lack of data about a statistically small minority of the population, then statistical accuracy of the model could be increased by collecting more data about them, whilst also equalising the proportions of correct predictions.

However, in that case, you would face a different choice between:

  • collecting more data on the minority population in the interests of reducing the disproportionate number of statistical errors they face; or
  • not collecting such data due to the risks doing so may pose to the other rights and freedoms of those individuals.

Unfairness and discrimination are not limited to impacts on groups protected under the Equality Act; you should also consider whether the use of AI may result in unfair outcomes for other groups. Therefore, you must think about how you can protect minorities or vulnerable populations, while addressing risks of exacerbating pre-existing power imbalances. You should balance your bias mitigation goals with your data minimisation obligations. For example, if you can show that additional data is genuinely useful to protect minorities, then it is likely to be appropriate to process that additional data.

Can we process special category data to assess and address discrimination in AI systems?

In order to assess and address the risks of discrimination in your AI system, you may need a dataset containing data about individuals that includes:

  • special category data under data protection law; and/or
  • protected characteristics such as those outlined in the Equality Act 2010.

For example, you could use the dataset to test how your system performs with different groups, and also potentially to re-train your model to avoid discriminatory effects.

If your processing for this purpose involves special category data, then in addition to having a lawful basis under Article 6 of the UK GDPR you must meet one of the conditions in Article 9. Some of these also require authorisation by law or a basis in law, which can be found in Schedule 1 of the DPA 2018.

There is no single condition in Article 9 that is specifically about the purpose of assessing and addressing discrimination in AI systems. This means that which, if any, of these conditions are appropriate depends on your individual circumstances.

You can use the following summary of the decision flow to understand what each condition requires. Our guidance on special category data has more information about each one.

Do you need to process special category data?

Are you considering the “explicit consent” condition? To work out if this is appropriate, see our guidance on this condition.

If not, does the Article 9 condition say that it requires authorisation by law or a basis in law?

If it does not, you are considering:

  • vital interests
  • not-for-profit bodies
  • made public by the data subject
  • legal claims and judicial acts

To work out if any of these are appropriate, see our guidance. You don’t need a DPA Schedule 1 condition, or an appropriate policy document.

If it does, you are considering:

  • employment, social security and social protection
  • health and social care
  • public health
  • archiving, research and statistics
  • substantial public interest

Are you considering the “employment, social security and social protection” condition? To work out if this is appropriate, see our guidance on this condition. The relevant legal authorisation is set out in the DPA, at Schedule 1 condition 1. You need an appropriate policy document.

Are you considering the “health and social care” condition? To work out if this is appropriate, see our guidance on this condition. The relevant basis in UK law is set out in the DPA, at Schedule 1 condition 2. You don’t need an appropriate policy document.

Are you considering the “public health” condition? To work out if this is appropriate, see our guidance on this condition. The relevant basis in UK law is set out in the DPA, at Schedule 1 condition 3. You don’t need an appropriate policy document.

Are you considering the “archiving, research and statistics” condition? To work out if this is appropriate, see our guidance on this condition. The relevant basis in UK law is set out in the DPA, at Schedule 1 condition 4. You don’t need an appropriate policy document.

Are you considering the “substantial public interest” condition? To work out if this is appropriate, see our guidance on this condition. The relevant basis in UK law is set out in the DPA, at Section 10(3). You need to meet one of the 23 specific substantial public interest conditions in Schedule 1 (at paragraphs 6 to 28). In almost all cases, you must also have an appropriate policy document.

Example: using special category data to assess discrimination in AI, to identify and promote or maintain equality of opportunity

An organisation using a CV scoring AI system to assist with recruitment decisions needs to test whether its system might be discriminating by religious or philosophical beliefs. While the system does not directly use information about the applicants’ religion, there might be features in the system which are indirect proxies for religion, such as previous occupation or qualifications. In a labour market where certain religious groups have been historically excluded from particular professions, a CV scoring system may unfairly under-rate candidates on the basis of those proxies.

The organisation collects the religious beliefs of a sample of job applicants in order to assess whether the system is indeed producing disproportionately negative outcomes or erroneous predictions for applicants with particular religious beliefs.

The organisation relies on the substantial public interest condition in Article 9(2)(g), and the equality of opportunity or treatment condition in Schedule 1, paragraph 8 of the DPA 2018. This provision can be used to identify or keep under review the existence or absence of equality of opportunity or treatment between certain protected groups, with a view to enabling such equality to be promoted or maintained.

Example: using special category data to assess discrimination in AI, for research purposes

A university researcher is investigating whether facial recognition systems perform differently on the faces of people of different racial or ethnic origin, as part of a research project.

In order to do this, the researcher assigns racial labels to an existing dataset of faces that the system will be tested on, thereby processing special category data. They rely on the archiving, research and statistics condition in Article 9(2)(j), read with Schedule 1 paragraph 4 of the DPA 2018.

Is special category data in data protection law the same as protected characteristics under the Equality Act?

Not in all cases. Some of the protected characteristics outlined in the Equality Act are classified as special category data. For example, race, religion or belief, and sexual orientation.

Other protected characteristics aren’t. For example, testing for discriminatory impact by age does not involve special category data, even though age is a protected characteristic. In contrast, testing for discriminatory impact by ethnic origin does involve special category data.

You also need to be aware that some protected characteristics may constitute special category data even when the link is not obvious. For example, disability, pregnancy, and gender reassignment may be special category data in so far as they concern information about a person’s health. Similarly, because civil partnerships were until recently only available to same-sex couples, data that indicates someone is in a civil partnership may indirectly reveal their sexual orientation.

You should take this into account, as there are different data protection considerations depending on the kinds of discrimination you are testing for.

You can see the overlap of special category data and protected characteristics in Table 1.

Table 1.

Protected characteristics in the Equality Act 2010 that are also special category data under UK data protection law:

  • race (racial or ethnic origin)
  • religion or belief (religious or philosophical beliefs)
  • sexual orientation

Protected characteristics in the Equality Act 2010 that are not special category data:

  • age
  • disability
  • gender reassignment
  • marriage and civil partnership
  • pregnancy and maternity
  • sex

Special category data under UK data protection law that are not protected characteristics:

  • political opinions
  • trade union membership
  • genetic data
  • biometric data (where used for identification purposes)
  • health
  • sex life

What else do we need to consider?

You should also note that when you are processing personal data that results from specific technical processing about the physical, physiological or behavioural characteristics of an individual, and allows or confirms that individual’s unique identification, that data is biometric data.

Where you use biometric data for the purpose of uniquely identifying an individual, it is also special category data.

So, if you use biometric data for testing and mitigating discrimination in your AI system, but not for the purpose of confirming the identity of the individuals within the dataset or making any kind of decision in relation to them, the biometric data may not come under Article 9. The data is still regarded as biometric data under the UK GDPR, but may not be special category data.

Similarly, if the personal data does not allow or confirm an individual’s unique identification, then it is not biometric data (or special category data).

Additionally, even when you are not processing data classified as special category data in data protection law, you still need to consider:

  • the broader questions of lawfulness, fairness and the risks the processing poses as a whole; and
  • the possibility that the data is special category data anyway, or becomes so during the processing (ie if the processing involves analysing or inferring any data to do with health or genetic status).

Finally, if the personal data you are using to assess and improve potentially discriminatory AI were originally processed for a different purpose, you should consider:

  • whether your new purpose is compatible with the original purpose;
  • how you will obtain fresh consent, if required. For example, if the data was initially collected on the basis of consent, you still need to obtain fresh consent for the new purpose, even if the new purpose is compatible; and
  • if the new purpose is incompatible, how you will ask for consent.

Further reading outside this guidance

Read our guidance on purpose limitation and special category data.

What about special category data, discrimination and automated decision-making?

Using special category data to assess the potential discriminatory impacts of AI systems does not usually constitute automated decision-making under data protection law. This is because it does not involve directly making any decisions about individuals.

Similarly, re-training a discriminatory model with data from a more diverse population to reduce its discriminatory effects does not involve directly making decisions about individuals and is therefore not classed as a decision with legal or similarly significant effect.

However, in some cases, simply re-training the AI model with a more diverse training set may not be enough to sufficiently mitigate its discriminatory impact. Rather than trying to make a model fair by ignoring protected characteristics when making a prediction, some approaches directly include such characteristics when making a classification, to ensure members of potentially disadvantaged groups are protected. Including protected characteristics could be one of the measures you take to comply with the requirement to make ‘reasonable adjustments’ under the Equality Act 2010.

For example, if you were using an AI system to assist with sorting job applicants, rather than attempting to create a model which ignores a person’s disability, it may be more effective to include their disability status in order to ensure the system does not indirectly discriminate against them. Not including disability status as an input to the automated decision could mean the system is more likely to indirectly discriminate against people with a disability because it will not factor in the effect of their condition on other features used to make a prediction.

However, if you process disability status using an AI system to make decisions about individuals, which produce legal or similarly significant effects on them, you must have explicit consent from the individual, or be able to meet one of the substantial public interest conditions laid out in Schedule 1 of the DPA.

You need to carefully assess which conditions in Schedule 1 may apply. For example, the equality of opportunity monitoring provision mentioned above cannot be relied on in such contexts, because the processing is carried out for the purposes of decisions about a particular individual. Therefore, such approaches will only be lawful if based on a different substantial public interest condition in Schedule 1.

What if we accidentally infer special category data through our use of AI?

There are many contexts in which non-protected characteristics, such as the postcode you live in, are proxies for a protected characteristic, like race. Recent advances in machine learning, such as ‘deep’ learning, have made it even easier for AI systems to detect patterns in the world that are reflected in seemingly unrelated data. Unfortunately, this also includes detecting patterns of discrimination using complex combinations of features which might be correlated with protected characteristics in non-obvious ways.

For example, an AI system used to score job applications to assist a human decision-maker with recruitment decisions might be trained on examples of previously successful candidates. The information contained in the application itself may not include protected characteristics like race, disability, or mental health.

However, if the examples of employees used to train the model were discriminated against on those grounds (eg by being systematically under-rated in performance reviews), the algorithm may learn to reproduce that discrimination by inferring those characteristics from proxy data contained in the job application, despite the designer never intending it to.

So, even if you don’t use protected characteristics in your model, it is very possible that you may inadvertently use a model which has detected patterns of discrimination based on those protected characteristics and is reproducing them in its outputs. As described above, some of those protected characteristics are also special category data.

Special category data is defined as personal data that ‘reveals or concerns’ the special categories. If the model learns to use particular combinations of features that are sufficiently revealing of a special category, then the model may be processing special category data.

As stated in our guidance on special category data, if you use profiling with the intention of inferring special category data, then this is special category data irrespective of whether the inferences are incorrect.

Furthermore, for the reasons stated above, there may also be situations where your model infers special category data as an intermediate step to another (non-special-category data) inference. You may not be able to tell whether your model is doing this just by looking at the data that went into the model and the outputs that it produces. It may be doing so with high statistical accuracy, even though you did not intend it to.

If you are using machine learning with personal data, you should proactively assess the chances that your model might be inferring protected characteristics, special category data or both in order to make predictions, and actively monitor this possibility throughout the lifecycle of the system. If your system is indeed inferring special category or criminal conviction data (whether intentionally or not), you must have an appropriate Article 9 or 10 condition for processing. If it is unclear whether or not your system may be inferring such data, you may want to identify a condition to cover that possibility and reduce your compliance risk, although this is not a legal requirement.
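As one possible way of carrying out such an assessment, the sketch below tests whether a protected attribute (held securely for audit purposes) can be predicted from the model’s output scores well above chance; the data, attribute and figures are hypothetical assumptions, not a prescribed test.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical audit set: the deployed model's scores plus a protected attribute
# collected separately (with an appropriate condition) purely for this assessment.
n = 400
protected = rng.integers(0, 2, size=n)
model_scores = 0.4 + 0.2 * protected + rng.normal(0, 0.1, size=n)  # scores shift with the attribute

# If the attribute can be predicted from the model's outputs well above chance,
# the model may effectively be inferring it, which should trigger further review.
auc = cross_val_score(LogisticRegression(),
                      model_scores.reshape(-1, 1), protected,
                      cv=5, scoring="roc_auc").mean()
print(f"attribute recoverable from model scores: AUC = {auc:.2f} (0.5 = chance)")
```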

As noted above, if you are using such a model to make legal or similarly significant decisions in a solely automated way, this is only lawful if you have the person’s consent or you meet the substantial public interest condition (and an appropriate provision in Schedule 1).

Further reading outside this guidance

Read our detailed guidance on special category data.

What can we do to mitigate these risks?

The most appropriate approach to managing the risk of discriminatory outcomes in ML systems will depend on the particular domain and context you are operating in.

You should determine and document your approach to bias and discrimination mitigation from the very beginning of any AI application lifecycle, so that you can take into account and put in place the appropriate safeguards and technical measures during the design and build phase. Annex A has more information about good practice steps for mitigating bias and discrimination.

Establishing clear policies and good practices for the procurement and lawful processing of high-quality training and test data is important, especially if you do not have enough data internally. Whether procured internally or externally, you should satisfy yourself that the data is representative of the population you apply the ML system to (although for reasons stated above, this will not be sufficient to ensure fairness). For example, for a high street bank operating in the UK, the training data could be compared against the most recent Census.
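As a minimal sketch of that kind of comparison, the example below sets training-data group proportions against a reference population breakdown; the groups and figures are invented for illustration.

```python
# Hypothetical comparison of group proportions in the training data against a
# reference population breakdown (eg census figures); numbers are illustrative only.
training_counts  = {"18-34": 12000, "35-54": 30000, "55+": 8000}
reference_shares = {"18-34": 0.33, "35-54": 0.38, "55+": 0.29}

total = sum(training_counts.values())
for band, count in training_counts.items():
    observed = count / total
    expected = reference_shares[band]
    print(f"{band}: training share {observed:.0%} vs population share {expected:.0%} "
          f"(gap {observed - expected:+.0%})")
```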

Your senior management should be responsible for signing-off the chosen approach to manage discrimination risk and be accountable for its compliance with data protection law. While they are able to leverage expertise from technology leads and other internal or external subject matter experts, to be accountable your senior leaders still need to have a sufficient understanding of the limitations and advantages of the different approaches. This is also true for DPOs and senior staff in oversight functions, as they will be expected to provide ongoing advice and guidance on the appropriateness of any measures and safeguards put in place to mitigate discrimination risk.

In many cases, choosing between different risk management approaches will require trade-offs. This includes choosing between safeguards for different protected characteristics and groups. You need to document and justify the approach you choose.

Trade-offs driven by technical approaches are not always obvious to non-technical staff so data scientists should highlight and explain these proactively to business owners, as well as to staff with responsibility for risk management and data protection compliance. Your technical leads should also be proactive in seeking domain-specific knowledge, including known proxies for protected characteristics, to inform algorithmic ‘fairness’ approaches.

You should undertake robust testing of any anti-discrimination measures and should monitor your ML system’s performance on an ongoing basis. Your risk management policies should clearly set out both the process, and the person responsible, for the final validation of an ML system both before deployment and, where appropriate, after an update.

For discrimination monitoring purposes, your organisational policies should set out any variance tolerances against the selected Key Performance Metrics, as well as escalation and variance investigation procedures. You should also clearly set variance limits above which the ML system should stop being used.

If you are replacing traditional decision-making systems with AI, you should consider running both concurrently for a period of time. You should investigate any significant difference in the type of decisions (eg loan acceptance or rejection) for different protected groups between the two systems, and any differences in how the AI system was predicted to perform and how it does in practice.
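A minimal sketch of such a parallel-run comparison is shown below; the record structure, decision rates and the five-percentage-point tolerance are hypothetical assumptions for illustration only.

```python
# Hypothetical parallel-run log: one record per applicant, with the outcome of the
# existing process and of the new AI system (1 = accepted, 0 = rejected).
records = [
    {"group": "men",   "legacy": 1, "ai": 1},
    {"group": "men",   "legacy": 0, "ai": 0},
    {"group": "men",   "legacy": 1, "ai": 1},
    {"group": "women", "legacy": 1, "ai": 0},
    {"group": "women", "legacy": 1, "ai": 0},
    {"group": "women", "legacy": 0, "ai": 0},
]
TOLERANCE = 0.05  # escalate if acceptance rates diverge by more than 5 percentage points

for group in ("men", "women"):
    rows = [r for r in records if r["group"] == group]
    legacy_rate = sum(r["legacy"] for r in rows) / len(rows)
    ai_rate = sum(r["ai"] for r in rows) / len(rows)
    gap = ai_rate - legacy_rate
    flag = "ESCALATE" if abs(gap) > TOLERANCE else "ok"
    print(f"{group}: legacy {legacy_rate:.0%}, AI {ai_rate:.0%}, gap {gap:+.0%} -> {flag}")
```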

Beyond the requirements of data protection law, a diverse workforce is a powerful tool in identifying and managing bias and discrimination in AI systems, and in the organisation more generally.

Finally, this is an area where best practice and technical approaches continue to develop. As a result, we will keep this guidance (and Annex A) under review as these approaches mature and evolve.

You should invest the time and resources to ensure you continue to follow best practice and your staff remain appropriately trained on an ongoing basis. In some cases, AI may actually provide an opportunity to uncover and address existing discrimination in traditional decision-making processes and allow you to address any underlying discriminatory practices.

Further reading inside this guidance

See our guidance on ‘How should we manage competing interests when assessing AI-related risks?’

Is AI using personal data the best solution to your problem?

It is useful to reflect on whether an AI system is the most appropriate solution to the problem you are trying to solve in the first place. Some problems are better addressed by remaining open to unpredictability, rather than by identifying patterns of the past and assuming they will continue. That may also be what individuals reasonably expect, given your context.

AI systems follow predetermined rules and are often unable to adapt to novel or edge cases that don’t neatly map onto input data they have been trained on. In these cases, you should carefully evaluate whether it is appropriate to integrate an AI system into a decision-making process. Certain problems are not ones that algorithms can interpret and solve, especially if they require individualised human discretion or are particularly contested. In such cases, there may be more merit in AI playing a decision-support role. For example, providing context for a human decision-maker rather than making the decision itself or influencing it in a meaningful way.

As we explain in the Fairness in the AI lifecycle section, the choices you make about formulating a problem can have profound consequences in terms of the impacts your system has on individuals and wider society. You need to think about the underlying social dynamics and nuances, such as structural discrimination. In general, you need to think whether your AI system is doing the right thing, not just if it is correctly doing what you asked it to do.
