With more and more of our day-to-day life happening in an online environment, many decisions are inevitably made by computers, with no human involvement – a process called automated decision-making. Organisations use technology, including algorithms and machine learning, to collect and analyse a range of personal data about an individual – from their buying habits to lifestyle data, from social network data to mobile phone data.
Automated individual decision-making is frequently used for the purposes of profiling. Profiling is defined by the GDPR as:
“Any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”
Profiling can be used to find out about people’s preferences, predict their behaviour and make decisions about them. While such information is very useful to the organisations collecting it, it carries risks for those whose personal data is being used, and processing of this kind is considered high-risk for the purposes of the GDPR.
The GDPR therefore places restrictions on solely automated decision-making (including profiling) which has legal or similarly significant effects. It does this by providing that:
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
So when can this type of processing be used? There are three situations in which automated decision-making, including profiling, can be used and the restriction is lifted. These are where the decision is:

- necessary for entering into, or the performance of, a contract between the individual and the organisation;
- authorised by Union or Member State law to which the organisation is subject; or
- based on the individual’s explicit consent.

This is, however, further qualified for those using special category personal data, where such processing can only be carried out:

- with the individual’s explicit consent; or
- where necessary for reasons of substantial public interest,

and in either case only where suitable measures to safeguard the individual’s rights, freedoms and legitimate interests are in place.
So what must organisations who want to use automated decision-making do?
As with all high-risk processing, the GDPR requires anyone undertaking processing involving automated decision-making to first carry out a Data Protection Impact Assessment (DPIA), demonstrating that they have identified the risks involved, assessed them, and determined how they will be addressed.
Organisations must also put appropriate technical and organisational measures in place to enable them to correct inaccuracies, minimise the risk of errors, and secure personal data in a proportionate manner.
Additionally, anyone carrying out automated decision-making must give individuals specific information about the processing, as well as take steps to prevent errors, bias and discrimination. The GDPR also gives individuals the right to challenge and request a review of any decision made by automated means, so organisations need to have procedures in place allowing them to respond to such challenges and requests. Staff must be trained to ensure that they can recognise the exercise of this right by individuals and know how to respond appropriately and timeously.
Individuals whose personal data will be subject to automated decision-making must be given clear information on how the organisation may use their personal data, by being provided with meaningful information about the logic involved in the decision-making process, together with information about the significance and the envisaged consequences for them. They must be given the opportunity to obtain human intervention, express their point of view, obtain an explanation of the decision, and challenge it.
Therefore, while automated decision-making, including profiling, is not prevented by the GDPR, organisations planning to use it must bear in mind the restrictions and limitations the GDPR puts in place. They should now review their processes to ensure that they are GDPR-compliant and properly protecting the personal data being used.