AI Guidance by the UK and Data Protection Measures

Artificial Intelligence (‘AI’) has been increasingly penetrating the data-driven economy of the United Kingdom (‘UK’). On July 30, 2020, the Information Commissioner’s Office (‘ICO’) in the UK released its Guidance on AI and Data Protection (‘AI guidance’). The ICO reiterates the risks that AI may pose to an individual’s privacy and provides a methodology for audit and compliance mechanisms. The AI guidance focuses on AI ethics and accountability standards for the use of AI. Providing the necessary risk mitigations with regard to AI and its implications for privacy has become increasingly important, especially considering the recent surge in the use of technologies such as facial recognition by public authorities. Interestingly, the Court of Appeal, in R (Bridges) v. Chief Constable of South Wales Police on August 11, 2020, ruled that the use of facial recognition technology by the South Wales Police was unlawful as it violated the privacy and data protection rights of citizens. The technology was characterised as a ‘discriminatory mass surveillance tool’ based on ‘racial profiling’. Proper legislation governing the use of AI systems would serve to curb illegal uses of such systems and ensure compliance with privacy norms.


The UK is currently in the Brexit transition period, and after January 2021 the UK Data Protection Act 2018 will be the prevailing data protection legislation for UK authorities; Part 3 and Part 4 of the Act concern data protection governance by law enforcement authorities and by the intelligence services respectively. Regulators around the globe have been focusing on closely regulating the intersection of AI and data protection regimes. Recently, on June 29, 2020, the European Data Protection Supervisor released an opinion on the use of AI and its intertwined nature with data protection and the General Data Protection Regulation (‘GDPR’).


Key points in the guidance

· Accountability and Governance standards for AI

The section on governance standards in the AI guidance is focused on Data Protection Officers (‘DPOs’) and places emphasis on risk management systems. The AI guidance suggests that the privacy implications of AI depend upon its use cases, the population on which these systems are deployed, and political, social, and cultural considerations. Privacy considerations in the use of AI cannot be delegated to technical staff alone and must be addressed by senior management and DPOs. The ICO is also developing a general compliance toolkit which, though not specific to AI, will focus on providing compliance norms under the GDPR. The guidance states that a Data Protection Impact Assessment (‘DPIA’) must be conducted, as the use of AI systems may involve processing mechanisms that risk harming an individual’s rights and freedoms.

· Ensuring transparency, lawfulness, and fairness in AI systems

This section of the AI guidance is aimed at compliance-focused roles such as the senior management of the organisation. Emphasis should be placed on ensuring that the AI system is ‘statistically’ accurate and does not discriminate in any respect, and the impact on an individual’s reasonable expectations must also be taken into consideration. The example provided in this regard is that of an AI system predicting loan defaults which may sort men and women into different groups, introducing bias. The processing of data by the AI system must rest on a lawful basis, and that basis must be set out in the privacy notice. The AI guidance also draws a distinction between ‘development’ and ‘deployment’, which is essential as many organisations test their systems on real individuals under the guise of a ‘testing stage’. In this regard, the AI guidance suggests that organisations specify the lawful basis for processing at every stage of development and deployment. For example, facial recognition technology can be used for tagging friends on social media and, at the same time, for identifying criminals in intelligence work. The lawful basis for each purpose must be explained by the organisation before deployment. The AI system must also take the legitimate interests of individuals into account.

· Assess security and data minimisation in AI

Compliance with data protection standards is more challenging for AI than for other, more established technologies, as an AI system is deployed across a multitude of data networks and information flows. When processing personal data through an AI system, the security standards should take into account: a) the way the technology is built and deployed; b) the complexity of the organisation deploying it; c) the strength of existing risk management; and d) the context and purpose of processing personal data. The organisation must also anticipate attacks from third parties and take mitigating action against them while maintaining strict security standards. Many AI applications utilise personal data for training purposes; for example, an AI system may process biometric data of individuals for identification. Where training data involves the processing of personal data, the organisation must take the AI infrastructure and its software vulnerabilities into account.

The UK authorities and the ICO have provided adequate settings and examples in which the security measures around an AI system may prove vulnerable. AI profiling has been used extensively during the Covid-19 pandemic, and many organisations use automated decision-making systems for risk assessment. In these contexts, large amounts of personal data are processed by AI systems, making it increasingly important to incorporate well-defined data protection measures when deploying such AI models.


The article is authored by Samaksh Khanna, Co-founder, BlockSuits.
