Addressing the relation between the EU General Data Protection Regulation (GDPR) and artificial intelligence (AI), this report considers challenges and opportunities for individuals and society, and the ways in which risks can be countered and opportunities enabled through law and technology. The study, led by Professor Sartor for the Panel for the Future of Science and Technology (STOA) within the Secretariat of the European Parliament, discusses the tensions and proximities between AI and data protection principles, such as purpose limitation and data minimisation. The report provides a thorough analysis of automated decision-making, considering the extent to which it is admissible, the safeguard measures to be adopted, and whether data subjects have a right to individual explanations. The study then considers the extent to which the GDPR provides for a preventive risk-based approach, focused on data protection by design and by default. Read the report here.
New Zealand's Algorithm Charter
On 29 July 2020, more than 20 New Zealand government agencies signed the Algorithm Charter for Aotearoa New Zealand, making New Zealand the first country to develop standards governing the use of algorithms by the public sector. The charter, announced by Statistics Minister James Shaw and so far signed by 21 ministries and agencies, commits signatories to be transparent about when they use algorithms and how those algorithms operate. “Most New Zealanders recognise the important role algorithms play in supporting government decision-making and policy delivery, however they also want to know that these systems are being used safely and responsibly. The Charter will give people that confidence,” Shaw said. The Algorithm Charter for Aotearoa New Zealand is an evolving piece of work that needs to respond to emerging technologies and also be fit-for-purpose for government […]

UK's Committee on Standards in Public Life
On 10 February 2020, the UK’s Committee on Standards in Public Life published its report on artificial intelligence and its impact on public standards, aiming to ensure that high standards of conduct are upheld as technologically assisted decision-making is adopted more widely across the public sector. The report makes clear that, on issues of transparency and data bias in particular, there is an urgent need for guidance and regulation. It also emphasises that public bodies must comply with the law surrounding data-driven technology and implement clear, risk-based governance for their use of AI.