In celebration of International Access to Information Day and Right to Know Week in NSW 2020, we held an event on AI Transparency in Digital Government with NSW Information Commissioner Elizabeth Tydd, Victorian Information Commissioner Sven Bluemmel and Dr Jat Singh, Senior Research Fellow at the University of Cambridge. The discussion focused on the duty government agencies have to disclose the algorithms they use in providing services and making decisions about services and benefits for citizens. The Commissioners highlighted that robust procurement processes are essential where agencies procure technologies that use algorithms. Commissioner Bluemmel said the bar needs to be set very high where algorithmic decision-making affects people, their liberties and their livelihoods: transparency is necessary to understand how decisions are made so that we can assert our rights. Dr Singh pointed out that transparency needs to be meaningful so that it allows us to interrogate, scrutinise and challenge those decisions, and it requires organisations to give careful consideration […]
AI & Ethics
EU/US data transfers under Privacy Shield invalidated
On 16 July 2020, the European Court of Justice invalidated Commission Decision 2016/1250 and, with it, the EU-US Privacy Shield, which had enabled certified companies to transfer personal data between the EU and the US. The case was brought by Max Schrems, an Austrian resident and the founder of NOYB, who had lodged a complaint with the Irish privacy regulator that the transfer of his Facebook data from Facebook’s servers in Ireland to the US did not provide sufficient protection against access by US public authorities. The Court found that the access to and use of that data by US public authorities under US surveillance programmes interfered with the fundamental rights of persons whose data is transferred to the US. However, the Court also determined that Commission Decision 2010/87 on standard contractual clauses for the transfer of personal data to processors established in third countries remains valid. Read more Court of Justice of the […]
EDPS Opinion on the EU Commission’s White Paper on AI – the European approach to excellence and trust
As part of a wider package of strategic documents, the European Commission published a White Paper on “Artificial Intelligence: A European approach to excellence and trust”, which we brought you in our April newsletter and is available here. This Opinion presents the EDPS views on the White Paper as a whole, as well as on certain specific aspects, such as the proposed risk-based approach, the enforcement of AI regulation and the specific requirements for remote biometric identification (including facial recognition). The EDPS recommendations in this Opinion aim to clarify and, where necessary, further develop the safeguards and controls for the protection of personal data. Read the EDPS Opinion here.
The impact of the GDPR on Automated Decision-Making
Addressing the relationship between the EU General Data Protection Regulation (GDPR) and artificial intelligence (AI), this report considers challenges and opportunities for individuals and society, and the ways in which risks can be countered and opportunities enabled through law and technology. The study, led by Professor Sartor for the Panel for the Future of Science and Technology (STOA) within the Secretariat of the European Parliament, discusses the tensions and proximities between AI and data protection principles, such as purpose limitation and data minimisation. The report makes a thorough analysis of automated decision-making, considering the extent to which it is admissible, the safeguard measures to be adopted, and whether data subjects have a right to individual explanations. The study then considers the extent to which the GDPR provides for a preventive risk-based approach, focused on data protection by design and by default. Read the report here.
NZ – Algorithm Charter for Aotearoa New Zealand
On 29 July 2020, more than 20 New Zealand Government agencies signed the Algorithm Charter for Aotearoa New Zealand, committing to be transparent about when they use algorithms and how those algorithms operate. Announced by Statistics Minister James Shaw and signed so far by 21 ministries and agencies, the Charter makes New Zealand the first country to develop standards governing the use of algorithms by the public sector. “Most New Zealanders recognise the important role algorithms play in supporting government decision-making and policy delivery, however they also want to know that these systems are being used safely and responsibly. The Charter will give people that confidence,” Shaw said. The Algorithm Charter for Aotearoa New Zealand is an evolving piece of work that needs to respond to emerging technologies and also be fit-for-purpose for government […]
AI Standards: From Principles to Implementation
With the proliferation of AI principles worldwide, industry is faced with a new challenge: how to implement these AI principles? Since 2017, the international committee responsible for the standardization of AI (SC 42) has been tackling this challenge: it is developing standards covering both technical and organisational specifications to enable responsible and trustworthy AI. Forty-four countries are currently involved in the work of SC 42, and Australia plays an active role in the development of international AI standards, having formed standards committee IT-043 to be Australia’s voice at SC 42. When it comes to AI, it is essential to provide for interoperability and global governance, which is why international AI standards have buy-in from key governments (such as China, the US and the EU). Australia has also identified AI standards as an important national priority. In March this year, Standards Australia released its Artificial […]
UK – AI and its impact on Public Standards
On 10 February 2020, the UK’s Committee on Standards in Public Life published its report on AI and its impact on Public Standards, aimed at ensuring that high standards of conduct are upheld as technologically assisted decision-making is adopted more widely across the public sector. The report makes clear that, on issues of transparency and data bias in particular, there is an urgent need for guidance and regulation. The report also emphasises that public bodies must comply with the law surrounding data-driven technology and implement clear, risk-based governance for their use of AI.
US – Draft Guidance for AI regulation
On 7 January 2020, the White House’s Office of Science and Technology Policy (OSTP) released a draft Guidance for Regulation of AI, setting out principles for federal agencies to consider in regulating the use of AI, including: public trust, public participation, fairness, scientific integrity, risk assessment, benefits and costs, flexibility, transparency, safety and security, and interagency co-ordination. In contrast to the position taken by the Europeans, the US government position is that it does not want AI to be highly regulated, with the OSTP draft Guidance stating, ‘Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth’. On 24 February 2020, the Department of Defense adopted five Principles for AI: Responsible, Equitable, Traceable, Reliable and Governable. Secretary Esper stated, ‘AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behaviour’.
Singapore – Model AI Governance Framework
Singapore’s Personal Data Protection Commission (PDPC) released the second edition of the Model AI Governance Framework in January 2020. The Guiding Principles include that decisions made by AI should be explainable, transparent and fair, and that AI systems should be human-centric. Along with the framework is a Compendium of Use Cases, demonstrating how local and international organisations, across different sectors and sizes, implemented or aligned their AI governance practices with all sections of the Model Framework. There is also an Implementation and Self-Assessment Guide for Organisations (ISAGO), developed in collaboration with the World Economic Forum’s Centre for the Fourth Industrial Revolution.
Automated Speech Recognition
While Automated Speech Recognition (ASR) technology has been present in various forms for decades, advances in statistical modelling, artificial intelligence (AI) and automation have collectively resulted in a new frontier for speech-based interaction between humans and computer systems. In this article Dr Peter Chapman, Director in the KPMG Forensic Technology team and InfoGovANZ advisory board member, details some of the current applications of ASR technology and offers guidance on a number of emerging governance issues associated with these technologies. As a concept, computerised ASR has been around almost as long as the computer itself. However, only in the last decade have the capabilities of ASR technology reached the point where wide-scale commercial adoption is viable. Natural human speech contains slang terms, dialect peculiarities, abbreviations and other “non-standardised” content. While humans are very adept at managing these issues, the enormous variability of human speech makes ASR a very […]
Closer to the Machine: Technical, social and legal aspects of AI
OVIC has collaborated with experts in AI, including Professor Toby Walsh (UNSW and CSIRO’s Data61), Professor Richard Nock (Australian National University and CSIRO’s Data61), Dr Jake Goldenfein (Cornell Tech, Cornell University) and others, to produce an e-book on AI. You can download the e-book Closer to the Machine from OVIC here.
NIST plan for AI Standards Development
NIST has released a plan for prioritising federal agency engagement in the development of standards for AI. The plan recommends that the federal government bolster AI standards-related knowledge, leadership and coordination among agencies that develop or use AI; promote focused research on the trustworthiness of AI systems; support and expand public-private partnerships; and engage with international parties. Read more here.
50 Principles for Responsible AI
Roger Clarke has followed up the Guidelines with ‘Principles and Business Processes for Responsible AI’, with a view to protecting an organisation’s own interests as well as those of its stakeholders and society as a whole. He presents a set of 50 Principles for Responsible AI, arising from a consolidation of proposals put forward by a diverse collection of 30 organisations. To apply those Principles, he recommends adapted forms of the established techniques of risk assessment and risk management. Read the article
Principles and Business Processes for Responsible AI
The promise of data analytics brings with it considerable risks. Canberra-based consultant and researcher Roger Clarke recently published a set of Guidelines whose purpose is to intercept ill-advised uses of data and analytical tools, prevent harm to important values, and assist organisations to extract the achievable benefits from data, rather than dreaming dangerous dreams. Read the article
Policy and Investment Recommendations for Trustworthy AI
In June 2019, the second deliverable of the AI HLEG was published, the Policy and Investment Recommendations for Trustworthy Artificial Intelligence. Read the report
IEEE Ethically Aligned Design
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (The IEEE Global Initiative) has launched Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, which is a crowd-sourced global treatise regarding the Ethics of Autonomous and Intelligent Systems. Read the report
Principles on Artificial Intelligence
In May 2019, the OECD adopted its Principles on Artificial Intelligence, the first international standards agreed by governments for the responsible stewardship of trustworthy AI, which include recommendations for public policy and Principles to be applied to AI developments around the world. The Principles ‘promote AI that is innovative and trustworthy and that respects human rights and democratic values’. Visit the website
AI: Australia’s Ethics Framework
A discussion paper developed by CSIRO’s Data61 has been released to inform the Government’s approach to AI ethics in Australia. Your views and submissions can be made on the consultation hub here to help ensure AI is developed and applied responsibly and with accountability.
The Ethics Guidelines for Trustworthy Artificial Intelligence (AI)
Published in April 2019 and prepared by the High-Level Expert Group on Artificial Intelligence, which is an independent expert group set up by the European Commission. Read the report
AI and the Law – the future is here
Artificial intelligence (AI) is already making significant inroads into the practice of law, producing efficiencies and cost savings. This article looks at how AI is being utilised in different parts of legal practice and at the transformation of legal practice already underway in the delivery of legal services, from litigation through to contract management and chatbots. Litigation & eDiscovery The production of documents has traditionally been a very expensive part of the litigation process. The development of eDiscovery software tools to identify, retrieve, process, filter and search documents provides significant cost savings in the litigation process. These cost savings are even more significant when the latest software tools and the right expertise are utilised. The latest developments in the eDiscovery industry include the use of AI technology. Early forms of AI were built into the globally dominant eDiscovery platforms, and for the past 10 years these platforms have enabled document clustering and concept […]