As we put 2020 behind us and look forward to 2021, we held an interactive virtual discussion forum reflecting on the key IG learnings of the past 12 months and the insights and actions we now need to take to make the most of the opportunities and challenges on the road to recovery in 2021.
Our expert panel included InfoGovANZ International Council member Aurelie Jacquet, who works on leading global initiatives for the implementation of Responsible AI with both the International Standards Organisation (ISO) and the Institute of Electrical and Electronics Engineers (IEEE).
With ISO, Aurelie chairs the Standards Australia committee that represents Australia in the development of international standards on Artificial Intelligence. With the IEEE, she is an expert for the Ethics Certification Program for Autonomous and Intelligent Systems, and leads the work stream on Dignity and Agency as part of IEEE’s Digital Inclusion, Identity, Trust, and Agency (DIITA) program.
She is also a member of the European AI Alliance and a member of the editorial board of Springer’s new Journal AI & Ethics.
Aurelie’s insights:

- In the past few years, there has been a proliferation of AI ethics principles developed by many different actors (e.g., private companies, governments, intergovernmental organisations, civil society, etc.), which has mostly helped with:
    - Defining a north star for the adoption and use of the powerful technology that is AI;
    - Identifying the level of commitments/expectations of the various actors; and
    - Identifying areas of consensus.
- For a technology such as AI, which is predicted to add $15 trillion to the world’s economy[1], forms a part of our everyday life and is the subject of global governance efforts[2], identifying areas of consensus is a valuable exercise. In January 2020, to uncover these areas of consensus, the Berkman Klein Center created a visualization tool called ‘Principled AI’.[3] As a result of this exercise, they “uncovered eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and nondiscrimination, human control of technology, professional responsibility, and promotion of human value.”[4]
- There are growing concerns that “principles alone cannot guarantee ethical AI.”[5] In his article, Brent Mittelstadt, Senior Research Fellow at the Oxford Internet Institute, challenges whether ethics principles are appropriate for AI, and explains that “the real work of AI ethics begins now: to translate and implement our lofty principles and, in doing so, to begin to understand the real ethical challenges of AI.”[6]
- 2021 is likely to be a year focused on operationalising AI principles, but also a year where we see ‘AI in court’ more often, reminding us that decisions automated by AI are still subject to existing laws. Even if the AI model used is a black box, organisations still need to demonstrate that the recommendations or decisions made were reasonable and in compliance with the applicable laws. To quote the Australian Human Rights Commission, “any business should ensure that its decision-making is fair, accurate and avoids bias or discrimination. This proposition should be equally true for decision-making that uses AI systems.”[7] The reminder that AI is not lawless came as early as January this year, when an Italian court found that Deliveroo’s rider-ranking algorithm was in breach of local labour laws because it penalised riders who cancelled pre-booked rides less than 24 hours in advance, even in circumstances where riders were sick or had to attend to an emergency.[8] This case reemphasises the need for organisations to ‘build in compliance,’ especially given that retraining machine learning models can be a costly exercise.
Actions for leaders:
- To implement AI responsibly in 2021, technical requirements such as data quality and model performance remain essential, but focus strongly on broader organisational processes and controls: review the adequacy of existing governance processes, and embed dynamic risk management and impact assessments to ensure better oversight of AI, mitigate risks and achieve compliance by design.
Given the complex and uncertain operating environment expected to continue in 2021, robust information governance is needed to provide a system for the effective control and management of information assets — one that enables access to real-time, accurate information and optimises data assets and value-generating activities while minimising risk.
You can read the insights from the rest of our expert panel in our InfoGovANZ Key Learnings from 2020 – Action and Insights for 2021 Report. The report was developed from a virtual forum discussing the impact of COVID-19 and IG implications for organisations on data, access to information, trust, transparency and accountability, cybersecurity, global privacy regulatory developments, eDiscovery, ethics and artificial intelligence.
You can also watch the recording of the 28 January 2021 webinar here.
[1] “AI Will Add $15 Trillion To The World Economy By 2030,” Frank Holmes, Forbes, published 25 February 2019
[2] “AI & Global Governance: Using International Standards as an Agile Tool for Governance,” Peter Cihon, UNU-CPR Centre for Policy Research, published 8 July 2019
[3] “Principled Artificial Intelligence, Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI,” Jessica Fjeld, Adam Nagy, Nele Achten, Hannah Hilligoss, Madhulika Srikumar, Berkman Klein Center, published 15 January 2020.
[4] Ibid 3
[5] “Principles alone cannot guarantee ethical AI,” Brent Mittelstadt, Oxford Internet Institute, University of Oxford, Article in Nature Machine Intelligence, published in November 2019
[6] Ibid 5
[7] “Businesses warned AI use risks breaches of anti-discrimination laws,” James Eyers, Australian Financial Review, published 24 November 2020
[8] “Italian court rules against ‘discriminatory’ Deliveroo rider-ranking mechanism,” Natasha Lomas, TechCrunch, published 5 January 2021