Ethics in the Age of Disruptive Technologies: An Operational Roadmap (the ITEC Handbook), by José Roger Flahaux, Brian Patrick Green, and Ann Skeet, offers organisations a strategic plan to enhance ethical management practices, empowering them to navigate the complex landscape of disruptive technologies such as AI, machine learning, encryption and tracking while upholding strong ethical standards. The Institute for Technology, Ethics and Culture (ITEC), housed at the Markkula Center for Applied Ethics at Santa Clara University, is a collaboration between the Center and the Vatican’s Dicastery for Culture and Education. The Institute convenes leaders from business, civil society, academia, government, and all faith and belief traditions to promote deeper thought on technology’s impact on humanity. Download the ITEC Handbook here.
AI & Ethics
Safe and Responsible AI Discussion Paper
The Government’s Safe and Responsible AI in Australia Discussion Paper was released last week by the Minister for Industry and Science, Ed Husic MP. The Discussion Paper canvasses existing regulatory and governance responses in Australia and overseas, identifies potential gaps and proposes several options to strengthen the framework governing the safe and responsible use of AI. It builds on the Rapid Response Report: Generative AI, delivered by the government’s National Science and Technology Council and released alongside the Discussion Paper, which assesses potential risks and opportunities in relation to AI and provides a scientific basis for discussions about the way forward. Access the Safe and Responsible AI in Australia Discussion Paper here. Access the Rapid Response Report: Generative AI here. You can have your say on the discussion paper by answering some or all of the 20 questions in the Government’s online survey and uploading a separate submission if needed – access the link here. Make […]
Regulating AI in the UK (part 2)
Last month we brought you the UK Government’s White Paper, released on 29 March 2023, setting out a pro-innovation approach to AI regulation, together with Tom Whittaker’s flowchart for navigating the proposed EU AI Act. Tom Whittaker of Burges Salmon (UK) has now developed a further flowchart to assist in navigating the proposed UK approach to AI regulation. It identifies the key decisions to be considered and references the relevant sections of the White Paper. As Tom points out, organisations may find they need to navigate multiple regulatory regimes and jurisdictions, and how they comply with each of those regimes (and other relevant laws) may look very different. For example, you can see the different approaches being taken in the one-page visual on anticipated AI regulations in the UK, EU and US – access the horizon scan here – and the glossary of existing and anticipated AI definitions […]
The State of AI Governance in Australia
The Human Technology Institute has just released The State of AI Governance in Australia, a report providing a timely overview of how organisations are approaching the governance of AI in Australia today. Its findings are based on surveys, structured interviews, and workshops engaging more than 300 Australian company directors and executives, as well as expert legal analysis and extensive desk research. The report reveals that corporate leaders are largely unaware of how existing laws govern the use of AI systems in Australia. It finds that both company directors and senior executives see huge opportunities for AI systems to improve productivity, process efficiencies, and customer service, but investment in AI systems and technical skills has not been matched by investment in AI system management and governance. Furthermore, corporate leaders report that they lack the awareness, skills, knowledge and frameworks to use AI systems effectively and responsibly. The report suggests four […]
A Taxonomy of Trustworthiness for Artificial Intelligence
A new report published by the UC Berkeley Center for Long-Term Cybersecurity (CLTC) aims to help organizations develop and deploy more trustworthy artificial intelligence (AI) technologies. A Taxonomy of Trustworthiness for Artificial Intelligence: Connecting Properties of Trustworthiness with Risk Management and the AI Lifecycle, by Jessica Newman, Director of CLTC’s AI Security Initiative (AISI) and Co-Director of the UC Berkeley AI Policy Hub, is a complement to the newly released AI Risk Management Framework, a resource developed by the U.S. National Institute of Standards and Technology (NIST) to improve transparency and accountability in the rapid development and implementation of AI throughout society. “This paper aims to provide a resource that is useful for AI organizations and teams developing AI technologies, systems, and applications,” Newman wrote. “It is designed to specifically assist users of the NIST AI RMF; however, it could also be helpful for people using any kind […]
GPT and Generative AI: How it works, the risks, and how it impacts the legal profession and legal services
2023 appears to be well and truly the year of AI. Ever since the release and worldwide attention of ChatGPT by OpenAI in late 2022, followed recently by the release of GPT-4, it seems there is a new release or revelation on a daily basis about how these Generative AI tools can perform, with ease, tasks that have previously been very labour-intensive and manual. Regardless of how far we will be able to utilise these tools and integrate them into the corporate workplace, the advent of AI systems that can generate text with little or no effort or cost clearly puts us on the precipice of significant change. The purpose of this article is to provide some insights into these tools, some considerations on how they could be used, and some tips for crafting prompts and using LLMs. […]
Navigating the new EU AI Act
Tom Whittaker of Burges Salmon (UK) has developed a flowchart to assist in navigating the new EU AI Act. The new act is extra-territorial, and obligations (and the risk of penalties and enforcement) still arise where certain legal entities are based outside of the EU. Originally posted here
Regulating AI in the UK
The UK Government released a White Paper on 29 March setting out plans to implement a pro-innovation approach to AI regulation. The intention is not to introduce AI legislation ‘too early’; instead, the approach relies on collaboration between government and business, and on empowering regulators to take the lead. The principles to guide regulator responses are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The consultation period for the Paper and related Impact Assessment closes on 21 June 2023 – you can read more about the pro-innovation approach to AI regulation here.
ChatGPT Proves a Mediocre Law Student
[Note: InfoGovANZ thanks Craig Ball for permission to republish his article here, which was first published on Ball in Your Court] I recently spent a morning testing ChatGPT’s abilities by giving it exercises and quizzes designed for my law and computer science graduate students. Overall, I was impressed with its performance, but also noticed that it’s frequently wrong but never in doubt: a mechanical mansplainer! If you’re asking, “What is ChatGPT,” I’ll let it explain itself: “ChatGPT is a large language model developed by OpenAI. It is a type of machine learning model called a transformer, which is trained to generate text based on a given prompt. It is particularly well-suited to tasks such as natural language processing, text generation, and language translation. It is capable of understanding human language and generating human-like text, which makes it useful for a wide range of applications, such as chatbots, question-answering systems, and […]
NIST AI Framework
The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has released its AI Risk Management Framework (AI RMF 1.0), a guidance document for organisations designing, developing, deploying or using AI systems, to help manage the many risks of AI technologies. It was released along with a companion NIST AI RMF Playbook, an AI RMF Explainer Video, an AI RMF Roadmap, an AI RMF Crosswalk, and various Perspectives. For an overview of the national and international AI regulatory landscape as at October 2022, you can read the NSW Information and Privacy Commissioners’ high-level analysis and overview here – AI National and International Regulatory Landscape – InfoGovANZ
AI National and International Regulatory Landscape
The NSW Information and Privacy Commissioners have undertaken a high-level scan of the national and international regulatory landscape relevant to AI, which includes:
- Governance models used internationally in regulating AI, and a recognition of Horizontal and Hybrid (broad-based and legislative/policy) and Vertical (rights-specific and single treatment type) approaches to AI regulation.
- High-level categorisation of risks to information access and privacy rights that arise in the use of AI, together with treatments to manage identified risks.
Read the full report here.
UK proposals for new AI Rule Book
The UK Government has put forward proposals on the future regulation of Artificial Intelligence, to help develop consistent rules that promote innovation and protect the public. It comes as the Data Protection and Digital Information Bill is introduced to Parliament, which will transform the UK’s data laws to boost innovation in technologies such as AI. The Bill will seize the benefits of Brexit to keep a high standard of protection for people’s privacy and personal data while delivering around £1 billion in savings for businesses. The new AI paper outlines the government’s approach to regulating the technology in the UK, with proposed rules addressing future risks and opportunities so businesses are clear on how they can develop and use AI systems and consumers are confident they are safe and robust. This approach will create proportionate and adaptable regulation so that AI continues to be rapidly adopted in the UK to boost […]
NSW AI Assurance Framework
All NSW government agencies are required from March 2022 to use the AI Assurance Framework. The Framework assists project teams using AI to comprehensively analyse and document their projects’ AI-specific risks. It also assists teams to implement risk mitigation strategies and establish clear governance and accountability measures. Under the AI Policy and AI Assurance Framework, agencies are required to abide by the following ethical principles:
- Community benefit – AI should deliver the best outcome for the citizen, and key insights into decision-making.
- Fairness – use of AI will include safeguards to manage data bias or data quality risks.
- Privacy and security – AI will include the highest levels of assurance.
- Transparency – review mechanisms will ensure citizens can question and challenge AI-based outcomes.
- Accountability – decision-making remains the responsibility of organisations and individuals.

View the NSW AI Assurance Framework here.
Urgent action needed over AI risks
UN High Commissioner for Human Rights Michelle Bachelet stressed the urgent need for a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are put in place. She also called for AI applications that cannot be used in compliance with international human rights law to be banned. As part of its work on technology and human rights, the UN Human Rights Office has published a report that analyses how AI – including profiling, automated decision-making and other machine-learning technologies – affects people’s right to privacy and other rights. Read the UN High Commissioner for Human Rights’ statement or report.
EU bodies call for facial recognition ban in public
The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have expressed support for the European Commission’s Proposal for a Regulation on the use of artificial intelligence technologies, but expressed concern about the exclusion of international law enforcement cooperation from the scope of the proposal. The EDPB and EDPS go further than the European Commission’s proposal for a regulation issued in April – urging that the planned legislation should be broadened to include a ‘general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context.’ Read the EDPB release here and the Proposal here.
Australians deserve tech that protects their rights
A new report by the Australian Human Rights Commission calls for far-reaching changes to ensure government, companies and others safeguard human rights in the design, development and use of new technologies like artificial intelligence (AI).
The Human Rights and Technology Final Report makes 38 recommendations to ensure human rights are upheld in Australia’s laws, policies, funding and education on AI. This includes the recommendation to modernise Australia’s regulatory system to ensure AI-informed decision making is lawful, transparent, explainable, responsible and subject to appropriate human oversight, review and intervention. Stronger laws are recommended to protect the community from misuse of facial recognition and other biometric technology.
The Report recommends the creation of a new AI Safety Commissioner to help lead Australia’s transition to an AI-powered world. It is envisaged that this new regulatory body would be a key source of expertise on AI, providing guidance to governments and the private sector, and providing independent advice to policymakers and Parliament.
You can read the Human Rights and Technology Final Report and find out more about the project at tech.humanrights.gov.au
New regulations for Artificial Intelligence in the EU
The European Commission has just announced a package of proposed regulations to make sure that AI systems used in the EU are safe, transparent, ethical, unbiased and under human control. Systems are categorised by risk, with checks imposed on any ‘high-risk’ technology, including AI systems used to screen people for school admission, jobs or credit. The proposals also include a ban on most indiscriminate surveillance, including live facial scanning, and AI applications used in critical infrastructure, migration and law enforcement would also be subject to strict safeguards. The proposed regulation is one of the broadest of its kind to be introduced by a Western government, and part of the EU’s expansion of its role as a global tech enforcer. Regulators could fine a company up to €30 million or 6% of annual worldwide revenue for the most severe violations. Read more about the EU’s approach to AI in Excellence and trust in artificial intelligence | European Commission (europa.eu) and New rules for […]
Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias
This technical paper is a collaborative partnership between the Australian Human Rights Commission, Gradient Institute, Consumer Policy Research Centre, CHOICE and CSIRO’s Data61. We explore how the problem of algorithmic bias can arise in decision making that uses artificial intelligence (AI). This problem can produce unfair, and potentially unlawful, decisions. We demonstrate how the risk of algorithmic bias can be identified, and the steps that can be taken to address or mitigate the problem. AI is increasingly used by government and businesses to make decisions that affect people’s rights, including in the provision of goods and services, as well as in other important decision making such as recruitment, social security and policing. Where algorithmic bias arises in these decision-making processes, it can lead to error. Especially in high-stakes decision making, errors can cause real harm. The harm can be particularly serious if a person is unfairly disadvantaged on the basis of their […]
AI Transparency in Digital Government Highlights
To celebrate Right to Know 2020, Information Governance ANZ was delighted to host a timely discussion on the right to access information and the use of algorithms in government decision-making. This interactive forum was facilitated by Susan Bennett, Founder of InfoGovANZ, and our special guests included:
- NSW Information Commissioner – Elizabeth Tydd
- Victorian Information Commissioner – Sven Bluemmel
- Senior Research Fellow, University of Cambridge – Dr Jat Singh

The increasing adoption of technology across society, including in government, requires the preservation, assurance and assertion of information access rights. The right of access to government information also extends to information held by contractors that provide services to the public on behalf of government. Applying and challenging these rights becomes more complex in the new world of automated decision-making. Legal frameworks, licensing and contracts must evolve to ensure information governance is applied to the design and use of algorithms, so that rights and […]
Principles of Explainable AI
The US National Institute of Standards and Technology (NIST) has released a draft paper on four principles of explainable AI. Explainability is one of several properties that characterise trust in AI systems; others include resiliency, reliability, bias and accountability. Usually, these terms are defined as part of a set of principles or pillars. The draft paper sets out four principles encompassing the core concepts of explainable AI:
- Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
- Meaningful: Systems provide explanations that are understandable to individual users.
- Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.
- Knowledge Limits: The system only operates under conditions for which it was designed, or when it reaches sufficient confidence in its output.