On 21 December 2023, the UN Secretary-General’s Advisory Body launched its Interim Report: Governing AI for Humanity. The report calls for closer alignment between international norms and how AI is developed and rolled out. The centrepiece of the report is a proposal to strengthen the international governance of AI by carrying […]
Computational Power and AI
Given the push to build AI at ever-increasing scale, and the risks that entails, this timely report from the AI Now Institute looks at the material costs involved and why concentration in compute is driving a race to the bottom. As the report explains, computational power is a core dependency in building large-scale […]
Singapore IMDA launches Generative AI Evaluation Sandbox
On the day between the U.S. President’s Executive Order on AI and the signing of the Bletchley Declaration, Singapore’s IMDA and the AI Verify Foundation launched the “Generative AI Evaluation Sandbox”, a new initiative to build knowledge and develop new benchmarks and tests for generative AI (GAI) systems. This is part of the effort […]
The risks for professionals relying on Generative AI
Two recent examples of reliance on Generative AI-generated content have highlighted the risks, and the consequences, when independent checking and verification are not undertaken. One involved two lawyers in the US whose closing submissions cited ChatGPT-generated cases that did not exist, and the other […]
Zoom clarifies that it won’t use data without consent for AI training
In the past few weeks, media reports have pointed out that Zoom’s updated Terms of Service, introduced in March, would enable Zoom to use data it collects for AI training purposes. Last week, this led Zoom’s CEO to announce that it will not use data […]
Ethics in the Age of Disruptive Technologies: An Operational Roadmap
Ethics in the Age of Disruptive Technologies: An Operational Roadmap (ITEC Handbook), by José Roger Flahaux, Brian Patrick Green, and Ann Skeet, offers organisations a strategic plan to enhance ethical management practices, empowering them to navigate the complex landscape of disruptive technologies such as AI, machine learning, encryption, tracking, and others […]
A Taxonomy of Trustworthiness for Artificial Intelligence
A new report published by the UC Berkeley Center for Long-Term Cybersecurity (CLTC) aims to help organizations develop and deploy more trustworthy artificial intelligence (AI) technologies. A Taxonomy of Trustworthiness for Artificial Intelligence: Connecting Properties of Trustworthiness with Risk Management and the AI Lifecycle, by Jessica Newman, […]
GPT and Generative AI: How it works, the risks, and how it impacts the legal profession and legal services
2023 appears to be well and truly the year of AI. Ever since OpenAI’s release of ChatGPT in late 2022 and the worldwide attention it attracted, followed more recently by the release of GPT-4, it seems there is a new release or a new revelation on a daily basis about […]
ChatGPT Proves a Mediocre Law Student
[Note: InfoGovANZ thanks Craig Ball for permission to republish his article here, which was first published on Ball in Your Court] I recently spent a morning testing ChatGPT’s abilities by giving it exercises and quizzes designed for my law and computer science graduate students. Overall, I was impressed with its […]
Preventing Digital Harm
The World Economic Forum has published its Pathways to Digital Justice report to address systemic legal and judicial gaps and to help guide law and policy efforts towards combating data-driven harms. This is particularly important with the increase in online activities and the digitization of services, which – when misused – can present new […]