A Taxonomy of Trustworthiness for Artificial Intelligence
A new report published by the UC Berkeley Center for Long-Term Cybersecurity (CLTC) aims to help organizations develop and deploy more trustworthy artificial intelligence (AI) technologies. A Taxonomy of Trustworthiness for Artificial Intelligence: Connecting Properties of Trustworthiness with Risk Management and the AI Lifecycle, by Jessica Newman, Director of CLTC's AI Security Initiative (AISI) and Co-Director of the UC Berkeley AI Policy Hub, complements the newly released AI Risk Management Framework, a resource developed by the U.S. National Institute of Standards and Technology (NIST) to improve transparency and accountability in the rapid development and deployment of AI throughout society.

"This paper aims to provide a resource that is useful for AI organizations and teams developing AI technologies, systems, and applications," Newman wrote. "It is designed to specifically assist users of the NIST AI RMF; however, it could also be helpful for people using any kind […]