In January 2024, NIST published a report on AI security titled "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," authored by Apostol Vassilev (NIST), Alina Oprea (Northeastern University), Alie Fordyce (Robust Intelligence), and Hyrum Anderson (Robust Intelligence). The publication develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). According to the abstract, the taxonomy is built on a survey of the AML literature and arranged in a conceptual hierarchy that covers the key types of ML methods, the lifecycle stages of an attack, attacker goals and objectives, and attacker capabilities and knowledge of the learning process. Importantly, the publication references real-world examples, provides corresponding methods for mitigating and managing the consequences of attacks, and points out relevant open challenges to take into account across the lifecycle of AI systems. The terminology used in the report is consistent with the AML literature and is complemented by a glossary […]