Principles of Explainable AI
The US National Institute of Standards and Technology (NIST) has released a draft paper setting out four principles of explainable AI. Explainability is one of several properties that characterise trust in AI systems; others include resiliency, reliability, bias, and accountability. These properties are usually defined as part of a set of principles or pillars. The draft paper identifies four principles that encompass the core concepts of explainable AI:

Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
Meaningful: Systems provide explanations that are understandable to individual users.
Explanation Accuracy: The explanation correctly reflects the system's process for generating the output.
Knowledge Limits: The system only operates under conditions for which it was designed, or when it reaches sufficient confidence in its output.
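The Explanation and Knowledge Limits principles lend themselves to a concrete illustration: a system that pairs every output with its supporting evidence, and abstains rather than answering when its confidence is insufficient. The following is a minimal Python sketch of that idea; the names (ExplainedOutput, classify) and the 0.8 threshold are illustrative assumptions, not part of the NIST draft.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Hypothetical structure for illustration; not defined by NIST.
@dataclass
class ExplainedOutput:
    label: str         # the system's output
    evidence: str      # Explanation: accompanying evidence for the output
    confidence: float  # used to enforce Knowledge Limits

# Assumed cutoff for "sufficient confidence" (arbitrary for this sketch).
CONFIDENCE_THRESHOLD = 0.8

def classify(scores: Dict[str, float]) -> Optional[ExplainedOutput]:
    """Return the top-scoring label with an explanation, or abstain.

    `scores` maps candidate labels to confidences, assumed to be
    normalised to [0, 1].
    """
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        # Knowledge Limits: decline to answer rather than guess.
        return None
    evidence = (f"predicted '{label}' with confidence {confidence:.2f}, "
                f"above the {CONFIDENCE_THRESHOLD} threshold")
    return ExplainedOutput(label, evidence, confidence)

print(classify({"cat": 0.92, "dog": 0.08}))  # explained output
print(classify({"cat": 0.55, "dog": 0.45}))  # None: system abstains
```

Note that this sketch addresses only whether an explanation and a confidence gate exist; the Meaningful and Explanation Accuracy principles concern the quality of the explanation itself, which a threshold check alone cannot guarantee.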