On August 21, 2024, the National Institute of Standards and Technology (NIST) released its second draft for the fourth revision of the Digital Identity Guidelines. The revised draft guidance puts forth a risk-mitigating framework and requirements for “identity proofing and authentication of users (such as employees, contractors or private individuals)” accessing government services or interacting with government information systems online. Building off the first draft of the revised guidance, the latest draft now includes an entire section on AI in identity systems, recognizing both the benefits and risks that AI poses in identity systems and proposing three broad requirements to mitigate these risks.
From: “NIST Releases Updated Draft Guidance for Federal Agencies’ Use of AI in Identity Verification Systems,” AI: The Washington Report, Mintz (via JDSupra).
Balancing the benefits and risks of AI, the new draft guidance proposes three requirements for AI in identity systems, focused on transparency and risk mitigation:
Organizations that rely on AI would be required to document and communicate their use of AI in identity systems. Identity providers and credential service providers that leverage AI would be required to document and communicate their AI “usage to all [relying parties] that make access decisions based on information from these systems.”
Organizations that utilize AI would be required to provide certain information “to any entities that use their technologies,” including the techniques and datasets used to train their models, the frequency of model updates, and the results of any tests of their algorithms.
Lastly, organizations that utilize AI, or that rely on systems that use AI, would be required to adopt NIST’s AI Risk Management Framework for AI risk evaluation and to consult NIST’s companion publication, Towards a Standard for Managing Bias in Artificial Intelligence. Both publications lay out practical steps to reduce AI bias, including using datasets with balanced statistical representation, documenting potential sources of human bias in datasets, updating and testing AI models regularly, defining a fairness metric by which to evaluate AI models, and assembling diverse and inclusive teams to design and deploy AI systems.
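To give a concrete sense of what the fairness-metric step can look like in practice, the sketch below computes a demographic parity difference for identity verification outcomes. This is an illustration only: the choice of metric, the sample data, and the group labels are hypothetical assumptions for this example and are not prescribed by the NIST publications.

```python
# Illustrative only: a minimal sketch of one possible fairness metric
# (demographic parity difference). The data, group labels, and outcomes
# below are hypothetical and not drawn from the NIST guidance.

def demographic_parity_difference(outcomes, groups):
    """Return the largest gap in positive-outcome rates across groups.

    outcomes: list of 0/1 decisions (e.g., 1 = identity verified)
    groups:   list of group labels, parallel to `outcomes`
    """
    rates = {}
    for outcome, group in zip(outcomes, groups):
        passed, total = rates.get(group, (0, 0))
        rates[group] = (passed + outcome, total + 1)
    positive_rates = [passed / total for passed, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Hypothetical verification decisions for two demographic groups.
outcomes = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.80 - 0.40 = 0.40
```

A gap near zero suggests the system verifies members of each group at similar rates; a larger gap flags a disparity worth investigating as part of the regular model testing the draft guidance contemplates.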