NIST’s New AI Testing Tool Helps Identify Risks and Meet Standards
August 1, 2024
In response to President Biden’s Executive Order on AI, the National Institute of Standards and Technology (NIST) has introduced Dioptra, a new testbed for evaluating AI systems’ performance claims. According to an article by Mia Rendar, Senior Associate at Pillsbury Law, the tool helps users identify potential attacks that could degrade model performance and quantify the resulting failures.
Its release underscores the importance of thorough AI testing to address risks related to reliability, security, and fairness. Users should verify that their testing practices align with their AI service agreements, ensure that testing does not disrupt the AI system’s performance, and consider deploying the testbed locally to avoid sharing the model with third parties.
In addition to Dioptra, Rendar notes that NIST has released several other key documents:
- A guide on managing misuse risk for dual-use foundation models
- A generative AI profile for the AI Risk Management Framework (RMF)
- Secure software development practices for generative AI
- A plan for global engagement on AI standards
These documents provide frameworks for managing AI risks, securing development practices, and promoting international collaboration on responsible AI. Because the AI industry still lacks established standards, NIST’s new testing tool and related guidance are pivotal in setting industry benchmarks and ensuring transparency and accountability in AI technologies.