Colorado AI Law to Protect Consumers from Algorithmic Discrimination

August 22, 2024



According to an article by Ankura, Colorado’s new artificial intelligence (AI) law mandates that any organization developing or using AI systems that affect Colorado residents take reasonable care to prevent algorithmic discrimination, particularly against protected classes such as age, race, and religion.

Developers of high-risk AI systems must disclose information about their AI’s potential impacts, conduct AI impact assessments, and report any algorithmic discrimination to both deployers and the attorney general within 90 days of discovery.

Employers and businesses deploying these high-risk AI systems must also adhere to stringent requirements, including implementing risk management policies, conducting impact assessments, notifying consumers of AI-driven decisions that affect them, and providing options to opt out of or correct the use of their personal data. Deployers must likewise disclose the types of high-risk AI systems in use and report any algorithmic discrimination to the attorney general within 90 days of discovery.

The article highlights that organizations affected by the law are advised to assess compliance with the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF), which offers a structured approach to identifying and mitigating AI risks. 

The law encourages transparency, incident reporting, and a robust governance structure for managing AI risks. Compliance involves creating data inventories and privacy notices and integrating AI into privacy-by-design processes. Meeting these requirements will help organizations align with Colorado’s AI regulations and maintain the integrity of their AI systems.

Read full article at:
