The Risks of Unsanctioned AI Use and Strategies for Responsible Governance

January 30, 2024

According to an article in Risk Management Magazine, organizations are facing a growing challenge with the rise of unsanctioned artificial intelligence (AI) use, a phenomenon similar to the longstanding issue of “shadow IT.”

Gartner predicts that 75% of employees will be using unauthorized AI by 2027, creating significant data security exposure because companies often cannot see the threats these tools introduce. The risks of unauthorized AI use include the questionable integrity of AI-generated information, a lack of standards that leads to unreliable results, and potential biases in the data used to train AI tools.

Information leakage is a prominent concern: AI tools often ingest proprietary data, and once that data leaves the organization it is difficult to retrieve. Such leakage can violate privacy regulations like GDPR and CCPA, as well as intellectual property laws, exposing organizations to additional legal risk. Addressing these challenges requires a strategic approach to regulating AI use among employees.
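As an illustration of one technical safeguard against leakage, the minimal Python sketch below redacts a few recognizable identifier types from a prompt before it is sent to any external AI service. The patterns, function names, and placeholder tokens are assumptions for illustration only, not a complete or production-grade PII filter; a real deployment would rely on a vetted detection library and pair filtering with the governance measures discussed below.

import re

# Illustrative patterns for a few common identifier types (assumed, not exhaustive).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace recognizable sensitive values with placeholder tokens
    before the text is handed to an external AI service."""
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED_{label.upper()}]", redacted)
    return redacted

if __name__ == "__main__":
    sample = "Contact jane.doe@example.com about card 4111 1111 1111 1111."
    print(redact_prompt(sample))
    # Prints: Contact [REDACTED_EMAIL] about card [REDACTED_CREDIT_CARD].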

Organizations have two primary options: impose a company-wide ban on AI, or responsibly enable AI solutions that support business innovation. A total ban may hinder employee productivity and invite circumvention, while the second option demands a nuanced strategy given the current lack of external regulation and guidance. Organizations are encouraged to understand the AI tools relevant to their use cases, request demos from providers, and establish guiding principles for responsible AI use.

Setting up rules and supporting processes for AI use, similar to existing IT and security measures, can mitigate these risks. Such rules should address privacy, accountability, fairness, transparency, and security. Educating staff is equally important: well-informed employees become a knowledgeable frontline defense against improper AI use and the business risks it creates.
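As one way such rules could be made operational, the sketch below encodes a hypothetical register of approved AI tools and the data classifications each may receive, then checks a proposed use against it. The tool names, classification labels, and the AIToolPolicy structure are illustrative assumptions, not a prescribed scheme.

from dataclasses import dataclass

@dataclass
class AIToolPolicy:
    """Hypothetical policy record for a single AI tool."""
    name: str
    approved: bool           # has passed security/legal review
    data_allowed: set[str]   # data classifications the tool may receive

# Illustrative register of tools an organization might maintain.
TOOL_REGISTER = {
    "internal-chat-assistant": AIToolPolicy(
        "internal-chat-assistant", approved=True, data_allowed={"public", "internal"}
    ),
    "external-code-helper": AIToolPolicy(
        "external-code-helper", approved=True, data_allowed={"public"}
    ),
}

def is_use_permitted(tool: str, data_classification: str) -> bool:
    """Reject unknown or unapproved tools and disallowed data classifications."""
    policy = TOOL_REGISTER.get(tool)
    return bool(
        policy and policy.approved and data_classification in policy.data_allowed
    )

# Example: sending confidential material to an externally hosted tool is denied.
print(is_use_permitted("external-code-helper", "confidential"))  # False
print(is_use_permitted("internal-chat-assistant", "internal"))   # True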

By permitting the use of secure AI tools aligned with business needs and enforcing established rules, organizations can strike a balance between supporting employees and reducing security vulnerabilities. As AI technologies continue to evolve rapidly, organizations must adapt and proactively manage their approach to AI governance.
