Managing AI Risks During M&A

February 28, 2025

According to attorney Bruce C. Doeg from the law firm Baker Donelson, acquiring a company that utilizes AI requires a thorough understanding of its algorithms, data sources, usage, and governance. This evaluation goes beyond traditional legal due diligence when managing AI risks during M&A and must include assessments of the technology and operations involved.

Doeg points out that a key consideration is the functionality and ownership of the AI system. Buyers must determine whether the AI is proprietary, licensed, or open-source, how it was developed and trained, and whether it operates transparently. However, the true value often lies in data ownership and accessibility rather than the AI algorithm itself. 

A buyer must assess whether the target company has clear rights to its data, whether it is encumbered by licensing or confidentiality agreements, and whether it creates a competitive advantage. The AI’s use case also plays a crucial role: higher-risk applications, such as those in regulated industries or critical decision-making, require heightened scrutiny.

Doeg says risk assessment must cover legal, operational, and technological factors. Legal risks include unauthorized use of proprietary data, misleading AI-related claims to investors or customers, and regulatory compliance.

Operational risks stem from AI accuracy, oversight, and third-party vendor management. Technology risks involve data quality, bias mitigation, explainability, and cybersecurity. Buyers must evaluate the target’s approach to risk management, including AI governance frameworks, validation processes, and regulatory compliance.

Managing AI risks during M&A transactions involves comprehensive due diligence, contractual protections, and potential post-acquisition changes. Given AI’s evolving legal landscape, standard representations and warranties may be inadequate. Buyers should carefully negotiate risk allocation and consider insurance coverage, though AI-related claims remain a developing area. 

Doeg says that in some cases, restructuring AI use or enhancing compliance frameworks after the acquisition may reduce risks. Ultimately, understanding when to walk away is critical. Red flags include AI use in high-risk applications without safeguards, unclear data rights, privacy violations, and cybersecurity weaknesses.

AI can be a powerful asset, but its risks must be thoroughly evaluated. A disciplined approach to managing AI risks allows buyers to capitalize on AI’s benefits while protecting long-term business value.
