      BLOG
      Taming the AI Dragon: Effective Governance Frameworks for Managing AI Risks
      Session Summary from Celent's GenAI Symposium
      11th September 2025

As artificial intelligence (AI) continues to evolve, organizations in regulated industries like insurance and wealth management are learning to balance its transformative potential with its inherent risks. At Celent's AI Symposium: Generative and Beyond, I hosted a panel discussion in which experts shared how businesses can embrace AI responsibly, using well-established governance frameworks that manage risks while enabling innovation.

      The Power and Risk of AI

AI, especially Generative AI (GenAI), has the potential to revolutionize industries. As the session title suggests, it is like a powerful dragon: its capabilities can become unpredictable if not properly guided. Effective AI governance is crucial to ensure that AI technologies align with organizational values, regulatory standards, and operational needs. By integrating AI governance with existing frameworks such as third-party risk management, data privacy, and security, organizations can take a strategic approach that unlocks AI's value while mitigating its risks.

      Governance Structures and Risk Management

AI governance structures vary across organizations, but common themes include robust committees, risk-based frameworks, and risk assessments. One organization takes a “find a way to say yes” approach, evaluating AI initiatives through a risk lens that considers factors such as security, data privacy, and operational impact. High-risk use cases, such as customer-facing AI or autonomous agents, undergo thorough reviews, while lower-risk cases benefit from streamlined approval processes.

      AI governance models also involve multiple governance bodies. One model features a steering committee for strategic decisions and a separate group for day-to-day use cases. Use cases are categorized into risk tiers (Prohibited, High, Moderate, and Low), with high-risk cases requiring full committee reviews. Training initiatives reinforce a responsible AI culture while fostering cross-functional collaboration.
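To make the tiering idea concrete, here is a minimal sketch in Python of how such a triage step might be expressed. The tier names come from the session; the use-case attributes (customer_facing, autonomous, uses_sensitive_data) and the routing rules are illustrative assumptions, not any panelist's actual framework.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "Prohibited"
    HIGH = "High"
    MODERATE = "Moderate"
    LOW = "Low"


@dataclass
class AIUseCase:
    name: str
    customer_facing: bool           # illustrative attribute
    autonomous: bool                # acts without human sign-off
    uses_sensitive_data: bool       # e.g. PII or health data
    on_prohibited_list: bool = False


def classify(use_case: AIUseCase) -> RiskTier:
    """Map a proposed use case to a risk tier (assumed rules, for illustration)."""
    if use_case.on_prohibited_list:
        return RiskTier.PROHIBITED
    if use_case.customer_facing or use_case.autonomous:
        return RiskTier.HIGH
    if use_case.uses_sensitive_data:
        return RiskTier.MODERATE
    return RiskTier.LOW


def approval_path(tier: RiskTier) -> str:
    """Route each tier to the kind of review described in the session."""
    return {
        RiskTier.PROHIBITED: "rejected",
        RiskTier.HIGH: "full committee review",
        RiskTier.MODERATE: "targeted risk review",
        RiskTier.LOW: "streamlined approval",
    }[tier]


if __name__ == "__main__":
    chatbot = AIUseCase("customer service chatbot", customer_facing=True,
                        autonomous=False, uses_sensitive_data=True)
    tier = classify(chatbot)
    print(f"{chatbot.name}: {tier.value} -> {approval_path(tier)}")

In practice the classification criteria would come from the organization's own risk taxonomy; the point is simply that the assigned tier determines which approval path a use case follows.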

      Balancing Risk and Innovation

      AI governance is not about slowing down innovation but ensuring that risks are managed effectively. Organizations are focusing on scaling their governance processes to handle an increasing number of AI initiatives. For low-risk use cases, the goal is to streamline approval while maintaining adequate oversight. For more experimental solutions, especially those with emerging risks, a careful, slow-and-steady approach is favored.

A key concern is ensuring that human oversight remains in place. Many AI solutions are already highly accurate, but human engagement is crucial to prevent over-reliance on AI outputs. This oversight is non-negotiable, with human reviewers remaining at the center of decision-making.

      Human-in-the-Loop: Ensuring Accountability

      One of the primary challenges of AI governance is maintaining human control over AI outputs. Even as AI models become more accurate, the risk of human complacency remains a concern. Organizations are addressing this through comprehensive training and awareness programs that emphasize the importance of reviewing AI decisions. Moreover, AI solutions are often used in low-risk areas, such as repetitive tasks, translations, and marketing, where human oversight ensures that critical decisions are not left to AI alone.
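As a purely illustrative sketch, a human-in-the-loop control can be modeled as a gate that refuses to release an AI-generated output until a named reviewer has recorded an explicit decision. The class and function names below are assumptions for illustration, not a description of any specific organization's system.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AIOutput:
    content: str
    model_confidence: float         # illustrative score from the model
    reviewer: Optional[str] = None  # set only after human sign-off
    approved: bool = False


def human_review(output: AIOutput, reviewer: str, accept: bool) -> AIOutput:
    """Record an explicit human decision; the AI output is never auto-approved."""
    output.reviewer = reviewer
    output.approved = accept
    return output


def finalize(output: AIOutput) -> str:
    """Release the output only if a human has reviewed and approved it."""
    if output.reviewer is None:
        raise RuntimeError("No human review recorded; output cannot be released.")
    if not output.approved:
        raise RuntimeError(f"Rejected by {output.reviewer}.")
    return output.content


if __name__ == "__main__":
    draft = AIOutput(content="Policy renewal summary ...", model_confidence=0.97)
    # Even a high-confidence output must pass through a named reviewer.
    human_review(draft, reviewer="j.doe", accept=True)
    print(finalize(draft))

Even a high-confidence output is blocked until a human decision has been recorded, which mirrors the non-negotiable oversight described above.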

      Vendor Risk and Compliance

      Working with vendor-built or embedded GenAI tools requires rigorous vetting processes. Proof-of-concept (POC) testing helps validate the functionality of vendor solutions before they go through detailed due diligence, which assesses data privacy, security, and compliance with the organization’s governance framework. Contracts are scrutinized to ensure that third-party vendors uphold data protection agreements and do not use company data to train their models without consent.

      Navigating Regulatory and Privacy Requirements

      In sensitive domains like insurance and wealth management, balancing AI innovation with data privacy is critical. Governance frameworks ensure that data classifications are reviewed, and sensitive data undergoes thorough scrutiny. For vendor solutions, data protection agreements are a must, and if vendors fail to meet security standards, organizations may opt for alternative solutions.

      Organizations are proactively designing AI policies to meet current and future regulatory requirements, ensuring compliance with frameworks like the EU AI Act. AI use is typically limited to low-risk applications, such as marketing content and internal tools, to avoid sensitive data and autonomous decision-making. As regulations evolve, these organizations are building adaptive, future-proof policies to keep pace with both technological advancements and regulatory changes.

      Building an AI Risk-Aware Culture

      An effective AI governance program requires more than just policies; it needs a cultural shift toward responsible AI use. Many organizations are embedding AI governance into company-wide training, focusing on risk-awareness and human-in-the-loop oversight. Leadership plays a pivotal role in reinforcing responsible AI as a business and ethical priority, while cross-functional teams ensure that AI risk is addressed across departments.

      Training modules on AI risks are becoming standard across many organizations. These programs cover various aspects of AI, including hallucinations, deepfakes, and the importance of human oversight. Employees are encouraged to challenge AI outputs and consider their ethical implications, fostering a culture where AI is used responsibly.

      Future-Proofing AI Governance

      As AI continues to evolve, organizations are focused on future-proofing their AI governance frameworks. They are planning ahead to ensure compliance with emerging regulations and incorporating responsible AI use into their culture. For vendor solutions, ongoing reviews ensure that AI tools remain compliant, secure, and trustworthy over time. Regular recertification of tools is also part of the governance process, ensuring that changes to AI solutions do not compromise safety or compliance.

      A Collaborative, Iterative Approach to AI Governance

      Building and maturing an AI governance framework is an ongoing process that requires collaboration across departments and careful consideration of both risk and opportunity. By fostering a culture of responsible AI use, staying ahead of regulatory changes, and continuously refining governance processes, organizations can harness AI’s full potential while managing its risks. The goal is not to slow down innovation, but to provide the right guardrails so that AI can move forward safely, responsibly, and ethically, benefiting both the business and its customers.

      Author
Ashley Longabaugh
      Head of Wealth Management
      Details
      Geographic Focus
      North America
      Horizontal Topics
      Artificial Intelligence, Artificial Intelligence - Generative AI e.g. ChatGPT, Data & Analytics, Ecosystems and Partnerships, Emerging Technologies, Innovation
      Industry
      Health, Life Insurance, Property & Casualty Insurance, Wealth Management