The AI Governance roundtable at Loeb's AI Summit featured two rounds of robust conversation, revealing that organizations are operating under a wide range of AI governance structures. Some have adopted centralized models with oversight concentrated in a core AI governance team, while others use federated structures in which business units maintain responsibility with designated AI leads. Regardless of structure, participants consistently identified speed as a primary friction point: lengthy review and approval cycles are perceived as barriers to innovation and business adoption.
Most organizations have moved beyond standalone AI policies and now maintain more developed governance frameworks and operational processes. AI governance teams are generally tasked with reviewing and approving AI tools and use cases, implementing guardrails and assessing risk prior to launch. However, several participants noted that frameworks are unevenly socialized, and in some cases, tools or use cases slip through without review.
Training and literacy emerged as a major gap. Companies reported insufficient education on AI risk, governance obligations and practical tool usage. Even where enterprise AI tools are licensed, underutilization remains a challenge if teams lack the knowledge or confidence to deploy them effectively.
Participants also emphasized lifecycle governance challenges. Oversight often focuses on pre-launch review, but fewer organizations have mature processes for post-deployment monitoring, version updates or ongoing risk reassessment. This gap is expected to widen with the rise of agentic AI, which may significantly alter governance models and make centralized visibility more difficult.
Finally, companies with mature privacy governance programs appear better positioned to integrate AI governance into existing risk management structures. Where strong data governance foundations existed, AI oversight could be layered onto established processes. In contrast, organizations without these foundations are still building baseline structures while simultaneously responding to AI-specific risk.
Overall, the discussion reflected a shift from theoretical AI governance to operational execution challenges—particularly speed, visibility, lifecycle oversight and workforce enablement.
