How to Make AI Work in a World of Regulation

Learn how financial institutions are aligning innovation with compliance, auditability, and control.

Although AI is often framed as a compliance risk, and should indeed be properly governed to prevent misuse, it can also strengthen compliance by reducing human error and inconsistency.

In this guide, we explore how agentic AI can enhance compliance across every stage of loan servicing, a pattern that translates to other workflows in financial services as well.

AI adoption in financial services isn’t stalled by technology. It’s slowed by regulation, liability, and trust.

Maria Vullo, former New York Superintendent of Financial Services, shares how she helps financial institutions and AI vendors align innovation with complex regulatory demands.

From model explainability to auditability and data control, she lays out what it really takes to bring AI into regulated workflows without introducing risk you can’t manage.

We covered:

  • Why regulation depends on how AI is used, not just what it is

  • What scares banks most: audit trails they can't explain or defend

  • How AI can reduce fraud by removing human intent, but only if models are built for accountability

  • Why off-the-shelf SaaS is giving way to private, controlled deployments

  • The tradeoff every compliance team faces: visibility vs. liability

Agentic AI can drive major efficiency gains, but without the right systems in place, it can also introduce serious risks.

In regulated sectors like finance and insurance, those risks aren't acceptable. This post outlines key failure points, including traceability gaps and prompt vulnerabilities, and offers a blueprint for deploying AI agents with confidence. Learn how to design for control, not just capability.