Season 3
September 30, 2025

S3 | E24: AI Oversight: Why Program Risk Assessment Is Essential

Artificial intelligence (AI) is moving faster than any technology banks have seen before. From fraud detection to lending platforms, AI is being embedded in nearly every vendor solution. For community banks and credit unions, this acceleration brings both opportunity and risk. It also presents risk and compliance teams with new demands from the board and regulators for stronger oversight.

In this episode of the Banking on Data podcast, host Ed Vincent continues the conversation with compliance expert Beth Nilles, who unpacks how financial institutions can take a program risk assessment approach to AI. This builds on a previous episode, How Community Financial Institutions Can Build a Responsible AI Approach, in which Beth outlined guardrails and governance for adopting AI responsibly. In Part 2, she explains why program risk assessments are critical, what they look like in practice, and how often institutions should conduct them.

Listen or watch the full episode, or continue reading the summary below to learn more.

What Is a Program Risk Assessment?

Beth describes a program risk assessment as a top-down view of risk in a particular bank program — whether that’s fair lending, BSA/AML, or now, artificial intelligence. Instead of evaluating one control or risk at a time, institutions look at the “30,000-foot view”: the overall quantity of risk plus the effectiveness of risk management. Together, these create an overall risk rating for a program and provide clarity on whether the institution is operating within its stated risk appetite.

“Program risk assessment is a top-down look. You’re looking at your total risk, then how well you manage that risk, and together you get an overall risk rating for a program.”
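As an illustration only (not from the episode), the top-down combination Beth describes can be sketched as a simple matrix lookup: the quantity of risk crossed with the effectiveness of risk management yields the overall program rating. The rating scales and matrix values below are hypothetical assumptions, not regulatory guidance.

```python
# Hypothetical sketch of a program-level risk rating: combine the
# quantity of risk with the effectiveness of risk management.
# Scales and matrix values are illustrative assumptions only.

RATING_MATRIX = {
    # (quantity of risk, risk management effectiveness) -> overall rating
    ("low", "strong"): "low",
    ("low", "adequate"): "low",
    ("low", "weak"): "moderate",
    ("moderate", "strong"): "low",
    ("moderate", "adequate"): "moderate",
    ("moderate", "weak"): "high",
    ("high", "strong"): "moderate",
    ("high", "adequate"): "high",
    ("high", "weak"): "high",
}

def overall_rating(quantity: str, effectiveness: str) -> str:
    """Top-down rating for a program (e.g., an AI program)."""
    return RATING_MATRIX[(quantity, effectiveness)]

# Example: high inherent AI risk that is well managed nets out to moderate,
# which leadership can then compare against the board's stated risk appetite.
print(overall_rating("high", "strong"))  # moderate
```

The point of the matrix form is that strong risk management can offset, but never erase, a high quantity of risk.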

Why Apply This to AI and Why Now?

AI is showing up in nearly every vendor solution: fraud monitoring, AML systems, lending platforms, customer engagement tools. The velocity of change is unlike anything banks have seen before. Beth emphasizes that this creates both opportunity and exposure, particularly for smaller institutions that may not have large risk teams. A program risk assessment gives leaders a structured way to evaluate AI risk without starting from a blank sheet of paper.

She notes that her team drew on established frameworks from NIST, OCC, and FDIC to shape an AI program risk assessment tool. By aligning to these standards, institutions can measure attributes such as regulatory oversight, fairness, bias, and drift. Early on, metrics may be more qualitative, but over time banks can add quantitative measures and strengthen their monitoring.

“It seemed like the perfect time… banks don’t have to invent this themselves. We looked at NIST, OCC, FDIC guidance and asked: if I had to measure AI risk at a high level, what would I look at?”
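To make the "qualitative first, quantitative later" progression concrete, here is a hypothetical sketch of scoring the high-level attributes the episode names (regulatory oversight, fairness, bias, drift) and rolling them up. The 1–5 scale, the averaging rule, and the thresholds are illustrative assumptions, not part of any framework.

```python
# Hypothetical sketch: score high-level AI risk attributes (names from the
# episode), then roll them up into a single program-level rating.
# The 1 (low risk) to 5 (high risk) scale and thresholds are assumptions.

ATTRIBUTES = ["regulatory oversight", "fairness", "bias", "drift"]

def assess(scores: dict[str, int]) -> str:
    """Average qualitative attribute scores into a low/moderate/high rating."""
    avg = sum(scores[a] for a in ATTRIBUTES) / len(ATTRIBUTES)
    if avg < 2.5:
        return "low"
    if avg < 3.5:
        return "moderate"
    return "high"

# Early on, scores may be judgment calls; over time an institution can
# back each one with quantitative monitoring (e.g., drift metrics).
print(assess({"regulatory oversight": 4, "fairness": 3, "bias": 3, "drift": 4}))  # high
```

Even a simple roll-up like this gives a repeatable, documentable starting point that can be refined as quantitative measures come online.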

How Often Should AI Program Risk Assessments Be Conducted?

Unlike other program risk assessments, which may be annual, AI requires much more frequent evaluation. Customer-facing AI in particular must be reviewed regularly for fairness, bias, and drift. Beth recommends starting with semi-annual assessments and increasing to quarterly or even more often as AI adoption grows within the institution.

Who Should Use an AI Program Risk Assessment?

Beth stresses that institutions should conduct a program risk assessment at the very beginning of their AI journey. It can serve as a readiness check — helping banks compare their current state against their board’s risk appetite and providing a roadmap for where they want to go. For institutions already experimenting with AI, it’s not too late: starting now still helps establish a defensible, repeatable approach that will evolve alongside new use cases.

“You should do it right up front… it tells you where you are and maybe where you want to go because you’re comparing it to that risk appetite your board has set.”

Key Takeaways for FI Leaders

Beth’s guidance boils down to a few practical actions for boards, CROs, and compliance leaders:

  1. Adopt a top-down view of AI risk. Pair overall risk exposure with the strength of controls to get a clear risk rating.
  2. Leverage existing frameworks. Use NIST, OCC, and FDIC guidance to inform attributes and metrics, rather than starting from scratch.
  3. Make it continuous. Unlike annual program reviews, AI requires semi-annual or quarterly updates due to the velocity of change.
  4. Use it as a readiness tool. Assessments help align AI adoption with board-defined risk appetite from day one.

Responsible AI adoption doesn’t just happen at the control level; it requires program-level oversight that evolves as fast as the technology itself. By applying program risk assessments, community banks and credit unions can embed AI within their risk frameworks, protect their customers, and create a foundation for innovation.

To learn more, catch up on Part 1 of this conversation here, or explore the Lumio Program Risk Assessment solution, now with an AI risk assessment.