Season 3
September 23, 2025

S3 | E23: How Community Financial Institutions Can Build a Responsible AI Approach

Artificial intelligence (AI) is no longer just a buzzword in financial services. From lending to fraud detection to customer service, AI is steadily finding its way into community banks and credit unions. But for leaders, boards, and compliance teams, one pressing question remains: how do we adopt AI responsibly?

In this episode of the Banking on Data podcast, host Ed Vincent sits down with Beth Nilles, who brings more than 30 years of banking leadership across lending, operations, and compliance. Beth offers practical guidance for financial institution leaders who may be exploring AI for the first time - or wrestling with how to scale responsibly without falling behind on regulatory expectations.

Listen or watch the full episode, or continue reading the summary below to learn more.

Why Responsible AI Matters for Community Banks and Credit Unions

Community financial institutions (FIs) face unique pressures. Boards and regulators increasingly expect them to demonstrate AI risk oversight that aligns with existing frameworks like model risk management (SR 11-7) and operational risk practices. As Beth explains, this isn’t just about technology. AI can materially impact credit underwriting, fraud detection and AML/BSA compliance, marketing personalization, and sensitive data management across multiple channels. Whether your FI has $1 billion or $50 billion in assets, the mandate is clear: AI must be governed with fairness, transparency, monitoring, and regulatory alignment at its core.

What Is the First Step in Adopting AI Responsibly?

Beth’s first piece of advice for banks that have only dabbled in AI is simple: start small. By identifying one manageable use case, boards and CROs can build confidence, establish guardrails, and learn without risking customer data or reputation. Prove this use case out, capture the lessons learned, and then expand into new areas - turning small wins into sustainable momentum for responsible AI adoption.

“Pick a narrow use case with clear benefits, perhaps start in an operational setting, not a customer-facing setting, do an assessment and then start to build a feedback loop about it.”

Accountability: Who Owns AI Risk?

AI is not just an IT issue; it's an enterprise-wide responsibility. Beth underscores that risk, compliance, and business leaders must collaborate from the start, with ownership beginning at the top. Shared accountability ensures AI isn't siloed. It creates an enterprise-wide culture of responsibility that balances innovation with oversight.

“The Board and C-Suite should set the appetite and the tone for use… then risk and compliance set the guardrails… and finally, a specific human being should own it - not a department, but an individual who can be held responsible.”

Guardrails Every FI Needs Before Rolling Out AI

Beth emphasizes that AI requires continuous oversight, not a “set it and forget it” approach. That means more frequent check-ins than traditional risk areas typically receive.

She points to AI frameworks from the OCC and NIST as helpful guides. Institutions don't need to start from scratch; they can extend existing model risk and operational risk frameworks to cover AI. The NIST AI Risk Management Framework in particular highlights four core functions - govern, map, measure, and manage - that align well with what many community FIs already do in model risk management: for example, mapping an AI use case to specific risks (bias, drift, data privacy), measuring outcomes with clear metrics, and managing those risks through ongoing monitoring and feedback loops.

“You’re really just focusing on fairness, accountability, transparency and monitoring… don’t reinvent the wheel. Embed AI into the existing policies you already have.”

By incorporating these principles, community banks and credit unions can avoid reinventing processes, while still demonstrating to regulators and boards that AI adoption is being guided by a structured, defensible framework.
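To make the “measure” and “manage” functions a little more concrete, here is a minimal sketch - illustrative only, not something discussed in the episode - of the kind of recurring drift check a monitoring feedback loop might run. It uses the Population Stability Index (PSI), a distribution-shift metric many model risk teams already track under SR 11-7-style monitoring; the function name, thresholds, and sample data are all hypothetical.

```python
# Hypothetical sketch: a quarterly drift check comparing a model's current
# score distribution against the baseline captured at model approval.
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between two score distributions; higher values mean more drift."""
    # Bin edges come from the baseline, so both periods are compared
    # on the same scale.
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    expected = np.histogram(baseline, bins=edges)[0] / len(baseline)
    actual = np.histogram(current, bins=edges)[0] / len(current)

    # A small floor avoids division by zero / log(0) in empty bins.
    eps = 1e-6
    expected = np.clip(expected, eps, None)
    actual = np.clip(actual, eps, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline_scores = rng.beta(2, 5, 10_000)   # scores at model approval
    current_scores = rng.beta(2.5, 5, 10_000)  # scores this quarter

    psi = population_stability_index(baseline_scores, current_scores)
    # Common rule of thumb: < 0.10 stable, 0.10-0.25 watch, > 0.25 escalate.
    status = "stable" if psi < 0.10 else "watch" if psi < 0.25 else "escalate"
    print(f"PSI = {psi:.3f} -> {status}")
```

In practice, the metrics, thresholds, and review cadence would come from the institution's own model risk policy - the point is simply that the feedback loops Beth describes can reuse monitoring practices most community FIs already have in place.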

Education and Continuous Collaboration Are Essential

Beth closes by reminding leaders that adopting AI responsibly begins with education. A small, informed team can serve as “trailblazers” inside the institution, modeling responsible practices and building confidence across departments. But education shouldn't stop there; it should cascade throughout business units so employees at every level can explain AI decisions to clients and regulators alike.

Equally important is collaboration between the first line of defense (business teams using AI day-to-day) and the second line (risk and compliance). Beth stresses that this interaction must be more frequent than annual check-ins. AI can drift, produce bias, or generate unintended outcomes quickly, so institutions need active monitoring, mini-audits, and feedback loops to stay ahead. By embedding education and collaboration into the governance structure, community financial institutions create the foundation for scaling AI in a way that is both innovative and defensible.

Download AI Guide: Getting Started with AI/ML in Community Banking

Key Takeaways for Community FI Leaders

For boards, CROs, and compliance officers, the message is clear: AI is here, and regulators expect responsible adoption. Beth's sound advice for anyone looking to get started with AI at their institution distills into three key actions:

  1. Start small with manageable pilots and expand as governance matures.
  2. Build shared accountability across technology, risk, and business lines.
  3. Establish guardrails: fairness, transparency, monitoring, vendor diligence, and regulatory alignment that scale with institution size and customer impact.

Responsible AI isn’t just about compliance - it’s about building trust, protecting customers, and ensuring community banks and credit unions can innovate with confidence.