Implement AI guardrails to safeguard members’ trust 

While artificial intelligence (AI) is a powerful tool, credit union leaders must understand where it’s effective, where it overpromises, and how it can introduce pitfalls into their business, according to Dr. Jennifer Golbeck, director of the Social Intelligence Lab at the University of Maryland. 

Failing to do so can destroy members’ trust and open institutions to regulatory violations, says Golbeck, who addressed America’s Credit Unions’ 2024 Governmental Affairs Conference (GAC) Monday in Washington, D.C., in a presentation sponsored by Glia. 

She focused on two types of AI: predictive and generative. 

Predictive AI “guesses at things you might like,” Golbeck says, citing recommendations from Netflix and Amazon. “This type of AI doesn’t freak us out. And if you do it right, it can be transformative. But you can do it wrong.” 

Case in point: Golbeck obtained a Prozac prescription to treat her rescue dog’s trauma. When she received a text message from the national pharmacy chain that supplies the prescription, she realized the company used medical information for marketing purposes. 

“I didn’t know there wasn’t a firewall between health data and marketing, and I no longer trust them with my data,” Golbeck says. “This is an important lesson: Members’ trust is your most valuable asset. You can’t lose it.” 

She also cited Wells Fargo, which developed an algorithm to determine which customers would receive subprime mortgages. Investigators found that Black consumers were 1.5 times, and Hispanic consumers 2 times, more likely to receive the higher-priced loans than white consumers. 

That’s because the algorithm replicated the flawed human decisions it learned from, Golbeck says. “AI doesn’t know it should be unbiased. It has a veneer of objectivity, and we think it should get rid of bias. But it replicates our decision-making and our biases, too.” 
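The kind of bias audit this points toward can be illustrated with a simple disparate-impact check: compare how often each group receives subprime pricing and take the ratio. The sketch below uses hypothetical data and made-up group labels purely for illustration; it is not Wells Fargo’s algorithm or the investigators’ method.

```python
def subprime_rate(loans, group):
    """Share of a group's loans that were priced subprime.

    loans: list of (group, is_subprime) pairs -- hypothetical audit data.
    """
    in_group = [s for g, s in loans if g == group]
    return sum(in_group) / len(in_group)

def disparity_ratio(loans, group, baseline):
    """How many times more often `group` receives subprime pricing
    than `baseline`. A ratio near 1.0 suggests parity; well above 1.0
    flags a disparity worth investigating."""
    return subprime_rate(loans, group) / subprime_rate(loans, baseline)

# Hypothetical loan outcomes: group A gets subprime pricing twice as often.
loans = [("A", True), ("A", True), ("A", False), ("A", False),
         ("B", True), ("B", False), ("B", False), ("B", False)]
print(disparity_ratio(loans, "A", "B"))  # 0.5 / 0.25 -> 2.0
```

A real fair-lending review would control for legitimate underwriting factors (credit score, income, loan-to-value) before attributing a residual gap to bias; a raw ratio like this is only a first screen.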

This may spark the first phase of regulation around AI, she says. “You need to understand if algorithms have any bias. This regulation is coming soon.” 

Generative AI, including ChatGPT, uses algorithms to create content. While this can save time, the output requires close review, Golbeck says. 

“You need to put up guardrails,” she says. “On the surface, something may look OK. But when you look closer, you’ll see that it’s messed up. AI will raise issues of bias and the need to maintain trust. Keep this in mind as you move forward.” 
