The use of artificial intelligence reveals built-in biases
For years, Better Banking Options has talked about racial discrimination in the banking industry and how it contributes to the racial wealth gap. Although some of these biases are perpetuated by the personal prejudices of those in the industry, it’s mostly an institutional problem, with many industry practices originating from a time when discrimination was the norm. As artificial intelligence becomes more widely used by banks and credit unions to make lending decisions, new issues arise that fuel the same problem.
In a recent study from Lehigh University, chatbots were given 6,000 sample mortgage loan applications and asked to approve or deny the loans. An excerpt from the independent news source Kentucky Lantern details the results: the chatbots recommended denials for more Black applicants than for identical white counterparts. They also recommended that Black applicants be given higher interest rates and labeled Black and Hispanic borrowers as “riskier.” This mirrors the higher denial rates Black households face when human mortgage underwriters make these decisions. In theory, neither the algorithms nor the humans producing these results should be taking race into account, since making lending decisions based on race is illegal.
Although algorithms can exclude explicit race categories, when they reflect a system that is inherently unequal, they can still build discrimination into every process. For example, a borrower’s zip code may reflect generations of redlining, which an AI algorithm may “interpret” as riskier. A borrower’s credit score may reflect job market discrimination that becomes part of the AI’s calculations.
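The mechanism above can be sketched with a toy example. Every name and number here is hypothetical, invented purely for illustration: a scoring rule that never sees race, but uses a zip-code feature shaped by historical segregation, can still produce very different approval rates for two groups.

```python
import random

random.seed(0)

# Hypothetical synthetic applicants. Race is never given to the scorer,
# but zip code correlates with group because of (simulated) segregation:
# group "A" mostly lives in zip 1, group "B" mostly in zip 2.
def make_applicant(group):
    zip_code = 1 if (group == "A") == (random.random() < 0.9) else 2
    income = random.gauss(60, 15)  # invented income distribution
    return {"zip": zip_code, "income": income, "group": group}

applicants = [make_applicant("A") for _ in range(1000)] + \
             [make_applicant("B") for _ in range(1000)]

# Invented zip-level historical default rates. Zip 2's higher rate may
# itself reflect past redlining, not the individual applicant's risk.
historical_default_rate = {1: 0.05, 2: 0.20}

def approve(applicant):
    # Race never appears in this calculation, yet the zip penalty
    # carries the historical disparity into every decision.
    score = applicant["income"] - 200 * historical_default_rate[applicant["zip"]]
    return score > 30

def approval_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(approve(a) for a in members) / len(members)

print(f"approval rate, group A: {approval_rate('A'):.2f}")
print(f"approval rate, group B: {approval_rate('B'):.2f}")
```

Running the sketch shows group B approved far less often than group A even though the rule is formally race-blind, which is the proxy effect described above.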
This is particularly alarming when these technologies are used in systems that have historically discriminated against people of color, whether financial or government institutions. If we don’t continue to monitor and adjust these technologies with that history in mind, they will reflect our biases in ways that become extremely difficult to see and understand. As in many industries, artificial intelligence systems don’t so much create new problems as exacerbate existing ones, rapidly accelerating decision making while producing the same outcomes with less oversight.
Although there is no clear solution to these emerging problems, the way forward is to keep examining these systems and algorithms for bias and to account for those biases when making lending decisions. We should also push for legislation that regulates the use of AI in high-stakes lending and legal decisions.
Ideally, banks and credit unions that are Better Banking Options will build relationships and community with their customers, because they recognize that people are more than a collection of statistics. That is something AI algorithms are incapable of, and for that reason among others, financial institutions should use these tools sparingly.