Adoption of Artificial Intelligence in the Mortgage Industry Is Drawing Increased Scrutiny – and New Rulemaking – From the CFPB and Other Regulators


By Martin S. Frenkel and Brian A. Nettleingham

The rapid advancement of artificial intelligence (AI) technology is raising both hopes and eyebrows across government and industry. Where industry players see tremendous potential benefits in productivity, convenience, and innovation, regulators are equally concerned with the potential risks and dangers AI technology may pose to consumers.  

This dichotomy between the positive aspects of AI and its possible negative repercussions can be seen in how the mortgage industry’s growing use of AI has resulted in increased scrutiny by the Consumer Financial Protection Bureau (CFPB) and other regulatory and governmental authorities. As is often the case when businesses and society adopt new technology, the legal and regulatory landscape governing the mortgage industry’s AI use remains unclear and underdeveloped.

As mortgage lenders, servicers, and others in the industry integrate and leverage AI into various aspects of their operations, they must remain cognizant of the issues the CFPB appears to focus on as it plays regulatory catch-up. For now, those AI concerns seem to center on two broad areas: bias and unfairness in underwriting and appraisals, and the impact of AI on the overall customer experience. 

How the Mortgage Industry Is Leveraging AI

The advantages and efficiencies that AI brings to the mortgage industry can be seen in many aspects of origination, underwriting, lending, and servicing, including:

  • Automating routine tasks: AI can take over menial yet essential tasks such as document processing and management and data input, reducing the rate of errors, improving efficiencies, and freeing up advisors and others to focus more on strategic, big-picture, and value-adding work. 
  • Predictive analytics: One fundamental aspect of “generative” AI is its ability to “learn” from the vast universe of data it consumes and processes. In the mortgage industry, AI can analyze and synthesize data from a wide range of sources to develop forecasts and insights into customer behavior and market trends. This information can then help advisors make more proactive and tailored lending decisions.  
  • Better risk analysis: AI algorithms can help mortgage companies make better risk assessments and underwriting decisions. As discussed below, however, regulators are expressing serious concerns about algorithmic underwriting and AI using this data in a way that is inadvertently yet structurally biased against certain groups or geographic areas.
  • Improved fraud detection: AI can quickly and more comprehensively detect fraud at various points in the mortgage lifecycle, reducing losses and increasing protection for lenders, servicers, and customers alike. 

Concerns About Discriminatory Effects of Algorithmic Underwriting and Appraisals

The Biden Administration has ramped up its efforts to get ahead of what it sees as AI’s potentially negative impacts on consumers, including in mortgage lending and housing. Perhaps the most significant area of concern is AI’s “potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”

That language comes from the “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems” issued on April 25, 2023, by the Civil Rights Division of the United States Department of Justice, the Consumer Financial Protection Bureau, the Federal Trade Commission, and the U.S. Equal Employment Opportunity Commission. 

As it relates to discrimination and bias in mortgage lending, the statement referenced the CFPB’s previously expressed concerns and actions involving algorithmic underwriting and appraisals. Algorithmic underwriting refers to AI’s use of algorithms to evaluate loan applications and a borrower’s creditworthiness. The concern is that these algorithms may perpetuate existing biases in the lending industry, such as discrimination based on race, gender, or income. The statement alluded to two recent CFPB actions on this front.

It mentioned the CFPB’s May 2022 circular advising that when the AI technology used to make credit decisions is too complex, opaque, or new to explain adverse credit decisions, companies cannot claim that same complexity or opaqueness as a defense against violations of the Equal Credit Opportunity Act.

It also referred to the CFPB’s and other agencies’ concerns about “digital redlining,” specifically, the potential discrimination and bias that could arise from using Automated Valuation Models (AVMs) in home appraisals and valuations. The Dodd-Frank Act defines AVMs as “any computerized model used by mortgage originators and secondary market issuers to determine the collateral worth of a mortgage secured by a consumer’s principal dwelling.” 

As we discussed here, the CFPB released a 42-page outline in February 2022 detailing several potential rulemaking options regarding the use of AVMs, rules it sought because, in its words, “without proper safeguards, flawed versions of these models could digitally redline certain neighborhoods and further embed and perpetuate historical lending, wealth, and home value disparities.”

These concerns culminated in six federal regulatory agencies, including the CFPB, issuing a Notice of Proposed Rulemaking (NPRM) on June 23, 2023, seeking public comment on a proposed rule governing the use of AI and other algorithmic systems in appraising home values. Specifically, the proposed rule would “require institutions that engage in certain credit decisions or securitization determinations to adopt policies, practices, procedures, and control systems to ensure that AVMs used in these transactions… comply with applicable non-discrimination laws,” among other compliance issues. It remains to be seen what these standards will look like in their final form, but they are coming, and lenders and servicers need to be prepared to adopt them.

Use of AI Chatbots in Consumer Interactions 

While the CFPB is not as far down the regulatory road as it is with AVMs, the Bureau has recently raised some red flags about the mortgage industry’s use of AI chatbots in interactions with borrowers. 

On June 6, 2023, the agency released the results of its research regarding chatbots, citing both the technology’s benefits and what it sees as its potential shortcomings. The report noted that “financial institutions risk violating legal obligations, eroding customer trust, and causing consumer harm when deploying chatbot technology.” Specifically, it said:

“Like the processes they replace, chatbots must comply with all applicable federal consumer financial laws, and entities may be liable for violating those laws when they fail to do so. Chatbots can also raise certain privacy and security risks. When chatbots are poorly designed, or when customers are unable to get support, there can be widespread harm and customer trust can be significantly undermined.”

While no new rules or regulations on the use of chatbots by lenders appear imminent, the CFPB noted that it “is actively monitoring the market, and expects institutions using chatbots to do so in a manner consistent with their customer and legal obligations.”

The bottom line regarding incorporation of AI technology into regulated activities – including AVMs, chatbots, or other applications – is that mortgage lenders and servicers need to pay close attention to how the government responds to the industry’s increased use and integration of AI into their operations. We will continue to monitor these developments and keep our clients apprised of the ever-evolving AI legal landscape.