Artificial intelligence and machine learning (AI/ML) are ushering in a new era of digital mortgage technology, one in which computers perform specific tasks formerly handled by humans. Heralded by some as a modern miracle, AI/ML has its skeptics, including workers afraid of displacement and regulators who want to forestall the possibility of algorithmic bias by requiring lenders to “show their work.”
Data scientist Boaz Reisman develops AI/ML mortgage solutions at Black Knight. On the Sound of Vision podcast, he dug into that skepticism and admitted that, at least to some degree, it may be warranted.
“You train [a neural network] on a million images, and it can tell all sorts of animals: this is a dog, this is a cat, this is a cow. And then you give it a picture of what is obviously a cat in a pasture, and it says that’s a cow.” Reisman speculates that while farm-like imagery might suggest to the machine that it’s seeing a cow, in that scenario we can’t be certain how it drew its conclusion – and that’s not good enough.
But Reisman also says that example is where the skepticism should end, because the technology is just a tool, and there are humans hard at work controlling that tool and accounting for its output.
“It’s so important to have humans at the end of every decision,” Reisman said, “to temper the exact closeness of the [AI’s answers] with domain-centered, human expertise. I feel like that’s the way to move forward: to keep it human-centric.”
In an industry as highly regulated as the mortgage world, shortcuts like the cat/cow example that don’t show the work are verboten. Enter “AI explainability”: the concept that lenders can use technology and documentation standards to check the AI’s math, validate its answers, and update its decisioning models. It’s something that must be at the core of any successful lending solution.
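To make the idea of “checking the AI’s math” concrete, here is a minimal, purely illustrative sketch in Python. It is not Black Knight’s system: the feature names, weights, and threshold are invented assumptions. The point is that a transparent scoring model can decompose every decision into per-feature contributions a human reviewer can audit.

```python
# Toy sketch of explainable decisioning. All feature names and weights
# below are illustrative assumptions, not any real lending model.

WEIGHTS = {
    "credit_score": 0.004,   # contribution per credit-score point
    "debt_to_income": -2.0,  # contribution per unit of DTI ratio
    "loan_to_value": -1.5,   # contribution per unit of LTV ratio
}
BIAS = -1.0
THRESHOLD = 0.0

def score(applicant):
    """Return (approved, total, contributions) so the 'math' is auditable."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = BIAS + sum(contributions.values())
    return total >= THRESHOLD, total, contributions

approved, total, why = score(
    {"credit_score": 720, "debt_to_income": 0.30, "loan_to_value": 0.80}
)
# Every decision ships with its breakdown, e.g.
# credit_score contributed +2.88, debt_to_income -0.60, loan_to_value -1.20.
print(approved, round(total, 2), {k: round(v, 2) for k, v in why.items()})
```

Real neural networks are far harder to decompose this way, which is precisely why explainability tooling and documentation standards exist: to recover something like this per-decision audit trail from more opaque models.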
Regulators are particularly concerned that algorithms based on historical data might inadvertently “learn” to reject or otherwise discriminate against minorities and other underserved groups in violation of fair-lending laws. Reisman acknowledges this concern but says the mandate for AI explainability requires him and other data scientists to analyze outcomes with a specific eye toward detecting and correcting perceived bias.
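One simple outcome analysis of the kind described above is comparing approval rates across groups (a demographic-parity-style audit). The sketch below uses made-up decisions and a hypothetical review threshold, purely to illustrate the shape of such a check.

```python
# Hedged sketch of a fairness audit: compare approval rates across groups.
# The records and the gap threshold are invented for illustration only.

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def approval_rates(records):
    """Approval rate per group from (group, approved) pairs."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
# A gap above some policy threshold flags the model for human review.
print(rates, round(gap, 2))
```

Real fair-lending analysis is far more involved (controlling for legitimate underwriting factors, disparate-impact testing, and so on), but the principle is the same: measure outcomes by group, then put a human in the loop when the numbers diverge.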
After the podcast recorders stopped rolling, we asked Reisman about his personal ethos and the perspective he brings to the table at Black Knight. For him, it’s all about putting human expertise first in the machine learning process to increase transparency and confidence in models. In fact, Reisman has created his own patent-pending query language, ZQL, to help “teach” machines how to find specific data in unstructured documents using conversational or “heuristic” instructions, much as someone might guide a friend to their home with vague measurements (“a little ways”) or informal landmarks (a dirt road, a weird tree). Created for the mortgage industry, ZQL’s potential applications are far-reaching.
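ZQL itself is patent-pending and its syntax is not public, so the following is only a toy sketch of the general idea of landmark-based extraction: rather than a rigid schema, the instruction says roughly where to look, e.g. “the number a little after the words ‘loan amount.’” The function name, pattern, and sample document are all invented for illustration.

```python
# Toy illustration of landmark-based extraction from unstructured text.
# This is NOT ZQL; it is a hypothetical sketch of the underlying idea.
import re

def find_near_landmark(text, landmark, pattern=r"[\d,]+\.?\d*", window=40):
    """Return the first pattern match within `window` chars after `landmark`."""
    i = text.lower().find(landmark.lower())
    if i == -1:
        return None  # landmark not present in the document
    start = i + len(landmark)
    m = re.search(pattern, text[start:start + window])
    return m.group(0) if m else None

doc = "The borrower requests a Loan Amount of $245,000.00 at a fixed rate."
print(find_near_landmark(doc, "loan amount"))   # finds the nearby figure
print(find_near_landmark(doc, "property tax"))  # landmark absent: None
```

The appeal of the conversational approach is robustness: mortgage documents vary wildly in layout, so “near this landmark” instructions can survive formatting changes that would break a fixed-position template.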
Reisman is proud of the effectiveness of ZQL and excited to be contributing to the larger field of AI/ML.
Listen to his full conversation in the Sound of Vision podcast here.