ºÚÁϳԹÏÍø

ºÚÁϳԹÏÍø Webinar:
Bringing ML-Powered Decisioning to Life in the Real World

View our LendIt Fintech webinar, in which our Founder and CEO, Evan Chrapko, and Steve Malone, CEO of the $10+ billion CanCap Group, discuss how to harness the power of AI and ML modelling in the lending industry.

Transcript:

[Bo]: We are in a panel called Bringing ML-Powered Decisioning to Life in the Real World, and I am excited to kick this off. With me are Evan Chrapko of Trust Science and Steve Malone of CanCap. Let me tell you what we’re going to try to accomplish today: I am going to let the gentlemen give some introductions and then we can dive into it. Really we want to figure out how to prepare for success in using AI and ML, what the common pitfalls are, where we are in the adoption cycle of machine learning, and what to expect from here. So let’s do introductions first, and I’m going to hand it over to Evan. Evan, do you want to start off and give us an intro?

[Evan]: Sure, thanks Bo, really appreciate you and the sponsors all pulling this together. Pretty excited about what we have to discuss here with Steve. Steve is the CEO of CanCap and one of our partners and early adopters. We’re a platform as a service: at Trust Science we offer an ML-powered credit scoring service. We’re doing this for the benefit of lenders, specifically focused on consumers down-market. So we’re in the position of helping subprime borrowers, or the “conventionally scored subprime”, though we often say “wrongly scored”: we’re helping lenders find the invisible primes, the people who are wrongly scored as subprime.

[Bo]: Alright, Steve.

[Steve]: Thank you, thanks everyone for having us. I am Steve Malone, CEO of CanCap Group. I have been in the finance business in the Canadian marketplace for about 25 years, the majority of that with Wells Fargo, then started this business with two partners about 7-8 years ago. We are in the auto and unsecured credit card lending verticals, across prime and non-prime originations and servicing. I am very much looking forward to today’s discussion.

[Bo]: Great, yes, thank you very much for joining. So let’s kick it off, and just to level-set a little bit: where are we in the industry lifecycle in terms of adoption of AI?

[Evan]: From our perspective it is still really, really early days. You get the references to AI, and when I started this 15 years ago I got every meeting I ever wanted, but a lot of them devolved into joking around about HAL [9000] and Skynet, because, in their defense, what the C-suite knew about AI at the time came from the movies. Now, as practitioners who have productized the ability to deliver this kind of service, we are still working with early adopters; there are still a lot of laggards, and a lot of myths that need busting. And it’s a regulated industry, so you can’t just go off half-cocked, cowboy style; it has to be done in a compliant way, and it’s not easy. It’s been many, many millions of dollars over all of these years, and now a patent portfolio approaching three dozen patents, with five more pending. It’s a testament to there still being a lot of inventing going on. These are still early days.

[Bo]: Steve would you agree? Do you feel like you are an early adopter here?

[Steve]: I think so, for sure. As we started our journey to implement an AI-based credit model, we went into it with a hypothesis that an AI-based model would create lift, improve, or call it augment, our traditional models. Traditional models have been around for 50+ years and they work very well, but when you give an AI model the ability to ingest those traditional models, add additional data, and then learn at the velocity these models learn, that’s where the real test is. I think we’re still early in that journey, but we certainly have started down that road. A lot of our originations are, again, focused in the non-prime space, and when I think of that, it’s like scaling a 20-foot wall: our traditional credit models get us 12 to 13 feet of the way up, but to get over that wall easily enough you need an extension ladder like AI. We started that in the past year and have made significant progress.

[Bo]: So I guess you’ve started to touch on this next question of mine, which is: why is an AI-based model better than a traditional model? I love the analogy of the extension ladder, so let me take it one step further. Can you essentially purchase one ladder that gets you over that wall and serves a great deal of your portfolio, or do you need, like, the 6-foot, the 8-foot, the A-frame, the actual proper extension ladder? Do you need a whole bunch of different tools to make this thing work?

[Steve]: No, I mean look, I think Evan can speak to that in more detail as it pertains to other lenders. Our approach wasn’t to swap everything out and disrupt the whole process. What we had was working; the goal was to get the additional lift by layering the AI on top and letting it learn and grow from there. Can it eventually replace the whole ladder? I would say yes, but our approach to getting over the wall first was definitely the augment approach.
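To make the "augment, don't replace" idea concrete, here is a minimal Python sketch of the pattern Steve describes: the incumbent scorecard's output stays in place and is consumed as one feature alongside additional data, with lift measured against the traditional score alone. The file, column names, and model choice are illustrative assumptions, not CanCap's or Trust Science's actual setup.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Assumed historical originations file with outcomes; columns are hypothetical.
df = pd.read_csv("applications.csv")

features = [
    "traditional_score",    # the incumbent scorecard's output, kept as an input
    "bank_txn_volatility",  # example of additional (alternative) data
    "months_at_employer",
]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["defaulted"], test_size=0.2, random_state=42
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# "Lift" here: AUC of the augmented model vs. ranking by the traditional
# score alone (negated because a higher score should mean lower default risk).
baseline_auc = roc_auc_score(y_test, -X_test["traditional_score"])
augmented_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"traditional alone: {baseline_auc:.3f}, augmented: {augmented_auc:.3f}")
```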

[Bo]: Got it, got it. So then, you’ve recently been through this: what advice would you give to lenders that are considering implementing an AI-based credit model? This is of course a question for both of you. If we’re in early days, there’s still a lot of implementation to do, so some advice for the audience would be great.

[Steve]: Sure, I’ll give my perspective and then pass it over to Evan. I would approach it from three different angles. First, across the entire organization, you want full buy-in: it needs to be completely transparent, and there needs to be a willingness to share all the data, the business rules, whatever you think is proprietary. If you go into it sheepish about the data, it will take a lot longer and there will be a lot more bumps in the road, so one piece of good advice is to get full commitment across your organization to share everything. Two, I would look at the structure and format of your data. It certainly doesn’t need to be pristine and perfect, but if there are foundational challenges, getting the data in order first would be my recommendation. The AI-based model is still only as good as the information going into it. And third, our approach to getting comfortable on that journey was to focus on a niche segment, the fringe of our buy-box, where we would say: hey, we’re really challenged today with thin-file or bankruptcy, so let’s start with one segment and build out from there as we get comfortable. Those three approaches have served us well.
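As a hedged illustration of Steve's second and third points, checking the data's structure before modeling and piloting on one niche segment, here is a short sketch; the file, field names, and thresholds are hypothetical.

```python
import pandas as pd

# Assumed applications file; field names are hypothetical.
df = pd.read_csv("applications.csv")

# Data-readiness checks: the model is only as good as what goes into it.
assert df["application_id"].is_unique, "duplicate application IDs"
missing_share = df[["income", "bureau_score", "trade_count"]].isna().mean()
print("share of missing values per field:")
print(missing_share)

# Pilot on one fringe-of-buy-box segment (e.g. thin-file) before expanding.
thin_file = df[(df["trade_count"] < 3) | (df["months_on_file"] < 12)]
print(f"pilot segment: {len(thin_file)} of {len(df)} applications")
```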

[Bo]: Evan, do you want to jump in there?

[Evan]: Yeah, I would just add to that. CanCap is a great example of being forward-thinking and fearless, and the right partner makes a difference. There is a lot that can be done, or attempted, in-house, but there are structural advantages to having a view of the market from outside your four walls. So we’ve deliberately, and again this took extra time and money to accomplish, made it so that we are a partner to the in-house folks, the data scientists and risk officers and modelers, as well as to any data suppliers who live outside your four walls. You want to bring everybody along, so we call that the plus model. I think the mistake, and this is easy to understand in the much, much bigger organizations, is believing you can do it all yourself. You probably can, but you put yourself at some disadvantages without realizing it. It’s the same reason the big bureaus exist today outside of everybody’s four walls: they’re not slinging software at you that you are spinning up in-house, if that makes sense. And further to Steve’s point about being open and transparent, the other solution or service providers need to do that with you as well.

[Bo]: Got it, so the theme that I am really hearing is that partnering, in the true sense of the term, is really important here, and we need to engage with the vendors in a way that is extremely trusting. Is that a common pitfall, and if so, what are some of the other mistakes that you think people are making, Evan?

[Evan]: Yeah, you’ve got to trust science, and be trustworthy yourself. We are doing something that is inherently leading edge, and having been at it for a very long time, we can share a lot of lessons learned with you, our early-adopter customers. Then there’s just watching the marketplace: you heard Experian at the beginning of the day talk in terms of productizing. So if you are trying to do one-offs, or if you are trying to do this on the basis that you are going to have to maintain it yourself forever, that’s tough, speaking from the brutal cost of both time and money. It’s hard to make this scalable and compliant, as well as as secure as anything else you’ve ever done or run across. I’d say the other issue is trying to evaluate the efficacy of this kind of thing using old methods and techniques from the prior century. As Steve said, we’ve been doing scoring and model-building the same way for a long time, so there are a lot of habits, or ways people are accustomed to thinking about the evaluation, and some of that can benefit from a refresh or hitting reset. And unless you’re on a platform that’s built and fit for the purpose, it’s a common mistake to assume that the models won’t be touched or modified or updated for another year or two or three, so that you have to get everything 100% right and somehow get rid of all the risk upfront. When you’re dealing with this arena, with supervised learning and the ability to refresh models rapidly enough to make conventional practitioners’ heads spin, that’s the common mistake: not appreciating that it’s capable of harnessing chaos and turning volatility into an asset. Then I would say don’t take any shortcuts on compliance. There’s a reason Trust Science was one of the first commercial entities in the world to really dive into explainable AI. AI is super hard and costly to do, but this is a highly regulated environment, so don’t be tempted into shortcutting that. Not that anybody in this audience would, but sometimes it’s a source of going slower, and you want to go a little bit slower to go faster in the end. And then, to Steve’s earlier point, make sure you are accommodating your entire credit quality ladder, so make sure you are on something that can address that. You’ll follow your customers for life, especially if you are bringing in new-to-credit or new-to-country or structurally excluded, underbanked and unbanked people. When you get them into a relationship, you are probably going to have them for life, because no one else has warmed up to them, since they suffer on the conventional scores. So you want to be with a service that can move up the credit quality ladder as well.
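One generic way the explainability requirement Evan raises is often met with tree-based models is computing per-applicant feature contributions (SHAP values) and mapping the most negative ones to adverse-action reason codes. The sketch below illustrates that general pattern only, not Trust Science's actual method; the feature names and toy data are invented.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["bureau_score", "utilization", "months_on_file", "recent_inquiries"]

# Toy data standing in for a real modeling dataset; y=1 means "repaid".
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Per-applicant feature contributions to the model's log-odds output.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:5])

for row in contributions:
    # The most negative contributions pushed this applicant toward decline,
    # so they are natural candidates for adverse-action reason codes.
    worst = np.argsort(row)[:2]
    print("reason codes:", [features[i] for i in worst])
```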

[Bo]: Yeah, got it. And I’m trying to recall that saying, I think it’s a military one and it’s a really good one: slow is smooth and smooth is fast, or something along those lines. Anyone that has spent any time in the military can correct me, but I think it’s a really valid one. We did just have a question come in, and I’d love to pop it in; I also have my timer looking at me, which says I have 4 minutes left. The question from the audience is: what are some of the best practices for rapidly deploying AI or ML models from the analytical platform to a loan origination platform?

[Evan]: Well, Steve is actually a great example of this. When he pulled his team together with us, five days later intelligent, predictive scores were being expressed in the user interface of his loan officers. And there was a national holiday in between, so that was Monday to Friday, with Wednesday being a national holiday.

[Bo]: Only the Canadians would have a national holiday on a Wednesday.

[laughing]

[Steve]: That’s a good one.

[Evan]: The best practice there is having the foresight, or the patience and time, to get the integration with your LMS done, and not to jury-rig it, using binder twine and duct tape to make an LMS be a decision system or part of a decision management suite; they’re just two different activities. So I would say the best practice is holding to the goal: you’re trying to get risk-adjusted profit, right? So make sure you’re augmenting your workflow rather than disrupting it, enhancing rather than ripping and replacing, and you want something that’s bi-directional, capable of holding a conversation with your workflow. And if you are trying to use software or a point solution, then you had better be prepared to set up a miniature software company, a SaaS company, in-house. Again, from personal experience, that’s tough to do.
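As a rough illustration of keeping the decision service and the LMS as "two different activities" that converse over an API, here is a minimal FastAPI sketch. The endpoint shape, field names, and placeholder logic are assumptions for illustration, not Trust Science's actual interface.

```python
from typing import List, Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Application(BaseModel):
    application_id: str
    bureau_score: Optional[int] = None  # thin-file applicants may have none
    monthly_income: float

class Decision(BaseModel):
    application_id: str
    score: float
    reason_codes: List[str]

@app.post("/score", response_model=Decision)
def score(application: Application) -> Decision:
    # Placeholder standing in for the real model call; the LOS/LMS only
    # ever sees this contract, never the model internals.
    is_thin_file = application.bureau_score is None
    return Decision(
        application_id=application.application_id,
        score=0.5 if is_thin_file else 0.8,
        reason_codes=["thin_file"] if is_thin_file else [],
    )
```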

[Steve]: Yeah, I would add confidence in the model too. When we did the analytics and saw the lift, we knew there would be some time spent on explainability of the model, the code and the rules, but I think some of that is just table stakes in good AI today: if you adopt early you’re going to get all of that, and see the results, up front rather than back-ending that process. We didn’t spend an inordinate amount of time on the front end of that pilot, and, in speaking with some counterparts, that’s where they get bogged down in the process.

[Evan]: The final point on best practices: make sure that you’re architected to get consent from the borrowers or applicants. Make sure that what you’re using is able to go there, because the legislation and regulations coming down the pike on privacy and consumer protection, not only in North America but around the world, will demand that for sure.
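A minimal sketch of what "architected to get consent" can look like in practice: record which applicant consented, for what purpose, and under which policy version, and refuse to score without it. The schema and helper are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Iterable

@dataclass(frozen=True)
class ConsentRecord:
    applicant_id: str
    purpose: str          # e.g. "credit_scoring"
    policy_version: str   # which disclosure text the applicant saw
    granted_at: datetime

def require_consent(records: Iterable[ConsentRecord],
                    applicant_id: str, purpose: str) -> None:
    # Refuse to score anyone without a recorded, purpose-specific consent.
    if not any(r.applicant_id == applicant_id and r.purpose == purpose
               for r in records):
        raise PermissionError(f"no recorded consent for {applicant_id}")

records = [ConsentRecord("A-100", "credit_scoring", "v3.2",
                         datetime.now(timezone.utc))]
require_consent(records, "A-100", "credit_scoring")  # passes; scoring may proceed
```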

[Bo]: So let me ask you this, and we’ll have to make it crisp because we’re running short on time. In the last panel we had Haiyan from Prosper say, and these are my words, not hers, but basically: take the data from the pandemic and throw it out, because that’s not what we want our models to learn; there’s just so much noise in the system. That’s just my interpretation of what she said, anyway. But what did we learn in the pandemic, if anything, and what can we take forward from it?

[Steve]: Go ahead Evan.

[Evan]: It’s just that volatility and chaos are actually prime learning territory. If tomorrow is going to look like today, then you’re okay, but what the pandemic really proved is the value of something that is dynamic and learning. The edge cases teach, and a learning system can absorb the kind of chaos that happens, whether that’s, god forbid, World War Three or something very, very positive. The old way of scorecarding and credit model building depends on tomorrow looking a lot like today and yesterday, and on stable consumer behaviour.

[Bo]: And it’s increasingly not, correct?

[Evan]: Correct.

[Bo]: Chaos is constant.

[Evan]: And that’s where a learning system is a secret weapon.

[Bo]: Yeah. Well okay I’m afraid we’re going to have to leave it there, that was actually a fantastic 20 minutes well spent. Thanks both of you for joining and I appreciate you being here.
