Four years ago, well before the crisis of fall 2008 or the housing meltdown, I wrote a little piece called Against Systems that got a bit of press attention.
The bottom line was that if you design a point-based system, it will likely have flaws, and if you allow the system to enforce the rules (say, by making it a computer program, or an algorithm), human beings will tend to exploit those flaws.
Wouldn’t you know it, but this month’s Inc. Magazine has an article, “Rewriting the rules of credit,” in which economist Amar Bhide answers the following question:
Lending used to be a subjective matter: Why did we wind up with a system of stringent rules?
With this answer:
First, there was an ethos that developed in academia that said that all risks can be quantified. What economists did was say the stuff that we cannot quantify is really on the margin. And what’s essential to risk, we can pretend to reduce to one or two numbers. Once you do that, then you can create a machine. If you’re required to think of risk in a broad, holistic kind of way, it’s much more time-consuming.
Implicitly and explicitly, the government embraced this view of risk. Almost unwittingly [Fannie Mae and Freddie Mac] created the largest mechanist model of lending in the world simply by saying we will underwrite the risk of mortgages if they meet XYZ criteria. If you followed the model for a loan, the government would take it.
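That “meet XYZ criteria” style of underwriting can be sketched in a few lines of code. The criteria, numbers, and borrower below are all hypothetical, invented for illustration; the point is only that a fixed checklist is blind to anything it doesn’t measure:

```python
def mechanized_underwrite(loan):
    """Approve any loan that passes a fixed checklist (all thresholds invented).

    Like the 'XYZ criteria' model: credit score plus a stated, unverified
    debt-to-income ratio. Nothing outside the checklist is ever examined.
    """
    dti = loan["monthly_debt"] / loan["stated_monthly_income"]
    return loan["credit_score"] >= 620 and dti <= 0.43

# A borrower who inflates their stated income sails right through,
# because no human ever looks at a pay stub.
gamed_loan = {
    "credit_score": 650,
    "monthly_debt": 4000,
    "stated_monthly_income": 10000,  # stated, not verified
}
print(mechanized_underwrite(gamed_loan))  # True
```

Once the rule is a program, expanding lending means running the program more often, which is exactly why a bank would “put the pedal to the metal” on whatever lending fits the machine.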
The interesting part is that not all lending can be equally mechanized and scaled up. And therein lies the rub. It means if I’m a bank, and I want to expand, I’m going to favor the activity where I can put the pedal to the metal fastest.
And small-business lending does not fit that model?
Correct. It was and remains an activity that requires a banker to go and talk to the borrower. Analysts can pretend that all housing loans are the same, but with small business, the pretending completely defies belief. So small business gets the short end of the stick.
Now think about that ‘pretending’ that all loans are the same: It meant no human being was looking at the whole balance sheet for holes. Often, it meant that no human was physically examining pay stubs.
We all know how that works out.
So what happens when we rely on an impersonal, mechanistic process to assess our risk, both for our process and for our product?
I hope you can see where I’m going with this.
It’s a great interview, and it should be up on Inc.com in a few days; I am a subscriber and get the “content” early.
I’ll link to it when it becomes available.
In the meantime, I hope you’ll join me in embracing that ‘broad, holistic’ view of risk that Amar is talking about.
Otherwise, we’re not testers, but just check-ers. Ya know?