When I was an undergraduate at the University of Tulsa, I was in the computer lab[1] writing my final paper for the semester. As was common, it was quite late in the evening and the lab was full of students cramming in that last assignment… just trying to get done.
At one point in the evening, the football player a few seats over stood up to walk to the printer. On his way, he tripped over the power cord, which, strangely, was just lying across the aisle where people walked. These were desktop computers (not laptops with batteries, which didn’t exist yet), so the bank of computers on that table immediately went black. The football player was really pissed off because he had not saved his work. Keep in mind, back in those days, you had to bring a 3.5” floppy disk and insert it into the floppy drive to save your document. There was no saving to the hard drive or to the cloud at the computer lab.[2]
So, this guy was out of luck. It was a simple mistake, but the only thing he could do was start over. Fortunately, his content was probably still fresh in his mind. So, off to work he went, feverishly re-typing his thoughts before they vanished into the fog of midnight brain.
About an hour later, presumably wanting a break, the same guy stood up and, once again, as if on a B-rated comedy tour, tripped over the same cord. The same bank of computers obeyed the same laws of physics and, again, promptly went black. Partly embarrassed, partly mad (because he still had not saved his work… yeah, I know. He probably didn’t bring a floppy disk to the lab), the football player, a large hulk of a man, packed up and left, mumbling something resembling “Screw it,” though not in those exact words.
This story epitomizes a failure we all share in business and in life: we rarely evaluate our predictive models of what might go wrong, and we make little effort to mitigate even known risks. The first time he tripped over the power cord, it could be considered an unknown risk (to him, at least). But the second time, it was a known risk (although an unmitigated one in this situation).
No one can fully manage risk. We can mitigate possible negative outcomes by listing them, assigning probabilities of occurrence, scoring their probable impact, and taking measures to prevent them from happening. Based on our thoughtful risk analysis, some risks we choose to mitigate; some we choose to accept because they are either too improbable or too onerous on the budget to act upon. This is about the best we can do. And this is fine so far… for those things we have considered.
The problem with our risk analysis exercises is that we are often lulled into believing two untruths:
Untruth #1 – that an unlikely event is not just improbable but impossible, precisely because it has never happened… so we assign it a 0% probability.
Untruth #2 – that we have thought of everything. In reality, beyond the known risks we mitigate (or ignore), the biggest risks are unknown: so-called off-model risks.
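To make the scoring exercise concrete, here is a minimal sketch in Python (the risks, probabilities, and dollar impacts below are entirely hypothetical). Notice how Untruth #1 plays out: assign a 0% probability and the risk silently vanishes from the analysis.

```python
# A minimal risk register: expected loss = probability x impact.
# All risk names, probabilities, and impacts are hypothetical illustrations.
risks = [
    ("Server outage",       0.10, 50_000),  # probability per year, impact in dollars
    ("Key employee leaves", 0.20, 80_000),
    ("Card-testing fraud",  0.00, 30_000),  # "never happened," so scored at 0%
]

for name, prob, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name:22s} expected loss: ${prob * impact:>10,.2f}")

# The 0% risk prints $0.00: the model treats "improbable" as "impossible,"
# and the off-model risk drops out of the analysis entirely.
```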
Off-Model Risks
Be careful of Black Swans [ref: Nassim Taleb] – unexpected events. Unknown risks are not modeled because no one thought of them, and these off-model risks can be a tsunami compared to the waves and swells of on-model risks.
Off-Model Risk – an Example
Most business owners who accept credit card payments as a merchant do not realize that they are subject to potentially large fines and penalties from fraudulent transaction attempts. It’s in the fine print of the merchant agreement. Internet businesses are particularly vulnerable.
How it Works
For every transaction attempt, the technology sends the information from your website to Visa/Mastercard/etc. for verification. Each verification attempt costs the merchant about $0.15. Not a big deal… because, as a percentage of a legitimate transaction, this is negligible. However, if a scammer uses your website to test unknown credit cards – say, 100,000 of them over a Saturday and another 100,000 on Sunday, when you are likely not paying attention – suddenly you have $30,000 in credit card fees over the weekend without making a single sale. A rather unfortunate surprise.
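The back-of-the-envelope arithmetic is simple enough to script. A minimal sketch, using the $0.15 per-attempt fee cited above (actual fees vary by processor):

```python
# Fee exposure from card-testing fraud: attempts x per-attempt verification fee.
FEE_PER_ATTEMPT = 0.15  # dollars, the figure cited above; varies by processor

attempts_saturday = 100_000
attempts_sunday = 100_000

total_fees = (attempts_saturday + attempts_sunday) * FEE_PER_ATTEMPT
print(f"Weekend fee exposure: ${total_fees:,.0f}")  # -> Weekend fee exposure: $30,000
```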
Are you contractually obligated to pay these fees? Yes, if you honor your signature on the agreement with your credit card processor. When something similar happened to Target, they incurred a $68 million fine.
Fortunately, credit card processors have fraud detection suites that help mitigate this threat. You can activate these to limit the transaction velocity – the number of transactions per hour or per day. You can also limit the number of transactions per IP address per day (although fraudsters are clever and typically hide or anonymize their IP addresses), and enable a host of other protections, like requiring the CVC code on the back of the card to match.
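A velocity limit is, conceptually, just a counter over a time window. The real filters run processor-side inside those fraud suites, but a minimal sketch of the idea (all names and thresholds below are hypothetical) might look like this:

```python
from collections import deque
import time

class VelocityFilter:
    """A toy sketch of a transaction velocity limit: reject attempts once
    a rate threshold is exceeded. Real fraud suites run processor-side
    and combine many more signals (IP address, CVC match, etc.)."""

    def __init__(self, max_attempts: int, window_seconds: float):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.timestamps = deque()  # times of recently allowed attempts

    def allow(self, now=None):
        now = time.time() if now is None else now
        # Drop attempts that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_attempts:
            return False  # over the limit: reject before it reaches the card network
        self.timestamps.append(now)
        return True

# Hypothetical threshold: at most 100 attempts per hour.
vf = VelocityFilter(max_attempts=100, window_seconds=3600)
results = [vf.allow(now=i * 3.0) for i in range(1_000)]  # a burst, one attempt every 3 seconds
print(f"allowed {sum(results)}, rejected {len(results) - sum(results)}")  # allowed 100, rejected 900
```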
The point is, these fraud detection filters are typically turned off from the outset, and the business must proactively manage them itself. This is strange: one might assume it would be better for the customer to have these fraud filters turned on by default and let the customer relax them as needed. But keep in mind, the credit card merchant provider benefits financially from this mistake. Misaligned incentives.
Surprisingly, most merchants do not realize these fraud detection filters exist, nor that they could incur a nasty financial surprise. I ran PEI (www.PrivateEquityInfo.com) for 12 years, accepting credit card payments online, before I learned of this risk. We stumbled across it when our web development team was testing the system, which caused the credit card processor to see unusual activity and place a temporary hold on our account. Fortunately, they called and explained the situation. As I asked more questions (a lot of questions), it dawned on me that we had been unwittingly accepting a fairly sizable, seemingly random, off-model risk.
I’m sure there are other risks we still unknowingly accept. I suppose, if we knew them all up front, no one would start a business.
Not Being Unlucky
A portion of success in business is sometimes just getting lucky. We normally think about luck on the upside, but there’s also a great deal of luck on the downside… sometimes we are lucky in that a potential negative event never materializes. This might have been the case with this particular risk for PEI. It was stopped before it occurred. Disaster avoided.
UPDATE – About 10 months after this discovery, a credit card hacker ran a credit card validation check against the PEI website. I first noticed it after about 240 declined-transaction notices hit my email inbox; they were coming in at a rate of about one attempt every three seconds. I was receiving these notices by email because the attempts triggered the fraud protection filters I had set up just 10 months prior.
By the time we blocked the perpetrator’s IP address and killed all ongoing credit card processes running on the server, 533 transaction attempts had been made – in about 30 minutes total. Fortunately, this happened during the normal workday, so I caught it and killed it quickly. Also fortunately, I had set up the fraud filters, so these transactions were rejected and I owed no fees to my credit card processor.
Had I not set up the filters, and had this happened over a weekend when I might not have noticed so quickly, there could have been up to 72,000 attempts (roughly 60 hours at one attempt every three seconds). At $0.15 each, that would have been a $10,800 bill over the weekend due to fraud. I would say we got lucky there.
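The worst-case arithmetic is easy to check. A quick sketch, assuming the attack runs unattended from Friday evening to Monday morning (about 60 hours) at the observed rate:

```python
# Worst-case weekend exposure at the observed attack rate.
SECONDS_PER_ATTEMPT = 3   # observed: about one attempt every three seconds
WEEKEND_HOURS = 60        # assumption: Friday evening through Monday morning
FEE_PER_ATTEMPT = 0.15    # dollars per verification attempt

attempts = WEEKEND_HOURS * 3600 // SECONDS_PER_ATTEMPT
print(f"{attempts:,} attempts -> ${attempts * FEE_PER_ATTEMPT:,.0f} in fees")
# -> 72,000 attempts -> $10,800 in fees
```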
Summary
The point here is two-fold:
- Off-model risks (the risks we DON’T think of when we consider the various risks our businesses are exposed to) can eclipse the known risks that we model.
- Sometimes we get lucky on the upside and sometimes we get lucky by missing a potential downside. Near-misses are also lucky.
FOOTNOTES:
1. In the early ’90s, before we all had our own computers, universities had computer labs where you could sit in front of one and work on it. I suppose these rooms have long since been converted into study halls.
2. Incidentally, I once accidentally deleted the operating system from a computer at the computer lab at school. To completely erase a diskette in the disk drive, you would enter the DOS command del *.* …this means “delete every file with any extension in the current directory.” “Are you sure?” “Yes.” Click. When the screen began to scroll through all the files being deleted, it occurred to me that I was not on the floppy drive for my disk when I entered this command. I was on the C: drive – specifically, in the DOS directory (C:\DOS). DOS was the operating system for that computer. And off it went, deleting itself, until it started deleting the file that allows it to delete files (or to find them), and it just permanently hung there. It has always been humorous to me that DOS could delete itself. I looked at my roommate, Brad. He looked at me, immediately knowing what had just happened. We simultaneously looked around the computer lab to see if anyone was watching. Nope. We quickly packed up and left. Brad later heard Stefan (the student lab technician) complaining that some idiot had deleted DOS from the computer and it had taken him hours to figure it out and restore the computer to working condition. It was a proud moment for me, in retrospect.