This is list number six. If you haven’t yet, go back and read the post Top Ten Reasons Why I Hate Lists. As with all lists, this is short, blunt, and intended to start a discussion.
Risk mitigation, risk assessment, risk management … we insure ourselves against risk, put buffers in our estimates to compensate for risk, and we make decisions based on risk.
Or do we?
We think risk is a thing. It’s rarely a thing. Risk is part of the system we are creating. It’s the variation, it’s the unknowns, it’s the bit that makes “value add” something someone hasn’t done before, and risk is always a result of the experiments we are running.
Quantifying risk is like quantifying love. We don’t say, “There is a 0.9 correlation between your attractiveness attributes and my desire criteria; therefore love, marriage, and children are a reasonably low-risk venture and I am willing to invest my life in it.” With eharmony we could do exactly that. But we know that people don’t work that way. Even with firm evaluation criteria, people themselves are fraught with variation.
Risk, too, is fraught with variation, and regardless of how much actuarial science we bring to the table, it’s always going to be a gamble.
Risk, therefore, is actually an emotional determination. Are we (personally) willing to accept the risk we see before us? Is the gamble of the venture appealing enough to take the risk?
Risk always involves some degree of unknown. 100% certain things are not risky - they may be foolish, but not risky. If there is a man around the corner with a machine gun shooting everyone with perfect accuracy, I can be certain that if I come around the corner he will shoot me. Turning the corner is not risky, it’s just a really bad idea.
Time and again I see one project manager, one CEO, one person making decisions on risk. It’s their job to assess risk, they tell me. But they are one person addressing a sea of unknowns. They are one perspective on a large, multifaceted, and evolving target.
Risk, therefore, requires some care in consideration. Different people bring different viewpoints. Engineers, designers, managers, finance, sales, marketing, regulatory compliance, HR … all bring different professional views of risk.
One perspective is a limited perspective. One perspective guarantees your view of risk will be skewed.
Your product, your project, and your context are always evolving. Risk is evolving with them. Very few people leave the house assessing all the risks for their drive. We start the drive and watch the road vigilantly for bicycles, potholes, impaired drivers, or other hazards. While backseat drivers may annoy us, from time to time they do call our attention to risks from their perspective as well (see #3).
With many products (medicine, software, insurance policies, electronics) even after the release of a product we are still watching to make sure the product works right, doesn’t hurt anyone, and isn’t becoming illegal in some way. Doing a risk assessment up front, creating a plan, and then not reassessing during the project is shortsighted.
The framing effect, experimenter’s bias, the availability heuristic, and a host of other cognitive biases complement and are the product of the previous four risks. Business, after all, is a human endeavor, conducted by and for humans. We are the creatures that make the choices, we interpret the data, we take the risks. When we convince ourselves that risk is solely based on statistical models and that our interpretation of those statistics is rational simply “because the numbers say so,” we are placing ourselves in a very precarious position indeed.
We have all seen the leaders of our world gather “facts” and then dismiss them because they have a vision. When they are lucky and succeed (Steve Jobs, Winston Churchill) they are lauded forever. When they are unlucky and get nailed … they are not. Either way, they made decisions like we all do … partially on information and partially on intuition.
When we understand that part of the variation in our risk assessment system is our own interpretation of the data … that’s when we can start to honestly (not perfectly) assess risk.