Monday, May 27, 2024

Process Controls Can Avert Disaster


When talking with IT people, negative perceptions of process controls invariably surface as soon as the subject is raised. Unfortunately, many people have limited exposure to control theory and the benefits that can be gained.

In fact, many don’t realize they already have controls in some processes they perform constantly, but have never labeled them as such. The intent of this article is to discuss why process controls are needed to create reasonable assurances that objectives are met.

To start, we need to define what process controls are. Simply put, process controls are mechanisms designed to manage the variation associated with the attainment of the process’s objective. This is an important perspective because an organization should have a goal, and each process should be designed to support that goal by either mitigating risks or directly adding value.

To achieve this, there should be careful consideration of what controls are needed to make the outcomes of processes reasonably predictable. For example, a tape backup process should be designed to ensure that data is properly safeguarded in the event of an incident or outright disaster. To achieve the goal of protecting vital data, the organization must be able to count on that backup process running as expected.

And Then Reality Intruded

In a perfect world, operations would schedule the tasks in the archiving software, back up the data to tape, and store the newly used tapes in the tape library. The problem is that reality intrudes and introduces all kinds of variation that can make attaining the goal of safeguarding data unpredictable.

The variation comes in the form of risks. Did someone insert a new set of tapes for the backup to have sufficient storage space? Were there any files skipped because an application had them locked open? Did the tape job terminate abnormally for some reason? Where are the tapes? Can we read the data? The list can go on and on.

The whole point is that these variables, and the ones specific to your firm, represent risks that can create unpredictable attainment of the objective – in our case, the outcome could be potentially worthless tapes that people are betting their jobs on when the next crisis hits.

Building on this tape example, process controls are implemented to ensure that goals are reached. The intent is to insert sufficient controls at key points in the process to reduce the likelihood and/or impact of identified risks. By assigning responsibilities and having log sheets that are reviewed, one can track that tapes are stored and rotated properly. By reviewing the backup application’s log files, one can see if jobs completed successfully and so on.
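The log-review control described above can be automated as a first pass. The sketch below is a minimal, hypothetical example: the log format, field names, and the `review_backup_log` function are assumptions for illustration, since real backup applications each have their own log layouts.

```python
import re

# Hypothetical log lines from a backup application; real formats vary by vendor.
LOG_LINES = [
    "2024-05-20 01:05:12 JOB nightly-full STATUS completed FILES 18234 SKIPPED 0",
    "2024-05-21 01:04:58 JOB nightly-full STATUS completed FILES 18240 SKIPPED 3",
    "2024-05-22 01:02:31 JOB nightly-full STATUS aborted FILES 412 SKIPPED 0",
]

PATTERN = re.compile(
    r"JOB (?P<job>\S+) STATUS (?P<status>\S+) "
    r"FILES (?P<files>\d+) SKIPPED (?P<skipped>\d+)"
)

def review_backup_log(lines):
    """Return a list of exceptions that a reviewer should follow up on."""
    exceptions = []
    for line in lines:
        m = PATTERN.search(line)
        if not m:
            continue
        if m.group("status") != "completed":
            exceptions.append(
                f"{m.group('job')}: job did not complete ({m.group('status')})"
            )
        elif int(m.group("skipped")) > 0:
            exceptions.append(
                f"{m.group('job')}: {m.group('skipped')} files skipped "
                "(possibly locked by an application)"
            )
    return exceptions

for issue in review_backup_log(LOG_LINES):
    print(issue)
```

Note that the script only surfaces exceptions; the control is complete only when a named person is responsible for reviewing and acting on them.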

This logic of assessing risks to objectives and then mitigating risks to an acceptable level via controls must be central to any process design effort.

Just What’s Necessary

To be clear, the intent is to put in only the minimum amount of process controls necessary to achieve a reasonable expectation of the intended goal. There is no reward for seeing who can pile the most controls into a process! Absolute control is not the purpose of the process. One can imagine a draconian death march in the processes surrounding our tape example: two or more people and expensive technology involved in every task, each validating what the other did, generating a mountain of paperwork, signing everything in sight, and in the meantime costing a fortune beyond what management would deem necessary.

There is a very real and legitimate need to temper process controls with costs and this is why IT must work with management to discuss the risks associated with goals. From that starting point, they must then evaluate what controls and costs make sense to protect the value in question. From a high-level perspective, the board sets the risk appetite for the overall organization. At the management level, the risk tolerance to objectives is then set in alignment with the risk appetite.

For a given risk to a process, the inherent risk is what exists before mitigation. The risk remaining after a control has been put in place is known as “residual risk,” and that is ultimately what management must either accept and live with, or address by requesting further investigation of compensating controls.(1) If there are no controls in place, or no controls possible, then the residual risk is equal to the inherent risk.
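The arithmetic behind this, given in footnote (1), is simple enough to sketch directly. The function name and the sample risk scores below are illustrative assumptions, not part of any standard:

```python
def residual_risk(inherent_risk, pct_mitigated):
    """Residual Risk = Inherent Risk x ((100 - % mitigated) / 100)."""
    return inherent_risk * ((100 - pct_mitigated) / 100)

# An inherent risk scored at 80, with controls mitigating 75% of it,
# leaves a residual risk of 20 -- which management must accept or correct.
print(residual_risk(80, 75))

# With no controls (0% mitigated), residual risk equals inherent risk.
print(residual_risk(80, 0))
```

Whatever scoring scale the organization uses, the decision rule is the same: compare the residual number against the risk tolerance management has set.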

In cases where the residual risk is unacceptable and the only controls possible require budgetary approval, then a business case must be developed showing the options explored and why the proposed control(s) require funding to meet expectations. The point is that residual risk requires a binary decision from management – accept or correct. Those types of decisions are sometimes difficult but necessary.

But What About Productivity?

In some cases, controls do drive down perceived productivity significantly. One must step back and assess if the original throughput was really a delusion. As the old saying goes, “if you don’t have time to do it right the first time, where will you find the time to fix it?” If you thought you were doing 100 changes a day and more than half of them failed requiring major crisis management, were you really productive?

To be productive means moving toward a goal – or the objective of a process. Anything that doesn’t move you toward the goal is a waste of resources, including time and money. On that note, it is imperative to assess costs from an organizational level – not within a single area. Optimizing one area at the expense of the overall business doesn’t make any sense.

In this day and age of cost containment and quality expectations, how can organizations afford not to build processes with proper checks and balances to reasonably ensure the goals of the processes are achieved? In instances where productivity does suffer or guidance is needed for whatever reason, leverage best practice resources(2) to compare your processes and determine if there are means to improve throughput while controlling variation. Bear in mind a fundamental truth – ideal processes have controls built into them as one aspect of the design requirements. Controls should not simply be “lumped” onto existing processes or productivity is very likely to suffer and the resulting increased complexity may create new and unexpected levels of variation.

Saying There is ‘No Risk’ is Dangerous

The goal of risk management and controls is to reduce residual risks to acceptable levels, not to zero. The idea that risks can be 100% nullified is a dangerous delusion at best. First, trying to reduce risk to zero can cost a fortune. Second, it is very often dangerously unrealistic. The makers of the Titanic thought their ship was the best thing to ever float on water. Just imagine if they’d had better rivets, bulkheads that sealed sections of the ship all the way to the top, and sufficient lifesaving processes and equipment. Just imagine if they’d been willing to recognize that risks do exist. Would the outcomes have been the same? Would the results have been the same after the ship struck the iceberg?

Assurances that there is no risk breed a dangerous complacency that begs to be struck down. No application is flawless, no hardware failsafe, nobody is without fault, no control is perfect and absolute control is an utter delusion.

In summary, the minimum amount of controls must be implemented around objectives to create a reasonable assurance that goals will be achieved. Both the overall entity and IT are, and forever will be, judged on their ability to achieve goals. Organizations need to evolve their cultures beyond the misconceptions that process controls are a waste of time toward the understanding that they are key elements in process design that must be taken into account. IT is judged by what it achieves. Isn’t there value in making that achievement predictable? Absolutely. And therein lies the value of process controls.

(1) From a math perspective, Residual Risk = Inherent Risk x ((100% - % mitigated)/100). Note, there is a COSO ERM-based risk model for MS Excel that can be downloaded from the SGC website; it will be located next to the reference to this article.
(2) Such as COBIT, ITIL, ISO15000, ISO17799, NIST Special Publications, CMMI, PMI and PRINCE2. In fact, the trick often isn’t finding guidance, it’s selecting the best guidance for the situation at hand and then properly adapting the guidance to needs.
