In an ideal world, every shop would have a managed patching process that responded immediately to newly published patches, testing each one and applying it as soon as it was deemed safe and applicable. But the world is not an ideal one, and in real life we have to make do with limited resources: physical, temporal and financial.
Patches are generally released for a few key reasons: security, stability, performance and, occasionally, to supply new features. Except for the addition of new features, which is normally handled through a different release process, a patch represents a fix to a known issue. This is not an "if it is not broken, don't fix it" scenario but an "it is broken and has not completely failed yet" scenario, which demands attention. The sooner the better!
Taking a "sit back and wait" approach to patches is unwise, because the existence of a new patch means that malicious hackers have a "fix" to analyze. Even if an exploit did not exist previously, one will very shortly. The release of the patch itself can trigger the immediate need for that very patch.
This patch ecosystem demands a "patch quickly" mentality. Patches should never sit; they need to be applied promptly, often as soon as they are released and tested. Waiting to patch can mean running with critical security bugs or keeping systems unnecessarily unreliable.
Small IT shops rarely, if ever, have test environments, whether for servers, networking equipment or even desktops. This is not ideal but, realistically, even if those environments were available, few small shops have the spare IT staff to run those tests in a timely manner.
This is not as bleak as it sounds. The testing done for most patches is redundant with testing the vendor has already performed. Vendors cannot possibly test every hardware and software interaction that could ever happen with their products, but they generally test wide ranges of permutations and focus on the areas where interactions are most likely.
It is rare for a major vendor to cripple their own software with bad patches. Yes, it does happen, and having good backups and rollback plans is important, but in day-to-day operations patching is a relatively safe process, so it is far better to do it promptly than to wait for opportunities that may or may not occur.
Like any system change, patches are best applied in frequent, small doses. If patches are applied promptly, then normally only one or a few patches must be applied at the same time. For operating systems you may still have to deal with multiple patches at once, especially if patching only weekly, but seldom must you patch dozens or hundreds of files at one time when working in this manner. Done this way, it is vastly easier to evaluate patches for adverse effects and to roll back if a patch process goes badly.
Applying many patches at once increases the chances that something will go wrong and, when it does, identifying which patch(es) is at fault and producing a path to remediation can be much more difficult.
Delayed patching provides little or no advantage to either IT or the business, but it does carry substantial risk to security, stability and performance. The best practice for patching in a small environment is either to allow systems to self-patch as quickly as possible or to schedule a regular patching process, perhaps weekly, during a time when the business is most prepared for patching to fail and for patch remediation to be handled.
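As one illustration of the scheduled approach, here is a sketch of a weekly patch job on a Debian or Ubuntu system, written as a root crontab entry. The specific time (3 a.m. Sunday) is an assumption standing in for whatever window your business is best prepared to handle a failed patch; other platforms have equivalent mechanisms, and tools such as unattended-upgrades can handle the fully automatic case instead.

```shell
# Example root crontab entry (edit with: crontab -e)
# Runs at 03:00 every Sunday: refreshes the package lists, then applies
# available upgrades non-interactively and logs the result for review.
0 3 * * 0  apt-get update && apt-get -y upgrade >> /var/log/weekly-patch.log 2>&1
```

Keeping the log makes the Monday-morning review part of the process: a quick check confirms the run succeeded and shows exactly which small batch of patches was applied, which is what makes rollback tractable if something misbehaves.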
Whether you choose to patch automatically or simply to do so regularly through a manual process, patch often and promptly for best results.