Something that I see discussed quite often in IT circles is “which version of software should I install?” This could apply to a database, an application, firmware or, probably most often, an operating system. And with the upcoming end of support life for Windows XP, the topic has reached a fever pitch.
There are effectively two sides to this discussion. One side believes that the latest and, presumably, greatest software should always be used. The other believes that software needs time to mature and takes a “wait and see” approach. This second camp may even consider each version to be a different product rather than a point on a continuum of development.
Both approaches have their merits and neither should exist completely without the other. Blindly updating software is not wise, but neither is avoiding patches and updates without reason.
First, there are two completely different scenarios to consider. One is the updating of current, existing software, where the assumption is that the current state of things is “working,” with the accepted possibility that “working” might include a discovered security exposure that requires an update to close. The other is a new deployment, where nothing exists currently and we are starting from scratch.
Let’s start with the second case, as it is far easier to provide guidance on.
In the case of new software deployments (or new operating systems), always use the current, most recent version of the software unless there is a clearly known technology limitation preventing it – such as known bugs or software incompatibilities.
Software is not like other types of products, especially not in today’s world of online patch releases and updates. I assume that the mentality that “old versions of software might be preferable to current ones” comes partly from experience with physical products (watches, cars, dishes, furniture, wine), where a specific year or model might be superior to a newer model for various reasons. It might also come from legacy software delivery modes where finished software products were just “thrown over the wall” and the final state was, quite simply, the final state – no reasonable opportunities for updates, patches or fixes. Neither of these cases applies to modern business software (with only the rarest of exceptions).
Software development is roughly a continuum. Normal development processes have new software being built on top of old software either directly (by creating updates to an existing code base) or indirectly (by rebuilding based on knowledge gained from having built a previous version of the software).
The idea is that each subsequent version of software is superior to the one preceding it. This is not guaranteed, of course – there are such concepts as regression errors and just bad development – but by and large, software improves over time, especially when we are talking about enterprise class software used in businesses and under active development.
New software is not just the next phase of the old software; it also represents, in nearly all cases, the current state of patches, bug fixes, updates and, when necessary, changes in approach or technique. New software, coming from quality shops, is almost exclusively better than old software. Software evolves and matures.
Beyond the quality of software itself, there is the concept of investing in the future. Software is not something that can sit on the shelf forever. It needs to stay, to some degree, up to date or it eventually stops functioning: the platform it runs on changes, new issues come to light, security holes are discovered or needs change.
Installing old software means that there is an investment in the past, an investment in installing, learning, using and supporting old technology. This is called “technical debt.” This old technology might last for years or even decades, but old software loses value over time and becomes increasingly expensive to support both for the vendors, if they continue to support it, and for the end users, who have to support it.
The same concept of technical debt applies to the software vendors in question. There is a very large cost in creating software and especially in maintaining multiple versions of that software. Software vendors have a lot of incentive to reduce support for older versions to focus resources on current software releases (this is a major reason why SaaS deployments are so popular: the vendor controls the available versions and can eliminate legacy versions through updates).
If customers require support for old versions, the cost must be absorbed somewhere, and often it is absorbed both as a monetary impact on all customers and as a decrease in focus on the new product. Development teams must be split to support patching old versions as well as developing the new. The more effort that must go into old versions, the less effort that can be put into new improvements.
Within the framework of what I have already said, it is important to talk about code maturity. Code maturity is often stated as a reason for deploying “old code,” but I think this is an IT misunderstanding of software development processes. Just because a released line of code has been out and in use for a while does not really make it more mature.
Code does not change in the wild; it just sits there. Its maturity is “locked” on the day that it is released. If it is patched, then yes, it matures somewhat post-release. But later versions of the same software, based on the same code base but more up to date, are truly the more “mature” code, as they have been reviewed, updated and tested to a greater degree than the early release of the same code.
This is in contrast to, say, a car, where each release is a fresh thing with new opportunities for mechanical problems and different reliability concerns – where waiting a few years gives you a chance to see what reliability issues get uncovered. Software is not like this. So the concept of wanting more mature software would push you to deploy the “latest and greatest” rather than the “tried and true.”
For new software deployments, the newer the software installed, the better opportunity to leverage the latest features and the most time before inevitable obsolescence takes its toll. All software ages, so installing newer software gives the best chance that that software will last for the longest time. It provides the best flexibility for the unknown future.
Following this line of thinking might lead us to feel that deploying pre-release or even beta software would make sense as well. And while there might be specific cases where this does make sense, such as in “test groups” that check out software before releasing it to the company at large, in general it does not. The nature of pre-release software is that it is not supported and may contain code that never will be.

That brings us to the other situation, the one in which we are updating existing software. This, of course, is a completely different scenario from a fresh install and there are many, many more factors involved. One of the biggest factors for most situations is licensing. Updating software regularly may incur licensing fees that need to be factored into the benefits and cost equation. Some products, like most open source software, do not have this cost and can be updated as soon as new versions are available.
The other really large factor in updating software is the human effort cost of the update itself – unlike a fresh installation, where the effort of installing is effectively a break-even between old software and new.
In reality, new software tends to be easier to install than old software, simply due to improvements and advancements. For updates, though, the math is different: maintaining a single version of software for a decade means that no resources were dedicated, during that time, to upgrade processes, while upgrading annually over the same decade means resources were expended ten separate times. That repeated effort makes regular updating much harder to cost justify.
But there is more than just the effort of the update process itself; there is also the continuous training needed for end users, who will be forced to experience more changes, more often, through constant upgrades.
This might make updating software sound like a negative, but it is not. It is simply an equation where each side needs to be weighed. Regular updates often mean small, incremental changes rather than large leaps, allowing end users to adapt more naturally.
Regular updates mean that update processes are often easier and more predictable. Regular updates mean that technical debt is always managed and the benefits of the newer versions – which may be features, efficiencies or security improvements – are available sooner, allowing them to be leveraged for a longer period of time.
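To make that weighing concrete, here is a minimal sketch of the back-of-the-envelope math involved. Every figure in it – hours, rates, licensing fees – is a hypothetical placeholder, not a vendor number; the point is only that the cost side of the equation is easy to tally, while the benefit side (earlier features, security fixes, reduced technical debt) still has to be judged against it.

```python
# Illustrative only: a toy cost comparison of update cadences.
# All figures (hours, rates, licensing fees) are hypothetical placeholders.

def total_cost(upgrades, effort_hours_each, hourly_rate, license_fee_each,
               training_hours_each):
    """Rough cumulative cost of performing a number of upgrades."""
    per_upgrade = (effort_hours_each + training_hours_each) * hourly_rate + license_fee_each
    return upgrades * per_upgrade

# Scenario A: one small upgrade every year for a decade.
annual = total_cost(upgrades=10, effort_hours_each=40, hourly_rate=75,
                    license_fee_each=5000, training_hours_each=8)

# Scenario B: one large "big bang" upgrade at the end of the decade.
# Larger effort and training per event, since the change is bigger.
big_bang = total_cost(upgrades=1, effort_hours_each=200, hourly_rate=75,
                      license_fee_each=5000, training_hours_each=40)

print(f"Ten annual upgrades:  ${annual:,.0f}")
print(f"One big-bang upgrade: ${big_bang:,.0f}")
# The raw totals favor deferring, which is exactly why the benefit side -
# earlier features, security fixes and reduced technical debt - has to be
# weighed against them before deciding.
```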
Taking what we have learned from the two scenarios above, there is another important takeaway to be found here. Once the decision to perform an update has been made, the question is often “to what version do we update?” In reality, every update that is more than a standard patching process is really a miniature “new software” buying decision. And the logic for why we “always” install the newest available version when doing a fresh install applies here as well. So when performing an update, we should almost always be updating as far as we can – hopefully to the current version.
To apply the Microsoft example again, we can take an organization that has Windows XP deployed today. The business decides to invest in an update cycle to a newer version, not just continued patching. There are several versions of the Windows desktop platform that are still under active support from Microsoft. These include Windows Vista, Windows 7, Windows 8 and Windows 8.1. Updating to one of the less current versions results in less time before that version’s end of life, which increases organizational risk.
Using older versions means continued investment in already old technologies, which means an increase in technical debt and less access to new features that may prove to be beneficial once available. In this particular example, newer versions are also considered to be more secure and require fewer hardware resources.
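To put rough numbers on that risk, here is a minimal sketch comparing remaining support windows for a hypothetical migration completed in mid-2014. The end-of-extended-support dates are the ones Microsoft had published at the time (Windows 8 is omitted because its support track folds into Windows 8.1); the migration date and the calculation itself are purely illustrative.

```python
# Illustrative only: how much supported lifetime remains for each target
# version if an organization migrates off Windows XP in mid-2014.
from datetime import date

END_OF_EXTENDED_SUPPORT = {
    "Windows Vista": date(2017, 4, 11),
    "Windows 7": date(2020, 1, 14),
    "Windows 8.1": date(2023, 1, 10),
}

migration_date = date(2014, 7, 1)  # hypothetical project completion date

for version, eol in sorted(END_OF_EXTENDED_SUPPORT.items(), key=lambda item: item[1]):
    years_left = (eol - migration_date).days / 365.25
    print(f"{version:13s} -> about {years_left:.1f} years of support remaining")
# The further back the target version, the sooner the next forced
# migration arrives - which is the organizational risk described above.
```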
Every business needs to find the right balance for its own existing software update cycles. Every business and every software package is different. Enterprise software like Microsoft Windows, Microsoft Office or an Oracle Database follows these models very well. Small software projects, and those falling near the bespoke range, may have a more dynamic and unpredictable release cycle but will generally still follow most of these rules.
But the rules of thumb are relatively easy:
When deploying new software or updating, shoot for the latest reasonable version. Use any deployment opportunity to eliminate as much technical debt as possible.
When software already exists, weigh factors such as human effort, licensing costs, environmental consistency and compatibility testing against benefits in features, performance and technical debt.