One of the primary drivers for migrating to cloud computing is the cost savings of leasing compute time versus making a capital investment in hardware that eventually depreciates. The problem is that cloud computing costs are often poorly managed, and any potential savings is lost.
Cloud customers need to adopt a new mindset when it comes to cloud costs. With your on-premises hardware, your cost is in acquisition, and usage cost is usually limited to the electric bill. So it’s common for users to leave systems running overnight or when they are away.
With the cloud, everything is metered, whether on AWS, Microsoft Azure or Google Cloud. You pay for every bit of compute, every byte of memory, and every byte that passes through the network and onto storage. That requires a frugality most users are not accustomed to and don't think about.
Thus, cloud computing costs can be a death by a thousand cuts. Individually, each resource (cut) seems financially insignificant, but add them up and you bleed out. Some companies get sticker shock after their first month's bill, while others see steadily rising cost curves months or years after they adopt the cloud. Consequently, the need to reduce AWS and Azure costs is pressing.
Here are nine tips to help reduce cloud costs:
- Stop Unused and Unresponsive Instances
- Create Alerts
- Utilize your Cloud Provider’s Autoscaling
- Monitor to Reduce Cloud Traffic
- Buy Reserved and Spot Instances
- Serverless Computing
- Don't Migrate Every App
- Use AI and Machine Learning
- Consolidated Billing
Stop Unused and Unresponsive Instances

Most cloud users learn this the hard way: virtual instances spun up on any cloud provider cost you money even if they sit idle and unused. Developers need to learn to stop instances they are no longer using, whether they are heading to lunch, to a meeting, or home for the night.
There are several ways to do this. Instances can be stopped manually through the cloud provider's portal, by writing scheduling scripts, or by automating the scheduling process, the last of which is available through many cloud management platforms, such as IBM Cloud Orchestrator, Apache CloudStack, or Symantec Web.
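Whether manual or scripted, the decision behind stopping unused instances is the same: find what has sat idle past some cutoff. A minimal sketch of that filtering logic (the instance records and the seven-day threshold are illustrative assumptions, not any provider's API):

```python
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(days=7)  # assumed policy: flag after a week idle

def instances_to_stop(instances, now):
    """Return IDs of running instances whose last activity exceeds the threshold."""
    return [
        inst["id"]
        for inst in instances
        if inst["state"] == "running"
        and now - inst["last_activity"] > IDLE_THRESHOLD
    ]

now = datetime(2024, 6, 15)
fleet = [
    {"id": "i-0aaa", "state": "running", "last_activity": datetime(2024, 6, 1)},
    {"id": "i-0bbb", "state": "running", "last_activity": datetime(2024, 6, 14)},
    {"id": "i-0ccc", "state": "stopped", "last_activity": datetime(2024, 5, 1)},
]
print(instances_to_stop(fleet, now))  # → ['i-0aaa']
```

In a real script, the returned IDs would feed the provider's stop-instance call.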
Automating the scheduling process is the most cost-efficient approach, since it requires no human intervention. You can set your cloud instances to run between 8:00 a.m. and 8:00 p.m., Monday through Friday, and you can tag instances that need to stay alive so they are not stopped outside scheduled hours.
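That 8-to-8 weekday schedule with a keep-alive exemption reduces to a few lines of scheduler logic. A sketch, where the tag name is a made-up example:

```python
from datetime import datetime

KEEP_ALIVE_TAG = "keep-alive"  # hypothetical tag exempting an instance

def should_be_running(now: datetime, tags: set) -> bool:
    """Business-hours schedule: Mon-Fri, 8:00-20:00, unless tagged keep-alive."""
    if KEEP_ALIVE_TAG in tags:
        return True
    is_weekday = now.weekday() < 5   # Monday=0 .. Friday=4
    in_hours = 8 <= now.hour < 20
    return is_weekday and in_hours

print(should_be_running(datetime(2024, 6, 12, 14, 0), set()))           # Wednesday 2 p.m. → True
print(should_be_running(datetime(2024, 6, 15, 14, 0), set()))           # Saturday 2 p.m. → False
print(should_be_running(datetime(2024, 6, 15, 14, 0), {"keep-alive"}))  # tagged → True
```

A scheduler would run this check periodically and stop or start instances whose state disagrees with the result.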
Create Alerts

Cloud providers and third-party cloud management platforms also offer policy-driven automation, where you create rules ("policies") that define not only what actions to take when certain events occur, but also what notifications to send. These can include rules that:
- Inform you when your projected monthly spending has reached a certain point, such as your monthly budget.
- Inform you if cloud storage costs increase beyond a certain point.
- Inform you when usage in an instance justifies changing its pricing plan.
- Inform you of unused instances or storage volumes after a set number of days.
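Under the hood, a policy engine of this kind compares observed metrics against thresholds and emits notifications for any that are exceeded. A toy sketch; the metric names and dollar thresholds are invented for illustration:

```python
def check_policies(metrics, policies):
    """Return an alert message for every policy whose threshold is exceeded."""
    return [
        f"ALERT: {name} is {metrics[name]:.2f}, threshold {limit:.2f}"
        for name, limit in policies.items()
        if metrics.get(name, 0) > limit
    ]

# Assumed policies: monthly budget and a storage-cost ceiling, in dollars.
policies = {"projected_monthly_spend": 5000.0, "storage_cost": 800.0}
metrics = {"projected_monthly_spend": 5600.0, "storage_cost": 750.0}

for alert in check_policies(metrics, policies):
    print(alert)  # fires only for the projected spend, which exceeds its limit
```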
Utilize Your Cloud Provider's Autoscaling

All cloud platforms provide autoscaling mechanisms to handle changes in load in both directions, and third parties offer them as well. If you allocate 24 cores and 2 TB of memory but use at most a dozen cores and half a terabyte, the autoscaler will flag this and suggest a lower-cost plan.
The same holds true in the opposite direction, since a higher-capacity plan will be cheaper in the long run than a low-capacity plan with monthly overruns.
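Rightsizing boils down to comparing allocation against observed peak usage plus some headroom. A sketch using the 24-core, 2 TB example; the 25% headroom factor is an assumption:

```python
def suggest_allocation(alloc, peak, headroom=1.25):
    """Suggest a smaller allocation if peak usage plus headroom fits under it."""
    suggested = {k: peak[k] * headroom for k in alloc}
    if all(suggested[k] < alloc[k] for k in alloc):
        return suggested
    return alloc  # already right-sized (or under-provisioned)

alloc = {"cores": 24, "memory_tb": 2.0}   # what you pay for
peak = {"cores": 12, "memory_tb": 0.5}    # what you actually use, at best

print(suggest_allocation(alloc, peak))  # → {'cores': 15.0, 'memory_tb': 0.625}
```

A real autoscaler would then map the suggestion onto the provider's nearest instance size.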
Monitor to Reduce Cloud Traffic
Given cloud computing's automated nature, numerous issues can lead to an explosion in costs. The root of efficiency is proactive monitoring, a critical tactic for containing costs.
Closely related: a common mistake new cloud users make is forgetting (or not knowing) that data transferred to and from the cloud is metered, and therefore costs money. Overall monitoring is a good idea, along with deciding what to keep on premises and what should go into the cloud.
Depending on your business, you might also consider using edge computing to offload from the cloud. This is especially true if your business is data-intensive, since it also has the benefit of moving data closer to the customer and still keeping it out of the cloud.
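To see why transfer metering matters, it helps to run the numbers. A back-of-the-envelope egress estimate with tiered pricing; the per-GB rates and tier sizes below are illustrative assumptions, since real prices vary by provider and region:

```python
# Illustrative tiers: (GB in tier, $ per GB); None marks the unlimited final tier.
TIERS = [(10 * 1024, 0.09), (40 * 1024, 0.085), (None, 0.07)]

def egress_cost(gb):
    """Sum outbound-transfer cost across the pricing tiers."""
    cost, remaining = 0.0, gb
    for size, rate in TIERS:
        used = remaining if size is None else min(remaining, size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(f"5 TB out/month ≈ ${egress_cost(5 * 1024):,.2f}")  # adds up quickly
```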
Buy Reserved and Spot Instances

Once you have several months of analytics to determine your average monthly usage, consider a reserved instance, a commitment to use the service for one to three years. All of the major cloud providers offer them, and the savings can be considerable: up to 75% over equivalent on-demand capacity. If you exceed the reserved capacity you pay for the overage, but even with modest overruns, the discounts still make it a big savings over on-demand.
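The arithmetic behind that claim, with assumed prices: an instance at $0.40/hour on demand versus a one-year reservation at a 60% discount, plus 500 hours of overage billed at the on-demand rate:

```python
ON_DEMAND = 0.40              # assumed on-demand rate, $/hour
RESERVED = ON_DEMAND * 0.40   # assumed 60% reserved-instance discount
HOURS_PER_YEAR = 8760

reserved_base = RESERVED * HOURS_PER_YEAR            # committed capacity, paid all year
overage = ON_DEMAND * 500                            # 500 extra hours at on-demand rates
all_on_demand = ON_DEMAND * (HOURS_PER_YEAR + 500)   # same usage with no reservation

print(f"reserved + overage: ${reserved_base + overage:,.2f}")
print(f"all on-demand:      ${all_on_demand:,.2f}")
```

Even with the overage, the reserved plan comes in at well under half the all-on-demand cost in this example.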
Spot instances are unused instances you can bid for, which the provider is willing to sell for as much as 90% off the regular price, because 10% of something is better than 100% of nothing. They are good for short-term projects, because your instance can be reclaimed or halted if the provider needs the capacity back.
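Because spot capacity can vanish at any moment, work running on it should checkpoint periodically so an interruption only costs the progress since the last save. A simulation of that pattern; the interruption here is faked, whereas real providers send a reclaim notice:

```python
def run_on_spot(total_steps, checkpoint_every, interrupt_at):
    """Simulate spot work: an interruption rolls back to the last checkpoint."""
    checkpoint = 0
    step = 0
    interrupted = False
    while step < total_steps:
        if step == interrupt_at and not interrupted:
            interrupted = True      # spot capacity reclaimed mid-run
            step = checkpoint       # resume from the last saved state
            continue
        step += 1
        if step % checkpoint_every == 0:
            checkpoint = step       # persist progress (e.g., to object storage)
    return step, interrupted

print(run_on_spot(10, 3, interrupt_at=7))  # the job still finishes all 10 steps
```

The design choice is the checkpoint interval: too frequent and you pay in storage writes, too sparse and each interruption discards more work.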
Serverless Computing

The term serverless is a misnomer, since the code still runs on a server; it's just that no server is dedicated full time to a function or service. Nor is serverless typically used for workloads like database serving, ERP, or Web serving.
Instead, it's used for simple, basic functions, often just a single-purpose app that starts up when needed, runs, and shuts down when it's done. This consumes a much smaller, more exact amount of resources, reducing waste. It's the logical extension of containers, where just enough of an OS is loaded to run a specific app rather than a full-blown Linux instance.
Don't Migrate Every App

Not every app belongs in the cloud. If an app requires maximum performance, the cloud is not a good choice for multiple reasons, from cost to the unpredictability of virtual instance performance. Many apps will actually cost more in the cloud than they would on premises.
Review the application's design and code with code analyzers to determine the amount of cloud resources the application will use, and decide accordingly. Beyond the app itself, data location also needs to be considered: it probably isn't a good idea to migrate a multi-petabyte data warehouse to the cloud, for example.
Also, once you decide which apps to migrate, identify the impact of the change by mapping out how your application data will flow between cloud platforms and your on-premises environment. Look at the most data-intensive and most latency-sensitive apps to make your determination.
Use AI and Machine Learning

Let's face it: configuring on-prem and cloud instances is a complex, esoteric science with a ridiculous number of moving parts. Furthermore, as systems are used, requirements change, which demands automation to initiate configuration changes.
Machine learning makes cloud optimization proactive. It studies historical data, learns meaningful patterns, and predicts future usage. It can raise or lower provisioning based on learned usage patterns, such as a rise in demand at a certain hour every day. You can configure it to require your approval before making a change or to act automatically.
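Even a very simple model captures the "busy at a certain hour" pattern: average historical usage per hour of day and provision for the prediction plus a margin. A sketch where the sample data and 20% margin are illustrative; production systems use proper forecasting models:

```python
from collections import defaultdict

def hourly_profile(samples):
    """samples: list of (hour, usage). Returns mean usage per hour of day."""
    totals, counts = defaultdict(float), defaultdict(int)
    for hour, usage in samples:
        totals[hour] += usage
        counts[hour] += 1
    return {h: totals[h] / counts[h] for h in totals}

def recommend_capacity(samples, hour, margin=1.2):
    """Provision the predicted mean usage for that hour plus a safety margin."""
    return hourly_profile(samples)[hour] * margin

history = [(9, 40), (9, 44), (9, 42), (3, 8), (3, 10)]  # (hour, cores in use)
print(recommend_capacity(history, 9))  # busy morning hour → scale up
print(recommend_capacity(history, 3))  # quiet overnight hour → scale down
```

Wiring the recommendation to an approval step versus applying it automatically is exactly the choice described above.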
Consolidated Billing

If you have multiple accounts, consider consolidating them into a single bill for two reasons: it provides a full picture of your usage so you can control spending, and you might qualify for a volume discount. Consolidated billing enables you to see all of your AWS charges across all of your accounts, and cloud providers don't charge extra for it.
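The discount angle works because volume tiers typically apply to aggregate usage: several small accounts can each miss a threshold that their combined usage clears. A toy illustration; the tier threshold and per-GB rates are assumptions:

```python
def tiered_cost(gb, threshold=100_000, base=0.023, discounted=0.021):
    """Assumed volume pricing: usage beyond the threshold gets a lower rate."""
    if gb <= threshold:
        return gb * base
    return threshold * base + (gb - threshold) * discounted

accounts = [60_000, 50_000, 40_000]  # GB-months of storage per account

separate = sum(tiered_cost(gb) for gb in accounts)  # no account clears the tier alone
consolidated = tiered_cost(sum(accounts))           # combined usage does

print(f"billed separately: ${separate:,.2f}")
print(f"consolidated:      ${consolidated:,.2f}")
```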