by Bernard Golden
AWS pricing is, to be sure, filled with a certain complexity – and some unpleasant surprises. Yet with proper management and the right tools, AWS pricing structures can be clear and reasonably contained.
A little backstory. Cloud computing is transforming the face of IT. As I noted in this piece, the IT industry is in the midst of a sea change — JP Morgan’s survey of large enterprise CIOs indicated that cloud deployment of applications is forecast to grow from 16% to 41% by 2020.
It’s easy to understand the reasons for the shift — easy resource access, enormous scale and convenient elasticity, and pricing by resource use. You just pay for what you use. And you control what you use. Or don’t.
Stories are legion in the industry of people who adopt AWS and love it. What a change from the old dark days of dealing with central IT, aka “The Department of No.” The problem comes a few months down the line when the AWS bill arrives, and it’s a lot bigger than expected. I once consulted with the CIO of a very large media company who described the situation this way:
“We got started with AWS and it was fantastic. Our developers were much more productive, and much happier. The first month’s bill was $400 and everything was great. The next month’s bill came and it was $10,000. Now, we’re a big company, and $10,000 isn’t going to bankrupt us — but what changed between month one and two?”
AWS costs can escalate. Of course, one shouldn’t assume a large AWS bill is, per se, a bad thing. In the old way of doing things, IT spend was often artificially rationed – held down by resource unavailability. Now that it’s easier to get resources, one should expect IT spending to rise. Furthermore, in a “software is eating the world” environment, companies should and must spend more on applications, including the underlying cloud infrastructure.
However, another reality is that in a “software is eating the world” environment, your cloud costs are part of your cost of goods sold (COGS) and, as such, must be managed as carefully as labor costs or marketing expense.
So how can you manage your AWS spend? Here are three tools:
AWS Pricing Tip: Assign costs to users with Cost Allocation Tags and the Cost Explorer
AWS pricing can best be managed with this in mind: many AWS users start with a single account that all users share, with the bill paid by a central organization.
This is fine, except when a few users consume a lot of resources. It’s like when you’re out at a restaurant with a group and someone orders the filet mignon, the expensive wine, and the signature dessert — boy, does that run up the bill! Except the AWS pricing situation is worse, because there’s no way to determine who’s been overindulging. In situations like this, it’s critical to make each party pay for its usage, because there’s nothing that concentrates the mind on managing cost like getting a bill.
Fortunately, AWS can help with this. It offers the ability to identify resources with Cost Allocation Tags — essentially, key/value identifiers that can assign costs to specific groups. So you could tag a set of S3 buckets with “user=video_transform_group” and that will associate those bucket costs with the video transform group.
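To make this concrete, here is a minimal sketch of applying such a tag with boto3, the AWS SDK for Python. The bucket name and tag values are placeholders, and the tag key must also be activated as a cost allocation tag in the Billing console before it shows up in cost reports.

```python
import boto3

s3 = boto3.client("s3")

# Attach a key/value tag to the bucket so its costs can be attributed
# to the video transform group in billing reports.
s3.put_bucket_tagging(
    Bucket="example-video-transform-bucket",  # hypothetical bucket name
    Tagging={"TagSet": [{"Key": "user", "Value": "video_transform_group"}]},
)
```

One caveat: put_bucket_tagging replaces the bucket’s entire tag set, so include any existing tags in the TagSet when adding a new one.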
AWS can export a file containing resource costs organized by these tags that can easily be imported into a spreadsheet, enabling slicing, dicing — and identifying who had the Wagyu steak, er, who used a ton of S3 storage. The export can include month-to-date costs as well as forecasted costs for the period, allowing one to understand committed and likely AWS costs.
The Cost Allocation Tags can also be used in another tool, the AWS Cost Explorer. Think of this as an analytic tool to analyze trends and spot anomalies. The Cost Explorer reaches back over the past 13 months of AWS costs, presents the current month, and forecasts the next three months.
This allows one to understand underlying use patterns and put them into context. Perhaps the S3 video costs were extremely high last month — but the company does school photos, and last month was September, when all the photos are shot and parents review them and order their favorites. Is high use good or bad in those circumstances? Probably good, no? Now, it would be important to understand how this September compared to last September, and that's where the Cost Explorer can help.
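As a sketch of what that looks like programmatically, the same tag can be used to group costs through the Cost Explorer API. The dates below are illustrative, and the query assumes the "user" tag from the earlier example has been activated for cost allocation.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API

# Ask for last month's unblended cost, broken out by the "user" tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-09-01", "End": "2018-10-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "user"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]  # e.g. "user$video_transform_group"
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(amount):,.2f}")
```

Run month over month (say, this September against last September), the same query turns that comparison into a few lines of code rather than a spreadsheet exercise.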
So the first tool is to understand current spend and track it over time to determine who is running up the tab and how spending patterns are changing.
AWS Pricing Tip: Manage use with the AWS Trusted Advisor
The first tool tells one what is being spent, and even tracks who is doing the spending. But it doesn’t say if the usage is efficient.
What does that mean? Many IT organizations operate AWS resources with little consideration as to how well they’re being used. In a way, this makes sense. In the bad old days, once resources were pried out of the Department of No, it made sense to hang on to them, even if there was little actual use. This is what leads to the traditionally miserable server utilization stats of most IT organizations.
In AWS, it’s completely different. Poorly used resources cost just as much as fully utilized ones. It’s just that there’s a lot less value coming out of them. To manage AWS spend wisely, it’s critical to ensure that resources are actually being used, and to shut down ones that aren’t actually, you know, doing anything.
This is where the AWS Trusted Advisor comes in. It will look at an account’s AWS resources and report on utilization levels. This makes it easy to see AWS instances that are running but doing nothing — a common circumstance for development machines that are spun up to test something and then forgotten about after the test task is complete.
Trusted Advisor can also help identify situations in which resources are over-provisioned. For example, many IT organizations use a standard instance size even when actual instance loads vary. This can lead to a large instance that costs a lot being used to run a lightly-loaded web server application that could run just as well on a small instance — at half the cost or less.
Every IT organization should use AWS Trusted Advisor on a regular basis and look for under-utilized resources — and when poor resource choices are identified, the organization should take action to raise utilization rates by changing instance types or shutting down unused ones.
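Trusted Advisor results can also be pulled programmatically through the AWS Support API, which makes it easier to fold the review into a regular report. Here is a sketch, assuming a Business or Enterprise support plan (which the Support API requires); it looks up Trusted Advisor's "Low Utilization Amazon EC2 Instances" check and prints the flagged instances.

```python
import boto3

# The Support API endpoint lives in us-east-1 regardless of where resources run.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
low_util = next(c for c in checks
                if c["name"] == "Low Utilization Amazon EC2 Instances")

result = support.describe_trusted_advisor_check_result(
    checkId=low_util["id"], language="en"
)["result"]

# Each flagged resource's metadata follows the check's metadata schema
# (region, instance ID, name, estimated monthly savings, and so on).
for resource in result.get("flaggedResources", []):
    print(resource["metadata"])
```

From there it is a short step to acting on the findings, for example stopping instances that have sat idle for weeks.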
AWS Pricing Tip: Rearchitect Applications for Greater Efficiency
Many IT organizations making the move to cloud computing continue using the same application architectures that they applied within their own data centers. I call this the “lift-and-shift” model. It makes sense — the staff already know how to design, build, and operate this architectural pattern, so why not just go with it?
Despite the convenience of the “lift-and-shift” model, it fails to take full advantage of AWS’s services and capabilities — and may lead to higher overall bills, even when managed with the two tools discussed above.
For example, most IT shops use EC2 instances, because they are similar to the VMware virtual machines typically used in on-premises environments. However, greater efficiency could be available if the organization moved to container-based applications, since containers use fewer server resources than a virtual machine (instance) and are, therefore, correspondingly less expensive.
Restructuring an application into a container-based microservices design can segregate highly-used from lightly-used sections of code, allowing better matching of resources to demand.
AWS also offers many services that can replace application code and cost less, since no user computing resources are necessary at all. For example, many applications send email. Instead of operating an SMTP server as part of the application topology, an application could use AWS’s Simple Email Service (SES), which is very inexpensive.
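Here is a sketch of what that substitution looks like with boto3; the addresses are placeholders, and the sender must be an identity verified in SES.

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Send a transactional email directly through SES; no SMTP server to run,
# patch, or size.
ses.send_email(
    Source="noreply@example.com",  # must be an SES-verified identity
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order has shipped"},
        "Body": {"Text": {"Data": "Thanks for your order!"}},
    },
)
```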
Likewise, AWS offers the Relational Database Service (RDS), which provides a managed database service. Instead of running database servers and trying to tune resource use to application load, an application can use RDS and allow AWS itself to ensure sufficient – but not overprovisioned – resources are available.
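As a rough illustration, provisioning a managed MySQL instance is a single API call; the identifier, instance class, and credentials below are placeholders, and in practice the password would come from a secrets store rather than source code.

```python
import boto3

rds = boto3.client("rds")

# Create a managed MySQL instance; AWS handles patching, backups, and the
# underlying server, so there is no database host to babysit.
rds.create_db_instance(
    DBInstanceIdentifier="example-app-db",   # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.t2.small",           # size to the actual load
    AllocatedStorage=20,                     # storage in GiB
    MasterUsername="appuser",
    MasterUserPassword="example-placeholder-password",
)
```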
Rearchitecting is not simple, but the cost savings can be significant. It might be good to think of this “tool” as representing the next phase of cloud use — where instead of treating the cloud as an outsourced data center, it’s treated as a new entity with its own characteristics. By understanding those characteristics and aligning application design with them, costs can be reduced to the bare minimum needed to support application load.
The Bigger Picture of AWS Pricing
Fully managing AWS pricing structures has its difficulties. Yet in a sense, being challenged to manage AWS costs is a good situation. It indicates that the organization is committed enough to AWS that use is growing. Many advantages are available to IT organizations that move to the cloud, but, inevitably, new challenges arise, including managing AWS spend.
This piece has identified three tools available to IT organizations to track, manage, and reduce their AWS spend and ensure that AWS is used cost-effectively. Going forward, managing AWS spend is only going to become more important, because cloud computing is going to grow like kudzu.
About the author: Tech visionary Bernard Golden was named by Wired.com as one of the ten most influential people in cloud computing. He is the author/co-author of five books on open source, virtualization, and cloud computing. He is also the author of a free whitepaper about smart contracts.