By all accounts, Amazon Web Services (AWS) is the world’s largest public cloud computing service. And although several other cloud computing providers are currently growing more quickly than Amazon, John Dinsdale, a chief analyst and research director at Synergy Research Group, said that Amazon remains “in a field of its own.”
According to the company website, AWS currently operates 42 different “availability zones,” which it defines as “one or more discrete data centers, each with redundant power, networking and connectivity, housed in separate facilities.” It currently has data centers in 16 different geographic regions, and it plans to expand to three more regions before the end of 2017.
A popular myth says that Amazon began selling public cloud computing services because it had “excess capacity” from running its ecommerce website. Executives have repeatedly contradicted that story, saying that Amazon Web Services was designed from the ground up as a service for outside customers. However, the company’s experiences with ecommerce did help lay the groundwork for AWS.
In the early 2000s, Amazon.com’s internal development team had a problem. They were adding a lot of software engineers, but despite the growing headcount, the pace of development was staying about the same. The issue was that each developer was setting up new and unique compute, storage and database resources for each project. The IT group realized that if they could standardize those resources and simplify the process of deploying new IT infrastructure, they might be able to speed things up.
In 2003, former Amazon employee Benjamin Black and his boss Chris Pinkham wrote a paper for Amazon founder and CEO Jeff Bezos. It described “a vision for Amazon infrastructure that was completely standardized, completely automated, and relied extensively on web services for things like storage.” In a blog post, Black explained, “Near the end of it, we mentioned the possibility of selling virtual servers as a service.”
That idea cropped up again that same year when Amazon executives were attending a retreat at Bezos’ house. As current AWS CEO Andy Jassy tells the story, the group was working to identify their core competencies when they realized they had become pretty good at running IT infrastructure. They began to consider the idea of offering those IT services to other companies. “In retrospect it seems fairly obvious, but at the time I don’t think we had ever really internalized that,” Jassy said.
“The thinking then developed that offering Amazon’s expertise in ultra-scalable system software as primitive infrastructure building blocks delivered through a services interface could trigger a whole new world of innovation, as developers no longer needed to focus on buying, building and maintaining infrastructure,” Werner Vogels, Amazon’s chief technology officer, explained on Quora.
“From experience we knew that the cost of maintaining a reliable, scalable infrastructure in a traditional multi-datacenter model could be as high as 70%, both in time and effort, and requires significant investment of intellectual capital to sustain over a longer period of time,” he added. “The initial thinking was to deliver services that could reduce that cost to 30% or less (we now know it can be much less).”
The idea gained momentum, and in 2004, Black, Pinkham and their team began work on the project that eventually became AWS.
After the launch of S3 in the spring of 2006, AWS followed up by taking its Simple Queue Service into production and launching its Elastic Compute Cloud (EC2) that summer. By the following year, the company had amassed a reported 180,000 developers as customers.
In the years that followed, Amazon’s cloud quickly expanded with additional services and more regions. In 2010, Netflix became the first company to announce publicly that it would run all of its infrastructure on AWS. After that, customers began to sign up even more quickly, and AWS developed the market share that put it far ahead of all the other competitors who began to offer their own cloud computing services.
Amazon divides its extensive portfolio of cloud computing services into 19 different categories:
- Compute — includes its best-known product, EC2, as well as its Container Service, Virtual Private Cloud, Elastic Beanstalk and the Lambda serverless computing service, among others
- Storage — includes S3, as well as Elastic Block Storage, Glacier, Snowball and others
- Database — includes both relational and NoSQL databases, including Aurora, Amazon RDS, DynamoDB, ElastiCache, Redshift and more
- Migration — includes services to help enterprises move from traditional data centers to the public cloud
- Networking and Content Delivery — includes Elastic Load Balancing, the Route 53 DNS service, the CloudFront content delivery network and more
- Developer Tools — includes multiple tools to support DevOps and Agile software development, including CodeCommit repositories, CodePipeline continuous integration and delivery, CodeBuild testing, CodeDeploy deployment automation, etc.
- Management Tools — includes services to help administrators monitor and manage hybrid cloud infrastructure
- Artificial Intelligence — includes Lex chatbot services, Polly text-to-speech, Rekognition image analysis and the Amazon Machine Learning platform
- Analytics — includes big data tools like the EMR Hadoop framework, Kinesis streaming data, Glue ETL, QuickSight business intelligence and more
- Security, Identity and Compliance — includes cloud security tools such as Identity and Access Management (IAM), Inspector security assessment, Shield DDoS protection and many other services
- Mobile Services — includes mobile development tools like the Mobile Hub and the Mobile SDK
- Application Services — includes Step Functions, API Gateway and Elastic Transcoder
- Messaging — includes the SQS, Simple Notification Service (SNS), Pinpoint push notifications and Simple Email Service (SES)
- Business Productivity — includes Chime communications services, WorkDocs enterprise storage and sharing service and WorkMail secure email
- Desktop and App Streaming — includes the Workspaces desktop-as-a-service offering and AppStream, which streams desktop applications to a browser
- Software — includes third-party software as a service available through the AWS Marketplace
- Internet of Things — includes the AWS IoT Platform, Greengrass and the programmable IoT Button
- Contact Center — includes Amazon’s cloud-based, self-service call center, called Amazon Connect
- Game Development — includes GameLift game server hosting and the free Lumberyard 3D game engine
The table below highlights some of the more popular services offered by AWS. It is by no means exhaustive. Also, the pricing for cloud services varies on a wide number of factors and changes on a regular basis. However, the chart does provide a quick glimpse at some of Amazon’s offerings and an overview of how it prices those services:
Features and Costs of Popular AWS Services
| Service | Key features | Pricing |
| --- | --- | --- |
| Elastic Compute Cloud (EC2) | Secure, scalable compute capacity • Free tier • Pay for usage • Fast deployment | On-demand prices for a t2.nano instance in the US East region start at $0.0059 per hour. Data transfer may incur additional fees. |
| Simple Storage Service (S3) | Object storage • 99.999999999% durability • Suitable for primary or backup storage • Automatic tiering to other storage classes • Free tier | The first 50 TB of standard storage costs $0.023 per GB per month, and prices decline for larger data volumes. Standard – Infrequent Access storage costs $0.0125 per GB. |
| Glacier | Extremely low costs • Suitable for archival storage • Secure SSL data transfer • Integration with S3 | In the US East region, storage costs $0.004 per GB per month. Retrieving data incurs an additional charge, which depends on the retrieval speed and starts at $0.0025 per GB. |
| Relational Database Service (RDS) | Choice of six database engines • Easy administration • Highly reliable | On-demand pricing for a db.t2.micro instance of MySQL in the US East region starts at $0.017 per hour. |
| Amazon Machine Learning | Based on the same technology Amazon’s data scientists use • Highly scalable • Real-time predictions • Visualization tools and wizards | Data analysis and model building cost $0.42 per hour. Batch predictions cost $0.10 per 1,000 predictions, and real-time predictions cost $0.0001 per prediction. Additional fees for compute and storage may apply. |
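The per-unit rates above translate into monthly bills only after some arithmetic. The short sketch below works through a few illustrative calculations using the listed prices. The 730-hour month and the flat per-GB rates are simplifying assumptions for the example; actual AWS invoices depend on region, usage tiers, data transfer and current pricing.

```python
# Illustrative cost arithmetic based on the sample rates listed above.
# These helper functions are for demonstration only, not an AWS billing tool.

HOURS_PER_MONTH = 730  # common approximation for a month of continuous runtime

def ec2_monthly(hourly_rate, instances=1):
    """On-demand EC2 cost for instances running the full month."""
    return hourly_rate * HOURS_PER_MONTH * instances

def s3_monthly(gb, rate_per_gb=0.023):
    """S3 storage cost at a flat per-GB monthly rate (ignores volume tiers)."""
    return gb * rate_per_gb

def glacier_monthly(gb, storage_rate=0.004, retrieved_gb=0, retrieval_rate=0.0025):
    """Glacier storage plus the cheapest listed retrieval tier."""
    return gb * storage_rate + retrieved_gb * retrieval_rate

# A single t2.nano running all month at $0.0059/hour:
print(round(ec2_monthly(0.0059), 2))            # → 4.31

# 100 GB held in S3 Standard vs. Glacier (with 10 GB retrieved):
print(round(s3_monthly(100), 2))                # → 2.3
print(round(glacier_monthly(100, retrieved_gb=10), 3))  # → 0.425
```

The comparison makes the table's main point concrete: archival storage in Glacier costs a small fraction of S3 Standard, at the price of slower, metered retrieval.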
Several factors set AWS apart from its competitors. First is its sheer size. Amazon’s list of services has a breadth and depth that few other public cloud providers can match. It has data centers spread all around the world, and it has a long and growing list of customers. That’s part of the reason why Gartner concluded, “Although AWS will not be the ideal fit for every need, it has become the ‘safe choice’ in this market, appealing to customers who desire the broadest range of capabilities and long-term market leadership.”
Second, Amazon’s prices are generally comparable to the other major cloud vendors for most use cases. Economies of scale have given the company the ability to drop its prices repeatedly, and it continues to do so on a regular basis.
Third, AWS is very popular with developers. That’s not surprising, given that it was born out of Amazon’s need to simplify IT infrastructure for its own developers. Startup developers who use AWS often find that it is simplest to keep using the cloud service as their companies grow. And enterprise developers who like to use AWS for dev and test are often influential in selecting the service for production workloads as well.
AWS offers something for everyone, whether you are a developer working on a hobby project or a Fortune 500 company looking to become more agile. It is the generalist of the public cloud computing market, with a huge array of services available, and it frequently serves as the public cloud component of hybrid IT environments.
As the first and largest cloud provider, AWS has very mature, tested offerings. It is unlikely to go out of business anytime soon, and it is a solid choice for most cloud computing use cases.
Additionally, the company is innovating at a breathless pace, and it’s reasonable to assume that its product and solution portfolio will expand considerably in the years ahead.
If AWS has a weakness, it is its lack of offerings for hybrid cloud deployments. Analysts say that most enterprises will be pursuing a hybrid cloud, multi-cloud strategy, and Amazon’s competitors Microsoft Azure and IBM have an advantage in this area. Because many large organizations already use Microsoft and IBM products in their data centers, they naturally gravitate to these other providers for the public cloud portion of their hybrid clouds.
And the jury is still out on whether AWS will be the best option for emerging technologies like artificial intelligence, machine learning, the Internet of Things and containerized deployments. All of the leading vendors are competing heavily in these areas, and AWS will have to continue to innovate if it wants to retain its position as the market leader. In the technology industry, markets can shift very quickly, and being the number one provider today is no guarantee of future performance.