AWS cloud services: winners, losers, and pet projects

The coronavirus pandemic and the Fourth Industrial Revolution have driven the rapid growth of cloud computing technology. Cloud computing offers many advantages, including replacing upfront IT infrastructure expenses with a low-cost pay-as-you-go model, increasing speed and agility, and upscaling and downscaling as needed. Amazon Web Services (AWS) is one of the leaders in cloud computing, with clients in over 190 countries. AWS cloud-based services are grouped into categories like storage, networking, compute, and databases. Understanding the different components of cloud-based architecture is essential to successfully adopting cloud computing for businesses of all sizes and industries.

About the author

Russ Langel

Sr. Consultant, Product Perfect

Russ is a Sr. Enterprise Cloud Architect who has built and maintained detailed and complex infrastructure for some of the largest organizations in the United States for over 20 years.

Werner Vogels, CTO of Amazon.com and one of the key architects behind AWS, summed up the reality of the state of server and cloud computing when he said,

“The cloud is the new normal. It's no longer a question of if you should move to the cloud, but rather how quickly and how thoroughly you can do it.”

The cloud is now ‘table stakes’. We’re happily encircled by the walled garden of the cloud. There’s no bonus or pat on the back for moving your organization to the cloud. It’s now the expectation. Without it, you’re in the minority and running a tech deficit. So it’s the new norm. Got it. But here’s the catch: because the cloud is constantly evolving, it’s making our lives slightly more complicated.

The statistics were driven up and to the right even more in 2020 with the coronavirus pandemic. Organizations instantly pushed the throttle forward even harder to shift from an on-premises server model to a cloud-based model. The cloud computing model was first described as far back as 2006, when Amazon Web Services (AWS) offered its original cloud computing IT services to the public in the form of web services. This gave business organizations the option to purchase metered software services rather than proprietary packages. Companies that depended to a large extent on selling proprietary software suddenly knew how Blockbuster Video felt in the early 2000s. Application platforms and even infrastructure itself joined software as cloud offerings, and suddenly organizations no longer needed to purchase and maintain computer hardware. This was the third “f”, was it not? If it flies, floats, or figures, it is better to rent it than to buy it.

Companies now need only I/O hardware, as computing functions per se may now be obtained as a metered cloud service. Developers and proponents of this model have not looked back, relentlessly developing cloud-based technologies and architectures.

The Cloud-Computing Model: virtual servers, applications, containers, serverless, databases, object storage, data warehouses, and analytics. Source: Medium.com/@BangBitTech/

This cloud-computing phenomenon is also driven and inspired by the accelerated onset of the Fourth Industrial Revolution and its connected technologies like robotics, artificial intelligence and machine learning, IoT, and quantum computing.

Another contributor to the ongoing development of the cloud computing paradigm is the ever-growing need to store, transform, and analyze massive volumes of structured, unstructured, and semi-structured data. Actually, the need for massive storage capacity on the part of the Human Genome Project is what gave rise to the idea of shared infrastructure in the first place.

To support these evolving data storage, processing, and analysis requirements, several critical elements of a cloud-based software architecture are necessary to create a reliable, robust software stack that will undergird the successful digital business in 2021. While Product Perfect provides services and support for all types of cloud-based architectures, and Amazon shares the cloud computing space with Google [Cloud] and Microsoft [Azure], this article will stay in the AWS ecosystem and focus on AWS products exclusively.

The benefits of cloud computing from AWS’s perspective

In their own words, AWS leadership describe the benefits of cloud computing:

“The cloud offers a level of agility and scalability that simply cannot be matched by traditional on-premises infrastructure.”
Jeff Barr, Chief Evangelist for AWS.
“AWS provides the building blocks that businesses need to stay ahead of the competition.”
Swami Sivasubramanian, Vice President of Amazon Machine Learning.
“With AWS, businesses can focus on what they do best, while leaving the management of IT infrastructure to us.”
Matt Garman, Vice President of AWS Compute Services.
“The cloud offers businesses the ability to rapidly experiment and iterate, without the need for expensive upfront investments in hardware or infrastructure.”
Dave Ward, Senior Vice President of Engineering for AWS.

The AWS website describes some of the other advantages of cloud computing.

1. Benefit from massive economies of scale

The phrase “economies of scale” is generally defined as cost advantages derived by increasing production volumes that allow for a lower cost based on bulk discounts and volume-driven efficiency benefits. In the cloud-based use case, the term “economies of scale” translates into the benefits derived from the aggregated usage of hundreds of thousands of customers, resulting in the realization of substantial savings for every AWS customer.

2. Increase speed and agility

The cloud-based computing model includes the ability to spin up or add new resources to the existing infrastructure capacity at the click of a mouse. Therefore, these additional resources become available almost instantly, resulting in a dramatic increase in organizational agility and speed at which new resources are provisioned and deployed.
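Provisioning a new resource really is close to a mouse click, or a few lines of code. Below is a rough sketch of what that looks like with boto3; the AMI ID, instance type, and tag values are placeholder assumptions, and the actual API call is shown commented out since it requires AWS credentials:

```python
# Sketch of provisioning an EC2 instance programmatically.
# The AMI ID and instance type below are hypothetical placeholders.
def build_run_instances_params(ami_id: str, instance_type: str, count: int = 1) -> dict:
    """Assemble the keyword arguments for ec2.run_instances()."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Project", "Value": "demo"}],
        }],
    }

params = build_run_instances_params("ami-0123456789abcdef0", "t3.micro")
# With credentials configured, the real call would be:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
```

The point is less the specific call and more that infrastructure becomes an API: anything you can script, you can provision on demand.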

3. Upscale and downscale as needed

An organization’s IT infrastructure requirements are variable and are primarily dependent on how heavy the workloads are at any given point. When procuring on-premises infrastructure, it is necessary to budget for the capacity to run the heaviest workload, which leaves those resources underused on average and drives up the cost of doing business.

The infrastructure in the cloud allows companies to scale up and scale back down again to match the compute requirements to run each workload, reducing costs and increasing operational efficiencies.
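The scale-up/scale-down decision itself is simple arithmetic. Here is a minimal sketch of a target-tracking style calculation; the 50% CPU target and the fleet bounds are illustrative assumptions, not AWS defaults:

```python
import math

def desired_capacity(current: int, cpu_utilization: float,
                     target: float = 50.0, min_size: int = 1, max_size: int = 20) -> int:
    """Target-tracking style scaling: size the fleet so average CPU
    moves toward the target utilization, clamped to fleet bounds."""
    raw = current * (cpu_utilization / target)
    return max(min_size, min(max_size, math.ceil(raw)))

desired_capacity(4, 90.0)  # heavy load: 4 * (90/50) -> 8 instances
desired_capacity(4, 20.0)  # light load: 4 * (20/50) -> 2 instances
```

AWS Auto Scaling performs this kind of calculation continuously, so the fleet tracks the workload instead of the other way around.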

4. Benefit from the AWS Shared Responsibility Model

Yes, there’s a huge benefit to AWS’s Shared Responsibility Model (SRM). This SRM concept defines the relationship between AWS and the client, relieving the “customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.” In this shared, almost roommate-like arrangement, the customer assumes some responsibility and a level of ownership. AWS is responsible for securing the cloud infrastructure and core services, while customers are responsible for securing their own applications and data. Responsibilities can vary depending on the service used, but generally, customers have to be a little more careful in this model. They can be lured into a false sense of safety, thinking that Jeff Bezos or some mysterious Amazon tech person will magically, quietly, and automatically triple-check their configurations to ensure it’s all bulletproof (which, of course, they don’t).

AWS cloud services and the key shared responsibility model: some responsibilities are shared between AWS and the customer, and some fall primarily to the customer.

This model allows organizational IT departments to focus on projects that differentiate the business, not the heavy lifting needed when managing IT infrastructure.

“The AWS Shared Responsibility Model ensures that customers and AWS share responsibility for the security of applications and data in the cloud. By clearly outlining each party's responsibilities, the model helps to reduce the risk of security incidents and improve overall security posture.”
Chris DeRamus, Chief Technology Officer at DivvyCloud.

It can stack up visually this way:

The AWS Shared Responsibility Model visualized, showing which parts are the customer’s responsibility and which are AWS’s. Source: Aws.amazon.com

The AWS cloud stack: The important AWS cloud services (we call them “Winners”)

AWS cloud-based services are typically grouped into categories like compute, networking, storage, databases, applications, and analytics. Here are the most popular and most critical AWS services:

  1. Amazon EC2 (Elastic Compute Cloud)
    A cloud-based service that allows businesses to easily launch and manage virtual servers with resizable computing capacity.
  2. Amazon S3 (Simple Storage Service)
    A highly scalable object storage service that enables businesses to store and retrieve data from anywhere on the internet.
  3. Amazon RDS (Relational Database Service)
    A managed database service that simplifies the setup, operation, and scalability of relational databases on the cloud.
  4. AWS Lambda
    A serverless computing service that enables businesses to run code in response to events or requests, without the need for dedicated servers.
  5. Amazon DynamoDB
    A fast and flexible NoSQL database service that provides low-latency performance at any scale.
  6. Amazon ECS (Elastic Container Service)
    A container orchestration service that enables businesses to run and manage Docker containers in the cloud with high scalability.
  7. Amazon API Gateway
    A fully managed service that simplifies the creation, publication, and management of APIs at any scale.
  8. Amazon CloudFront
    A content delivery network (CDN) that accelerates the delivery of content to users worldwide with global coverage. This is what makes many media-heavy websites lightning fast. It’s a key price-point area though, and there are several factors that can drive costs up when using Amazon CloudFront:
    a. Data transfer out.
    The amount of data transferred from CloudFront to end users is a major factor in cost. The more data transferred, the higher the cost.
    b. Requests
    The number of requests made to CloudFront, such as HTTP/HTTPS requests or API requests, can also impact cost. The more requests made, the higher the cost.
    c. Locations
    The edge locations used to deliver content can also impact cost. CloudFront charges more for delivering content to certain regions than to others.
    d. Usage patterns
    The usage pattern of the content being delivered can also affect cost. For example, delivering large files infrequently may cost less than delivering small files frequently. It just depends...
    e. Custom SSL certificates
    Using custom SSL certificates for secure content delivery can also increase cost, as CloudFront charges additional fees for certificate management.
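The cost factors above can be folded into a back-of-the-envelope estimator. The per-GB and per-request rates in this sketch are illustrative assumptions, not published CloudFront prices, which vary by region and usage tier:

```python
# Rough CloudFront cost estimator driven by the two biggest factors:
# data transfer out and request volume. Rates here are made-up examples.
def estimate_cloudfront_cost(gb_out: float, requests: int,
                             per_gb: float = 0.085,
                             per_10k_requests: float = 0.01) -> float:
    data_cost = gb_out * per_gb                      # factor (a)
    request_cost = (requests / 10_000) * per_10k_requests  # factor (b)
    return round(data_cost + request_cost, 2)

estimate_cloudfront_cost(500, 2_000_000)  # ~44.50 at these example rates
```

Even a crude model like this makes the point: data transfer out dominates, so caching strategy and object size matter more than raw request counts.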

Breaking these down into how we approach them in real-world scenarios:

Storage

Amazon S3 (Simple Storage Service) is AWS’s object storage service that allows users to store and retrieve data from anywhere on the Internet. It is designed to provide a virtually infinite amount of storage and is delivered with near-perfect uptime. S3 serves as primary storage for structured, semi-structured, and unstructured data uploaded to the cloud, for cloud-native applications, and for backup and disaster recovery.

Moving data between storage tiers is also simple: lower-cost storage classes such as Amazon S3 Standard-Infrequent Access and Amazon S3 Glacier can be used to archive data that is accessed less frequently.
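Those storage-class transitions are typically expressed as a lifecycle rule on the bucket. A minimal sketch, where the rule ID, prefix, and day thresholds are all assumptions for illustration:

```python
# Sketch of an S3 lifecycle rule that tiers objects into cheaper storage
# classes over time. Rule name, prefix, and thresholds are hypothetical.
lifecycle_config = {
    "Rules": [{
        "ID": "archive-old-logs",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
            {"Days": 90, "StorageClass": "GLACIER"},      # deep archive
        ],
    }]
}
# Applied with boto3 (requires credentials):
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)
```

Once the rule is in place, tiering happens automatically; nobody has to remember to move last quarter’s logs.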

Networking

Amazon VPC (Virtual Private Cloud) is the customer’s cloud-based network environment. Users can create a private network within the AWS cloud, with many of the same concepts and constructs as an on-premises network. Amazon VPC gives clients complete control of the network configuration. Customers can define standard network configuration items like IP address ranges, subnet creation, route table creation, security settings, and network gateways.

This is an AWS foundational service and integrates with multiple other AWS services. For instance, EC2 (Elastic Compute Cloud) and RDS database instances are all deployed into the VPC as part of the provisioning and deployment lifecycle.
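Planning those IP address ranges and subnets is ordinary CIDR arithmetic, which Python’s standard library handles directly. A sketch, using the common (but here purely illustrative) 10.0.0.0/16 range:

```python
import ipaddress

# Carving a VPC CIDR block into subnets, as you would when defining
# IP ranges in Amazon VPC. The 10.0.0.0/16 range is an example.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc_cidr.subnets(new_prefix=24))  # 256 /24 subnets

str(subnets[0])   # "10.0.0.0/24" - e.g., a public subnet
str(subnets[1])   # "10.0.1.0/24" - e.g., a private subnet
len(subnets)      # 256 available
```

Doing this math up front, before creating the VPC, avoids the painful rework of re-addressing subnets after instances are already deployed into them.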

Next, Amazon Route 53 is a highly available and scalable cloud DNS (Domain Name Service) web service. It is designed to provide developers and business owners with a robust, reliable, cost-effective way to “route end users to Internet applications by translating human-readable names into the numeric IP addresses” that network devices use to communicate with each other.

Compute

Amazon Elastic Compute Cloud (EC2) provides secure, resizable cloud-based compute capacity. Organizations can provision and configure virtual computing capacity, selecting from multiple operating systems, CPU, memory, and storage required to run applications and workloads.

Users can increase or decrease capacity with EC2 within minutes as the need for compute capacity changes over time. It is possible to utilize as many server instances as needed (from one to thousands) simultaneously. Because the compute functionality is controlled by web service APIs, applications can automatically scale up or down as required.

EC2 is integrated with most AWS services, such as Amazon S3 (Simple Storage Service), Amazon RDS (Relational Database Service), and Amazon VPC, providing a complete, secure solution for computing applications.

AWS Lambda is a service that lets users run code without managing servers. Customers only pay for the compute time consumed; there is no charge when the code does not run. Lambda provides the functionality to run any code for any application with no administration required. Simply stated, the code is uploaded, and Lambda takes care of everything needed to run and scale the code with high availability. Lastly, the code can automatically be triggered from any other AWS service. And it is callable directly from web or mobile applications.
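The “code you upload” is just a handler function. A minimal sketch of a Python handler responding to an API Gateway-style request; the event shape here is simplified, as real events carry many more fields:

```python
import json

# Minimal Lambda handler sketch. The event mimics an API Gateway proxy
# request; "name" is a hypothetical query parameter for illustration.
def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

handler({"queryStringParameters": {"name": "AWS"}}, None)
```

That function is the entire deployable unit: no server to patch, no process to keep alive, and you are billed only while it executes.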

Databases

Amazon RDS (Relational Database Service) makes it easy to set up, operate, and scale a relational database, such as PostgreSQL, Oracle, MS SQL Server, MySQL, and MariaDB, in the cloud. It provides resizable capacity and cost-effective access while handling routine database-administration tasks, freeing users to focus on the business.

Amazon DynamoDB is a NoSQL database service that is quick and highly flexible, storing semi-structured data such as documents and key-value pair data models. It is ideal for “applications that need consistent, single-digit millisecond latency at any scale.” It is a fully-managed database service with a flexible data model and robust performance, making it an excellent fit for time series data (IoT telemetry), gaming, web, and mobile applications.
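That IoT telemetry fit comes from DynamoDB’s key design: a partition key groups items and a sort key orders them, enabling fast range queries. A sketch of modeling sensor readings this way; the `pk`/`sk` attribute names and prefixes are illustrative conventions, not a prescribed schema:

```python
# Sketch of a DynamoDB item for time-series telemetry: one partition
# per device, one item per reading. Attribute names are assumptions.
def telemetry_item(device_id: str, ts: int, temperature: float) -> dict:
    return {
        "pk": f"DEVICE#{device_id}",   # partition key: groups a device's data
        "sk": f"READING#{ts}",         # sort key: enables time-range queries
        "temperature": temperature,
    }

item = telemetry_item("sensor-42", 1700000000, 21.5)
# Stored with boto3 (requires credentials and an existing table):
#   boto3.resource("dynamodb").Table("telemetry").put_item(Item=item)
```

Because all of a device’s readings share one partition key, “give me sensor-42’s readings for the last hour” becomes a single efficient key-range query rather than a table scan.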

The AWS cloud stack: The less popular (but still useful) AWS cloud services

Well, to be clear, there are no losers. Just services that folks don’t use as much. But here’s a list of the services we don’t see very often:

  1. AWS Snowball
    A data transfer service that can be used to transfer large amounts of data to and from the cloud, without the need for high-speed internet connectivity. Example use cases include transferring large datasets between data centers or migrating data to the cloud.
  2. Amazon ElastiCache
    A caching service that can improve the performance of web applications by reducing the load on databases. Example use cases include caching frequently accessed data or performing complex calculations.
  3. Amazon WorkSpaces
    A virtual desktop service that can be used to provide remote access to applications and data from anywhere. Example use cases include remote work, BYOD, and disaster recovery scenarios.
  4. Amazon Kinesis
    A real-time data streaming service that can be used to collect and analyze large volumes of streaming data, such as IoT sensor data or website clickstreams.
  5. Amazon AppStream
    A service that can be used to stream desktop applications to users, without the need for local installations. Example use cases include delivering software demos or providing secure access to sensitive applications.
  6. Amazon Polly
    A text-to-speech service that can be used to generate natural-sounding speech from written text. Example use cases include creating automated voice responses for customer service or generating audio content for podcasts or audiobooks.

Pet projects - tools that developers are using to waste (er, invest) your company department budget dollars

If you want to tighten the screws and save money on cloud computing costs, you’d normally launch an audit of all cloud costs. And when you perform that audit, take a closer look at expenses in these three web service products:

  1. AWS Lambda Endpoints
    An endpoint means they’re serving up calls to web services. But sometimes the Lambda serverless computing approach is used for building and testing small-scale applications that somehow never go away. Be sure you’re using what you’re paying for.
  2. Amazon S3 (Simple Storage Service) Buckets
    A developer will store images, files, and all manner of blob nonsense in these S3 buckets. The buckets are cheap though, so don’t worry too much about these if you don’t see anything alarming.
  3. AWS Elastic Beanstalk
    Beanstalk is a fully managed service that allows businesses to easily deploy and scale web applications. Check to ensure the endpoints / web apps are real, and they’re not failed/forgotten pet projects.

Takeaways

  1. S3 cloud buckets are cheap. Don’t sweat those too much. Gigabytes of anything cost just pennies per month to store in S3. The real costs are tied to proliferating those files from the S3 buckets into global content delivery networks around the world (using Amazon CloudFront). But even then, it’s not usually a deal-breaker for cost control. It just all depends on traffic, location redundancy, SSL, and file size, among other factors.
  2. Lambda web app endpoints are always a great idea. If you can narrow down and simplify your architectures into Lambda endpoints, you’re doing something right.
  3. AWS’s Shared Responsibility Model (SRM) by definition means that you, the consumer of AWS products and services, are responsible for your own usage of said services, and you’re 100% on the hook to ensure you configured your own stuff properly. This becomes critical as you grow, and can spell doom if you don’t triple-check everything before going to production.
  4. Think small. Try to get your developers to confine themselves into small micro-service apps that do tiny operations, instead of large, monolithic apps that try to do everything. Tiny apps that serve up domain-specific content/data are far cheaper to manage, maintain, and staff up for as compared to large and complex monolithic applications.
  5. Tag things well. In AWS, there are features for tagging products and applications. This can be a game-changer when parsing through the monthly bill. Ask every DevOps resource and every AWS power user to go through and tag all their services, buckets, resources, ... everything. And create a Google Sheet or matrix somehow to organize your approach to these tags. It’s going to save you tons of time and money when it’s time to clean up the old stuff.
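The tagging discipline in that last takeaway can even be audited programmatically. A rough sketch of rolling a bill up by a “Project” tag; the line items and tag key are made-up stand-ins for a parsed cost report:

```python
from collections import defaultdict

# Sketch of the kind of breakdown AWS cost-allocation tags enable.
# Line items below are fabricated examples, not real billing data.
line_items = [
    {"service": "EC2",    "cost": 310.0, "tags": {"Project": "checkout"}},
    {"service": "S3",     "cost": 12.5,  "tags": {"Project": "checkout"}},
    {"service": "Lambda", "cost": 4.2,   "tags": {}},  # untagged: flag it
]

def cost_by_project(items):
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get("Project", "UNTAGGED")] += item["cost"]
    return dict(totals)

cost_by_project(line_items)  # {"checkout": 322.5, "UNTAGGED": 4.2}
```

The “UNTAGGED” bucket is the actionable part: it is exactly the spend nobody has claimed, which is where pet projects tend to hide.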

