Get started with AWS Elastic Container Service

AWS ECS (Amazon Elastic Container Service) is a fully managed container orchestration service provided by Amazon Web Services (AWS). It allows you to run, stop, and manage Docker containers on a cluster, making it simpler to operate containerized applications at scale. Here are some key features and concepts associated with ECS:

  1. Clusters: At the heart of ECS is the concept of a cluster. A cluster is a logical grouping of tasks, services, and, for the EC2 launch type, container instances (EC2 instances with the ECS agent installed) onto which you deploy containerized applications.

  2. Task Definitions: These define your application in terms of its Docker containers. A task definition specifies which Docker images to use, CPU and memory configuration, port mappings, volume mounts, environment variables, and other settings.

  3. Services and Tasks: A task is a running instance of a task definition, while a service maintains a specified number of running tasks for a given task definition. For instance, if you want to run three instances (replicas) of a web application, you would create a service with the desired count set to three.

  4. ECS Agent: This is a component that runs on each container instance in an ECS cluster. It communicates with the ECS control plane to start, stop, and report on Docker containers on that instance.

  5. Launch Types: ECS supports two main launch types:

    • EC2: This launch type runs your containers on EC2 instances registered to an ECS cluster.

    • Fargate: With this launch type, you don’t need to provision or manage underlying instances. AWS Fargate runs containers without requiring you to manage the infrastructure.

  6. Integration with Other AWS Services: ECS integrates well with other AWS services such as:

    • Elastic Load Balancing (ELB): Distribute traffic across your containers.

    • Amazon ECR (Elastic Container Registry): A managed Docker container registry service where you can store and retrieve Docker images.

    • AWS VPC (Virtual Private Cloud): Securely isolate your resources and set up networking for your ECS tasks and services.

    • AWS IAM (Identity and Access Management): Control access and permissions for your tasks and services.

  7. Scaling and Load Balancing: With ECS, you can automatically scale your applications based on various metrics, such as CPU and memory usage. You can also distribute incoming application traffic across multiple tasks using load balancing.

  8. Logging and Monitoring: ECS integrates with AWS CloudWatch, allowing you to monitor and log application and infrastructure metrics.

ECS is useful for a wide range of applications, from simple web apps to complex microservices architectures. By abstracting away the infrastructure management, it simplifies the deployment, scaling, and management of containerized applications in the cloud.
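
To make these pieces concrete, here is a minimal sketch using boto3 (the AWS SDK for Python) that creates a cluster, registers a task definition, and starts a service with three replicas. All names, IDs, and sizing values below are placeholders, not a prescribed setup.

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # A cluster is a logical grouping; with Fargate, no instances are needed up front.
    ecs.create_cluster(clusterName="demo-cluster")

    # The task definition is the blueprint: image, CPU/memory, and port mappings.
    ecs.register_task_definition(
        family="demo-web",
        networkMode="awsvpc",
        requiresCompatibilities=["FARGATE"],
        cpu="256",
        memory="512",
        containerDefinitions=[{
            "name": "web",
            "image": "public.ecr.aws/docker/library/httpd:latest",
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }],
    )

    # A service keeps the desired number of tasks (replicas) running.
    ecs.create_service(
        cluster="demo-cluster",
        serviceName="demo-web-svc",
        taskDefinition="demo-web",
        desiredCount=3,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],      # placeholder subnet ID
                "securityGroups": ["sg-0123456789abcdef0"],   # placeholder security group ID
                "assignPublicIp": "ENABLED",
            }
        },
    )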

How ECS is different from Docker Compose

Docker Compose and AWS ECS are tools designed for different stages and scales of container orchestration. Docker Compose is primarily a tool for defining and running multi-container Docker applications, especially in local development environments. In contrast, AWS ECS is a managed container orchestration service designed for production-level, scalable, and distributed applications in the AWS cloud. Here's a comparison highlighting their limitations and differences:

  1. Scope and Purpose:

    • Docker Compose: Designed for local development and testing. It simplifies the process of setting up and running multi-container applications locally.

    • AWS ECS: Designed for production deployments at scale in the AWS cloud infrastructure.

  2. Scaling:

    • Docker Compose: Does not natively support auto-scaling. You can scale services manually by specifying the number of replicas, but it's not dynamic.

    • AWS ECS: Supports auto-scaling of tasks based on CloudWatch metrics, making it suitable for variable workloads.

  3. High Availability:

    • Docker Compose: Lacks features for high availability. It's not designed to recover failed containers across a cluster of machines.

    • AWS ECS: Works across multiple Availability Zones in AWS, ensuring high availability and fault tolerance for applications.

  4. Networking and Service Discovery:

    • Docker Compose: Relies on Docker's built-in networking capabilities, which are sufficient for local development but might not meet the requirements of complex production environments.

    • AWS ECS: Offers integration with AWS VPC, Application Load Balancers, Network Load Balancers, and service discovery mechanisms suitable for production use.

  5. Storage:

    • Docker Compose: Uses Docker volumes for local persistence. This is not suitable for distributed storage or stateful services in a production scenario.

    • AWS ECS: Can integrate with Amazon EFS (Elastic File System) and other AWS storage services, making it more versatile for stateful applications in the cloud.

  6. Management and Operations:

    • Docker Compose: Requires manual intervention for upgrades, rollbacks, and monitoring.

    • AWS ECS: Provides deep integration with AWS services like CloudWatch, IAM, and more. It also supports blue-green deployments, rolling updates, and task placement strategies.

  7. Security:

    • Docker Compose: Relies on Docker's built-in security, which might not be comprehensive enough for strict production requirements.

    • AWS ECS: Integrates with AWS IAM for fine-grained access control, AWS VPC for network isolation, and AWS Secrets Manager for managing sensitive information.

  8. Cost:

    • Docker Compose: Essentially free, as it's a tool you run on your local machine or on servers you're already paying for.

    • AWS ECS: Has associated costs, especially when running tasks on EC2 instances or when using Fargate. However, you pay for the infrastructure and managed services, which include the benefits of scalability, manageability, and high availability.

While Docker Compose is a fantastic tool for local development and simplifying the setup of multi-container applications, AWS ECS is designed for robust, scalable, and production-grade deployments in the cloud. If you're moving from development to production in AWS, consider tools like the ECS CLI or AWS Copilot, which bridge the gap between Docker Compose and ECS by helping you deploy Compose-defined applications on ECS.

EC2 Launch Type

When using the EC2 launch type with AWS ECS (Elastic Container Service), you're running your containerized tasks on EC2 instances that you provision and manage. This is in contrast to the Fargate launch type, where AWS manages the underlying infrastructure on your behalf.

With the EC2 launch type, there's more manual management involved compared to Fargate. Here's what you typically need to manage and the activities involved:

  1. Instance Provisioning:

    • Launch EC2 instances, either manually through the AWS Management Console, using the AWS CLI, or using infrastructure-as-code tools like AWS CloudFormation.

    • Choose the right instance type based on your application's resource requirements.

  2. ECS Agent:

    • The ECS agent software needs to be running on each EC2 instance in the cluster. While Amazon ECS-optimized AMIs come with the agent pre-installed, you may need to ensure it's updated or install it yourself if you're using a custom AMI.

  3. Cluster Registration:

    • When launching an EC2 instance, you have to ensure it's associated with an ECS cluster. This is typically done through the ECS agent configuration file, by specifying the cluster name in the instance's user data script at launch (a short code sketch at the end of this section shows this).

  4. Scaling:

    • You must set up Auto Scaling Groups (ASG) if you want to automatically scale the number of EC2 instances in response to load or other metrics. This involves defining scaling policies and CloudWatch alarms.

    • Remember to deregister instances from the ECS cluster gracefully when they're being terminated.

  5. Networking and Security:

    • Security Groups: Define the inbound and outbound network traffic rules for your EC2 instances.

    • VPC and Subnets: Ensure EC2 instances are launched in the desired VPC and associated subnets.

    • IAM Roles: Attach the necessary IAM roles to EC2 instances to grant permissions for making AWS API calls.

  6. Maintenance and Patching:

    • Regularly update the EC2 instances for security patches and software updates.

    • Monitor for and address any failed EC2 instances.

  7. Monitoring and Logging:

    • Ensure integration with AWS CloudWatch for metrics, alarms, and logs.

    • You might also need to configure the ECS agent to send Docker logs to CloudWatch Logs or another logging service.

  8. Storage:

    • If your tasks require persistent storage, manage the associated EBS volumes, their lifecycle, backups, and any necessary resizing.

  9. Cost Management:

    • Monitor the cost of running EC2 instances and consider purchasing Reserved Instances or Savings Plans for long-running workloads to save costs.

    • Turn off unused instances to prevent unnecessary charges.

  10. Task Placement:

    • With the EC2 launch type, you have control over task placement strategies and constraints, allowing you to influence where tasks are placed within the cluster based on requirements such as CPU architecture, instance type, and more.

In contrast, with the Fargate launch type, many of these concerns, such as instance provisioning, scaling, and maintenance, are abstracted away, allowing you to focus solely on defining and running tasks. However, the EC2 launch type provides more granular control over infrastructure, which might be necessary for specific use cases or compliance requirements.
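
To make the cluster-registration step above concrete, here is a minimal, hedged boto3 sketch that launches a single instance from an ECS-optimized AMI and points its agent at a cluster through user data. The AMI ID, instance profile, subnet, security group, and cluster name are all placeholders; in practice you would usually do this through an Auto Scaling group or a capacity provider rather than one-off launches.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # User data that tells the ECS agent which cluster to join on boot.
    user_data = "#!/bin/bash\necho ECS_CLUSTER=demo-cluster >> /etc/ecs/ecs.config\n"

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",                  # placeholder: a current ECS-optimized AMI
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        IamInstanceProfile={"Name": "ecsInstanceRole"},   # instance role the ECS agent uses for API calls
        SubnetId="subnet-0123456789abcdef0",              # placeholder subnet
        SecurityGroupIds=["sg-0123456789abcdef0"],        # placeholder security group
        UserData=user_data,                               # boto3 base64-encodes this for you
    )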

ECS Fargate

ECS Fargate is a serverless compute engine for containers that works with Amazon Elastic Container Service (ECS). It allows you to run containers without managing the underlying infrastructure, abstracting away the need to provision, configure, and scale clusters of virtual machines to run containers.

Here's a detailed explanation of ECS Fargate and its features:

  1. Serverless Infrastructure:

    • With Fargate, you don't need to choose server types or worry about the underlying infrastructure. You simply define your application's requirements (CPU, memory), and Fargate automatically provisions the necessary resources.

  2. Simplified Management:

    • There's no need to manage the container orchestration layer, update/patch the cluster software, or manage individual server instances.

  3. Dynamic Scaling:

    • Fargate tasks can scale in and out as needed, allowing you to meet the demands of your application dynamically.

  4. Integrated with Amazon ECS:

    • You can use the same ECS API calls, and you can define tasks just like you do for ECS with EC2 as the launch type.

  5. Security:

    • Fargate tasks have IAM roles at the task level, not the instance level, giving you more granular control over permissions.

    • Each task (or pod, when Fargate is used with Amazon EKS) runs in its own isolated compute environment, ensuring process- and kernel-level isolation from other tasks.

    • Integration with Amazon VPC gives each task its own elastic network interface, private IP address, and security groups, and allows it to access other resources in the VPC.

  6. Networking Features:

    • Fargate supports ECS service discovery, enabling tasks to have a consistent and predictable hostname, making it easier for tasks to communicate with each other.

    • It also supports integration with Elastic Load Balancing, allowing you to distribute incoming application traffic across multiple targets, such as ECS tasks, in a specified VPC.

  7. Cost:

    • With Fargate, you pay for the vCPU and memory resources that your containerized application requests. This contrasts with the EC2 launch type, where you pay for the entire EC2 instance regardless of the containers' utilization.

  8. Integration with AWS Services:

    • Fargate is natively integrated with other AWS services such as Amazon CloudWatch, AWS Identity and Access Management (IAM), and Elastic Load Balancing (ELB), providing a rich set of features for monitoring, management, and access control.

  9. Persistent Storage:

    • Fargate tasks receive ephemeral task storage by default and can mount Amazon EFS file systems (platform version 1.4.0 and later) when they need persistent, shared storage.

  10. Region Availability:

    • Initially, Fargate was available in selected AWS regions, but over time its availability has expanded. Always check the AWS Region Table for the latest availability.

ECS Fargate offers a way to deploy containerized applications without the overhead of cluster and server management, making it a preferred choice for developers and businesses looking for a simplified, serverless container solution.
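
As a rough illustration of how little infrastructure you touch with Fargate, the following boto3 sketch runs a one-off task. It assumes a cluster and a registered task definition already exist; the cluster name, task definition family, subnet, and security group IDs are placeholders.

    import boto3

    ecs = boto3.client("ecs")

    # Run a single Fargate task; no EC2 instances are provisioned or managed.
    response = ecs.run_task(
        cluster="demo-cluster",                  # placeholder cluster name
        taskDefinition="sample-fargate",         # placeholder task definition family (or family:revision)
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],     # placeholder
                "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
                "assignPublicIp": "ENABLED",
            }
        },
    )
    print(response["tasks"][0]["taskArn"])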

Getting started with the console using Linux containers on AWS Fargate

Complete the following steps to get started with Amazon ECS on AWS Fargate.

Prerequisites

Before you begin, complete the steps in Set up to use Amazon ECS and ensure that your AWS user has the permissions specified in the AdministratorAccess IAM policy example.

The console attempts to automatically create the task execution IAM role, which is required for Fargate tasks. For the console to create this role, your user must have permission to create IAM service roles, or the task execution role must already exist in your account.

Important

The security group you select when creating a service with your task definition must have port 80 open for inbound traffic. Add the following inbound rule to your security group. For information about how to create a security group, see Add rules to your security group in the Amazon EC2 User Guide for Linux Instances.

  • Type: HTTP

  • Protocol: TCP

  • Port range: 80

  • Source: Anywhere (0.0.0.0/0)
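
If you manage your security group with code rather than the console, a hedged boto3 sketch of the same inbound rule looks like this (the security group ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound HTTP (TCP port 80) from anywhere, matching the rule above.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder: your security group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP from anywhere"}],
        }],
    )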

Step 1: Create the cluster

Create a cluster that uses the default VPC.

Before you begin, assign the appropriate IAM permission. For more information, see Cluster examples.

  1. Open the console at https://console.aws.amazon.com/ecs/v2.

  2. From the navigation bar, select the Region to use.

  3. In the navigation pane, choose Clusters.

  4. On the Clusters page, choose Create cluster.

  5. Under Cluster configuration, for Cluster name, enter a unique name.

    The name can contain up to 255 letters (uppercase and lowercase), numbers, and hyphens.

  6. (Optional) To turn on Container Insights, expand Monitoring, and then turn on Use Container Insights.

  7. (Optional) To help identify your cluster, expand Tags, and then configure your tags.

    [Add a tag] Choose Add tag and do the following:

    • For Key, enter the key name.

    • For Value, enter the key value.

    [Remove a tag] Choose Remove to the right of the tag’s Key and Value.

  8. The cluster is automatically configured to use AWS Fargate as its capacity provider.

  9. Choose Create.

Step 2: Create a task definition

A task definition is like a blueprint for your application. Each time you launch a task in Amazon ECS, you specify a task definition. Amazon ECS then knows which Docker image to use for the containers, how many containers the task runs, and the resource allocation for each container.

  1. In the navigation pane, choose Task Definitions.

  2. Choose Create new Task Definition, Create new revision with JSON.

  3. Copy and paste the following example task definition into the box and then choose Save.

     {
         "family": "sample-fargate", 
         "networkMode": "awsvpc", 
         "containerDefinitions": [
             {
                 "name": "fargate-app", 
                 "image": "public.ecr.aws/docker/library/httpd:latest", 
                 "portMappings": [
                     {
                         "containerPort": 80, 
                         "hostPort": 80, 
                         "protocol": "tcp"
                     }
                 ], 
                 "essential": true, 
                 "entryPoint": [
                     "sh",
             "-c"
                 ], 
                 "command": [
                     "/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></html>' >  /usr/local/apache2/htdocs/index.html && httpd-foreground\""
                 ]
             }
         ], 
         "requiresCompatibilities": [
             "FARGATE"
         ], 
         "cpu": "256", 
         "memory": "512"
     }
    
  4. Choose Create.
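
If you prefer the SDK over the console for this step, the same JSON can be registered as a task definition programmatically. A minimal boto3 sketch, assuming you saved the example above to a local file named sample-fargate.json:

    import json
    import boto3

    ecs = boto3.client("ecs")

    # Load the task definition JSON from Step 2 and register it as a new revision.
    with open("sample-fargate.json") as f:
        task_def = json.load(f)

    response = ecs.register_task_definition(**task_def)
    print(response["taskDefinition"]["taskDefinitionArn"])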

Step 3: Create the service

Create a service using the task definition.

  1. In the navigation pane, choose Clusters, and then select the cluster you created in Step 1: Create the cluster.

  2. From the Services tab, choose Create.

  3. Under Deployment configuration, specify how your application is deployed.

    1. For Task definition, choose the task definition you created in Step 2: Create a task definition.

    2. For Service name, enter a name for your service.

    3. For Desired tasks, enter 1.

  4. Under Networking, you can create a new security group or choose an existing security group for your task. Ensure that the security group you use has the inbound rule listed under Prerequisites.

  5. Choose Create.

Step 4: View your service

  1. Open the console at https://console.aws.amazon.com/ecs/v2.

  2. In the navigation pane, choose Clusters.

  3. Choose the cluster where you ran the service.

  4. In the Services tab, under Service name, choose the service you created in Step 3: Create the service.

  5. Choose the Tasks tab, and then choose the task in your service.

  6. On the task page, in the Configuration section, under Public IP, choose Open address. The screenshot below is the expected output.

    
    [Screenshot: the Amazon ECS sample application page, indicating that your application is now running on a container in Amazon ECS.]

Step 5: Clean up

When you are finished using an Amazon ECS cluster, you should clean up the resources associated with it to avoid incurring charges for resources that you are not using.

Some Amazon ECS resources, such as tasks, services, clusters, and container instances, are cleaned up using the Amazon ECS console. Other resources, such as Amazon EC2 instances, Elastic Load Balancing load balancers, and Auto Scaling groups, must be cleaned up manually in the Amazon EC2 console or by deleting the AWS CloudFormation stack that created them.

  1. In the navigation pane, choose Clusters.

  2. On the Clusters page, select the cluster you created for this tutorial.

  3. Choose the Services tab.

  4. Select the service, and then choose Delete.

  5. At the confirmation prompt, enter delete and then choose Delete.

    Wait until the service is deleted.

  6. Choose Delete Cluster. At the confirmation prompt, enter delete cluster-name, and then choose Delete. Deleting the cluster cleans up the associated resources that were created with the cluster, including Auto Scaling groups, VPCs, and load balancers.
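
The same cleanup can be scripted if you created everything with the SDK. A minimal sketch, assuming the placeholder cluster and service names used in the earlier sketches:

    import boto3

    ecs = boto3.client("ecs")

    cluster = "demo-cluster"   # placeholder
    service = "demo-service"   # placeholder

    # Scale the service to zero, delete it, wait for it to drain, then delete the cluster.
    ecs.update_service(cluster=cluster, service=service, desiredCount=0)
    ecs.delete_service(cluster=cluster, service=service)
    ecs.get_waiter("services_inactive").wait(cluster=cluster, services=[service])
    ecs.delete_cluster(cluster=cluster)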

ECS Task Definitions and Tasks

In the context of Amazon Elastic Container Service (ECS), both "task definitions" and "tasks" are central concepts, but they serve different purposes. Let's break each one down:

ECS Task Definition:

A task definition is like a blueprint for your application. It describes one or more containers that together form your application or a component of your application. Here are the key elements and purposes of a task definition:

  1. Container Definitions: It specifies the Docker images to use, the CPU and memory to allocate to each container, port mappings, volumes to mount, environment variables, and more.

  2. Resource Allocation: You can define how much CPU and memory is allocated for each container in the task.

  3. Networking Mode: You can specify the networking mode for the containers. The 'awsvpc' mode is required for Fargate tasks, while 'bridge' and 'host' are also available for the EC2 launch type.

  4. Task Role: An IAM role that grants permissions to the containers to make AWS API requests.

  5. Volumes: If your containers need to use persistent storage, you can define Docker volumes or bind mounts in your task definition.

  6. Task-level CPU and Memory: For the Fargate launch type, you need to define the task-level CPU and memory, which represents the aggregate resource of all the containers in the task.

  7. Logging Configuration: Specifies how the logging systems, like Amazon CloudWatch Logs, should handle the logs.

  8. Task Definition Versions: Each time you update a task definition, ECS creates a new revision. This allows you to maintain a version history and deploy different versions of the same application using different task definition revisions.

ECS Task:

An ECS task is a running instantiation of a task definition within a cluster. When you tell ECS to run a task, it schedules and launches the containers defined in the task definition to run on an instance within the cluster. Here's what's involved:

  1. Lifecycle: A task goes through several states in its lifecycle, such as PENDING, RUNNING, and STOPPED.

  2. Placement: Tasks can be placed on instances in the cluster based on built-in or custom placement strategies.

  3. Scaling and Management: Within a service, tasks can be automatically scaled up or down based on specified criteria.

  4. Networking: Once a task is running, it has networking attributes such as IP addresses and ENIs (Elastic Network Interfaces), depending on its networking mode.

  5. Monitoring and Logging: When a task is running, you can monitor its metrics and logs using integrated AWS services like CloudWatch.

In essence, the task definition lays out "what" should run, specifying all the configurations and requirements, while the task is the actual "running instance" of that definition in the ECS cluster. Think of the task definition as the recipe and the task as the cooked dish made from that recipe.
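
Because every update to a task definition registers a new revision (point 8 above), listing the revisions for a family is a quick way to see that version history. A minimal boto3 sketch, assuming a family named sample-fargate:

    import boto3

    ecs = boto3.client("ecs")

    # Revisions are returned newest first; each registration added one.
    revisions = ecs.list_task_definitions(familyPrefix="sample-fargate", sort="DESC")
    for arn in revisions["taskDefinitionArns"]:
        print(arn)  # e.g. ...:task-definition/sample-fargate:3, then :2, then :1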

ECS Service

In the context of Amazon Elastic Container Service (ECS), a service is a higher-level construct that allows you to run and maintain a specified number of instances (referred to as "tasks") of a task definition simultaneously in an ECS cluster. It abstracts the underlying details and provides capabilities like load balancing, scaling, and service discovery. Here's a detailed explanation of the ECS service:

ECS Service:

  1. Desired Count: With a service, you specify the number of task instances you want to run. This number is referred to as the "desired count." If any of the tasks fail or stop, the service scheduler launches another instance of your task definition to replace it and maintain the desired count.

  2. Load Balancing: The service can distribute traffic across your tasks using Elastic Load Balancing (ELB). It supports the use of both Application Load Balancers (ALB) and Network Load Balancers (NLB).

  3. Service Discovery: ECS service offers integrated service discovery. This makes it easier for your containerized services to discover and communicate with each other.

  4. Scaling: The service allows you to scale the number of tasks up or down. You can adjust the desired count manually or configure automatic scaling with target tracking scaling policies based on CloudWatch metrics.

  5. Update and Deployment: ECS service provides a way to roll out new versions of your application. You can specify deployment configurations, like a rolling update, to determine how tasks are replaced when a new version is deployed.

  6. Placement Constraints & Strategies: You can set constraints and strategies to define how tasks are placed on instances within the cluster. For example, you can ensure that tasks from the same service are not placed on the same instance to improve fault tolerance.

  7. Task Health Checks: With integration to the ALB, ECS can perform health checks on your service's tasks, ensuring traffic is not routed to unhealthy tasks and replacing tasks that are not healthy.

  8. Service Auto Recovery: If a task in a service stops or fails for any reason, ECS will recognize this and launch a new task to ensure the service maintains its desired count.

  9. Integration with Other AWS Services: Besides ELB and CloudWatch, ECS services are integrated with AWS Identity and Access Management (IAM) for permissions, AWS VPC for networking, and more.

  10. Launch Types: ECS services can use either the Fargate or EC2 launch type. The choice determines whether your tasks are run on serverless infrastructure (Fargate) or on a cluster of EC2 instances that you manage (EC2).

An ECS service ensures the desired number of tasks are continuously running and healthy. It provides mechanisms for load balancing, scaling, and updating the tasks, making it essential for running production-grade, scalable, and resilient containerized applications on ECS.
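
Service auto scaling (point 4 above) is configured through the Application Auto Scaling API rather than ECS itself. Here is a hedged boto3 sketch of a target-tracking policy on average CPU utilization, with placeholder cluster and service names:

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Resource ID format for ECS services: service/<cluster-name>/<service-name>
    resource_id = "service/demo-cluster/demo-service"   # placeholder names

    # Make the service's desired count scalable between 1 and 4 tasks.
    autoscaling.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=1,
        MaxCapacity=4,
    )

    # Add or remove tasks to keep average CPU utilization near 50%.
    autoscaling.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 50.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )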