Containers! Containers everywhere!

Just at the end of June, AWS announced the general availability of Amazon ECS Anywhere. And HashiCorp is also on board! What does that mean? What are Amazon ECS and ECS Anywhere? In this blog post I want to introduce you to this service and the flexibility ECS Anywhere adds. Most importantly, I will show you an example of how to implement it in AWS - of course as Infrastructure as Code (IaC), more specifically with HashiCorp’s Terraform.

Amazon Elastic Container Service (ECS) and Amazon ECS Anywhere

Amazon Elastic Container Service is a fully managed container orchestration service that integrates with many other AWS services. You want to run containers easily without worrying about too much complexity? This is the service for you! Amazon ECS can use EC2 instances and AWS Fargate as compute layers. Using EC2 instances still requires some management of the instances, whereas Fargate offers serverless compute for ECS. With ECS Anywhere it is now possible to use any compute resources and therefore run the ECS container orchestration on machines on-premises.

After the SSM agent and the ECS agent are installed and the machine is registered with the cluster (shown in my example below), the ECS control plane starts scheduling tasks and services on those new resources.

What was the goal of this implementation?

This example shows you how to set up the ECS control plane (the cluster) for use with ECS Anywhere. For simplicity I am using EC2 instances as “external resources”, which are also set up via Terraform. The installation of the necessary packages and the registration with SSM/ECS happens with the help of EC2 user data, which is executed at instance launch.

Once the instances are registered, the service starts running tasks and you can check out the published HTML webpage.

Use case implementation

In the following sections I will show and explain the parts and modules of my solution. I omitted some parts to keep it short, but please check out the full source code on GitHub!

Cluster setup in Terraform

As you can see, the cluster setup itself is quite slim; the important part is attaching the proper policy to the IAM role that the ECS tasks will use.

resource "aws_iam_role" "ecs-anywhere-test-task-role" {
  assume_role_policy = data.aws_iam_policy_document.ecs-anywhere-task-assume-policy.json
  managed_policy_arns = [
    "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy",
  ]
  name_prefix = "ecs-anywhere-"
}

resource "aws_ecs_cluster" "ecs-anywhere-test" {
  name = "ecs-anywhere-test"
}
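
The assume-role policy document referenced above is not part of this listing. As a minimal sketch, assuming the standard trust relationship for ECS tasks (ecs-tasks.amazonaws.com), it could look like this - the actual definition is in the GitHub repository:

# Sketch of the referenced assume-role policy (actual definition in the GitHub repository)
data "aws_iam_policy_document" "ecs-anywhere-task-assume-policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}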

Task definition and service

To demonstrate how you can set up ECS mostly in IaC, I also created a small Python Flask app. You can find the source for this example app in the flask-demo-app folder of our GitHub repository. It needs to be pushed to the Amazon Elastic Container Registry (ECR). Using the Terraform resources aws_ecs_task_definition and aws_ecs_service, the service starts running tasks as soon as compute resources are added to the cluster.

resource "aws_ecs_task_definition" "ecs-anywhere-test-task" {
  container_definitions = jsonencode(
    [
      {
        cpu       = 256
        essential = true
        image     = "596305347017.dkr.ecr.eu-central-1.amazonaws.com/flask-demo-app:latest"
        memory    = 256
        name      = "flask-demo-app"
        portMappings = [
          {
            containerPort = 5000
            hostPort      = 80
          },
        ]
      },
    ]
  )
  family                   = "test-task-def"
  # Restrict this task definition to external (ECS Anywhere) instances
  requires_compatibilities = ["EXTERNAL"]
  task_role_arn            = aws_iam_role.ecs-anywhere-test-task-role.arn
}

resource "aws_ecs_service" "flask-demo" {
  name            = "flask-demo"
  cluster         = aws_ecs_cluster.ecs-anywhere-test.id
  task_definition = aws_ecs_task_definition.ecs-anywhere-test-task.arn
  desired_count   = 2
  launch_type     = "EXTERNAL"
}

“External” resources in Terraform

For use with my newly created ECS cluster, I also added two EC2 instances to my Terraform script. They are placed in public subnets so they have direct access to the internet. Even more important than internet access are the proper permissions on their IAM role, since the SSM agent will be installed and used for the registration in AWS alongside the ECS agent.

resource "aws_iam_role" "instance_ssm_role" {
  name = "test_role"
  # Permissions for the ECS agent and the SSM agent running on the external instances
  managed_policy_arns = [
    "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role",
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  ]
  assume_role_policy = file("ssm_role.json")
}
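
The ssm_role.json file referenced above holds the trust policy that allows AWS Systems Manager to assume this role; it is not shown here. As a minimal sketch, written as a Terraform data source instead of a JSON file and assuming the standard ssm.amazonaws.com principal, it could look like this:

# Hypothetical alternative to ssm_role.json - the actual file is in the GitHub repository
data "aws_iam_policy_document" "instance_ssm_assume_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ssm.amazonaws.com"]
    }
  }
}

With such a data source you could set assume_role_policy = data.aws_iam_policy_document.instance_ssm_assume_policy.json instead of reading the file.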

Another part of our solution is the SSM activation. This activation returns an activation ID and an activation code, which we need to register the EC2 instances with AWS.

resource "aws_ssm_activation" "activation" {
  name               = "instance_ssm_activation"
  description        = "SSM ECS Anywhere"
  iam_role           = aws_iam_role.instance_ssm_role.id
  registration_limit = var.worker
}

This information is passed into the user data of the EC2 instances so that the registration is executed at launch.

resource "aws_instance" "EXTERNAL-resource" {
  count                       = var.worker
  ami                         = data.aws_ami.aws_ami_linux2.id
  instance_type               = var.instance_type
  subnet_id                   = aws_subnet.public-anywhere-subnet.id
  key_name                    = var.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.external-resource-sg.id]
  user_data = templatefile("install-ecs-anywhere.sh.tpl", {
    TF_ACT_ID       = aws_ssm_activation.activation.id,
    TF_ACT_CODE     = aws_ssm_activation.activation.activation_code,
    TF_CLUSTER_NAME = aws_ecs_cluster.ecs-anywhere-test.id
  })

  tags = {
    Name = format("EXTERNAL-resource-%s", count.index)
  }
  # [... omitted - full source on GitHub]
}
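
The referenced security group is also part of the full source only. As a rough sketch, assuming it merely needs to expose the webpage on port 80 and allow outbound traffic for the agents and package installation (the VPC reference below is a placeholder), it could look like this:

# Hypothetical sketch - see the GitHub repository for the actual security group
resource "aws_security_group" "external-resource-sg" {
  name_prefix = "external-resource-"
  vpc_id      = aws_vpc.anywhere-vpc.id # placeholder, use your VPC reference

  ingress {
    description = "Webpage exposed by the container"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "Outbound access for the SSM/ECS agents and package installation"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}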

The user data first installs the required packages; afterwards, the command that installs the SSM agent and the ECS agent and registers the instance is executed. A similar curl command, including the activation values, is also available in the ECS console once you have successfully created your cluster.

#!/bin/bash
# Install the EPEL repository to make additional packages available
amazon-linux-extras install epel
# Download the ECS Anywhere install script and run it: it installs the SSM agent and the ECS agent
# and registers this machine with the cluster using the SSM activation ID and code
curl --proto "https" -o "/tmp/ecs-anywhere-install.sh" "https://amazon-ecs-agent.s3.amazonaws.com/ecs-anywhere-install-latest.sh" && bash /tmp/ecs-anywhere-install.sh --region "eu-central-1" --cluster "${TF_CLUSTER_NAME}" --activation-id "${TF_ACT_ID}" --activation-code "${TF_ACT_CODE}"

Once the registration is done, you should be able to check your cluster and see the service starting tasks from the task definition. To access the webpage exposed by the container, go to the public IP of one of your instances on port 80.
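
To find that IP quickly you could add an output for the instances' public IPs - this is not part of the original setup, just a small convenience sketch:

# Hypothetical output, not part of the original setup
output "external_resource_public_ips" {
  description = "Public IPs of the EC2 instances registered as external ECS resources"
  value       = aws_instance.EXTERNAL-resource[*].public_ip
}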

Conclusion

Of course, in an actual project you would not use EC2 instances as external resources, but this shows what the workflow for adding external resources to an ECS cluster looks like. Also, Terraform's provider-agnostic approach allows you to integrate any other resources you have already set up in your project.

The full source can be found on our GitHub here.
