Container and Registry

Learn about the differences between virtualization and containerization.

In the old days, a physical server typically ran a single application, leaving hardware resources underutilized; there was no reliable way to enforce boundaries between multiple applications running on the same server. This led to virtualization, which allows multiple virtual instances of physical resources to be created on a single physical machine. In this lesson, we will review virtualization and containerization and understand their differences. Then, we will learn about container orchestration and the container orchestration services available in AWS. Lastly, we will focus on the container registry offered by AWS, Amazon Elastic Container Registry.

Virtualization

Virtualization is the logical division of a physical machine into isolated virtual machines (VMs), each containing its own operating system, the required libraries, and the application itself. Multiple VMs can coexist on the same physical machine, resulting in better resource utilization than running a single application per server. Virtualization requires a hypervisor that manages the VMs on the machine. Because each VM carries a full OS, only a limited number of VMs can share a single physical machine before hitting its processing and storage limits.

Containerization

Due to these limitations, containerization emerged as the natural successor to virtualization. Containerization creates logically isolated spaces, called containers, on a physical machine that all share the host operating system. It is more granular and flexible than virtualization because it does not require a separate OS per application consuming storage and CPU on the same machine.

Containerization encapsulates everything an application needs to run, including the code, runtime, libraries, and dependencies, ensuring consistency and predictability across different computing environments.

Virtualisation vs Containerisation

Just like virtual machines (VMs) require a hypervisor to run multiple operating systems on a single physical machine, containers require container runtimes to run isolated environments on a host operating system. The container runtime’s primary role is to interface with the operating system’s kernel to create, start, stop, and manage containers. Examples of container runtimes include Docker Engine, containerd, and CRI-O. In this lesson, we will look into Docker in detail.

What is Docker?

Docker is a platform that simplifies the development, shipping, and management of containerized applications. Docker decouples applications from the underlying infrastructure so they can run quickly on different platforms, irrespective of the hardware.

Docker has two major concepts that are important to understand: Dockerfiles and Docker images. A Dockerfile is a blueprint for creating Docker images; it contains a set of instructions on how to build a Docker image. A Docker image is a read-only template that, when run, creates a container.

Let’s look at an example of a Dockerfile that creates an image for a container serving a Node.js application.

Docker flow

Dockerfile

We have a Dockerfile used to deploy a Node.js application in the container.

# Use a base image from Docker Hub
FROM ubuntu:latest
# Set the working directory inside the container
WORKDIR /app
# Copy the application files from the host into the container
COPY . .
# Install dependencies (example: Node.js and npm)
RUN apt-get update && \
    apt-get install -y nodejs npm && \
    npm install
# Expose a port on the container (optional)
EXPOSE 3000
# Specify the command to run when the container starts
CMD ["node", "app.js"]

Explanation

Let’s dive deep into the Dockerfile.

  • FROM: Specifies the base image to use for the Docker image. Here, we’re using the latest version of Ubuntu as the base.

  • WORKDIR: Sets the working directory inside the container where subsequent commands will be executed.

  • COPY: Copies files from the host machine (the directory containing the Dockerfile) into the container.

  • RUN: Executes commands inside the container during the build process. Here, we’re updating the package repository, installing Node.js and npm, and then installing dependencies.

  • EXPOSE: Exposes a port on the container. This doesn’t publish the port; it just documents that the container listens on the specified port at runtime.

  • CMD: Specifies the default command to run when a container is started from the image. In this case, it runs a Node.js application.
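With the Dockerfile in place, the image can be built and a container started from it. The commands below are a minimal sketch: the image name `my-node-app` and container name `my-node-container` are assumed placeholders, and the port mapping assumes the application listens on port 3000 as declared by `EXPOSE`.

```shell
# Build an image from the Dockerfile in the current directory.
# -t tags the image with a human-readable name (a placeholder here).
docker build -t my-node-app .

# Start a container from the image.
# -d runs it in the background; -p publishes container port 3000 on
# host port 3000 (remember: EXPOSE alone does not publish the port).
docker run -d -p 3000:3000 --name my-node-container my-node-app

# List running containers to confirm it started.
docker ps
```

Note that `docker run` performs the publishing that `EXPOSE` only documents; without `-p`, the application would be reachable only from inside the container network.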

Now, let’s dig deeper and understand how Docker works. Docker uses the Docker Engine, the core technology that builds and runs containers.

Container Orchestration

Docker Engine is an open-source containerization technology that allows developers to package applications and their dependencies into lightweight containers. A container orchestration service, in turn, is a tool to automate the deployment, scaling, and management of many containers.

AWS offers Amazon Elastic Container Service and Amazon Elastic Kubernetes Service for container orchestration.

  • Amazon Elastic Container Service (ECS) is a fully managed container orchestration service by AWS. It simplifies the deployment and management of containerized applications, allowing us to run Docker containers at scale.

  • Similarly, we have Amazon Elastic Kubernetes Service (EKS), a fully managed Kubernetes service provided by Amazon Web Services (AWS). It serves as a container orchestration service, enabling users to deploy, manage, and scale containerized applications using Kubernetes on AWS infrastructure. We will go over ECS and EKS in detail in the upcoming lessons.

Container Registry

Organizations deploying multiple containers through ECS require a registry to store, maintain, and distribute container images. A container registry is crucial in the Docker ecosystem because it provides a reliable and scalable infrastructure for sharing and collaborating on container images across different environments.

ECS can pull container images from Amazon Elastic Container Registry (ECR). This integration ensures reliable and efficient deployment of containers on ECS, facilitating version control and ensuring consistency in application delivery.

Amazon Elastic Container Registry

Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry service from AWS. ECR provides a secure and scalable repository to store, manage, and deploy Docker images. ECR offers both public and private registries, each of which can contain multiple repositories.

  • ECR is primarily designed as a private registry, intended for use within organizations or teams. Users can push their Docker images to ECR and control access permissions so that only authorized users can pull or modify those images. Repositories in a private registry are accessible only to users with the appropriate IAM permissions.

  • A public registry is a container registry publicly accessible to anyone on the internet. It hosts a wide range of open-source and community-contributed Docker images, making them readily available for developers to pull and use in their projects.

Types of registries
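To make a locally built image available to ECS, it is typically pushed to a private ECR repository. The sequence below is a sketch assuming the AWS CLI v2 is configured; the account ID `123456789012`, region `us-east-1`, and repository name `my-node-app` are placeholders to substitute with your own values.

```shell
# Create a private repository (one-time step; the name is a placeholder).
aws ecr create-repository --repository-name my-node-app --region us-east-1

# Authenticate the Docker client against the private registry.
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the registry URI, then push it.
docker tag my-node-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-node-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-node-app:latest
```

Once pushed, an ECS task definition can reference the image by its full registry URI, which is how the ECS-ECR integration described above works in practice.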

