Introduction to Containers: Isolating Your Applications

In this post, we’ll introduce you to the concept of containers and how they can be used to isolate your applications. This is just a brief overview, and we’ll dive deeper into containers, Docker, and Kubernetes in later sections.

Understanding the Operating System File System

On a Linux computer, the operating system’s file system hierarchy starts at the root directory (/). Underneath it are subdirectories such as /root, /boot, /bin, /var, and /etc. These directories contain configuration files, binaries, libraries, and other essential files.
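
You can explore this hierarchy on any Linux machine. A minimal sketch (the exact entries vary by distribution):

```shell
# Show the top-level directories under the root (/).
ls /

# /etc holds system-wide configuration files; list a few of them.
ls /etc | head -n 5
```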

Processes and Services

The files within the operating system are used by the processes running on the computer. For example, you might have processes like Apache Tomcat, Nginx, and MongoDB running simultaneously. These processes are part of a process tree that starts with the init or systemd process (PID 1).
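
You can inspect this process tree yourself through the /proc file system. A quick sketch (output differs per machine):

```shell
# PID 1 is the ancestor of every other process; print its program name.
cat /proc/1/comm

# Every process records its parent's PID, which is how the tree is linked.
grep '^PPid' /proc/self/status
```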

The Need for Isolation

When multiple services run on a single computer, they share the same file system and directory structure. A change to configuration files, binaries, or libraries affects every process on the machine. This lack of isolation can lead to conflicts and issues.

Traditional Isolation: Separate Computers

To achieve isolation, processes are often run on separate computers, each with its own operating system and file system. These can be physical machines or virtual machines. However, this approach is costly: every additional machine adds expense.

A Better Solution: Containers

Containers provide a more efficient solution for isolation. Let’s explore how they work.

How Containers Work

  • File System Structure: Containers create isolated environments within a Linux computer. Each container has its own file system that looks like a miniature operating system.
  • Lightweight: Containers are lightweight and contain only the files necessary to run the process. For example, an Nginx container includes only the files required to run Nginx.
  • Process Isolation: Each container has its own process tree, with PID 1 being the main process (e.g., Nginx or MongoDB process).
  • Virtual Network: Containers can be connected through a virtualized network, providing them with unique IP addresses.
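
Under the hood, this per-container isolation is implemented with Linux kernel namespaces (for mounts, PIDs, networking, and more). You can see the namespaces your own shell belongs to; a minimal sketch, with IDs that differ per system:

```shell
# Each symlink identifies one namespace this process is a member of.
# Processes in the same container share these IDs; a different container
# gets its own mount, PID, and network namespaces.
ls -l /proc/self/ns/
readlink /proc/self/ns/mnt /proc/self/ns/pid /proc/self/ns/net
```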

Benefits of Containers

  • Isolation: Containers ensure that changes in one container do not affect other containers.
  • Portability: Containers can be archived as images, which can be easily shipped and run on different environments (e.g., from development to production).
  • Efficiency: Containers are lightweight and can run on a single host machine, reducing costs.

Containers provide an efficient way to isolate and manage your applications, offering benefits such as isolation, portability, and efficiency. We’ll explore Docker in detail in later sections, providing you with hands-on experience.

  • Containers provide isolation, not virtualization; for virtualization we use a virtual machine. You can think of containers as OS-level virtualization.
  • Even with containers, we still need a virtual machine (or a physical Linux host) to run them on.

Container Engine: Docker

The containerization process is made possible by a container engine, also known as a container runtime environment. The most popular container runtime environment is Docker.

What is Docker?

Docker enables you to separate your applications from your infrastructure, allowing you to deliver software quickly and efficiently. Docker allows you to package your application and its dependencies into containers, ensuring consistency across different environments. Here are some key points about Docker:

  • Open Platform: Docker is an open platform that facilitates the development, shipping, and running of applications.
  • Containers: Docker applications run inside containers, which are lightweight, portable, and isolated environments.
  • Docker Daemon: The Docker Daemon is a service running on the host machine that manages Docker containers.
  • Docker Client: The Docker Client is the command-line interface (CLI) used to interact with the Docker Daemon.
  • Registry: A registry, such as Docker Hub, is a repository for storing and sharing Docker images.

How Docker Works

Docker simplifies the process of developing, shipping, and running applications by providing a standardized environment. Here’s an overview of the Docker architecture:

  1. Docker Host: This is the physical or virtual machine where the Docker Daemon runs.
  2. Docker Daemon: The service that manages Docker containers on the Docker Host.
  3. Docker Client: The CLI that you use to communicate with the Docker Daemon to build, pull, and run containers.
  4. Images: Docker containers are created from images. These images can be obtained from a registry, such as Docker Hub.
  5. Containers: Containers are instances of Docker images running as isolated processes on the Docker Host.

Docker Images and Containers

Docker provides the ability to package and run an application in a loosely isolated environment called a container. Containers are lightweight and contain everything needed to run a particular process. For example, an Nginx container includes only the files required to run the Nginx service.

Docker Hub

Docker Hub is a registry service that allows you to find and share container images. You can explore a wide range of readymade images available on Docker Hub, including Python, PostgreSQL, Ubuntu, Traefik, Redis, Node.js, MongoDB, OpenJDK, MySQL, Golang, Nginx, and many more.

Docker Hands-On

cd /C/workspace/
mkdir container
cd container

Let us create a Vagrantfile that boots the Ubuntu OS and installs the Docker Engine on it. The Docker install commands below follow the official Docker documentation for Ubuntu; we put them in the Vagrant provisioning script and then start the VM.

vi Vagrantfile

To enter insert mode in Vim, press “i”. Put the following into the file:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.network "private_network", ip: "192.168.33.20" # IP address

  # Allocate 2 GB of RAM
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end

  config.vm.provision "shell", inline: <<-SHELL
    # Update package information and install required packages
    sudo apt-get update
    sudo apt-get install -y ca-certificates curl

    # Add Docker's official GPG key
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc

    # Add Docker repository to Apt sources
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

    # Update package information again
    sudo apt-get update

    # Install Docker packages
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

    # Enable and start Docker service
    sudo systemctl enable docker
    sudo systemctl start docker
  SHELL
end

Press “Esc” to leave insert mode, then type “:wq” to save and quit. You can verify the contents of the file:

cat Vagrantfile

Now, start the Vagrant.

vagrant up

Log in to the Vagrant VM:

vagrant ssh
sudo -i
systemctl status docker
sudo docker run hello-world

Create a container:-

docker run --name web01 -d -p 9080:80 nginx
docker ps
docker inspect web01

The inspect output shows the container’s IP address, for example “IPAddress”: “172.17.0.2”. A request to that address returns the default Nginx page:

curl http://172.17.0.2:80

Let us find the IP address of the VM:-

ip addr show

We configured 192.168.33.20 in the Vagrantfile. Next, find the host port; we mapped 9080:

docker ps

On the host machine (Windows), open http://192.168.33.20:9080/ in a browser; it will show the default Nginx page.

Docker Command Basics

Docker Installation and Setup:

  • Install Docker (Ubuntu; assumes Docker’s apt repository has been added, as shown earlier):
sudo apt-get update
sudo apt-get install -y docker-ce
  • Start Docker Service:
sudo systemctl start docker
  • Enable Docker Service at Boot:
sudo systemctl enable docker

Managing Docker Containers

  • Run a Container:
docker run -d --name container_name image_name
  • List Running Containers:
docker ps
  • List All Containers:
docker ps -a
  • Stop a Running Container:
docker stop container_name
  • Start a Stopped Container:
docker start container_name
  • Remove a Container:
docker rm container_name
  • Remove All Stopped Containers:
docker container prune

Managing Docker Images

  • List Docker Images:
docker images
  • Pull an Image from Docker Hub:
docker pull image_name
  • Build an Image from a Dockerfile:
docker build -t image_name .
  • Tag an Image:
docker tag source_image:tag target_image:tag
  • Remove an Image:
docker rmi image_name
  • Remove All Unused Images:
docker image prune

Managing Docker Networks

  • List Docker Networks:
docker network ls
  • Create a Network:
docker network create network_name
  • Connect a Container to a Network:
docker network connect network_name container_name
  • Disconnect a Container from a Network:
docker network disconnect network_name container_name
  • Remove a Network:
docker network rm network_name

Managing Docker Volumes

  • List Docker Volumes:
docker volume ls
  • Create a Volume:
docker volume create volume_name
  • Remove a Volume:
docker volume rm volume_name
  • Remove All Unused Volumes:
docker volume prune

Inspect and Logs

  • Inspect a Container:
docker inspect container_name
  • View Container Logs:
docker logs container_name
  • Follow Container Logs:
docker logs -f container_name

Docker Compose (For multi-container applications)

  • Start Services:
docker-compose up
  • Start Services in the Background:
docker-compose up -d
  • Stop Services:
docker-compose down

These commands cover the basic operations you’ll need to start working with Docker. They allow you to manage containers, images, networks, and volumes effectively. Docker offers much more, and as you dive deeper, you’ll discover its full potential.

Building an Image

mkdir images
cd images/
vim Dockerfile

Paste the following content:

FROM ubuntu:latest AS BUILD_IMAGE
RUN apt update && apt install wget unzip -y
RUN wget https://www.tooplate.com/zip-templates/2128_tween_agency.zip
RUN unzip 2128_tween_agency.zip && cd 2128_tween_agency && tar -czf tween.tgz * && mv tween.tgz /root/tween.tgz

# Second stage: only the packaged site is copied over, keeping the final image small
FROM ubuntu:latest
LABEL "project"="Marketing"
ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && apt install apache2 git wget -y
COPY --from=BUILD_IMAGE /root/tween.tgz /var/www/html/
RUN cd /var/www/html/ && tar xzf tween.tgz
VOLUME /var/log/apache2
WORKDIR /var/www/html/
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

Build Image

docker build -t tesimg .
docker images

Run a container from our image:

docker run -P -d tesimg
docker ps

Get more information:

docker ps
ip addr show

Since we started the container with -P, Docker published the exposed port 80 on a random host port, shown in the PORTS column of docker ps. Go to the browser and enter IP:HostPort (for example, 192.168.33.20 plus that host port).

Cleanup

docker ps
docker stop web01 condescending_ritchie
docker container prune
docker ps -a
docker images
docker rmi 53586a8bbb14 f876bfc1cc63 d2c94e258dcb
docker images

Running Multiple Containers

If we want to run multiple containers together, we can use Docker Compose. Let us create a compose file. Note that the build contexts referenced below (./Docker-files/db, and so on) are directories containing the Dockerfiles for each service; they come from the vprofile project source and must exist alongside this file.

mkdir compose/
cd compose/
vi docker-compose.yml

Put the following content into docker-compose.yml:

version: '3.8'
services:
  # Database service
  vprodb:
    build: 
      context: ./Docker-files/db
    image: vprocontainers/vprofiledb
    container_name: vprodb
    ports:
      - "3306:3306" # Expose and map port 3306
    volumes:
      - vprodbdata:/var/lib/mysql # Persist data in a named volume
    environment:
      - MYSQL_ROOT_PASSWORD=vprodbpass # Environment variable for MySQL root password

  # Cache service using Memcached
  vprocache01:
    image: memcached
    ports:
      - "11211:11211" # Expose and map port 11211

  # Message queue service using RabbitMQ
  vpromq01:
    image: rabbitmq
    ports:
      - "5672:5672" # Expose and map port 5672
    environment:
      - RABBITMQ_DEFAULT_USER=guest # Default RabbitMQ user
      - RABBITMQ_DEFAULT_PASS=guest # Default RabbitMQ password

  # Application service
  vproapp:
    build: 
      context: ./Docker-files/app
    image: vprocontainers/vprofileapp
    container_name: vproapp
    ports:
      - "8080:8080" # Expose and map port 8080
    volumes:
      - vproappdata:/usr/local/tomcat/webapps # Persist data in a named volume

  # Web service
  vproweb:
    build: 
      context: ./Docker-files/web
    image: vprocontainers/vprofileweb
    container_name: vproweb
    ports:
      - "80:80" # Expose and map port 80

volumes:
   vprodbdata: {} # Volume for database data
   vproappdata: {} # Volume for application data

Start the containers:-

docker compose up -d

It will read the docker-compose.yml file, and bring up all the containers.

docker compose ps
docker images

Check the VM IP address; we configured 192.168.33.20 in the Vagrantfile. Check the published port of vprocontainers/vprofileweb in the docker compose ps output. In the host browser, enter IP:Port (for example, 192.168.33.20:80) to get the web page. The username and password are both “admin_vp”.

Clean Up

docker compose down
docker compose ps -a
docker images
docker system prune -a
docker images

Deploying a Microservice Application

We’ll explore the deployment of a microservice application named EMart. This application is designed using microservice architecture and comprises several services, each running in its own container.

Overview of EMart

EMart is an e-commerce application, similar to Amazon or Flipkart. It is designed with the following components:

  • API Gateway (nginx): Listens at three endpoints – /, /api, and /webapi.
  • Client App (Angular): Accessed through the root endpoint /, providing the front end.
  • Mart API (NodeJS): Serves the /api endpoint and connects to a MongoDB database.
  • Books API (Java): Serves the /webapi endpoint and connects to a MySQL database.

All these applications (nginx, Angular, NodeJS, Java, Mongo, MySQL) will run in separate containers.
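
To make the gateway routing concrete, here is a hedged sketch of what the Nginx configuration could look like. The upstream host names and ports (client:4200, api:5000, webapi:9000) are taken from the compose file shown later in this section; the actual configuration ships in the repository’s nginx/default.conf:

```nginx
server {
    listen 80;

    # Angular client app at the root endpoint
    location / {
        proxy_pass http://client:4200;
    }

    # NodeJS Mart API
    location /api {
        proxy_pass http://api:5000;
    }

    # Java Books API
    location /webapi {
        proxy_pass http://webapi:9000;
    }
}
```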

We’ll deploy the EMart application using Docker and Docker Compose. Here’s how to do it step-by-step:

Step 1: Clone the EMart Repository

git clone https://github.com/devopshydclub/emartapp.git EMart-app
cd EMart-app
cat docker-compose.yaml

Step 2: Inspect the docker-compose.yaml File. This file contains the configuration to build the images and run the containers for the different services:

version: "3.8"

services:
  client:
    build:
      context: ./client
    ports:
      - "4200:4200"
    container_name: client
    depends_on:
      - api
      - webapi

  api:
    build:
      context: ./nodeapi
    ports:
      - "5000:5000"
    restart: always
    container_name: api
    depends_on:
      - nginx
      - emongo

  webapi:
    build:
      context: ./javaapi
    ports:
      - "9000:9000"
    restart: always
    container_name: webapi
    depends_on:
      - emartdb

  nginx:
    restart: always
    image: nginx:latest
    container_name: nginx
    volumes:
      - "./nginx/default.conf:/etc/nginx/conf.d/default.conf"
    ports:
      - "80:80"

  emongo:
    image: mongo:4
    container_name: emongo
    environment:
      - MONGO_INITDB_DATABASE=epoc
    ports:
      - "27017:27017"

  emartdb:
    image: mysql:8.0.33
    container_name: emartdb
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=emartdbpass
      - MYSQL_DATABASE=books

Step 3: Build and Run the Containers

  1. Build and run the containers:
docker-compose up -d
  2. If there are issues, build the images separately:
docker-compose build
  • Then run the containers:
docker-compose up -d

Step 4: Verify the Deployment

  1. Check Running Containers:
docker-compose ps
docker ps -a
  2. Access the Application in the Browser:
  • Get the IP address of the VM:
ip addr show
  • Open a web browser on the host machine and navigate to the VM’s IP address on port 80: http://192.168.33.20:80

Step 5: Register and Log In. Register a new user, log in, and get access to the product list.

Cleanup

  1. Stop and Remove Containers:
docker-compose down
  2. Remove Unused Docker Resources:
docker system prune -a
  3. Shut Down the Vagrant VM:
vagrant halt

Deploying a microservice application like EMart using Docker and Docker Compose simplifies the process and ensures efficient management of services. This approach is highly scalable and suitable for modern application architectures.

If you enjoyed this post, share it with your friends. Do you want to share more information about the topic discussed above or do you find anything incorrect? Let us know in the comments. Thank you!
