
Docker: Introduction

Ishan Jain
Short answer: A tool that helps maximize the utilization of system resources by containing applications in their own environments.

What is Docker?

Docker lets applications share the underlying OS kernel. Docker is not the only tool that provides application containers, but it is the most widely known and used tool out there for application containerization.

Docker utilizes a technology called "User Space" isolation. Docker was not the first to utilize User Space, and I am guessing it won't be the last tool either. User Space is a way to segment off an OS so that a process only has access to a certain set of instructions and resources. Almost all modern operating systems use "User Space" today, either to avoid the so-called "bleeding effect" between processes or for greater security. On Linux, examples of User Space can be seen in sudo, chroot, LXC (another, competing container engine), OpenVZ, etc. Other operating systems, like BSD, have Jails.

Since Windows 95, Windows has also had this kind of feature in the form of WoW, or Windows on Windows. It was meant to run 16-bit applications on a 32-bit OS; WoW is still present in Windows 10, this time around for running 32-bit applications on a 64-bit OS. A lot of people noticed that when they upgraded to 64-bit Vista, many of their "legacy" applications stopped working. This was because WoW16 was removed from the 64-bit version of Vista. If I am guessing right, you can still find WoW16 on 32-bit Windows 10 (and yes, it has been ages since I used a 32-bit OS; the last one I used was Windows XP). Vista also introduced another layer of user separation with the then-new feature "User Account Control". Even if it was ridiculed by the macOS ads when it came out, it did provide a security layer for the average Windows user. Apple seemed to forget that they do the same thing with their own sudo equivalent in macOS.

What Docker did to become popular was to provide a full tool chain, not just some bits here and there. Users could now get a full ecosystem from a single provider. In essence, Docker revolutionized containerization with the help of "User Space".

Docker is today split into several components. At a bare minimum, to run Docker containers a user needs the knowledge and tooling to create Docker Images, and a Docker Engine to run those images as containers.

Engine

Docker Engine is the technology that enables users to run containers on any given OS. It also handles the lifecycle of images and containers.
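
As a rough example, the typical lifecycle the Engine manages looks something like this (a minimal sketch using the standard docker CLI; the nginx image and the container name "web" are just placeholders):

    docker pull nginx:alpine                 # download an image from a registry
    docker run -d --name web nginx:alpine    # create and start a container from the image
    docker ps                                # list running containers
    docker stop web                          # stop the container
    docker rm web                            # remove the container
    docker rmi nginx:alpine                  # remove the image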

Images

Docker Images are packages that contain the installed application. An image can be compared to a VM hard drive. A user can install and contain the application in an image. Docker Images are modular: a user can build their own image on top of an existing one. For example, you can choose to install NGINX (a web server) on top of either an Alpine Linux or an Ubuntu Linux image. Depending on which base image the user chooses, the new image will have the utilities and tools associated with that base image. With Alpine, the user will need to use Alpine's APK system to install new packages, while with Ubuntu, the user will need to use the APT system. A user can run any base image regardless of the host distribution, i.e. run an Alpine Linux image on an Ubuntu Linux host.

A Docker Image contains everything it needs to run the process: code, libraries, runtimes and settings.
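
As a rough sketch of the NGINX-on-Alpine example above, a Dockerfile could look like this (the Alpine tag, config path and package name are illustrative assumptions, not a tested setup):

    FROM alpine:3.19                          # base image, managed with Alpine's APK
    RUN apk add --no-cache nginx              # install the web server via APK
    COPY nginx.conf /etc/nginx/nginx.conf     # bake the application's settings into the image
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]        # the process the container will run

Building it with "docker build -t my-nginx ." produces a new image layered on top of the Alpine base; swapping the first two lines for an Ubuntu base and apt-get would give the same result on top of Ubuntu.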

Containers

When a user wants to run the application contained within an image, they need to start an instance of the image. A running, live instance of an image is called a container. A user can run multiple containers based on the same image.
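
For instance, two independent containers can be started from the same image (the image name my-nginx, the container names and the ports are placeholders, reusing the hypothetical image sketched above):

    docker run -d --name web1 -p 8080:80 my-nginx   # first instance
    docker run -d --name web2 -p 8081:80 my-nginx   # second instance of the same image

Each container gets its own isolated filesystem, network and process space, even though both were created from the one image.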

Registry

After the image is tested and needs to be shared with others, the user can choose to upload it to a registry. A Docker registry can be either public or private. The most common public registry is Docker Hub. It contains the largest collection of Docker images and is the default registry for the Docker client.
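
Publishing an image typically comes down to tagging it with the registry account name and pushing it (myuser/my-nginx below is a placeholder, assuming Docker Hub as the registry):

    docker login                              # authenticate against Docker Hub
    docker tag my-nginx myuser/my-nginx:1.0   # tag the local image for the registry
    docker push myuser/my-nginx:1.0           # upload the image
    docker pull myuser/my-nginx:1.0           # anyone with access can now pull it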

API

The Docker API, or Docker Engine API, enables users to access Docker with any RESTful HTTP client.
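
On Linux the Engine listens on a Unix socket by default, so it can be queried with any HTTP client; a minimal sketch with curl (the API version in the URL depends on the installed Engine):

    curl --unix-socket /var/run/docker.sock http://localhost/v1.40/containers/json   # list containers
    curl --unix-socket /var/run/docker.sock http://localhost/v1.40/images/json       # list images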

Multiple Images

With time, Docker images started to get complex. To keep with the single-responsibility principle (the "S" in SOLID), one should only create one image per function, i.e. one for the front-end, one for the back-end and one for the database. It can be hard to manage the multiple images an application needs to function. There are several tools to manage complex image/container deployments.

Compose

The simplest tool for handling multiple images on a single host is Docker Compose. Docker Compose lets the user define a multi-container application in a single file and spin up all the containers with a single command.
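
A minimal docker-compose.yml for a two-container application could look roughly like this (the image names, port and password are placeholders):

    version: "3"
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
      db:
        image: postgres:12
        environment:
          POSTGRES_PASSWORD: example

Running "docker-compose up -d" then starts both containers with a single command, and "docker-compose down" tears them down again.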

Swarm

Swarm mode: Swarm mode is the cluster management and orchestration feature built into Docker Engine. It enables the local Docker Engine to act as a cluster node.

Docker Swarm: Docker Swarm pools together several Docker nodes into a single virtual Docker host. It functions as a transparent layer on top of Docker Engine, which enables users to use the standard Docker API to manage the Swarm cluster.
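
In practice, a Swarm is created and used with a handful of commands (the token, manager address and service name below are placeholders):

    docker swarm init                                      # turn the local Engine into a manager node
    docker swarm join --token <token> <manager-ip>:2377    # run on other machines to add nodes
    docker service create --name web --replicas 3 -p 8080:80 nginx:alpine
    docker service ls                                      # Swarm schedules the replicas across the nodes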

What about VM?

To run a VM on modern hypervisors, one needs virtualization support in the hardware (VT-x, AMD-V, VT-d/IOMMU, etc.). VMs aim to provide full OS virtualization; Docker, in principle, only provides application virtualization. A VM needs to have its own hardware (through hardware virtualization) and runs its own kernel. In a VM, applications are locked to the assigned vCores and vRAM. In Docker, on the other hand, the applications can access the full physical amount of cores and RAM. Networking in both VMs and containers is similar (at least on the surface).
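
By default a container can therefore use all of the host's cores and memory; limits are opt-in rather than pre-allocated the way vCores and vRAM are for a VM. A hedged sketch of constraining a container (the flags are standard docker run options, the image is a placeholder):

    docker run -d --name web --cpus="2" --memory="512m" nginx:alpine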

Pros and Cons

Docker Images are more lightweight compared to VM images. A VM image contains everything from the OS to the application in a single file. A Docker Image can be built on top of an existing image, thereby sharing the underlying libraries and runtime. Docker Containers also use the kernel of the host OS, while a VM cannot access the host OS. Docker Containers can share and have access to the full amount of CPU cores and RAM available to the host OS and other containers; in a VM, the user needs to allocate vCores and vRAM that cannot be shared with either the host OS or other VMs on the system. In most cases a Docker Container is going to be faster than a VM, as it has direct access to the underlying hardware and has no need to virtualize the required hardware.

Sharing resources can be seen as a security risk, e.g. if there is a bug in the kernel that can be exploited to take control of the host OS. Another big disadvantage of Docker can be its totally isolated networks.