Docker Revealed: Comprehensive Guide Tailored for All Skill Levels in Docker.

Introduction to Containerization with Docker.

In the vast landscape of modern software development, where efficiency, scalability, and consistency are non-negotiable, Docker emerges as a transformative tool. This article aims to reveal Docker for individuals at every level of expertise, offering a structured journey from fundamental principles to advanced techniques.

The photo above is by Ian Taylor on Unsplash.

For more tech updates and discussions, follow me on Twitter at @Alpheus___ or on LinkedIn at Alpheus.

A Brief Explanation of Containerization.

As the name implies, containerization means putting something in a container. In more technical terms, containerization is the process of packaging an application with all the dependencies and configuration it needs, so that the package becomes a portable artifact that can be easily shared and moved between development teams. This increases productivity within IT and development teams, making development and deployment far more efficient.

Technical terms used in Docker are explained below:

  1. Container: A container is a compact, executable unit of software that includes all the configuration and dependencies needed to build or run a piece of software. Containers provide a consistent, portable environment, ensuring the software runs reliably from development to deployment across different operating systems. In Docker, when software is pulled from Docker Hub or a registry into your local machine's Docker environment and run, a Docker container has been started.

  2. Images: Docker images are artifacts that can be moved around but are not yet running in a container environment. An image serves as the standalone blueprint from which containers are executed.

  3. Dockerfile: A Dockerfile is a file that contains a set of instructions for creating a Docker image. It describes the steps required to configure and build an image. Dockerfiles are critical for image reproducibility, allowing developers to systematically describe the environment and dependencies of their applications (a minimal example follows this list).

  4. Volume: A volume in Docker is a technique for sharing data between the host system (where the container is run) and containers. Volumes allow you to manage and store data outside of the container, ensuring that essential information, such as databases or application logs, can endure beyond the container's lifespan.

  5. Networks: Networks are an essential component of Docker for facilitating communication between containers, as well as between containers and the outside world. Docker provides flexible networking: there are several types of Docker networks, including the bridge network, host network, and overlay network. We won't cover them in depth here (I will surely write about them), but these networks allow containers to communicate with one another, making it easier to create multi-container applications.

  6. Compose: Docker Compose is a tool for creating and managing multi-container Docker applications. Compose allows developers to specify services, networks, and volumes in a single YAML file, allowing for the seamless coordination of the numerous containers that comprise an entire application stack.
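
To make the Dockerfile idea concrete, here is a minimal sketch (my own illustration, not from the article): it creates a tiny Python script and a Dockerfile for it, then builds and runs the image. The file name app.py and the image tag my-app are made-up examples.

echo 'print("Hello from inside a container")' > app.py

cat > Dockerfile <<'EOF'
# Start from an official slim Python base image
FROM python:3.12-slim
# Work inside /app in the image
WORKDIR /app
# Copy the script into the image
COPY app.py .
# Command the container runs when it starts
CMD ["python", "app.py"]
EOF

docker build -t my-app .   # build an image from the Dockerfile in this directory
docker run my-app          # run it; the container prints the greeting and exits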

Docker Basics for Beginners

  1. Docker: I know it's strange, right? We have just been talking about and explaining containers, images, volumes, etc., and your brain is trying to absorb all the terms. That's not strange if this is your first time; you aren't a robot, and it happened to me too. So what is Docker? The term "Docker" refers to an open-source platform that automates the deployment, scaling, and management of applications within containers. It was created by Docker, Inc. and makes use of containerization technology, allowing developers to package an application and its dependencies into a standardized unit known as a container. As I said previously, containers provide a consistent and portable environment, ensuring that applications run reliably across diverse computing environments, from development to testing and deployment. Docker has emerged as a key tool in the realm of DevOps and container orchestration, easing the process of developing, distributing, and deploying software applications.

Installation and setup of Docker

Installation on various platforms (Windows, macOS, Linux): Installing Docker, a widely used containerization tool, on your system is a straightforward process. Below is a detailed guide for installing Docker on Windows, macOS, and Linux.

Installing Docker on Windows:

1. System Requirements:

  • Ensure your Windows version is 64-bit and supports Hyper-V. Docker Desktop for Windows requires Microsoft Hyper-V to run. I know you can research this quickly, so I won't go into detail.

2. Enable Hyper-V:

  • Ensure virtualization is enabled in your computer's BIOS/UEFI settings, then enable the Hyper-V feature in Windows. To check virtualization, open your Task Manager using the “Ctrl+Shift+Esc” keys. You should see a page like the one below.

Click on the “Performance” tab; virtualization should be shown as enabled, as in the screenshot below.

3. Install Docker Desktop:

  • Download Docker Desktop for Windows from the official Docker website.

  • Run the installer and follow the on-screen instructions. If everything goes well, you should see a screen like the one below.

  • During installation, you may be asked to choose between Linux and Windows containers; the default, Linux containers, is what this guide assumes.

4. Launch Docker Desktop:

  • Once installed, Docker Desktop can be found and launched from the Windows Search panel. Launch it.

  • Docker Desktop may require a restart to complete the installation. Restart your system, launch “Docker Desktop” again, then open “Command Prompt” or any terminal of your choice and type in the command you see in the picture below (docker --version).

A version of Docker should be displayed after the command is executed. If you see your version: congratulations, you successfully installed Docker.

Installing Docker on macOS:

1. System Requirements:

  • Ensure your macOS version is 10.14 or newer with hardware virtualization support.

2. Download Docker Desktop:

  • Download Docker Desktop for Mac from the official Docker website.

3. Install Docker Desktop:

  • Open the downloaded .dmg file.

  • Drag the Docker icon to the Applications folder.

  • Open Docker from the Applications folder. The interface may look different, but everything should work the same way as shown in the Windows section.

4. Launch Docker Desktop:

  • Docker Desktop should appear in your Applications. Launch it.

  • The first time you run Docker, it may prompt you to install the Command Line Tools.

Installing Docker on Linux:

1. System Requirements:

  • These steps target Ubuntu, where Docker requires a 64-bit installation. Verify your system's architecture.

2. Install Docker Engine:

  • Update the apt package index: sudo apt-get update

  • Install packages to allow apt to use a repository over HTTPS:

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

You should see something like the picture below:

  • Add Docker's official GPG key (the repository entry below references this keyring):

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

  • Set up the stable Docker repository:

echo "deb [signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

  • Update the apt package index again: sudo apt-get update

  • Install Docker Engine: sudo apt-get install docker-ce docker-ce-cli containerd.io

3. Verify Docker Installation:

  • Run a simple Docker command to verify the installation: sudo docker run hello-world

The Docker ecosystem

1. Docker Hub and Image Repositories

Docker Hub is a big online shop for software containers in the Docker ecosystem. These containers are similar to pre-packaged software applications that include everything needed to run. Docker Hub is a location where individuals can share, find, and use containers.

Here are technical terms you should familiarize yourself with:

1. Docker Registry:

  • A registry is like a warehouse of Docker containers. Docker Hub is a popular public registry where many containers are stored. Think of it as a big library of software blueprints.

2. Docker Image Repositories:

  • In Docker Hub, you have different sections or repositories. Each repository is like a shelf in the library that holds related containers. For example, there might be a repository for databases, another for web servers, and so on.

3. How Developers Use It:

  • Developers can grab (pull) these containers from Docker Hub. It's like getting a ready-made toolkit for their software. This saves a lot of time because they don't have to set up everything from scratch (see the short example at the end of this section).

  • Developers can also share (push) their containers to Docker Hub, so others can use their setups. It's like sharing your toolkit with the community.

4. Private Registries:

  • Besides Docker Hub, some organizations have their own private registries. These are like personal toolsheds. They keep their special containers there, especially if they don't want everyone to use them.

In a nutshell, Docker Hub is where you find and share these standalone packages called Docker container images. If you have used GitHub, you should understand what I am trying to say here. It's like an online marketplace for software tools that make developers' lives easier.
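
As a quick, hedged illustration of the pull-and-push workflow described above (the namespace "myuser" is a placeholder; pushing requires a Docker Hub account):

docker pull nginx                      # grab the official Nginx image from Docker Hub
docker tag nginx myuser/my-nginx:v1    # re-tag it under your own namespace
docker login                           # authenticate with your Docker Hub credentials
docker push myuser/my-nginx:v1         # share the tagged image on Docker Hub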

Docker Compose and Docker Swarm

Well, we are here now, so what is Docker Compose? Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to describe your application's services, networks, and volumes in a YAML file; then, with a single command, you can create and start (or stop) all the defined services. It simplifies the process of managing multi-container applications, especially during the development and testing phases.

Docker Compose excels at defining and managing the configuration of services within a single host, providing a simplified way to specify a multi-container setup in a single docker-compose.yml file.
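
To make that concrete, here is a minimal docker-compose.yml sketch of my own (an Nginx web service plus a Postgres database; the password value is a placeholder):

cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx
    ports:
      - "8081:80"                # host port 8081 -> container port 80
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example # the postgres image requires a password
    volumes:
      - db-data:/var/lib/postgresql/data  # named volume so data survives the container
volumes:
  db-data:
EOF

docker compose up -d   # create and start every service (docker-compose up -d with the standalone tool)
docker compose down    # stop and remove them again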

Docker Swarm, on the other hand, is a native clustering and orchestration solution for Docker. It turns a group of Docker hosts into a single, virtual Docker host. This allows you to deploy services to a cluster of machines, making it easier to scale and manage containerized applications. Swarm provides features like service discovery, load balancing, and scaling that go beyond the scope of what Docker Compose offers.
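
For a taste of Swarm, these are the standard commands to turn a single host into a one-node swarm and run a replicated service (a sketch, not a full tutorial; the service name and ports are arbitrary):

docker swarm init                                                 # make this host a swarm manager
docker service create --name web --replicas 3 -p 8080:80 nginx   # run three Nginx replicas
docker service ls                                                 # list services running in the swarm
docker swarm leave --force                                        # dismantle the swarm when done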

Running your first Docker container

Before going to your command line or terminal to start an image with Docker, make sure the Docker Desktop software is running, otherwise you will get errors. You should see something like this when you start the software.

Using the docker run command:

  • Open your command line on Windows, or a Bash shell on macOS or Linux. Verify the version of Docker you are using by following the picture below (docker --version).

  • We are going to run one of Docker's simplest images; take a guess: the “hello-world” container image. Go to the public Docker container registry (Docker Hub, https://hub.docker.com/).

We are going to pull that image into our local Docker environment with docker pull hello-world. Follow the picture below.

To view the image in our Docker environment, use docker images. Note that the image is not yet running, because we have not run it; we only downloaded it from Docker Hub. That's what the “docker pull” command does.

Viewing the pulled images

If you look carefully, “hello-world” is directly below the “postgres” image. Your list may be different, but trust me, it's the same thing.
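
For reference, “docker images” prints a table shaped like the one below; the IDs, dates, and sizes here are placeholders, and yours will differ:

REPOSITORY    TAG      IMAGE ID      CREATED       SIZE
postgres      latest   <image id>    <created>     <size>
hello-world   latest   <image id>    <created>     <size>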

  • Now we are going to run the hello-world image as a container; later, we will run a web server container and view its content in a browser (I will be using the Chrome browser).

There are several ways of running images or starting a stopped container; below are some of them.

“docker run <image name from Docker Hub>”: this command will download (pull) the image from Docker Hub and run it immediately.

“docker run <image name already in your local Docker repository>”: i.e., the image is already in your local machine's Docker environment.

“docker run -p 8081:80 <image name, obtained by either of the above methods>”

“docker run -p <local machine port>:<container port> <image name>”: here a port on your local system is bound to the container's port, so you can send a request to the listening port and get a response from the server. The process I just explained is called port binding.

We are going to use the second case so we can view the output of the hello-world container; below is a picture of what you should see.

The "hello-world" image is a minimalistic image used for testing and doesn't have a web service running, which is why you can't view it directly in your browser. The message you received indicates that your Docker installation is working correctly, and it provides information about the steps Docker took to run the "hello-world" container.

We are going to use the third case in combination with the second way of running images so we can view the “nginx” container (docker run -p 8081:80 nginx); below is a picture of what you should see.

Open your Chrome browser and type in localhost:8081; you should now see the content served by the running Nginx container.

Note that by default, when you run the command docker run -p 8081:80 nginx, you are mapping port 8081 on your host machine to port 80 on the Nginx container. The Nginx image is configured to listen on port 80 inside the container. If you map your host port to the container's port 80, it aligns with the default configuration, allowing you to access the Nginx web server from your browser.

If you map to a container port other than 80, the default Nginx setup will not respond; the host port, on the other hand, can be any port that is free on your system.
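
If you prefer to verify the mapping from a terminal instead of the browser, here is a quick check (assuming curl is installed and nothing else is already bound to port 8081):

docker run -d -p 8081:80 nginx   # -d runs Nginx in the background, binding host port 8081 to container port 80
curl http://localhost:8081       # should print the HTML of the Nginx welcome page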

To list running containers and their details, follow the picture below (docker ps). First of all, open another terminal page.

There you have it. Now go to your browser and type in “localhost:8081”: you are sending a request to your system's port 8081 and getting a response from the Nginx container's port 80.

If you get a response like the picture below, congratulations: you have just understood the basics of port binding.

Container management

Starting, stopping, and removing containers

OK, as I explained above, you can't start a container you haven't downloaded (pulled) from a repository, which could be either public (Docker Hub) or private (a proprietary registry).

So, let's use the “hello-world” container we pulled earlier. I hope you still remember how to list the containers.

  1. To list containers that are running, use “docker ps” (use the picture below as your guide).

  2. To list containers that have been stopped as well, attach the “-a” flag (option): “docker ps -a”. Look at the picture below.

- Now we want to start and stop the “nginx” container.

  1. First of all, we need to check whether the container is running.

If you have been following along, the “nginx” container should be running.

  2. Let's stop and then start the “nginx” container. Type in “docker stop <container ID>”.

To check if the “nginx” container was stopped, use the picture below.

  3. To start the “nginx” container again, first use “docker ps -a” to list all the stopped and running containers.

  4. Copy the container ID of the “nginx” container you want to start (it should be the first one listed), then run “docker start <container ID>”.

After you start the container, verify that it is running; you should know how to do that by now.

To remove a container you don't want to use anymore:

- List all containers, whether running or stopped.

  • Copy the container ID of the container you want to delete, then use “docker rm <container ID>”. OK, let's say I select the ID of the “hello-world” container; I have copied the “hello-world” container ID, following the picture below.
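
To recap the lifecycle commands from this section in one place (the <container ID> is a placeholder; note that “docker rm” removes containers, while an image itself is removed with “docker rmi”):

docker ps                    # list running containers
docker ps -a                 # list all containers, including stopped ones
docker stop <container ID>   # stop a running container
docker start <container ID>  # start a stopped container again
docker rm <container ID>     # remove a stopped container
docker rmi hello-world       # remove the hello-world image itself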

Congratulations, you have learned how to list, start, stop, and delete containers. Ensure to follow me on Twitter at @Alpheus___.

Managing container resources (CPU, memory)

CPU Management:

1. Setting CPU Shares:

You can assign a weight to a container relative to other containers using the “--cpu-shares” option. The higher the weight, the more CPU time the container gets when CPUs are under contention. In the picture below we will assign more CPU to the “nginx” container.

docker run --cpu-shares 512 <image name>

2. Limiting CPU Usage:

Use the “--cpus” option to limit the number of CPUs a container can use.

“docker run --cpus 0.5 <image name>”. OK, now stop the running container and run the command in the picture below.

Note that the two containers you just started are configured very differently; to see the difference, open another shell and type “docker stats”. If you check both processes you will get only slightly different results, because the Nginx server is not a particularly heavy container.

3. Setting CPU Period and Quota:

The “--cpu-period” and “--cpu-quota” options allow you to set a limit on CPU usage over a specific period. In the example below, the container may use at most half a CPU: 50,000 out of every 100,000 microseconds.

docker run --cpu-period=100000 --cpu-quota=50000 <image name>

Since you have read this far, you can probably guess the next two steps; if not, follow the picture below.

4. Viewing Resource Usage:

Check the resource usage of running containers with the “docker stats” command. Note that it opens a live view where your containers' resource statistics are displayed.
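
The heading of this section also mentions memory. The walkthrough above focuses on CPU, but Docker's memory flags work analogously; a minimal sketch, using the stock Nginx image:

docker run --memory 256m --memory-swap 512m nginx   # cap RAM at 256 MB and RAM-plus-swap at 512 MB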

Executing commands within containers

To execute commands within a Docker container, you can use the “docker exec” command.

docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

  • OK, let's say you want to view the directories of the “nginx” container. Start the “nginx” container.

Use “docker ps -a” to list the stopped and running containers.

  • Copy the name of the Nginx container, e.g. “peaceful_shirley” (Docker assigns random names to containers unless you specify one). Select the container name you want to start and follow the steps in the picture below.

List the running containers with “docker ps”.

Accessing the container shell

To enter the shell of the container that is running now, follow the picture below or copy the command below.

docker exec -it peaceful_shirley /bin/bash (change “peaceful_shirley” to your container's name).

  • -it: These are two separate options used together.

  • -i: Keep STDIN open even if not attached. It ensures that the standard input (stdin) of the container is kept open.

  • -t: Allocate a pseudo-TTY (terminal). It allocates a terminal or console interface for the command.

Together, “-it” is commonly used to interact with a container, allowing you to input commands and receive output as if you were using a local terminal.

  • peaceful_shirley: This is the name or ID of the Docker container where you want to execute the command.

  • /bin/bash: This is the command you want to run inside the container. In this case, it starts an interactive Bash shell. The /bin/bash path specifies the location of the Bash shell executable in the container.

So, the overall command is asking Docker to execute an interactive Bash shell inside the container named “peaceful_shirley”. This gives you a command prompt within the container, allowing you to interact with the container's filesystem and run commands as if you were inside it.

Running shell commands in containers

Type “ls” to list the files in the container's current directory from the shell environment.

If you want to exit the shell environment you can just type “exit”.
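
You can also run a one-off command without entering the shell at all. For example, assuming the Nginx container from above is still running (its default web root is /usr/share/nginx/html):

docker exec peaceful_shirley ls /usr/share/nginx/html   # lists index.html and the other served files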

Conclusion

In conclusion, understanding how to work with Docker containers is a crucial skill for efficiently managing and troubleshooting containerized applications. The docker exec command, with options like -it for interactive sessions and the ability to run specific commands inside containers, empowers developers and operators to navigate, inspect, and manipulate container environments seamlessly.

By mastering container orchestration, you open the door to a world of scalable and portable application deployment. Whether you're building microservices, deploying applications in production, or exploring the vast ecosystem of containerized technologies, Docker provides a powerful toolset to streamline your development and deployment processes.

Stay curious, keep exploring, and feel free to reach out with any questions or insights. For more tech updates and discussions, follow me on Twitter at @Alpheus___ or on LinkedIn at Alpheus. Let's continue the conversation and share our experiences in the fascinating world of containerization and technology. Happy coding!