June 17, 2025
Getting Started with Docker on Linux: A Comprehensive Guide

Docker has revolutionized the way developers build, deploy, and run applications. By using containerization, Docker enables applications to be packaged with all their dependencies, providing consistency across different environments. While Docker is available on various platforms, it’s particularly powerful and efficient when used on Linux due to its native integration with Linux container technologies.

In this guide, we’ll explore what Docker is, why it’s useful, and how to install and use Docker on a Linux system.

What is Docker?

Docker is an open-source platform that automates the process of building, shipping, and running applications inside containers. Containers are lightweight, portable, and self-sufficient units that contain everything an application needs to run—such as code, runtime, libraries, and system tools. With Docker, developers can package an application along with its environment into a standardized unit that can be executed on any machine, regardless of the underlying hardware or operating system.

The core of Docker consists of two main components:

  • Docker Engine: This is the runtime that enables the creation, management, and execution of containers.
  • Docker Hub: A cloud-based registry service that allows users to share containerized applications and images.

Key Concepts in Docker

To fully grasp Docker’s functionality, it’s important to understand a few core concepts:

Containers

  • A container is a lightweight, standalone, executable package of software that includes everything needed to run a piece of software, such as the application code, runtime, libraries, and dependencies.
  • Unlike virtual machines (VMs), containers share the host operating system’s kernel, making them much more efficient in terms of resource usage.

Images

  • A Docker image is a read-only template used to create containers. It contains the application and its dependencies.
  • Images are the building blocks for containers. A typical image might contain a base operating system, an application server (like Nginx or Apache), and the specific application code.

Dockerfile

  • A Dockerfile is a script that contains instructions to build a Docker image. It specifies the environment and configuration for your application, allowing it to be reproduced across different environments.
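As a sketch, a minimal Dockerfile for a Python application might look like this (the base image, file names, and `app.py` entry point are illustrative):

```dockerfile
# Start from an official base image
FROM python:3.12-slim

# Set the working directory inside the image
WORKDIR /app

# Copy the application code into the image
COPY . .

# Install dependencies (assumes a requirements.txt exists in the project)
RUN pip install --no-cache-dir -r requirements.txt

# Command to run when a container starts from this image
CMD ["python", "app.py"]
```

Building this file with `docker build -t myapp .` produces an image you can run with `docker run myapp`.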

Docker Compose

  • Docker Compose is a tool for defining and running multi-container Docker applications. With a simple YAML file, you can configure services, networks, and volumes, enabling developers to define complex applications with multiple interdependent services.
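For example, a compose.yaml describing a web server and a database might look like the following sketch (service names, images, and the credential are placeholders):

```yaml
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential, not for production
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Running `docker compose up -d` in the same directory starts both services together; `docker compose down` stops and removes them.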

Volumes

  • Volumes are used for persistent data storage in Docker. Unlike containers, which are ephemeral, volumes allow data to persist beyond the lifecycle of a container.

Networks

Docker allows you to define networks that enable containers to communicate with each other. You can create isolated networks to control traffic flow between services.
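As an example, you can create a user-defined network and attach two containers to it; containers on the same user-defined network can reach each other by container name (the names `my-net`, `web`, and `app` below are arbitrary):

```shell
# Create an isolated bridge network
docker network create my-net

# Run two containers attached to that network
docker run -d --name web --network my-net nginx
docker run -d --name app --network my-net alpine sleep 1d

# From inside 'app', the 'web' container is reachable by name
docker exec app ping -c 1 web

# Inspect the network to see which containers are attached
docker network inspect my-net
```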

Why Should You Use Docker?

There are many reasons why Docker has become a staple in modern software development:

  • Portability: Docker containers can run on any machine that supports Docker, whether it’s a local laptop, an on-premises server, or a cloud platform. This makes it easy to ensure your application behaves the same everywhere.
  • Isolation: Docker provides process isolation. Each container runs as a separate process, ensuring that applications don’t interfere with each other or with the host system.
  • Resource Efficiency: Containers are lightweight and share the host system’s kernel, which allows them to start quickly and consume fewer resources compared to VMs.
  • Versioning and Rollbacks: Docker images are versioned, so it’s easy to manage and roll back changes if needed. You can also reuse Docker images across different projects.
  • Continuous Integration and Deployment (CI/CD): Docker integrates well with CI/CD pipelines, ensuring that the same environment is used throughout development, testing, and production.

Installing Docker on Linux

Docker provides official packages for most popular Linux distributions. In this section, we’ll cover how to install Docker on some of the most commonly used Linux distros.

# On Ubuntu
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin


# On Raspberry Pi
sudo apt update
sudo apt upgrade
curl -sSL https://get.docker.com | sh

# On Arch Linux
sudo pacman -S docker docker-compose
sudo systemctl start docker
sudo systemctl enable docker

Adding your user to the docker group

Next, add your current user to the docker group with the usermod command shown below. Without this step, you would have to run every Docker command as the root user (or with sudo).

sudo usermod -aG docker $USER

Since we made some changes to our user, we will now need to log out and log back in for it to take effect.
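You can quickly check whether the group change is active with `id` (until you log out and back in, the docker group may not yet appear in the output):

```shell
# List the groups the current user belongs to;
# 'docker' should appear once the new session picks up the change
id -nG
```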

Testing docker

docker run hello-world

Basic Docker Commands

Images (pulling, building, listing and deleting)

Docker images are self-contained templates used to create containers, leveraging a layered file system for efficient data storage. Each layer in an image represents a specific step in the build process—such as installing software packages or adding configuration files—and contains only the changes made at that step. This tiered structure means that only the modified layers need to be rebuilt and redistributed, making it a highly efficient way to update and share images. The use of layers not only reduces redundancy but also speeds up the process of sharing and deploying images.

# Download a pre-built image
docker pull <image_name>

# Build an image from a Dockerfile in the current directory
docker build -t <image_name> .

# Build the image from a Dockerfile without naming it (Docker assigns a random ID)
docker build .

# List local images
docker images 

# Delete an Image
docker rmi <image_name>

# Remove all unused images
docker image prune
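You can inspect the layered structure described above with `docker history`, which lists each build step of a local image and the size it added:

```shell
# Pull an image first if it is not already present
docker pull nginx

# Show the layer history of the image, one line per build step
docker history nginx
```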

Container (run, start, stop, delete)

The docker start command starts an existing, stopped container, whereas docker run creates and starts a new container from a Docker image.

# Create and run a container from an image, with a custom name:
docker run --name <container_name> <image_name>

# Create and run a container from an image, with a random name
docker run <image_name>

# Create and run a container from an image in interactive mode
docker run -it <image_name>

# Run a container in the background
docker run -d <image_name>

# Run, set working directory and execute a command inside the container

## Use case 1
docker run -w /root -it ubuntu pwd

## Use case 2
docker run -it --rm -w /root/abc ubuntu bash

## Use case 3
docker run --rm -v $(pwd):$(pwd) -w $(pwd) -it ubuntu pwd
## prints the host working directory, e.g. /home/pb

## Use case 4
docker run -v ./content:/content -w /content -it ubuntu pwd


# Stop an existing container:
docker stop <container_name> (or <container-id>)

# Start a stopped container in the background
docker start <container-id>

# Run the container in attached + interactive mode 
docker start -ai <container-id>

# Remove/delete a stopped container:
docker rm <container_name>

# Remove all stopped containers
docker container prune

# Automatically remove the container once it exits/stops
docker run --rm <image_name> 

Print the container logs

To print the logs of a Docker container, you can use the docker logs command. This command allows you to view the output (stdout and stderr) generated by a container while it is running or after it has stopped.

# print and exit
docker logs <container-id/name>

# print and follow
docker logs -f <container-id/name>
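docker logs also accepts flags to limit or annotate the output, for example:

```shell
# Show only the last 100 lines
docker logs --tail 100 <container-id/name>

# Show logs generated in the last 10 minutes
docker logs --since 10m <container-id/name>

# Prefix each log line with a timestamp
docker logs -t <container-id/name>
```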

Attaching to docker container

The docker attach command is used to attach your terminal to the main process running inside a Docker container, allowing you to interact with it or view its output directly. It displays the output of the container’s ENTRYPOINT and CMD process. This can appear as if the attach command is hung when in fact the process may simply not be writing any output at that time.

docker attach <container-id>

Execute a command in a running container

docker exec -it <container_name_or_id> <command>
docker exec <container_name_or_id> <command>

# example: list listening ports inside a container named nginx
docker exec nginx netstat -lntp

docker exec -it <container-id> bash
docker exec -it <container-id> bash -c "echo 'hello'"

Key Differences Between docker attach and docker exec

  • Purpose: attach connects to the container’s main process; exec runs a new command (e.g., a shell) inside the container.
  • Interaction: attach cannot launch new processes; exec can start new processes inside the container.
  • Multiple sessions: only one attach to the main process is possible; multiple docker exec instances can run at once.
  • Use case: attach suits monitoring a container’s primary output (e.g., logs); exec suits running an interactive shell or one-off commands.
  • Detaching: attach detaches with Ctrl + P, Ctrl + Q without stopping the container; an exec session ends when its command exits.
  • Terminal features: attach gives basic terminal output from the container’s main process; exec gives full shell capabilities, including interactive features, commands, and editing.

List container states [Running/Not running]

# To list currently running containers:
docker ps

# List all docker containers (running and stopped):
docker ps --all

Container port mapping

# Run a container and publish its port(s) to the host.
docker run -p <host_port>:<container_port> <image_name>

# Publish multiple ports
docker run -p <host_port1>:<container_port1> -p <host_port2>:<container_port2> <image_name>
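As a concrete example, the following maps host port 8080 to Nginx’s port 80 inside the container (the port numbers are chosen arbitrarily):

```shell
# Run Nginx in the background, publishing container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx

# The site should now respond on the host
curl http://localhost:8080
```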

Bind mount

In Docker, a bind mount is a mechanism that allows you to mount a file or directory from your host machine into a container. This means that a specific file or directory on your host system is directly linked to the container, and any changes made in one location will be reflected in the other.

Bind mounts are particularly useful when you want the container to interact with files or directories on your local filesystem, or when you want to persist data outside the container’s filesystem. Unlike volumes (which are managed by Docker), bind mounts give you more control over the mounted content, as they directly reference files or directories from the host system.

# Single share
docker run -v /absolute/path/of/dir/on/host:/path/inside/docker/container/ <image_name>

# Multiple dir share
docker run -v /absolute/path/of/dir1/on/host:/path/inside/docker/container1/ -v /absolute/path/of/dir2/on/host:/path/inside/docker/container2/ <image_name>
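For instance, to serve files from a directory on the host through Nginx (the `./site` path and `web` name are illustrative), you can bind mount it over the image’s web root; the `:ro` suffix makes the mount read-only inside the container:

```shell
# Mount the host's ./site directory read-only at Nginx's default web root
docker run -d --name web -p 8080:80 \
  -v "$(pwd)/site:/usr/share/nginx/html:ro" nginx
```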

Volume

In Docker, volumes are a specialized storage mechanism for persisting and managing data that can be shared across containers and even across Docker hosts. Unlike bind mounts, volumes are managed by Docker itself and provide a more robust and portable solution for managing data in containerized environments. Data stored in a volume persists even if the container is stopped, restarted, or removed. Volumes can be backed up, restored, and moved between different Docker hosts.

# Single volume
docker volume create my_volume
docker run -d -v my_volume:/data my_image
# or you can build it in single step 
docker run -v <volume-name>:/path/inside/docker/container <image_name>


## Multiple volumes
docker run -v <volume-name1>:/path/inside/docker/container -v <volume-name2>:/path2/inside/docker/container <image_name>

List, delete and prune volumes

docker volume ls

docker volume rm <volume-name>

docker volume prune

View resource usage

This prints the RAM, CPU, I/O, and PID usage, along with the ID and name, of all running containers in real time.

docker container stats

Copy files/directories between container and host

# From docker container to host
docker cp <container-id>:/container/file/path <location on host> 

# From host to docker container
docker cp <file/path> <container_id>:/location/on/container

Rename docker container

docker rename <container-old-name> <new-name>


Snapshot

Taking a snapshot of a Docker container is a common use case for creating backups, preserving a container’s state, or creating an image based on the current container. Docker doesn’t directly use the term “snapshot”, but you can achieve similar results by creating a Docker image from a container’s current state.

The docker commit command is used to create an image from a running container. This is the closest equivalent to taking a snapshot of a container. When you commit a container, it creates a new image with all the changes that have been made to the container since it started.

# Create an image based on a container's current state
docker commit <container-id> <custom-repo-name/imageName>:<versionInfo>

## This image will appear in the 'docker images' command output
## Now you can run it normally 

# Commit a container with new CMD and EXPOSE instructions

### Working example
docker commit --change='CMD ["apachectl", "-DFOREGROUND"]' -c "EXPOSE 80" c3f279d17e0a svendowideit/testimage

Note: The commit operation will not include any data contained in volumes mounted inside the container.

Save and load

In Docker, the docker save and docker load commands are used for exporting and importing Docker images, respectively. These commands are particularly useful for transferring Docker images between systems, creating backups of images, or storing images in an external format (such as a tarball).

# Save 
docker save <image_name> > /path/to/file.tar

## alternatively
docker save -o /path/to/file.tar <image_name>

## further compression 
docker save myimage:latest | gzip > myimage_latest.tar.gz

# Load
docker load < /path/to/file.tar

## alternatively
docker load -i fedora.tar

The docker save command is used to save a Docker image to a tarball file. This tarball can then be transferred, stored, or backed up. Essentially, it allows you to create an archive of a Docker image that can be moved around without requiring Docker to be running on the target machine.

The docker load command is used to import a Docker image from a tarball (a .tar file). This is the inverse of the docker save command. You can use docker load to load a previously saved image into the local Docker image repository so you can use it to create containers.

Conclusion

Docker is an essential tool for modern developers and system administrators. By using containers, you can ensure that your applications run consistently across different environments, making development and deployment much more efficient. With Docker’s deep integration into Linux, installing and running Docker on a Linux system is straightforward, and once set up, you can begin creating, managing, and deploying containers with ease.
