Docker is a powerful platform that enables developers to build, ship, and run applications in containers. While Docker is simple to use for basic tasks, there are advanced features that can be invaluable when dealing with more complex deployment scenarios. In this article, we will explore three advanced Docker topics: sharing TTY ports with a Docker container, blocking internet access within a container, and backing up, restoring, or migrating Docker data volumes.
Sharing a TTY Port with a Docker Container
When working with Docker containers, you may sometimes need to share hardware resources, such as serial ports, with your containers. This is especially useful for applications like microcontroller development, serial communication, or connecting IoT devices. One common use case is to interact with a device connected via a serial port, like /dev/ttyACM0.
In this article, we’ll explore how to share a TTY (teletypewriter) port between the host system and a Docker container, focusing on two approaches: one using the --privileged flag and the other without it. We’ll also explain the necessary steps to ensure your user can access serial devices in a Linux environment.
docker run -it --rm --privileged -v /dev/ttyACM0:/dev/ttyACM0 ubuntu
# Now install tio inside the Docker container
apt update && apt install tio
# Now run tio inside the container
tio /dev/ttyACM0
Explanation of the --privileged Mode
Using the --privileged flag grants the container full access to the host’s devices and kernel features, which is why it is a straightforward way to share hardware devices such as serial ports. Without --privileged, Docker containers are heavily restricted from accessing host hardware, for security and isolation purposes.
In this case, --privileged allows the container to access the /dev/ttyACM0 device, a serial port on the host machine. Because the flag provides broad access to all devices, this method is considered less secure, as it opens the container to potential misuse or system compromise.
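Before starting tio, it can be worth confirming that the device node is actually visible inside the container. The check below is a minimal sketch, assuming the same /dev/ttyACM0 path used above:
# Inside the container: the device should appear with its host permissions
ls -l /dev/ttyACM0
# Typical output looks something like: crw-rw---- 1 root dialout ... /dev/ttyACM0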
A Simpler Approach Without Using the --privileged Flag
For security reasons, it’s often better to avoid using --privileged unless absolutely necessary. Fortunately, Docker offers a more controlled approach for sharing serial devices without requiring full privileged access.
# Simple approach (without using privileged mode)
docker run -it --rm --device=/dev/ttyACM0 ubuntu
# Make sure your user is in the dialout group
sudo usermod -aG dialout $USER
Explanation:
- --device=/dev/ttyACM0: This flag grants the container access to a specific device on the host, in this case /dev/ttyACM0, without exposing all devices. This is a more secure and limited form of access than the --privileged flag.
- User Permissions for Accessing Serial Devices: On many Linux systems, serial ports like /dev/ttyACM0 are restricted by user permissions. Typically, only users in the dialout group have access to these devices. If you’re running Docker as a non-root user, make sure your user has the necessary permissions to access the serial port; you can verify this on the host as shown below.
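A quick host-side check, assuming the device is /dev/ttyACM0 as in the example above:
# Check the device's owner, group, and permissions (typically root:dialout, crw-rw----)
ls -l /dev/ttyACM0
# Check whether your user is already in the dialout group
id -nG $USER | grep -w dialout
# Note: after `sudo usermod -aG dialout $USER`, log out and back in
# (or start a new login session) for the group change to take effect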
Blocking Internet Access for a Docker Container
Blocking internet access for a Docker container can be important for several reasons, depending on the use case and the security requirements of your environment. Here are some key scenarios where restricting internet access for Docker containers might be necessary:
- Prevent Unintended Data Transfer: In some cases, containers might communicate with external servers (e.g., through analytics, telemetry, or auto-updates) unintentionally. By blocking internet access, organizations can prevent any unintended transfer of sensitive data or metadata to the cloud or third-party services.
- Internal-Only Services: Some containers might be designed to provide internal services only, such as a cache, worker service, or an internal API. These services don’t need to communicate with the outside world and, therefore, should be isolated from the internet.
- Avoid Unintended Outbound Traffic: For environments where internet traffic could incur costs (e.g., cloud-based Docker containers, private networks with bandwidth limitations, etc.), blocking internet access can help control unexpected charges caused by data transfer outside the local network.
- Prevent Unwanted Updates or Downloads: If a container is running software that might automatically update itself or download additional components from the internet, blocking internet access ensures that no unnecessary data is pulled into the container, avoiding both performance degradation and potential charges.
# Create an internal bridge network with a custom gateway, subnet, and IP range
docker network create \
  --driver bridge \
  --gateway 192.168.100.1 \
  --subnet 192.168.100.0/24 \
  --ip-range 192.168.100.128/25 \
  --internal \
  no_internet_network
# Run a container attached to our isolated network
docker run --rm --network no_internet_network -p 5000:5000 -v ./snipdata:/app/data pawelmalak/snippet-box
# Test internet access inside the container (replace 68052c45f726 with your container's ID)
docker exec -it 68052c45f726 sh
ping google.com
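If the network was created correctly, the ping above should fail, since an internal bridge network has no route to the outside world. One quick way to double-check is to inspect the network; the command below is a minimal sketch using the network name from this example:
# Should print "true", confirming the network was created with --internal
docker network inspect no_internet_network --format '{{.Internal}}'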
Back Up, Restore, or Migrate Data Volumes
Backing up, restoring, or migrating data volumes in Docker is crucial for several reasons, primarily related to data persistence, disaster recovery, and managing changes in environments. Here’s a breakdown of why someone would engage in these activities:
- Upgrading Containers: When upgrading the version of a containerized application (e.g., a new version of a database container), you can back up its volume before performing the upgrade. This way, you can roll back to the previous version with the old data if the upgrade doesn’t go as planned.
- Moving Between Hosts: Docker volumes can be migrated between different Docker hosts or cloud environments. For example, if you need to move an application from a staging environment to a production environment, migrating volumes ensures that the data associated with the application moves along with the container.
- Restoration: If something goes wrong (e.g., a corrupted volume or failed container), restoring a backup can bring back data to a previous working state without having to rebuild everything from scratch.
Example 1: Back up the volume of a stopped or deleted container
To back up a Docker volume, you essentially need to copy the contents of the volume to a safe location on your host. The backup procedure can be done by creating a temporary container that mounts the volume and then copying its contents. Here’s a detailed and working example using an Ubuntu container.
# We will first create an Ubuntu container and put some data in it, using the following compose file
services:
  ubuntu:
    image: ubuntu:latest
    container_name: ubuntu-container
    volumes:
      - ubuntu_data:/data
    command: bash -c "echo 'Hello, Docker Volume!' > /data/hello.txt; tail -f /dev/null"
    stdin_open: true
    tty: true

volumes:
  ubuntu_data:
# Run this container in the background
docker compose up -d
# Open a bash shell in the container and verify the stored data with: ls /data/hello.txt
docker exec -it ubuntu-container bash
# Now we can destroy this container with
docker compose down
# Now let's list all the volumes
docker volume ls
# In this case the name "ubuntu_compose_ubuntu_data" was assigned to our volume (Compose prefixes the volume name with the project directory name)
# To inspect the volume and see where its data is stored on the host, run the command below; the output includes the volume's mount point on your system
docker volume inspect ubuntu_compose_ubuntu_data
# We can verify the location with
sudo ls /var/lib/docker/volumes/ubuntu_compose_ubuntu_data/_data
# To back up the volume, we mount it in another container and tar its contents. The archive is written to a bind-mounted directory on the host
docker run --rm -v ubuntu_compose_ubuntu_data:/data -v /home/pb/backupV:/backup ubuntu tar czf /backup/backup.tar.gz -C /data .
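Restoring works the same way in reverse. The sketch below assumes the backup.tar.gz produced above and unpacks it into the volume (which may be a freshly created, empty volume on another host); adjust the volume name and host path to your setup:
# Mount the target volume and the host backup directory, then unpack the archive into the volume
docker run --rm -v ubuntu_compose_ubuntu_data:/data -v /home/pb/backupV:/backup ubuntu tar xzf /backup/backup.tar.gz -C /data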
Example 2: Back up the volume of a running container
The --volumes-from flag in Docker is used to share volumes between containers. Instead of mounting a volume directly to a container, you can use --volumes-from to mount all the volumes from another container.
# Start the container that has the volume you want to back up
docker run -d --name my_container -v my_volume:/data ubuntu tail -f /dev/null
# The tail -f /dev/null keeps the container running in the background (it's just an example to keep the container alive without doing anything)
# Now we back up the volume by mounting all volumes from my_container into a temporary container
docker run --rm --volumes-from my_container -v /path/to/backup/onHost:/backup ubuntu tar czf /backup/backup.tar.gz -C /data .
Explanation:
--volumes-from my_container: This flag tells Docker to mount all volumes from my_container into the new container. So, if my_container has the volume my_volume mounted at /data, it will be available inside the new container at /data.
Why Use --volumes-from?
- Sharing Volumes Across Containers: This allows you to avoid manually mounting the volume in every container, making it more convenient for backup or other tasks where you need to access data in a volume but don’t want to mount it explicitly.
- Automating Backups: You can keep your containers and backup process separate, and use one container just for the task of backing up the data stored in volumes; a minimal scripted sketch follows below.
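As a rough illustration of that last point, the backup command can be wrapped in a small script and scheduled (for example with cron). The script name, variables, and paths below are placeholders for illustration, not part of the original example, and the script assumes the source container mounts its volume at /data, as above:
#!/bin/sh
# backup-volumes.sh - hypothetical helper that archives the volumes of one container
# Usage: ./backup-volumes.sh my_container /path/to/backup/onHost
set -e
CONTAINER="$1"
BACKUP_DIR="$2"
STAMP=$(date +%Y%m%d-%H%M%S)
# Mount all volumes from the source container plus a host backup directory,
# then write a timestamped archive of /data into the backup directory
docker run --rm --volumes-from "$CONTAINER" \
  -v "$BACKUP_DIR":/backup ubuntu \
  tar czf "/backup/${CONTAINER}-${STAMP}.tar.gz" -C /data .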
Conclusion
Whether you’re sharing hardware with a container for debugging, isolating a container from the internet, or backing up and migrating its data volumes, these techniques give you finer control over your containerized environment.