Currently I’m planning to dockerize some web applications, but I haven’t found a reasonably easy way to create the images, host them in my repository, and pull them onto my server.
What I currently have is:
- A local computer with a directory where the application that I want to dockerize is located
- A “docker server” running Portainer without shell/ssh access
- A place where I can upload/host the Docker images and from which the “Docker server” can pull them
- Basic knowledge on how to write the needed Dockerfile
What I now need is a sane way to build the images WITHOUT setting up a fully featured Docker environment on the local computer.
Ideally something that lets me build and upload the images without “littering Docker-related files all over my system”.
Something like a VM that resets on every start maybe? So … build the image, upload to repository, close the terminal window, and forget that anything ever happened.
What is YOUR solution to create and upload Docker images in a clean and sane way?
For the littering part, just type
crontab -e
and add the following line:
@daily docker system prune -a -f
As a user with root permissions, or as root?
You shouldn’t need root permissions to run docker; you can just create a docker group and add your user to it. That lets you run docker without sudo.
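The standard setup looks like this (these are the commands from Docker’s post-install docs; newgrp just saves you a re-login):

sudo groupadd docker             # create the docker group if it doesn’t exist yet
sudo usermod -aG docker "$USER"  # add your user to it
newgrp docker                    # pick up the new group without logging out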
I use Gitea and a Runner to build Docker images from the projects in the git repo. Since I’m lazy and only have one machine, I just run the runner on the target machine and mount the docker socket.
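Roughly, that setup is a single container; a sketch assuming Gitea’s act_runner, with a made-up instance URL. Mounting the host socket means build jobs talk to the host daemon directly instead of needing Docker-in-Docker:

docker run -d --name act_runner \
  -v /var/run/docker.sock:/var/run/docker.sock \       # hand the runner the host daemon
  -v "$PWD"/runner-data:/data \                         # persist runner registration
  -e GITEA_INSTANCE_URL="https://git.example.com" \     # illustrative URL
  -e GITEA_RUNNER_REGISTRATION_TOKEN="<token from the Gitea admin UI>" \
  gitea/act_runner:latest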
BTW: If you manage to “litter your system with docker related files”, you fundamentally misused Docker. That’s exactly what Docker is supposed to prevent.
Self-hosting your own CI/CD is the key for OP. Littering is solved too, because litter is only a problem on long-running servers, which are an anti-pattern in a CI/CD environment.
I already have Forgejo (a soft fork of Gitea) in a Docker container. I guess I need to check how I can access the very same Docker server that it is itself hosted on …
With littering I mean the various Docker dotfiles and dot-directories in the user’s home directory and other system-wide locations. When I installed Docker on my local computer, it created various images, containers, and volumes as soon as I built a single image.
This is what I want to prevent. Neither do I want nor do I need a fully-featured Docker environment on my local computer.
I build, configure, and deploy them with nix flakes for maximum reproducibility. It’s the way you should be doing it for archival purposes. With this tech, you can rebuild any docker image identically to today’s in 100 years.
I knew you were going to mention nix before even reading your post.
::Robert Redford nodding gif::
I use Drone CI. You can also use Woodpecker, which is a community fork of Drone CI. https://github.com/woodpecker-ci/woodpecker
For local testing: build and run tests on whatever computer I’m developing on.
For deployment: I have a self-hosted GitLab instance in a Kubernetes cluster. It comes with a registry all set up. Push the project and let the CI/CD pipeline build, test, and deploy through staging into prod.
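In that setup the build job boils down to a few docker commands; a sketch, where the $CI_* variables are the ones GitLab injects into every job and everything else is illustrative:

docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"   # log in to the built-in registry
docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .                     # tag with the commit hash
docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"                           # push for staging/prod to pull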
Docker, Jenkins, Docker-in-Docker (dind)
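For anyone unfamiliar with dind: the build daemon itself runs in a throwaway container, so nothing sticks to the host. A rough sketch (TLS is disabled here purely for brevity; the docker:dind and docker:cli images are the official ones, the rest is illustrative):

docker run -d --name dind --privileged -e DOCKER_TLS_CERTDIR="" docker:dind   # throwaway daemon
docker run --rm --link dind:docker -e DOCKER_HOST=tcp://docker:2375 \
  -v "$PWD":/src -w /src docker:cli \
  docker build -t myapp:latest .                                              # build against the inner daemon
docker rm -f dind                                                             # and everything vanishes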
Nix + dockerTools.
Doesn’t even need docker, and if built with flakes I don’t even have to check out the repo.
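A sketch of that flow; the flake ref and registry are made up, and the exact archive form depends on which dockerTools builder the flake uses:

nix build github:you/yourapp#dockerImage         # builds the image tarball straight from the flake ref
docker load < result                             # if you do have a local docker daemon
skopeo copy docker-archive:result \
  docker://registry.example.com/yourapp:latest   # or push it daemon-less with skopeo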
I use podman, and the standalone tool “buildah” can build images from Dockerfiles, while the tool “skopeo” can upload them to an image registry.
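A sketch of that pair in action; the image and registry names are illustrative:

buildah bud -t myapp:latest .                    # build from the Dockerfile in the current dir
skopeo copy containers-storage:localhost/myapp:latest \
  docker://registry.example.com/myapp:latest     # push from local container storage to the registry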
VM with a docker build environment.
As for “littering”, a simple
docker system prune -f
after a build gets rid of most of it.

pycharm + self-hosted docker registry
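With a self-hosted registry, the upload step is just a tag and a push; a rough sketch, where registry.example.com stands in for the real address:

docker tag myapp:latest registry.example.com/myapp:latest   # retag for the private registry
docker push registry.example.com/myapp:latest               # upload; the server can now pull this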
Nowadays, I build them locally and upload stable releases to the registry. In the past I used GitHub runners to do it, but building locally is just easier and faster for testing.
I use portainer, and when I deploy an image, I write a short bash script for it.
- stop the container if it’s running
- pull the new image
- run a fresh container from it
This lets me easily do updates. I have a script for each image I run; it’s less than a dozen. They’re all from public repositories.
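Each script is roughly a minimal sketch like this (the image name and port are illustrative; every real script hard-codes its own values):

#!/usr/bin/env bash
set -euo pipefail
IMAGE="nginx:latest"   # illustrative; each script pins its own image
NAME="nginx"
docker stop "$NAME" 2>/dev/null || true   # stop the old container if it’s running
docker rm "$NAME" 2>/dev/null || true     # remove it so the name is free
docker pull "$IMAGE"                      # fetch the newest image
docker run -d --name "$NAME" --restart unless-stopped -p 8080:80 "$IMAGE"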