I run my containers in an LXC on Proxmox (yes, I've heard I should use a VM, but it works…)
For data storage (Syncthing, Jellyfin…) I create volumes inside the LXC. But I was wondering if this is the best way?
I started thinking about restoring backups. The Docker backups can get quite large with all the user data, so I was wondering if a separate "NAS" VM with NFS shares makes more sense. Restoring/cloning the Docker LXC would then be faster, for troubleshooting, and I could restore the user data separately.
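For anyone curious what that split could look like: a rough sketch (hostnames, paths, and the container ID are hypothetical) of exporting a share from a NAS VM and passing it into the Docker LXC with Proxmox's `pct` tool.

```shell
# On the NAS VM: export a data directory over NFS
# (append to /etc/exports, then reload the export table)
echo '/srv/userdata 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On the Proxmox host: mount the share and bind it into the LXC (ID 101 here)
mount -t nfs nas.lan:/srv/userdata /mnt/userdata
pct set 101 -mp0 /mnt/userdata,mp=/data

# Backups of CT 101 now contain only the OS + app config;
# the bulky user data lives on (and is backed up from) the NAS.
```

With that layout, restoring or cloning the container for troubleshooting only moves the small OS disk.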
What do you guys do?
I run my dockers all in one VM, with persistent volumes over NFS. That way the entire thing could take a dump and as long as I have the nfs volume, we’re Gucci.
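If it helps, the NFS-backed volume part can be done directly in Docker with the built-in `local` driver; a sketch assuming a server at `nas.lan` exporting `/srv/appdata` (both made up):

```shell
# Create a named volume backed by NFS
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=nas.lan,rw,nfsvers=4 \
  --opt device=:/srv/appdata \
  appdata

# Containers using the volume write straight to the NAS,
# so the VM itself stays disposable
docker run -d -v appdata:/config jellyfin/jellyfin
```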
If you're using LXC and your filesystem is BTRFS, you can use the built-in snapshots.
Yes, before doing major changes I usually take a snapshot
I listened to the https://thehomelab.show/ podcast today, and they mentioned that before major upgrades you can create a clone VM from the latest backup and test the upgrade before doing it for real. That way you both ensure a safe upgrade and verify that your backup is actually restorable.
It sounded like a good idea, but it got me thinking about the size of my LXC filled with user data… so I was wondering if I was doing it wrong
With BTRFS you can take a snapshot, upgrade, and if things go wrong, roll back to the snapshot. Snapshots are incremental, so you won't have issues with your data.
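The snapshot-then-rollback flow is only a few commands; a sketch assuming the data lives in a BTRFS subvolume at `/srv/app` (hypothetical path):

```shell
# Take a read-only snapshot before the upgrade
btrfs subvolume snapshot -r /srv/app /srv/app-pre-upgrade

# ... run the upgrade, test ...

# If it went wrong: swap the live subvolume for the snapshot
mv /srv/app /srv/app-broken
btrfs subvolume snapshot /srv/app-pre-upgrade /srv/app
btrfs subvolume delete /srv/app-broken
```

Because snapshots are copy-on-write, taking one is instant and costs almost no space until the data diverges.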
I use unprivileged LXCs for everything I have running in my Proxmox.
Plex, Syncthing, rclone, MotionEye, pyLoad, all in separate LXCs on the boot drive.
All their data is on my mirrored RAID, including the LXC backups. The rclone LXC backs up the important data to my cloud drive.
Do you use a reverse proxy?
One of the reasons I use a single LXC is that I can reverse-proxy containers without exposing ports/HTTP to the LAN; it seemed like a good feature to me.
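In case the pattern is useful to others, here's a sketch (image names and ports are just examples): the app containers join a private Docker network with no published ports, and only the proxy is exposed to the LAN.

```shell
# Private network for proxied apps
docker network create proxy_net

# App container: reachable only from inside proxy_net (note: no -p flags)
docker run -d --name jellyfin --network proxy_net jellyfin/jellyfin

# Reverse proxy: the only container publishing a port.
# It reaches the app as "jellyfin:8096" by container name over proxy_net.
docker run -d --name caddy --network proxy_net -p 443:443 \
  -v ./Caddyfile:/etc/caddy/Caddyfile caddy
```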
No reverse proxy. On the LAN everything is visible and accessible.
No ports are open to the WAN; I connect via my router's VPN from outside.
It’s not always possible but it’s generally good practice to configure your applications to use external storage rather than file systems - MySQL/PostgreSQL for indexable data, and S3-clones like MinIO for blob storage.
One major reason for this is that these systems generally have data replication and failover redundancy built in. So you can have two or more physical servers, run an instance of each type of server on each, and have them stay synchronized. If one server goes down, the disks crash, or you need to upgrade, you can easily rebuild a set of redundant servers without downtime, and all you need to do is save the configurations (and take notes!)
Like I said, not always possible, but in general the more an application needs to store "user data", the more likely it is to support one of the above as a backend storage system. That significantly reduces the number of application servers that need to be backed up, and may remove the need for NFS etc. to separate the data.
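As a concrete example of the blob-storage side, here's roughly what working against a self-hosted MinIO instance looks like with the `mc` client; the endpoint, credentials, and bucket names are placeholders:

```shell
# Register a self-hosted MinIO endpoint (credentials are placeholders)
mc alias set homelab http://minio.lan:9000 ACCESS_KEY SECRET_KEY

# Create a bucket and copy data into it
mc mb homelab/userdata
mc cp --recursive /srv/photos/ homelab/userdata/photos/

# Mirror it to a second server for redundancy
# (assumes a second alias "backup" configured the same way)
mc mirror homelab/userdata backup/userdata
```

Applications that speak S3 natively just get the endpoint and keys in their config and handle the rest themselves.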
Interesting! I felt S3 was more of a business cloud storage API.
I did a quick search, and it seems neither Syncthing nor Jellyfin is compatible with S3. What do you do in these cases?
I'm not directly familiar with either, but Syncthing seems to be about syncing/backing up files, so I'm not entirely surprised it's file-oriented, and Jellyfin looks like it's less about user-maintained content and more about being a server of content. So I'm not entirely surprised neither supports S3/MinIO.
Yeah, it took me a while to realize what S3 is intended for too. But you'll find "blob storage" is now a major part of most cloud providers, whether they support the S3 protocol (which is Amazon's) or their own, and it's meant to be used precisely the way we're talking about: user data. Things clicked for me when I was reading the Dovecot manuals and found S3 was supported as a first-class back-end storage system, like maildir.
I’m old though, I’m used to this kind of thing being done (badly) by NFS et al…
Huh. I recently set up a local Dovecot for archiving old emails, but not S3.
I'm curious: when you work on a document, how does that work? Is it a file on your hard drive, have you mounted a bucket somehow, or do you sync via a RESTful API somehow?
Rclone can do this for you.
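For example (remote name and paths are placeholders), rclone can present a bucket as a normal folder, so documents open like local files:

```shell
# One-time: configure an S3-compatible remote interactively
rclone config

# Mount the bucket as a local directory; writes are cached locally
# and uploaded in the background
rclone mount s3remote:documents /home/me/Documents \
  --vfs-cache-mode writes --daemon

# Or do one-way syncs instead of mounting
rclone sync /home/me/Documents s3remote:documents
```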