My go-to for this is a plain Debian or Ubuntu container with Cockpit and the 45Drives file sharing plugin. It’s straightforward to set up and works well.
You can set maintenance schedules in Uptime Kuma, and alerts won’t be sent out during those windows. I use that for the window each night when my backup routines run. It seems like a decent cross-platform workaround.
It sounds like you’ve got your solution already, but just in case someone stumbles on this later, I thought I’d mention autofs.
I’m coming to prefer it over fstab entries because it handles disconnections nicely and attempts to reconnect. Worth checking out for those who haven’t played with it.
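As a minimal sketch of what that looks like (assuming an NFS share; the server address and paths here are made up), you add a line to /etc/auto.master:

/mnt/nas /etc/auto.nas --timeout=60

and then describe the share itself in /etc/auto.nas:

media -fstype=nfs4 192.168.1.10:/tank/media

After a systemctl restart autofs, the share mounts itself the first time anything touches /mnt/nas/media and unmounts again after 60 idle seconds.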
Could be. If that’s the case, it’s nothing I’ve noticed. I’ve got a 32GB VM and I’m running a bunch of LXC and Docker containers on it without issue.
I’ve never heard anyone else mention them, but I’ve had really good luck with https://www.ssdnodes.com for the past several years. I don’t recall ever needing their support, but I did have a policy question before signing up and they were pretty quick to reply. I think I found them on LowEndBox.
I second mailcow. It’s what I’ve been using for years and it’s pretty great.
One thing I’ll add is before you take the plunge, make sure your VPS address isn’t on a block list somewhere. Pay a visit to mxtoolbox.com and you should find some resources there.
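You can also do a quick check from the command line with dig against one of the DNSBLs; the IP here is a placeholder, and you reverse your VPS’s octets:

dig +short 7.113.0.203.zen.spamhaus.org

An empty answer means you’re not on that list; a 127.0.0.x response means you are.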
I’m a fan of the UniFi and Omada lines, but for your use case, I’d be looking for any AP that could run OpenWRT. That’s a super-powerful Linux-based router OS that meets all your needs and will present a nice web interface for each AP, no controller needed.
Check the project’s site for hardware compatibility, but I’ve had good luck with the GL.iNet travel routers and I bet some of their bigger models would do the trick for you.
I completely agree with this. Seems like a stellar use for either Cloudflare Tunnels or Tailscale’s similar Funnel feature.
Connect it only to the gramos deployment and that will be the only piece of your setup available publicly.
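For the Cloudflare Tunnel side, a rough sketch from the cloudflared CLI (the tunnel name and hostname below are made up, and the local port depends on where your service listens):

cloudflared tunnel create gramps
cloudflared tunnel route dns gramps tree.example.com
cloudflared tunnel run --url http://localhost:80 gramps

That publishes just that one service without opening any inbound ports on your network.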
I have a couple of older Minis in my Proxmox cluster. One’s a 2012 model and the other is a 2018. They both run great (and the 2018’s got 64GB of RAM and 10Gb Ethernet). I’m not sure I’d go hunting for them for a homelab, but they’re great to repurpose.
A bind mount kind of shares a directory on the host with the container. To do it, unless something’s changed in the UI that I don’t remember, you have to edit the LXC config file and add something like:
mp0: /path/on/host,mp=/path/in/container
I usually make a sharing dataset and use that as the target.
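If you’d rather not edit the file by hand, the pct tool can add the same entry from the Proxmox host’s shell (the container ID and paths here are just examples):

pct set 101 -mp0 /tank/shares,mp=/mnt/shares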
From that prompt, type ls -l. That will show you a listing of the items in the /var/www/html directory, and there will be columns for the user and group that own each file. It will most likely say www-data.
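As a made-up example of what that output looks like when the ownership is right:

ls -l /var/www/html
-rw-r--r-- 1 www-data www-data  612 Jan  1 12:00 index.html
drwxr-xr-x 2 www-data www-data 4096 Jan  1 12:00 assets

The third and fourth columns are the owning user and group. If files ended up owned by root or your own user instead, something like chown -R www-data:www-data /var/www/html will set them back.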
You could likely use dd or Clonezilla to create a duplicate of your boot drive and boot your laptop right from that, but that’s not quite what you’re after.
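If you ever do want to try the dd route, it’s a one-liner (device names are examples; triple-check them with lsblk first, since swapping if and of is destructive):

dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync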
There are some distros lately that use a declarative config file to set the whole system up, which I think is much more what you have in mind. The big ones that come up a lot are NixOS and Fedora Silverblue. Maybe one of those would be to your liking.
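To give a flavor of the declarative style, a tiny fragment of a NixOS /etc/nixos/configuration.nix might look like this (the package picks are just examples):

{ config, pkgs, ... }:
{
  services.openssh.enable = true;
  environment.systemPackages = with pkgs; [ git vim ];
}

You apply it with nixos-rebuild switch, and the same file reproduces the same system on fresh hardware.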
How about option 3: let Proxmox manage the storage and don’t set up anything that requires drive passthrough.
TrueNAS and OMV are great, and I went that same NAS-in-a-VM route when I first started setting things up many years ago. It’s totally robust and doable, but it’s also a pretty inefficient way to use storage.
Here’s how I’d do it in this situation: make your zpools in Proxmox, create one dataset for VM storage and another for file sharing, and then make an LXC container that runs Cockpit with 45Drives’ file sharing plugin. Bind mount the file-sharing dataset into the container, and you get the best of both worlds: incredibly flexible storage and a great UI for managing Samba shares.
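In rough commands (the pool, dataset, and container ID names are placeholders):

zfs create tank/vm-disks
zfs create tank/shares
pct set 101 -mp0 /tank/shares,mp=/mnt/shares

Point PVE’s VM storage at tank/vm-disks and let the Cockpit container manage Samba out of /mnt/shares.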
That’s awesome, I’ll definitely be interested to see how it all works out.
Yeah, I started working on it once a couple years ago and getting it spun up was a chore. Life got busy and I never finished.
That imapbox looks pretty interesting. Thanks for tracking that one down.
Not my reply, but I’ve also had mixed results playing with Netmaker. It’s a project I really want to like, but getting clients to work together is sometimes finicky. It’s a young project, so maybe the kinks will get worked out. I do like the admin UI.
If you’re looking for something more or less in the same footprint, I understand those cheap Wyze cameras can be used. There are alternative firmwares that can be flashed to them to open up the RTSP stream to whatever self-hosted recorder you’d like. I haven’t tried it, but I’ve heard it mentioned on the Self Hosted podcast.
So I think the way I would want to do this is with something like mailpiler (https://www.mailpiler.org/). It’s been on my long list of things to dive into for a while.
It stands for managed service provider, which translates more or less to a company that handles IT for other companies.
There was a recent conversation on the Practical ZFS discourse site about poor disk performance in Proxmox (https://discourse.practicalzfs.com/t/hard-drives-in-zfs-pool-constantly-seeking-every-second/1421/). Not sure if you’re seeing the same thing, but it could be that your VMs are running into the same too-small volblocksize that PVE uses to make zvols for its VMs under ZFS.

If that’s the case, the solution is pretty easy. In your PVE datacenter view, go to Storage and create a new ZFS storage pool. Point it at the same zpool/dataset as the one you’ve already got and set the block size to something like 32k or 64k. Once you’ve done that, move the VM’s disk to that new storage pool.
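In rough CLI terms (the storage name here is made up, point --pool at whatever zpool/dataset you’re already using, and 100/scsi0 stand in for your VM ID and disk):

pvesm add zfspool bigblocks --pool rpool/data --blocksize 64k --content images,rootdir
qm move_disk 100 scsi0 bigblocks

The same steps are available in the GUI under the datacenter Storage view and the VM’s hardware tab.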
Like I said, not sure if you’re seeing the same issue, but it’s a simple thing to try.