As a Volvo fan, that was my first thought too.
US or Japan 7/11? I’m pretty sure they’re different grades of coffee.
My setup is similar. My main “desktop” is a Slackware VM through VNC/guacamole.
Why are you running full VMs for something that can be put in a container? Sounds to me (without having any evidence or proof) like you’re running out of memory and swapping, and that’s what’s making the VMs slow down or stop.
Why not just run your own WireGuard instance? I have a pivpn vm for it and it works great. You could also just put jellyfin behind a TLS terminating reverse proxy.
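For reference, the server side of WireGuard is just one small config file — pivpn generates all of this for you, but here’s roughly what it looks like (keys, interface name, and the 10.6.0.0/24 subnet are placeholders):

```ini
# /etc/wireguard/wg0.conf -- server side (placeholder keys, not real ones)
[Interface]
Address = 10.6.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one [Peer] section per client device
PublicKey = <client-public-key>
AllowedIPs = 10.6.0.2/32
```

Then `wg-quick up wg0` brings it up, and you only have to forward one UDP port.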
I’d suggest Alpine too. Works great for me so far.
Cloudflare zero trust tunnel might be up your alley. Look into that. It’s free but has privacy concerns so do your homework.
It was me. Guess what I’ll be doing today.
This is the way.
Forgive my stupidity, but couldn’t you just use split-horizon DNS and have your internal DNS resolve to your homelab instead of the VPS? Personally, that’s what I’ve done. So external lookups for sub.domain.tld go one way and internal lookups go to 10.10.10.x.
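If anyone wants to try this, the dnsmasq version is basically a one-liner (hostname and IP here are made up — point it at whatever your homelab box actually is):

```
# /etc/dnsmasq.d/split-horizon.conf
# internal clients resolve this name to the homelab box;
# external clients still get the VPS from public DNS
address=/sub.domain.tld/10.10.10.5
```

Pi-hole, Unbound, etc. all have an equivalent local-override feature.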
So, docker networking uses its own internal DNS. Keep that in mind. You can (and should) create docker networks for your containers. My personal design is to have only nginx exposing port 443 and have it proxy for all the other containers inside those docker networks. I don’t have to expose anything else. I also find nginx proper to be much easier to deal with than NPM or traefik or caddy.
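Rough sketch of that layout in compose form (service and network names are made up):

```yaml
networks:
  web:

services:
  nginx:
    image: nginx:stable
    ports:
      - "443:443"            # the ONLY published port on the host
    networks: [web]
  jellyfin:
    image: jellyfin/jellyfin
    networks: [web]          # no ports: section -- only reachable on the
                             # docker network, where nginx resolves it by
                             # service name via docker's internal DNS
```

Inside the nginx config you just `proxy_pass http://jellyfin:8096;` — the container name is the hostname.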
Why did you register two separate domains instead of using a wildcard cert from LE and just using subdomains?
This is what I ended up going with. I’ll just have to keep an eye on disk space.
I’ll have to check this out. Have you run this in a container or just a native app?
Kind of. I’m thinking something along the lines of sonarr/radarr/etc but with the ability to play/stream the podcast instead of downloading it. I tend to use web interfaces of stuff like that at work and can’t really use my phone. Maybe I’ll have to look into a roll-your-own solution using some existing stuff. Was hoping I wouldn’t have to.
Use a USB drive or otherwise download this on the Win side and get it over to your Ubuntu side: linky Install that package and you should be able to build your kernel module using dkms.
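Once the package is on the Ubuntu side, it’s roughly this (package and module names are placeholders — substitute whatever the driver is actually called):

```
sudo dpkg -i ./some-driver-dkms.deb      # install the dkms package
sudo dkms status                         # confirm the module is registered
sudo dkms build some-driver/1.0          # build against the running kernel
sudo dkms install some-driver/1.0
sudo modprobe some_driver                # load it
```

The nice part of dkms is it’ll rebuild the module automatically on kernel updates.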
links is pretty lightweight. All joking aside, I’d look at adding RAM to it if possible. That’s probably going to help the most.
Also, to add to this: your setup sounds almost identical to mine. I have a NAS with multiple TBs of storage and another machine with plenty of CPU and RAM. Using NFS for your docker share is going to be a pain. I “fixed” my pains by defining the shares inside my docker-compose files instead. What I mean by that is specify your share in a top-level volumes section:
```yaml
volumes:
  media:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.0,ro"
      device: ":/mnt/zraid_default/media"
```
Then mount that volume when the container comes up:
```yaml
services:
  ...
    volumes:
      - type: volume
        source: media
        target: /data
        volume:
          nocopy: true
```
This way, I don’t have to worry as much. I also use local directories for storing all my container info. e.g.: ./container-data:/path/in/container
Goddamnit, I didn’t even think about this when I saw they were doing the mass delete. Here’s to hoping that they’ll at least keep the videos up. Waaaay too much stuff on YT to lose it all. Anyone know if archive.org is backing them up?