  • Honestly, what really matters (imo) is that you do offsite storage. Cloud, a friend's house, your parents, your buddy's NAS, whatever. Just get your data away from your "production/main" site.

    For me, I chose cloud for two main reasons. First, convenience: I could use a tool to automate moving data offsite reliably, which keeps my offsite backups almost identical to my main array and makes retrieval easy should I need it. Second, I don't really have family or friends nearby and/or with the hardware to support my need for offsite storage.

    There are lots of pros and cons to each, even before you layer your specific needs and circumstances on top.

    If you can use the additional drives later on in your main array, some other server, or for a different purpose, then it may be worthwhile exploring the drives (my concern would be how easily you can keep the offsite data in sync with the main data). If you don't like it for one reason or another, you can always repurpose the drives and give cloud storage a try. Again, the important thing is to do it in the first place (and encrypt it client side).
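
    For example, one way to handle that client-side encryption is an rclone "crypt" remote layered over whatever offsite remote you end up with. This is just a minimal sketch; the remote names, bucket, and credentials are placeholders:

    ```
    # ~/.config/rclone/rclone.conf (names and bucket are examples)
    [offsite]
    type = b2
    account = <key id>
    key = <application key>

    [offsite-crypt]
    type = crypt
    remote = offsite:my-bucket/backups
    password = <passphrase obscured with `rclone obscure`>
    ```

    Then something like `rclone sync /mnt/array offsite-crypt:` encrypts everything locally before it ever leaves your network.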


  • Well here’s my very abbreviated conclusion (provided I remember the details appropriately) when I did the research about 3 months ago.

    Wasabi - okay pricing, reliable, S3 compatible, no charges to retrieve my data, pay for 1TB blocks (wasn't a fan of this one), penalty for data retrieval before a "vesting" period (if I remember correctly, you had to leave the data there for 90 days before you could retrieve it at no cost; also not a big fan of this one)

    AWS - I'm very familiar with it due to my job, pricing is largely influenced by access requirements (how often and how fast I want to retrieve my data), very reliable, S3 native, charges for everything (list, read, retrieve, etc.). That is the real killer and the largely unaccounted-for cost of AWS.

    Backblaze - okay pricing, reliable, S3 compatible, free retrieval of data up to the same amount that you store with them (read below), pay by the gig (much more flexible than Wasabi). My heartburn with Backblaze was that retrieval stipulation. However, they recently increased it to free retrieval of up to 3x what you store with them, which is super awesome and made my heartburn go away really quickly.

    I actually chose Backblaze before the retrieval policy change and it has been rock solid from the start. It works seamlessly with the vast majority of utilities that can leverage S3-compatible storage. Pricing-wise, I honestly don't think it's that bad.
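
    As a quick illustration of the S3 compatibility, most standard tooling just needs the endpoint pointed at B2. For example, with the stock AWS CLI (the endpoint region below is a placeholder; yours depends on your bucket):

    ```
    # list a B2 bucket with the AWS CLI by overriding the endpoint
    aws s3 ls s3://my-bucket --endpoint-url https://s3.<region>.backblazeb2.com
    ```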

    Hope this helps

  • When you created your containers, did you create a "frontend" and a "backend" docker network? Typically I create those two networks (or whatever names you want), connect all my services (GitLab, WordPress, etc.) to the "backend" network, then connect nginx to that same "backend" network (so it can talk to the service containers). I also add nginx to the "frontend" network (typically of host type).

    What this does is let you map Docker ports to host ports for that nginx container ONLY. Since nginx is on the network that can talk to the other containers, you don't have to forward or expose any ports that aren't required (like 3000 for GitLab) from the outside world into your services. Your containers will still talk to each other over their native ports, but only within that "backend" network (which has no forwarded/mapped ports).

    You would want to set up your proxy hosts exactly like you have them in your post, except that in your Forward Hostname you would use the container name (gitlab, for example) instead of the IP.

    So basically it goes like this:

    Internet > gitlab.domain.com > DNS points to your VPS > Nginx receives requests (frontend network with mapped ports like 443:443 or 80:80) > Nginx checks proxy hosts list > forwards request to gitlab container on port 3000 (because nginx and gitlab are both in the same “backend” network) > Log in to Gitlab > Code until your fingers smoke! > Drink coffee
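
    If it helps, here is a rough sketch of that layout with the plain docker CLI. The network and container names are just examples, and I'm assuming the jc21/nginx-proxy-manager image plus a placeholder GitLab image; swap in whatever you actually run:

    ```
    # two user-defined networks; only nginx gets host port mappings
    docker network create frontend
    docker network create backend

    # service container: backend only, no -p flags, so port 3000 is
    # reachable only by other containers on the "backend" network
    docker run -d --name gitlab --network backend your-gitlab-image

    # nginx proxy manager: on backend so it can reach the services,
    # and the only container that publishes ports to the host
    docker run -d --name nginx --network backend -p 80:80 -p 443:443 \
      jc21/nginx-proxy-manager
    docker network connect frontend nginx
    ```

    In NPM you would then point the proxy host at gitlab:3000 and nothing else needs to be exposed.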

    Hope this helps!

    Edit: Fix typos

  • I went with the OpenSSL CA as cryptography has been a weakness of mine and I needed to tackle it. Glad I did, learned a lot throughout the process.

    Importing certs is a bit of a pain at first, but I just made my public root CA cert valid for 3 years (maybe 5, I can't remember) and put that public cert in a file share accessible to all my home devices. From each device I go to the file share once, import the public root CA cert, and done. It's a one-time, per-device pain, so it's manageable in my opinion.

    Each service gets a 90-day cert signed by the root CA and imported into Nginx Proxy Manager to serve for that service (wikijs.mydomain.io).
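
    The core of that flow is only a handful of OpenSSL commands. Here is a rough sketch; key sizes, lifetimes, and file names are just examples rather than exactly what I run:

    ```
    # one time: root CA key and a self-signed root cert (~3 years)
    openssl genrsa -out rootCA.key 4096
    openssl req -x509 -new -key rootCA.key -sha256 -days 1095 \
      -subj "/CN=Home Root CA" -out rootCA.crt

    # per service: key + CSR, then sign it with the root CA for 90 days
    openssl genrsa -out wikijs.key 2048
    openssl req -new -key wikijs.key -subj "/CN=wikijs.mydomain.io" -out wikijs.csr
    openssl x509 -req -in wikijs.csr -CA rootCA.crt -CAkey rootCA.key \
      -CAcreateserial -days 90 -sha256 \
      -extfile <(printf "subjectAltName=DNS:wikijs.mydomain.io") -out wikijs.crt
    ```

    rootCA.crt is the public cert that gets imported on each device; wikijs.key and wikijs.crt are what go into Nginx Proxy Manager as a custom certificate.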

    For anything externally exposed I use Let's Encrypt for cert generation (within NPM), and internally I use the OpenSSL setup.

    If you document your process and you've done it a few times, it gets quicker and easier.

  • I use rclone as well and was in your position not long ago (looking for an uncomplicated backup solution). I landed on rclone based on feedback and what I read online, spent about an hour reading rclone's documentation, and built a script to do the backups daily.

    OP, if you go the rclone route, I can share my template script with you to get you started.

    The script is pretty simple: it makes sure a log file exists on the system ahead of time, timestamps the run, does the actual backup job, checks for errors, sends a notification via Discord (success or failure), and writes the log output to the file created above.
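
    If it's useful in the meantime, here is roughly the shape of it as a sketch rather than my exact script; the paths, remote name, and Discord webhook URL are all placeholders:

    ```
    #!/usr/bin/env bash
    LOG=/var/log/rclone-backup.log
    SRC=/mnt/array
    DEST=offsite-crypt:backups
    WEBHOOK="https://discord.com/api/webhooks/<id>/<token>"

    # make sure the log file exists, then timestamp the run
    touch "$LOG"
    echo "$(date '+%F %T') backup starting" >> "$LOG"

    # the actual backup job, logging to the same file
    rclone sync "$SRC" "$DEST" --log-file "$LOG" --log-level INFO
    RC=$?

    # basic error checking and a Discord notification either way
    if [ "$RC" -eq 0 ]; then
      MSG="rclone backup succeeded"
    else
      MSG="rclone backup FAILED (exit $RC)"
    fi
    echo "$(date '+%F %T') $MSG" >> "$LOG"
    curl -s -H "Content-Type: application/json" \
      -d "{\"content\": \"$MSG\"}" "$WEBHOOK" > /dev/null
    ```

    Run it daily from cron and you're basically done.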

    Edit: I forgot to mention that Proxmox recently (I don't know exactly when) released something called Proxmox Backup Server (PBS). I have not used it, but I imagine it integrates well with your Proxmox cluster. Even then, you may want to look at a complementary solution to back up that server too.

    Edit: Even if you go with Proxmox Backup Server, you may want to think about how you back up the backup server. Preferably offsite, in my opinion.