• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: July 4th, 2023

  • Nah @exu is right: non-IT-focused companies do not have the skills or the desire to reliably set up and maintain these systems. There is no benefit to them creating their own server stack based on a community distro just to save a few bucks.

    Smaller companies will hire MSPs to get them set up and maintain what they need. And medium-to-large companies will want an enterprise solution (IE: RHEL) that they can reliably integrate into their operations.

    This is for a few high-value reasons. Taking Red Hat as an example:

    1. Standardization (IE: they can hire people with Red Hat certifications who will be a few steps ahead in ramping up on internal systems)
    2. Vendor support (IE: if something critical isn’t working, they can get support from a Red Hat technician and have it resolved quickly)
    3. Reliability (IE: all software is backed and tested by Red Hat, and if anything breaks from a package update it’s on Red Hat to fix)

    When lots of money is on the line, companies want as many safety/contingency plans as they can get, which is why Red Hat makes sense.

    The only companies that will roll their own solution are either very small with knowledgeable IT people (smaller startups), or MASSIVE companies that will create very custom solutions and then train their own IT operations divisions (talking like Apple, Microsoft, Amazon levels).

    Not to say what Red Hat did is justified or good, because hampering the FOSS ecosystem is destructive overall, but just putting this into context.





  • Yeah, I saw a post about it a long time ago on Reddit, aimed at users with lots of devices.

    Basically, it is just setting up one or two “central devices” that know all the client devices, rather than linking the client devices to each other individually.

    IE: One server is connected to your phone, laptop, tablet, desktop, etc. But the phone is not directly connected to your laptop or desktop or tablet.

    To be fair I don’t actually know if this is the best approach anymore or if just connecting all of them in a mesh is better 🤷

    Here is a forum post describing it.
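    Assuming this is describing Syncthing’s device model (the original comment doesn’t name the tool, so treat this as a guess), here is a rough sketch of the hub-and-spoke setup using its REST config API. The API key, device IDs, and names are placeholders, not values from the original post:

    ```sh
    # On each client (phone, laptop, tablet, ...): add ONLY the central server as a peer.
    # API_KEY and SERVER_ID are placeholders taken from the local Syncthing GUI settings.
    API_KEY="<local-api-key>"
    SERVER_ID="<central-server-device-id>"

    curl -s -X POST \
      -H "X-API-Key: $API_KEY" \
      -H "Content-Type: application/json" \
      -d "{\"deviceID\": \"$SERVER_ID\", \"name\": \"central-server\"}" \
      http://127.0.0.1:8384/rest/config/devices

    # On the central server: run the same call once per client device ID, then share
    # each folder with all of those IDs. The clients never list each other as peers.
    ```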





  • I daily drive Fedora Silverblue on my laptop and distrobox has been great.

    I have layered only two packages: USBGuard and Distrobox. I run Syncthing in a rootless podman container, and the rest goes through Distrobox.

    I was even able to set up ProtonVPN in Distrobox, and it functions as if it were installed directly on the host (you just need to map your home folder and a few permissions).
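    For reference, a minimal sketch of that kind of setup (the package names are the Fedora ones; the images, ports, and paths are just examples):

    ```sh
    # Layer the two packages on the immutable host (takes effect after a reboot)
    rpm-ostree install usbguard distrobox

    # Syncthing in a rootless podman container (upstream image; ports/paths assumed)
    podman run -d --name syncthing \
      -p 8384:8384 -p 22000:22000 \
      -v ~/Sync:/var/syncthing/Sync:Z \
      docker.io/syncthing/syncthing:latest

    # Everything else goes through a Distrobox that shares $HOME with the host
    distrobox create --name dev --image registry.fedoraproject.org/fedora:40
    distrobox enter dev

    # If an experiment goes wrong, the previous deployment is one command away
    rpm-ostree rollback
    ```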

    I hope that immutable either becomes the standard or at least that all major distros start offering it as an alternative. It makes everything foolproof and makes me much more willing to try new packages and tools, because I can always just roll back.

    The only thing that would really make it perfect is if files in /etc/ were also handled in a similar manner. IE: make changes to configuration files and easily roll back to the defaults at any time.


  • I run everything in rootless containers using systemd service files generated with podman generate systemd.

    Podman Compose is a “community effort”, and Red Hat seems to be less focused on its development (here is their post about it).

    There are ways to get it working, but I find it easier to go with podman containers and pods through systemd, because the majority of documentation (both official and unofficial) leans in that direction.

    I don’t know how much you already know, so here is just a summary of things that worked for me for anyone reading.

    Podman uses the concept of “Pods” to link associated containers together and manage namespaces, networking, etc. The high-level summary for running podman pods through systemd (a concrete sketch follows this list):

    • Create an empty pod with podman pod create --name=<mypod>.
    • Start containers using podman run --pod=<mypod> ... and adjust until the containers are working together within the pod as desired.
    • Use podman generate systemd to create a set of systemd unit files (be sure to read through the options in that man page). This is more reliable than writing unit files by hand because it generates units optimized for the podman workflow.
    • Place the generated systemd unit files in the right location (user vs. system); they can then be started, enabled, and disabled like any other systemd unit.
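    Here is what that flow looks like in practice (the pod name, image, and ports are just examples):

    ```sh
    # 1. Create an empty pod; ports are published at the pod level
    podman pod create --name mypod -p 8080:80

    # 2. Run containers inside the pod until everything works as desired
    podman run -d --pod mypod --name web docker.io/library/nginx:alpine

    # 3. Generate unit files for the pod and its containers
    #    --new recreates the containers on each start; --files writes *.service to the current directory
    podman generate systemd --new --files --name mypod

    # 4. Install them as user units and start the pod service
    mkdir -p ~/.config/systemd/user
    mv pod-mypod.service container-web.service ~/.config/systemd/user/
    systemctl --user daemon-reload
    systemctl --user start pod-mypod.service
    ```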

    Note: for standalone containers that are not linked to or reliant on other containers, you can skip creating the empty pod and drop the --pod=<mypod> flag when starting the container. This results in a single generated service file, and that container will operate independently (see the example below).
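    For example, an independent container might look like this (the name and image are assumptions):

    ```sh
    # No pod needed for a single, independent container
    podman run -d --name syncthing -p 8384:8384 -p 22000:22000 \
      docker.io/syncthing/syncthing:latest

    # Produces a single container-syncthing.service file
    podman generate systemd --new --files --name syncthing
    mv container-syncthing.service ~/.config/systemd/user/
    systemctl --user daemon-reload
    systemctl --user start container-syncthing.service
    ```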

    This post goes over pods as systemd services.

    This doc goes over containers as systemd services.

    The Red Hat Enterprise Linux docs have a good amount of info, as well as their “sysadmin” series of posts.

    Here are some harder-to-find things I’ve had to hunt down that might help with troubleshooting:

    • Important: be sure to run loginctl enable-linger <username>, or rootless pods/containers will be stopped when you log out of that session.
    • If you want a container or pod to run at system startup, you will need to specify the right parameters in the [Install] section of the systemd file; see this doc page. podman generate systemd should take care of this (see the snippet after this list).
    • If you are using SELinux, there is a package called container-selinux that has some useful booleans that can help with specific policies (container_use_devices is a good one if your container needs access to a GPU or similar). Link to repo
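    Putting the linger and [Install] points together for a rootless setup (assuming the pod-mypod.service name from the sketch above):

    ```sh
    # Keep user services running without an active login session
    loginctl enable-linger "$USER"

    # Check that the generated unit has an [Install] section so it can be enabled
    # (user units are typically wanted by default.target)
    grep -A1 '\[Install\]' ~/.config/systemd/user/pod-mypod.service

    # Enable it so the pod comes up at boot for this user
    systemctl --user enable --now pod-mypod.service
    ```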