A tiny mouse, a hacker.

See here for an introduction, and my link tree for socials.

Joined 11 months ago
Cake day: December 24th, 2023

  • It’s not even the upgrades. Automatic, unattended upgrades have been a thing for a long while, and in general, they work remarkably well. At least in the sense that nothing “breaks”: programs will still work, and start up, and all that.

    But automatic upgrades can change things. Change an icon, move things around, change behaviour, introduce new features, new bugs, and so on. That is the hard part of maintenance, not the technical “go from version A.B to A.C”.

    Most immutable distros I’ve seen aim at improving the A.B->A.C upgrade scenario. They do very little, if anything at all, to keep the system familiar. Because they can’t, unless they control the entire stack. And even if they do, like in the case of the proposed GNOME OS, the UI still changes - often considerably - between major versions. If I maintain a system for others, I can prepare them in advance. If they do it themselves, they do not have that luxury, they’re not going to follow the development of the software they use, and I wouldn’t expect them to do so either. I can do it, and I am doing it, because I’d be doing it anyway for myself.


  • I’m supporting three people, only one of them lives with me. My parents live in a different city, pretty far away (far enough that just randomly visiting them for in-person troubleshooting is not an option). I maintain three separate computers for them. It doesn’t take much effort nowadays, because I used a system I am familiar with, a general purpose distribution, and set it up so that I can manage it remotely.

    I wouldn’t be able to maintain a more limited system for them, because it would lack the tools I need for remote maintenance. Hence my assertion that distributions focused entirely on non-enthusiasts are a futile attempt.
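
    To give a concrete idea of the kind of setup I mean (a hedged sketch, not my actual config; the user name, key, and hostnames are placeholders): key-only SSH access for a dedicated admin user is the bare minimum for remote maintenance.

    ```nix
    # Minimal NixOS sketch for remote maintenance. Everything here is
    # illustrative: user name, key, and policy are assumptions.
    {
      services.openssh = {
        enable = true;
        settings.PasswordAuthentication = false; # key-based logins only
      };
      users.users.admin = {
        isNormalUser = true;
        extraGroups = [ "wheel" ];
        openssh.authorizedKeys.keys = [ "ssh-ed25519 AAAA... admin@my-laptop" ];
      };
    }
    ```

    With something like this in place, a new configuration can be deployed from afar with nixos-rebuild switch --flake .#their-machine --target-host admin@their-machine --use-remote-sudo, without the user ever noticing.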


  • People seem to have wildly different ideas about what others “ought” to be able to do. Like the expectation that changing a tire is something everyone needs to be able to do, or that everyone should file their own taxes. I can do both, but I’m never going to do either, because it’s more practical to let someone with far more expertise and knowledge do it for me.

    When it comes to taxes, for example, doing them myself would take considerable time, double-checking and verifying everything, and it would be a frustrating experience. By hiring an accountant instead, I save a lot of time and frustration, and can turn that time into work, which nets me more money than my accountant’s fee. So why exactly should I do my own taxes?

    And changing tires: in the roughly eight years since we got our car, we’ve only had to change a tire unexpectedly once. We called for help, they arrived in 10 minutes, and meanwhile we nursed our one-year-olds back to sleep. A lot more convenient, and a lot faster, than changing the tire ourselves.

    To bring this back on topic: I believe that it is perfectly fine to be an end-user who can use their system, their programs, but delegate the administrative tasks to someone else. Installing, upgrading, and in general, maintaining an operating system is not a skill that everyone ought to know. It certainly helps if they do, but it should not be a required skill.


  • I’m going to disagree here, partially. I agree that teaching people how to use a computer, at an early age, is important. It’s also important to teach them about failure, and set realistic expectations.

    That has little to do with constant system updates & maintenance. That is an entirely different skillset. Like, I can use my oven just fine, I know how to get around its kinda awkward menu system, to tell it whether I want to heat up frozen pizza, or if I’m baking bread, and stuff like that. I’m okay with learning a new menu system if I have to replace my oven. I will, however, leave the replacement to a professional. I will let a professional fix it too, should it break.

    Same goes for computers and my family: they are perfectly capable of using computers. They can - reluctantly - adapt to change. They do not want to fix or maintain things, however. And that’s fine! It’s not their area of expertise, nor are they interested in it.

    Most end-users are like that: they can use their systems, but don’t want to keep up with the constant change. That’s tiresome and distracting and annoying and error-prone. I believe these things are best done by someone who can smooth out the experience, someone who can help the end-users adapt, too, perhaps even prepare them in advance. That is what we should focus on, rather than trying to force unwilling people into maintenance. That never ends well.


  • I disagree that users won’t do stuff on their own. They will, but they will allocate very little time to it, on average, especially compared to a tech savvy person.

    My experience differs here. My parents will not maintain their systems. They could, especially my Dad (he is a techy, after all), they just don’t want to.

    I think distros must make mundane tasks such as system maintenance hands-off. As an opt-in option, so as not to upset power users. But things such as updates, full system upgrades, disk space reclaiming, … should have a single “do the right thing without being asked” toggle.

    That’s the thing: doing this is impossible, unless the distro controls the entire stack (which they don’t). Updates and upgrades can break things, and they will break things. Or if not break, then change things. You may find it surprising, but most users I’ve talked to, regardless of their expertise, hate it when the software they use daily suddenly changes.

    They just want to get things done. If their tools change under them, that sets them back. Automated updates don’t help there. In fact, automated updates work against this goal. Which is why I maintain my parents’ systems: if anything changes in a way that would break their routine, I can reconfigure it, patch it, work around it some other way, or prepare them in advance. That needs a human element. And this part is why they have no desire to maintain their own systems.

    The technical part of “update all packages” is pretty much a solved problem, and can be automated away in the vast majority of cases. But that’s just a tiny part of the whole system maintenance problem space.
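
    The automatable part really is a few lines on most systems; on NixOS, for example, it is roughly this (a sketch; the schedule and reboot policy are assumptions):

    ```nix
    # NixOS option for unattended upgrades; values are illustrative.
    {
      system.autoUpgrade = {
        enable = true;
        dates = "04:00";     # systemd calendar expression
        allowReboot = false; # never surprise the user with a reboot
      };
    }
    ```

    Note that this only automates the “go from version A.B to A.C” part; it does nothing about changed icons, menus, or behaviour.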

    Things that are a bit more complicated, such as printing or scanning a document, should be more context-aware.

    Now, this is something that has not been a problem for my family in literal decades. The printer is plugged in, they turn it on, press “Print”, done. If it’s out of ink, or the paper jams, they get a notification, can fix it, and try again. Scanning has just worked since forever. We did make an effort to buy hardware that works well with Linux, something I helped with, too.

    Daily tasks are not a problem, and never were. Maintaining the system is, and even there, the pain points are not the automatable parts.

    Immutable distros have made good progress on that front, IMO. But we still need better integration between applications and the desktop environment for things like printing, sharing, and so on. I’m hopeful, though. Generally speaking, things are moving in that direction. Even if we can argue that Flatpak and Snap are a step backward with regard to DE integration, they are also an opportunity to formalize some form of protocol with the DE.

    From personal experience, these distros make no difference whatsoever for the end user. The hard part isn’t upgrading software, that worked fine with traditional packaging too. The hard part is making sure software doesn’t change in a way that breaks the habits and expectations of users. There is no technical solution there, which is another reason distros targeting non-enthusiasts are futile: they solve problems that never were a problem, but leave the real issues unaddressed.

    Flatpak did help me, because when Dad said he wants the latest LibreOffice, and doesn’t care if they completely change the UI, I could just install it for him via flatpak, instead of using Debian’s repo. My Mom, on the other hand, does not want the latest LibreOffice. She does not want it to change, ever. Every major upgrade so far brought in something that required her to re-learn parts of it, so she’s sticking to whatever is in Debian stable, and we set aside a few hours every two years or so, to learn the changed things whenever I upgrade her to the next Debian release.
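
    For the curious, the Flatpak route looks roughly like this (a sketch; on Debian, installing the flatpak package serves the same purpose as the NixOS option):

    ```nix
    # Enable Flatpak support on NixOS.
    { services.flatpak.enable = true; }
    ```

    After adding the Flathub remote, flatpak install flathub org.libreoffice.LibreOffice pulls in the latest LibreOffice, independent of what the distro ships.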

    You see, different people have different needs, and there’s no one-size-fits-all solution. A general purpose operating system like Debian lets me build systems that suit both of my parents. An immutable distro that relies entirely on Flatpak for end-user applications would be unusable for my Mom. It would also be unusable for my wife, because she relies on software I wrote, which I could easily install on her system as a NixOS derivation (something I am familiar with building), but which I would have a much harder time turning into a Flatpak (because I have no clue how to do that, and frankly, I’m not interested in learning it either).


  • I think you don’t distinguish enough between professionals and capables.

    Oh, but I do. The thing you’re not seeing is that there’s a difference between “can do something” and “willing to do something”.

    I am absolutely capable of filing my own taxes, did so in the past, but will never do it again: I hired a professional instead. She can do it faster than me, I can be sure she does it accurately, and according to the latest laws and regulations (so I don’t have to keep myself up to date on those!). Not to mention that I save a ton of time, which I can translate into work, and I end up making more money in that time than the services of my accountant cost. Likewise, I also know how to change a tire. I also know that I never want to do that. If I have to, I will call a professional, because I can, and changing the tire myself is absolutely not necessary.

    Similarly, both my parents are perfectly capable of maintaining their own systems (my Dad spent decades in IT, taught IT at a university, authored successful technical books on his area of expertise, etc; Mom programmed in DBase way back when), but they have no desire to do so. They have better things to do with their time.

    It’s not a question of “can”, but a question of “want”. A whole lot of people could maintain their operating systems. They absolutely do not want to, though. And if someone doesn’t want to do something, the best way to help them is to make it possible for them to avoid doing the thing they don’t want to do. In our particular case, that means maintaining their OS for them, for that helps a ton more than trying to force them to learn or do something they could, but viscerally hate.

    You do not need to maintain your own operating system in order to use it. Rather than trying to force people into maintaining theirs, we should make it easier for friends & family to maintain it for them. That would be a far bigger win for everyone involved.


  • “A capable user is already a willing one.” A whole lot of them aren’t, and that is fine.

    There is a huge difference between being able to use something, being able to fix it, and being willing to fix it.

    Case in point, if my car breaks down, I take it to a professional to fix it. Not because it is magic I have no hope of learning, but because I am absolutely uninterested in it.

    If my pants rip, I take them to a professional, because that’s far more practical than trying to fix them myself.

    Same goes for computers: my Dad is a very capable user. He spent three decades in IT, and authored successful books on subjects that interested him. He would be capable of learning how to maintain his system, but he simply doesn’t want to. It isn’t interesting or fun for him. So I help him by doing it myself.

    My wife is also a very capable user; she can do everything on her computer that she wants. She hates computers, though, and would sooner divorce me than learn how to run apt update. She is a very capable user because I built a system she’s OK with.

    Similarly, she is an amazing cook, and I am not. I am a disaster in the kitchen even when I try. So I simply don’t. The best I can do is throw a frozen pizza in the oven, and I am not interested in becoming more capable than that. Why should she become more familiar with computers, then?

    What I am trying to say is that people have wildly varying interests. We should not expect everyone to be competent at everything they may ever encounter.




  • Indeed. But someone has to maintain a system, and those of us who know what we are doing are much better equipped than those who don’t.

    The fact is that my family needs to use a computer. I have two options: let them try to do so on their own and deal with the fallout, or do it myself. I will choose the latter, not because I want to, but because the alternative is even worse: I can’t help with systems I have no clue about, even less when it is an OS I am not familiar with.

    Thus, I developed a bunch of tooling that makes it almost trivial for me to maintain Linux systems for the family. Fifteen minutes a week on average; I can sacrifice that to make them happy.


  • algernon@lemmy.ml to Linux@lemmy.ml · A Linux Desktop for the family · 5 days ago

    Even if you could, it would change nearly nothing. The average computer user doesn’t want to maintain their system either. They want a system they don’t need to care about, or at worst, a system their friends & family can help with. Thus, the best way for a Linux enthusiast to help their family use Linux is to install and maintain it for them. For that, you need a general purpose distro you’re familiar with, one that’s easy to maintain remotely.

    In other words, distros that target the average computer user are futile, because the target audience is interested in neither installing nor maintaining their systems.

    (And this is what the linked blog post is about, in more words.)


  • The main goal of the author is to explain that the best way to help a non-enthusiast use Linux, is to maintain their system for them, so they don’t have to.

    Use whatever distro you’re most comfortable with to do so. For the author (hi!) that’s NixOS. If it’s Debian, Fedora, Arch, or whatever for you, it makes very little difference for the end-user, they’ll see nothing of it.


  • Currently using postfix + dovecot + rspamd on Debian, but will be migrating to NixOS-mailserver (mostly because I am migrating to NixOS anyway; it’s the exact same stack under the hood, though).

    Regarding self-hosting dying: yes and no. I use a relay for some of my outgoing mail, because I have to communicate with people behind allowlists, and I can’t afford to get myself on one. I do not send much mail, so I comfortably fit into the free plan of my relay of choice (smtp2go). Other than a handful of recipients, I have had no trouble sending email anywhere, and I have much more control over what I receive and how by self-hosting. Even if I had to use a relay for most of my outgoing mail, I’d still self-host my e-mail, because it gives me a whole lot more control and privacy.

    With that said, way back when I started self-hosting, I also had to use a relay for some recipients, for the exact same reason: them using allowlists. Back then it was my university, now it’s my kids’ school (a curious coincidence, I guess). There were always hosts that played a different game. Sure, they’ve concentrated into Google and Microsoft by now, but I can still send e-mail into those systems, even if through a relay. So self-hosting is still possible, and still gives you plenty of benefits.

    I’ve been self-hosting my email for the past… almost 30 years. Today, I think it is easier to do so than it was 30 years ago. There’s more to set up, but those steps are well documented, and with solutions like nixos-mailserver, mostly automated away. The tools got better too! My setup catches a lot more spam now than it did a few decades ago, using a fraction of the resources, and tweaking my spam filters and other properties of the setup is considerably easier too.


  • I self-host my email, and I have one mailbox, but countless addresses. Everything that needs an email address has its own dedicated one. Not because of security considerations (if someone got into any of my aliases, I’d be fucked either way), but because I find it easier to filter and manage.

    Like, if I get an email to randomwebshop@, and it has no relation to said place, I will know that they either sold my data or were compromised. I can then route it to /dev/null, and everyone who tries to spam that address will be gone from my inbox.

    It also makes it easier to tag mail, because I tag based on a property that I control. No reliance on sender, subject, list id or anything that the sender controls.
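
    With the nixos-mailserver stack mentioned in the other comment, per-service addresses are just a list on the account. A sketch (the domain, account, and paths are made up):

    ```nix
    # simple-nixos-mailserver sketch: one mailbox, many addresses.
    # Domain, account name, and secret path are illustrative assumptions.
    {
      mailserver.loginAccounts."me@example.org" = {
        hashedPasswordFile = "/secrets/me.pass";
        aliases = [
          "randomwebshop@example.org"
          "newsletter@example.org"
        ];
        # Or accept everything at the domain, and filter afterwards:
        # catchAll = [ "example.org" ];
      };
    }
    ```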



  • I’m one of those crazy people with / and /home on tmpfs. Setting that up is very easy with Impermanence, but it does require some care and self-control. That is precisely the reason I set it up: I have no self-control, and need the OS to force my hand. Without Impermanence, my root and home fill up with garbage fast. I tend to try and play with a lot of things, and I abandon most of them. With Impermanence, I don’t need to clean up after myself: I delete the git checkout, and all the state, cache, and whatnot the software littered around my system will be gone on reboot.

    In short, Impermanence makes my system have that freshly installed, clean and snappy feeling.

    The whole thing sounds scarier and more complicated than it really is.
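
    For reference, the core of such a setup with the nix-community Impermanence module looks roughly like this (a sketch; the paths and the user name are illustrative, not from my config):

    ```nix
    # Impermanence sketch: everything not listed here evaporates on reboot.
    # Assumes the impermanence module is imported; paths are assumptions.
    {
      environment.persistence."/persist" = {
        directories = [
          "/var/log"
          "/var/lib/nixos"
        ];
        users.alice = {
          directories = [ "data" ".ssh" ];
        };
      };
    }
    ```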


  • So instead of commenting inside of nix files, you put nix files into .org documents and collate them so you can make your nix files an OS and a website and a zettelkasten-looking set of linked annotated nodes.

    Yup! And writing it in Org allows me to structure the configuration any way I like. It makes it a whole lot easier to group things that belong together close to each other, and I never have to fight the Nix language to do so. I can also generate easily browsable, rich documentation that explains what’s what and why, which helps me tremendously, because a year after I installed and configured something, I will not remember how and why I did it that way, so my own documentation will help me remember.

    Generating code from docs (rather than the other way around) also means that I’m much more likely to document things, because the documentation part is the more important part. It… kinda forces a different mindset on me. And, like I said, this allows me to structure the configuration in a way that makes sense to me, and I am not constrained by the limitations of the Nix language. I can skip a tremendous amount of boilerplate this way, because I don’t need to use NixOS modules, repeating the same wrapping for each and every one of them. Also feels way more natural, to be honest.

    You have home on tmpfs. Isn’t that volatile? Where do you put your data/pictures/random git projects? Build outputs? How’s your RAM? (Sorry if I’m missing something obv)

    It is volatile, yes, in the sense that if I reboot, it’s lost. I am using Impermanence, for both /home and /. The idea here is that anything worth saving, will be recorded in the configuration, and will be stored on a persistent location, and will get bind mounted or symlinked. So data, pictures, source code, etc, live on an SSD, and they get symlinked into my home. For example, the various XDG userdirs (~/Downloads, etc), I configured them to live under ~/data, and that dir lives on persistent storage and gets symlinked back.

    My root and /home are both set to 128MB, intentionally small, so that if anything starts putting random stuff there, it will run out of space very fast, and start crashing and complaining loudly, and I’ll know that I need to take care of it: either by moving the data to persistent storage, or by asking whatever is putting stuff there to stop doing that. My /tmp (where temporary builds end up) is 2GB, and sometimes I need to remount it at 10GB (hi, nerdfonts!), but most of the time, 2GB is more than enough.
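
    In NixOS terms, that layout is a few fileSystems entries. A sketch matching the sizes described here:

    ```nix
    # tmpfs root, home, and /tmp; sizes mirror the ones in the text.
    {
      fileSystems."/" = {
        device = "none";
        fsType = "tmpfs";
        options = [ "defaults" "size=128m" "mode=755" ];
      };
      fileSystems."/home" = {
        device = "none";
        fsType = "tmpfs";
        options = [ "defaults" "size=128m" ];
      };
      fileSystems."/tmp" = {
        device = "none";
        fsType = "tmpfs";
        options = [ "size=2g" ];
      };
    }
    ```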

    I have 32GB RAM, but only ~2.5GB is used for tmpfs purposes (2GB of it on /tmp itself), and most of the time, the majority of that is unused and as such available for other things. My wife’s laptop with 16GB RAM uses a similar setup, with 512MB for /tmp, and that works just as well.

    I do have 64GB of swap on a dedicated SSD, though, and that helps a lot. I currently have 3GB of RAM free and 37GB of swap used, but I don’t notice any issues with responsiveness. I don’t even know what’s using my swap! Everything feels snappy and responsive enough.

    What’s your bootup like?

    A few seconds from poweron to logging in. By far the slowest part of it is the computer waiting for me to enter my password.

    ❯ systemd-analyze
    Startup finished in 8.667s (kernel) + 29.308s (userspace) = 37.975s
    graphical.target reached after 29.307s in userspace.
    

    Looking at systemd-analyze blame and systemd-analyze critical-chain, most of that userspace time is due to waiting for the network to come online (18s), and for docker to start up (7s). Most of that happens in parallel, though. Boot to GDM is way faster than that.

    Another commenter mentioned difficulties in setting up specialized tools w/o containerizing, and another mentioned that containers still have issues. Have you run into a sitch where you needed to workaround such a problem? (e.g. something in wine, or something that needs FHS-wrangling)

    I haven’t run into any issues with containers, and I’m using a handful of them. docker, podman, and flatpak all work fine out of the box (after setting up permanent storage for their data, so they don’t try to pull 10GB containers into my 128MB root filesystem :D). Wine… I’m using Wine via Lutris to play Diablo IV, and it has worked without issues so far, out of the box; I didn’t have to fight to make it work.

    I did run into a few problems with some stuff. AppImages for example require running them with appimage-run, but you can easily set up binfmt_misc to automatically do that for you, so you can continue your curl https://example.com/dl/Example.AppImage -o Example.AppImage && chmod +x Example.AppImage && ./Example.AppImage practices after that.
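
    On recent NixOS releases there is a ready-made option pair for this (a sketch; on older releases, a manual boot.binfmt registration with appimage-run achieves the same):

    ```nix
    # AppImage support on NixOS: appimage-run plus a binfmt_misc hook,
    # so ./Example.AppImage can be executed directly.
    {
      programs.appimage = {
        enable = true; # provides appimage-run
        binfmt = true; # register the binfmt_misc handler
      };
    }
    ```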

    There are also cases where downloaded binaries don’t work out of the box, because they can’t find the dynamic linker. I… usually don’t download random third-party binaries, so I don’t often run into this problem. The one case where I did was Arduino tooling. I have a handy script in my (Arduino-powered) keyboard firmware to patch those with patchelf. But if need be, there’s buildFHSEnv, which lets us build a derivation that simulates an FHS environment for the software being packaged. So far, I have not needed to resort to that. Come to think of it… using buildFHSEnv would likely be simpler for my keyboard firmware than the patching. I might play with that next time I’m touching that repo.
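
    A buildFHSEnv wrapper is short, for what it’s worth (a sketch; the name and package list are guesses at what such a toolchain might need):

    ```nix
    # buildFHSEnv sketch: a shell where /lib, /usr, etc. look like a
    # normal FHS distro, so prebuilt binaries find their dynamic linker.
    pkgs.buildFHSEnv {
      name = "arduino-fhs";
      targetPkgs = pkgs: with pkgs; [ zlib libusb1 ]; # assumed deps
      runScript = "bash";
    }
    ```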



  • I’ve been daily driving NixOS for about a year now, switched from over two decades of running Debian. I’ll try to answer your questions from my perspective:

    How much can I grok in a week?

    If you have some experience with functional programming or declarative configs (think Ansible), then it’s a lot easier. You can definitely learn enough in a week to get started. One year in, my Nix knowledge is still very light, and I get by fine. On the other hand, there’s a lot of Nix I simply don’t use. I don’t write reusable Nix modules, and my NixOS configuration isn’t split into small, easily manageable files. It’s a single flake.nix, about 3k lines and 130KB. Mind you, it’s not complete chaos: it is generated from an Org Roam document (literate programming style; my Org Roam files are 1.2MB in size, clocking in at a bit below 10k lines).

    With that said, it took me about a month of playing and experimenting with NixOS in a VM casually, a couple of hours a week, to get comfortable and commit to switching. It’s a lot easier once you’ve switched, though.

    How quick is it to make a derivation?

    For most things, a couple of minutes tops. I found it easier to create derivations than to create Debian packages, and I was a Debian Developer for two decades, with a far deeper understanding of Debian packaging practices. It’s not trivial, but it’s also not hard. The first derivation is maybe a bit intimidating, but the 10th is just routine.

    Regarding make install & co, you can continue doing that. I use project-specific custom flakes and direnv to easily set up a development environment. That makes development very easy. For installing stuff… I’d still recommend derivations. A simple ./configure && make && make install is usually very easy to write a derivation for. And nixpkgs is huge, chances are, someone already wrote one.
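
    As an illustration of how thin such a derivation tends to be (using GNU hello as the stand-in project; stdenv runs the configure, make, and install phases for you):

    ```nix
    # Sketch of a derivation for a classic autotools project.
    # stdenv.mkDerivation runs ./configure && make && make install
    # unless told otherwise.
    { lib, stdenv, fetchurl }:

    stdenv.mkDerivation rec {
      pname = "hello";
      version = "2.12.1";

      src = fetchurl {
        url = "mirror://gnu/hello/hello-${version}.tar.gz";
        hash = lib.fakeHash; # replace with the real hash after the first build
      };
    }
    ```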

    How quick is it to install something new and random?

    With a bit of self control and liberal use of direnv & flakes, near instant.

    How long do you research a new package for?

    https://search.nixos.org/packages, you can search for a package, and you can explore its derivation. The same page also provides search for NixOS options, so you can explore available NixOS modules to help you configure a package.

    Can you set up dev environments quickly or do you need to write a ton of configs?

    Very easy, with a tiny amount of practice. Liberal use of flakes & direnv, and you’re good to go. I can’t comment much on Python, because I don’t do much Python nowadays, but JavaScript, Go, Rust, C, C++ have been very easy to build dev environments for.
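
    The pattern is the same for every language; here is a Go-flavoured sketch (the package choices are assumptions):

    ```nix
    # flake.nix sketch: a per-project dev shell. Enter it automatically
    # with `echo "use flake" > .envrc && direnv allow`.
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

      outputs = { self, nixpkgs }:
        let
          pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in {
          devShells.x86_64-linux.default = pkgs.mkShell {
            packages = [ pkgs.go pkgs.gopls ]; # whatever the project needs
          };
        };
    }
    ```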

    What maintenance ouchies do you run into? How long to rectify?

    None so far. If it builds, it usually works. I do need to read release notes for packages I upgrade, but that’s also reasonably easy, because I can simply “diff” the package versions between my running system and the configuration I just built: I can see which packages were upgraded, and can look up their release notes if need be. In short, about the same effort as upgrading Debian was (where I also rarely ran into upgrade/maintenance gotchas).
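
    The “diff” in question can be done with stock Nix tooling; a session sketch (the hostname is a placeholder, and this needs a NixOS machine, so treat it as illustration):

    ```shell
    # Build the new system without switching to it, then compare closures.
    nixos-rebuild build --flake .#myhost
    nix store diff-closures /run/current-system ./result
    ```

    The output lists per-package version changes, which is exactly what you want to skim before digging into release notes.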

    Do I need to finagle on my own to have /boot encrypted?

    If you use the NixOS installer, then yeah, you have to fiddle with that a bit more than one would like. If you install via other means (e.g., build your own flake and use something like nixos-anywhere to install it), then it’s pretty easy, well supported, and documented.

    Feel free to ask further question, I’m happy to elaborate on my experience so far.


  • Meson and CMake are the two major players I’ve seen alongside autotools. Are they better? In some respects, yes (especially Meson, imo); in others… not really. For a pet project that only targets two platforms, I’d just stick to a handwritten, worst-practices Makefile. You will likely have less trouble with that than with any of the others, simply because you already know it.