Hiker, software engineer (primarily C++, Java, and Python), Minecraft modder, hunter (of the Hunt Showdown variety), biker, adoptive Akronite, and general doer of assorted things.

  • 3 Posts
  • 531 Comments
Joined 11 months ago
Cake day: August 10th, 2023


  • "And then you look at real life and notice that code everywhere is slow, bloated and inefficient."

    That’s not true in practice. I mean, that code does exist. However, the vast majority of code is reasonably performant.

    Not everyone is an expert at optimization and that’s fine … we’d have a lot less software in general if only the best of the best were allowed to author it.

    It would be great if more things went back to native (or at least away from "I need an entire web browser for my app to function"); that, to me, is wasteful… But a few hundred MB for a program as large, complicated, and feature-rich as LibreOffice is not.

    "Terrible analogy. A better equivalent is someone renting a garage to store stuff inside and now, because they have so much space, there's that urge to fill it, whether it makes sense to or not."

    No, that’s … just wrong. It’s not like people are just writing code and leaving it there to do nothing except increase code size or are actively trying to fill the drive.

    "It's usually the other way around. As a rule of thumb, less code = smaller size = faster execution. In theory, 1k lines of code will require less computation, less processing, than 10k."

    That's not inherently true, though it is a common misconception/oversimplification. When you do things like code inlining, you increase code size (because you're taking that function's code and having your compiler copy it to a bunch of call sites), but the increased locality speeds things up. There's a reason -Os and -O3 are not the same option.

    Now sure, executing fewer instructions is generally better than executing more localized code (though even that can be wrong given processor caches and relative instruction speeds). Lots of programs have added features that you might not use, but that doesn't really "hurt you"; it's not the source of your program's or your computer's slowness, it's just some bytes on the drive.
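
    To make the size/speed tradeoff concrete, here's a minimal sketch (hypothetical file and numbers, not from any real project): a small helper that an optimizer will typically inline at -O3, copying its body into the call site for speed, while -Os stays more conservative to keep the binary small.

    ```cpp
    // Hypothetical example -- compile the same file two ways and compare:
    //   g++ -O3 mix.cpp -o fast    (speed-first: aggressive inlining, bigger binary)
    //   g++ -Os mix.cpp -o small   (size-first: less inlining, smaller binary)
    #include <cstdint>
    #include <cstdio>

    // Small, frequently called helper -- a prime candidate for inlining.
    static std::uint64_t mix(std::uint64_t x) {
        x ^= x >> 33;
        x *= 0xff51afd7ed558ccdULL;
        return x ^ (x >> 29);
    }

    int main() {
        std::uint64_t acc = 0;
        for (std::uint64_t i = 0; i < 100'000'000ULL; ++i)
            acc += mix(i);  // at -O3 this call is usually replaced by mix()'s body
        std::printf("%llu\n", static_cast<unsigned long long>(acc));
        return 0;
    }
    ```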

    We're a long way from the Unix-style "everything is a small program that gets piped into other programs to do interesting things" days. That paradigm just doesn't work for GUI software. Nobody does that because … normal folks would rather have one office program than have to go shop for 275 programs so that they can have separate programs to edit the document, print the document, convert the document to PDF, update calculations in their spreadsheet, run macros, etc. (which, if you use all/most of them, would likely be more expensive in terms of disk space anyways).


  • It’s an invented problem. A program takes what a program takes. Everyone cares way more about the code being legible, the code being fast enough, and the code not using a ton of memory (and even that last one is kind of shrugged off depending on context).

    Applications that take 3 MB take 3 MB because they do next to nothing, or because they do it with a bunch of shared libraries … which is a whole other dependency-management mess just to save a few MB on a drive.

    There's also a huge difference between being wasteful with something that pollutes the planet en masse and isn't renewable, like gasoline (which is the only reason you'd be upset about that now), and wasting a few MB on a drive.

    The equivalent of your complaint (3 MB vs 200 MB) is complaining about a person taking a trip to the grocery store… It's insignificant and often necessary.

    You can say that program does way more than you need, but … nobody is catering to “only what you specifically need” and using the larger program almost certainly covers your needs.

    Furthermore, like I already said, making things smaller often makes them slower… Since CPU time is more expensive to improve than storage, of course things are bigger; that's the tradeoff more people care about. Some video games take that to an extreme with uncompressed files and 250 GB install footprints … but 200 MB?



  • Kopia uses content-addressable storage. So basically, when it copies things, it only copies data that is new; files that haven't changed will not be overwritten.
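
    As a toy illustration of that idea (just a sketch, not Kopia's actual code): content is keyed by a hash of its bytes, so a block that's already in the store never gets written a second time.

    ```cpp
    // Toy content-addressable store: blocks are keyed by a hash of their content,
    // so unchanged data is never stored twice. Real tools (Kopia included) use a
    // cryptographic hash; std::hash here is purely for illustration.
    #include <functional>
    #include <iostream>
    #include <string>
    #include <unordered_map>

    struct Store {
        std::unordered_map<std::size_t, std::string> blocks;

        // Returns true only if the block actually had to be written.
        bool put(const std::string& data) {
            std::size_t id = std::hash<std::string>{}(data);
            return blocks.try_emplace(id, data).second;
        }
    };

    int main() {
        Store store;
        std::cout << store.put("chapter 1") << '\n';  // 1: new content, copied
        std::cout << store.put("chapter 1") << '\n';  // 0: already present, skipped
        std::cout << store.put("chapter 2") << '\n';  // 1: changed content, copied
    }
    ```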

    You kind of need to run the verification command on both the source and the “backup copy” for maximum paranoia. If you’re running it on a local copy, that should be a relatively fast process as you don’t need to download stuff.

    You'd basically connect on the command line to the copy you just updated via sync-to and then ask kopia to verify 100% of the file integrity … it should then run through everything and make sure it matches what's supposed to be there. I'm not sure how you fix it if it detects something wrong; I've yet to run into that … I'm sure there's a way 🙂
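
    Roughly like this (a sketch from memory; the path is made up and the exact flags can vary between Kopia versions, so check kopia snapshot verify --help first):

    ```
    # push new/changed data to the offline copy
    kopia repository sync-to filesystem --path /mnt/backup-drive/kopia-repo

    # connect to that copy and verify all of it
    kopia repository connect filesystem --path /mnt/backup-drive/kopia-repo
    kopia snapshot verify --verify-files-percent=100
    ```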

    You could also use two backup drives and sync to both; then, if you get an error restoring a particular file from one, you could in theory restore it from the other. A ZFS pool with redundant copies and/or a RAID-1, RAID-5, or RAID-6 style setup could also help … but most people aren't going to run an entire NAS just to turn it on periodically and back up their data "offline". Most people are going to be better served (IMO) by cloud storage like B2 (where bitflips aren't really a concern) or a NAS (where bitflips are similarly a minimal concern, ideally in another location), combined with a periodically updated offline copy (on, say, an external hard drive); that should be enough to protect most people's data well.

    Also going to link to what I'm talking about:


  • Yes, WireGuard was designed to fix a lot of these issues. It does change the equation quite a bit. I agree with you on that (I kind of hinted at it but didn’t spell that out I suppose).

    That said, WireGuard AFAIK still only works well with static IPs and becomes a PITA once dynamic IPs are in play. I think some of that is mitigated if the device being connected to has a static IP (even if the device being connected from doesn't). However, that doesn't cover a lot of self-hosting use cases.
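
    For example (a hedged sketch; the hostname and keys are placeholders): in a typical client-side config the only address you have to hard-code is the Endpoint of the peer you're connecting to, which is why that side having a static IP or at least a DDNS hostname matters so much.

    ```
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.0.0.2/24

    [Peer]
    PublicKey = <server-public-key>
    # the side being connected to needs a stable IP or DDNS hostname here
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.0.0.0/24
    # keeps NAT mappings alive so the dynamic/roaming side stays reachable
    PersistentKeepalive = 25
    ```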

    Tailscale/ZeroTier/Nebula etc. do transfer some control (Nebula can actually be used with fully internal control, and ZeroTier can be used that way as well, though you're going to have to put more work in with ZeroTier … I don't know about Tailscale's offering here).

    Though doing things yourself also (in most cases) means transferring some level of control to a cloud/traditional server hosting provider anyways (e.g., AWS, DigitalOcean, NFO, etc.).

    Using something like ZeroTier can cut out a cloud provider/VPS entirely in favor of a professionally managed SaaS for a lot of folks.

    A lot of this just depends on who you trust more (yourself, or the team running the service(s) you're relying on) and how much time you can practically devote to maintenance. There's no "one size fits all" answer, but … I think most people are better off using a SaaS to form an internal mesh network and running whatever services they're interested in inside of that network. It's a nice tradeoff.

    You can still set up device firewalls, SSH key-only authentication, fail2ban, and things of that ilk as a precaution in case their networks do get compromised. These are all things you should do if you're self-hosting … but hobbyists/novices will probably stumble through them/get them wrong, which IMO is more okay in the SaaS case because you've got a professional security team keeping an eye on things.



  • "The company Tailscale is a giant target and has a much higher risk in getting compromised than my VPN or even accessible services."

    One must be careful with this mindset. A bunch of individually operated smart lightbulbs aren't a particularly appealing target either. However, in aggregate … if someone can write a script that abuses security flaws in them or their default configuration, then even though you're not part of a big centralized target, you are part of a class that can be targeted automatically at scale.

    Self-hosting only yields better security when you are willing to take steps to adequately secure your self-hosted services and implement a disaster recovery strategy.