• 1 Post
  • 61 Comments
Joined 1 year ago
Cake day: July 22nd, 2023




  • IRC’s not as popular as in its heyday. It was once the main choice for multiplayer gaming chat (QuakeNet et al), and that crowd has largely gone elsewhere, but it’s still very good for certain technical channels.

    IRC has also proved remarkably resistant to commercialisation, mostly thanks to its users. Even when one of the biggest networks, Freenode, got taken over by a drug-addled mentalist who started insisting on all kinds of strange things, the users simply upped sticks and created a new network. A bit of fuss, but the important stuff stayed the same and it’s continued much as before on the new network, Libera.Chat.


  • Others have answered your question - but it may be worth pointing out the obvious: backups. Annoyances like the one you describe are much less of a stress if you know you’re protected - not just against accidental erasure, but against malicious damage and technical failure too.

    Some people think backups are a lot of bother, but they’re very easily automated with any of the very good free tools around (backup-manager, the timeshift someone’s mentioned, and about a million others). A little time spent planning decent backups now will pay you back in spades one day - it’s a genuine investment of time. Once set up, with some basic monitoring to confirm they’re running and the odd manual check once in a blue moon, you’ll never be in this position again. Linux also comes out ahead here: automated backups can be de-prioritised so they don’t impact your interactive use, which matters if the machine isn’t on all the time and backups have to run while you’re at it.
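    To show how little bother it really is - a hedged sketch, not a recommendation of any particular tool, and the paths are made up - a single crontab line can do a nightly rsync at the lowest CPU and IO priority so it never competes with whatever you’re doing:

    ```shell
    # Example crontab entry (crontab -e). SRC and DEST paths are hypothetical;
    # nice/ionice keep the backup from competing with interactive use.
    0 3 * * * nice -n 19 ionice -c 3 rsync -a --delete /home/me/Documents/ /mnt/backup/documents/
    ```

    The dedicated tools add the important extras (retention, snapshots, verification), but the priority trick above works with any of them.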











  • Fail2ban is something I’ve used for years - in fact it was running on these very sites before I decided to dockerise them - but I find it a lot less simple in this setup, for a few reasons:

    The logs are inside the docker containers. Yes, I could ship them to a central logging server, but that’s a chunk of overhead for a home system. (I’ve done that before, so it is possible, just extra time.)
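    A middle ground, without a central log server, is pointing fail2ban at Docker’s json-file logs on the host. Hedged sketch only - the jail name and filter are hypothetical and you’d still need to write a filter that matches the json-wrapped log lines:

    ```ini
    # /etc/fail2ban/jail.d/phpbb-docker.conf (hypothetical jail)
    [phpbb-docker]
    enabled  = true
    filter   = phpbb-docker
    # Docker's default json-file logging driver writes here on the host:
    logpath  = /var/lib/docker/containers/*/*-json.log
    maxretry = 5
    bantime  = 1h
    ```

    That avoids shipping logs anywhere, at the cost of fail2ban watching every container’s log file.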

    Then there’s getting the real IP through from Cloudflare. Yes, CF passes headers containing it, and HAProxy can forward that on with a bit of tweaking. But not every docker container serving webpages (notably the phpBB one) will correctly log the source IP even when it’s passed through from HAProxy as the forwarded IP - some just show the IP of the proxy. Other containers do display it, so it can obviously be done, but I’m not yet clear why it’s inconsistent. Without the real IP, there’s no blocking.
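    For reference, the tweak on the HAProxy side usually looks something like this - a sketch with an invented frontend name, using the CF-Connecting-IP header that Cloudflare adds:

    ```
    # haproxy.cfg fragment (hypothetical frontend/backend names)
    frontend web
        bind :443 ssl crt /etc/haproxy/certs/site.pem
        # Prefer Cloudflare's header over whatever X-Forwarded-For arrived with
        http-request set-header X-Forwarded-For %[req.hdr(CF-Connecting-IP)] if { req.hdr(CF-Connecting-IP) -m found }
        default_backend containers
    ```

    Whether the container then logs that address still depends on the app inside it honouring X-Forwarded-For - which is exactly where the inconsistency shows up.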

    And… you can use the Cloudflare API to block IPs, but there’s a fixed limit on the free accounts. When I set this up before with native webservers - blocking malicious URL-scanning bots via the API - I hit that limit within a couple of days. I don’t think there’s automatic expiry, so I’d need to find or build a tool that manages the blocklist remotely. (Or block in HAProxy and accept the overhead.)
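    The core of such a tool isn’t much code. A minimal sketch - class and names are invented, and the actual Cloudflare API sync is deliberately left out - of a blocklist with per-entry expiry and eviction so a fixed quota is never exceeded:

    ```python
    import time


    class ExpiringBlocklist:
        """Tracks blocked IPs with a TTL so a fixed remote quota (e.g. a free
        Cloudflare account's rule limit) isn't exhausted by stale entries.
        Pushing/removing the rules via the API is left to the caller."""

        def __init__(self, ttl_seconds, max_entries):
            self.ttl = ttl_seconds
            self.max_entries = max_entries
            self._entries = {}  # ip -> expiry timestamp

        def block(self, ip, now=None):
            now = time.time() if now is None else now
            self.purge(now)
            if len(self._entries) >= self.max_entries and ip not in self._entries:
                # At quota: evict the entry closest to expiry to make room.
                oldest = min(self._entries, key=self._entries.get)
                del self._entries[oldest]
            self._entries[ip] = now + self.ttl

        def purge(self, now=None):
            now = time.time() if now is None else now
            self._entries = {ip: exp for ip, exp in self._entries.items() if exp > now}

        def active(self, now=None):
            """IPs that should currently be blocked, oldest expiry first by IP sort."""
            self.purge(now)
            return sorted(self._entries)
    ```

    The fail2ban action (or whatever does the detecting) calls `block()`, and a periodic job diffs `active()` against what’s currently at Cloudflare and adds/removes rules accordingly.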

    It’s probably where I should go next.

    And yes - you’re right about scripting. Automation is absolutely how I like to do things. But so many problems only become clear retrospectively.



  • Obesity is increasingly a problem in low- and middle-income countries.

    Isn’t that always going to be the case, regardless of ingredient adjustment? It feels like people who have had very little food will tend to over-compensate during times of glut - perhaps not so much the generation directly affected as in the care they give to the next generations.

    As a vaguely related but less extreme example: I was born in 1970 in England to a lower-middle-class family. My parents were wartime and post-war babies who had experienced rationing, and as a result I have very strong recollections of being made to “clear your plate” before I could leave the table. (Ironically, given this topic, the “there are starving children in Africa who would like that” line was given quite often.)

    Wasting food was the absolute highest sin I could commit and that’s stayed with me to this day.





  • I think the bus factor would be a lot easier to cope with than a slowly progressing, semi-abandoned project and a White Knight saviour.

    With the complete loss of a sole maintainer, it should be possible to fork and continue a project. That does require a number of things, not least a reliable person who understands the codebase and is willing to take it on. Then the distros need to approve and update potentially thousands of packages that depend on the project.

    Maybe, before a library or any software gets accepted into a distro, that distro should do more due diligence to ensure it’s a sustainable project and meets requirements like solid ownership?

    The inherited debt from existing projects would be massive, and perhaps this is largely covered already - I’ve never tried to get a distro to accept my software.

    Nothing I’ve seen would completely avoid risk. Blackmail of an existing developer is not impossible to imagine. Even in this case, perhaps the new developer on xz started with pure intentions and was personally compromised later? (I don’t seriously think that’s the case here though - this feels very much state-sponsored and very well planned.)

    It’s good we’re asking these questions. None of them are new, but the importance is ever increasing.