I think you’re reading more into that than there is.
Why use Kodi *and* Jellyfin? Jellyfin is its own thing, without all the awful cruft that comes with Kodi.
It also has native apps for Windows, Linux and… FireTV.
IRC’s not as popular as in its heyday. It was once the main choice for multiplayer gaming chat (QuakeNet et al.), and while that’s largely moved elsewhere, it’s still very good for certain technical channels.
IRC has also proved remarkably resistant to commercialisation, mostly due to the users. Even when one of the biggest networks, Freenode, was taken over by a drug-addled mentalist who started insisting on all kinds of strange things, the users just upped sticks and created a new network. A bit of fuss, but the important stuff stayed the same and it’s continued much as before as a new network, Libera Chat.
Others have answered your question - but it may be worth pointing out the obvious - backups. Annoyances such as you describe are much less of a stress if you know you’re protected - not just against accidental erasure, but malicious damage and technical failure.
Some people think it’s a lot of bother to do backups, but it’s very easily automated with any of the very good free tools around (backup-manager, someone’s mentioned Timeshift, and about a million others). A little time spent planning decent backups now will pay you back one day in spades; it’s a genuine investment of time. And once set up, with some basic monitoring to ensure they’re working and the odd manual check once in a blue moon, you’ll never be in this position again. Linux comes out ahead here, in that automated backups can be deprioritised so they don’t impact your main usage, even if the machine isn’t on all the time.
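For anyone after a starting point, a scheduled job often covers it. A sketch of a crontab entry; the paths, schedule and marker-file name are placeholders, and the dedicated tools above do all this for you anyway:

```cron
# Hypothetical nightly backup: 02:30, niced and ioniced right down so it
# doesn't impact normal use, touching a marker file on success so simple
# monitoring can alert when the backup is stale.
30 2 * * * nice -n 19 ionice -c 3 rsync -a --delete /home /mnt/backup && date > /mnt/backup/.last-success
```

The marker file is the cheap version of “basic monitoring”: anything that checks its age can tell you when backups have silently stopped.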
Some of the cheaper ThinkPads are terribly poor quality. Once a byword for ruggedness, now just another name.
The way I help, as a sysadmin, is primarily by using FOSS software in my job and feeding back with bug reports, issues and so on. I’ve raised several hundred issues on GitHub this way, and I try to write them concisely and accurately, with as much relevant information as I can.
It’s back today with a new user-agent, this time containing an email address at anthropic.com - so it looks like it’s Claude 3’s scraper, an AI bot.
I’ve just installed CrowdSec and its HAProxy plugin. The documentation is pretty good. I need to look into getting it to ban the IP at Cloudflare - that would be neat.
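For what it’s worth, Cloudflare’s API does expose IP Access Rules, so a ban could be pushed up as a per-IP block rule. A minimal sketch of building that request; the zone ID, token and note text are placeholders, and CrowdSec also ships its own Cloudflare bouncer, which may be the easier route:

```python
import json
import urllib.request

CF_API = "https://api.cloudflare.com/client/v4"

def block_ip_request(zone_id: str, token: str, ip: str) -> urllib.request.Request:
    """Build a POST that creates an IP Access Rule blocking `ip` for one zone."""
    payload = {
        "mode": "block",
        "configuration": {"target": "ip", "value": ip},
        "notes": "banned by crowdsec",  # free-text note, placeholder
    }
    return urllib.request.Request(
        f"{CF_API}/zones/{zone_id}/firewall/access_rules/rules",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# With a real zone_id and an API token that has firewall permissions:
# urllib.request.urlopen(block_ip_request(zone_id, token, "203.0.113.9"))
```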
Annoyingly, the ClaudeBot spammer is back again today with a new UA. I’ve emailed the address within it, politely asking them to desist - it’ll be interesting to see if there’s a reply. And yes, it is ClaudeBot 3 - AI.
UA: like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)
Yep - agree with all of that. It’s a fault of mine that I don’t always step back and look at the bigger picture first.
Thanks, I’ve not heard of that, it sounds like it’s worth a look.
I don’t think the tunnel would complicate blocking via the cloudflare api, but there is a limit on the number of IPs you can ban that way, so some expiry rules are necessary.
Doh - another example of my muddled thinking.
Fail2ban will work directly on HAProxy’s log - no need to read the web logs from the containers at all. Much simpler and better.
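A sketch of what that jail could look like; the jail name, log path and thresholds here are all assumptions, and a matching filter whose failregex captures `<HOST>` from the actual HAProxy log format would still need writing:

```ini
# /etc/fail2ban/jail.d/haproxy-abuse.local (hypothetical)
[haproxy-abuse]
enabled  = true
port     = http,https
filter   = haproxy-abuse
logpath  = /var/log/haproxy.log
maxretry = 10
findtime = 60
bantime  = 3600
```

The point being that HAProxy sees every request for every backend, so one log and one jail covers all the containers at once.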
Maybe? It feels like the kind of stupid you really need a human half-assing it to achieve this thoroughly, though.
Some nice evil ideas there!
Fail2ban is something I’ve used for years - in fact it was working on these very sites before I decided to dockerise them - but I find it a lot less simple in this application, for a couple of reasons:
The logs are in the Docker containers. Yes, I could get them squirting to a central logging server, but that’s a chunk of overhead for a home system. (I’ve done that before, so it is possible, just extra time.)
And getting the real IP through from Cloudflare. Yes, CF passes headers with it in, and HAProxy can forward that as well with a bit of tweaking. But not every Docker container for serving webpages (notably the phpBB one) will correctly log the source IP even when HAProxy passes it through as the forwarded IP, instead showing the IP of the proxy. I’ve other containers that do display it, and it can obviously be done, but I’m not clear yet why it’s inconsistent. Without that, there’s no blocking.
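For reference, the forwarding side can be done off Cloudflare’s CF-Connecting-IP header. A sketch, assuming HAProxy only accepts traffic from Cloudflare’s ranges (otherwise the header can be spoofed); the frontend name is a placeholder, and each backend still has to be configured to *log* X-Forwarded-For rather than the socket address - which is exactly where the inconsistency above tends to live:

```
frontend https-in
    # Prefer the client IP Cloudflare reports; fall back to the TCP source.
    http-request set-header X-Forwarded-For %[req.hdr(CF-Connecting-IP)] if { req.hdr(CF-Connecting-IP) -m found }
    http-request set-header X-Forwarded-For %[src] unless { req.hdr(CF-Connecting-IP) -m found }
```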
And… you can use the Cloudflare API to block IPs, but there’s a fixed limit on the free accounts. When I set this up before with native webservers, blocking malicious URL-scanning bots via the API, I reached that limit within a couple of days. I don’t think there’s automatic expiry, so I’d need to find or build a tool that manages the blocklist remotely. (Or use HAProxy to block and accept the overhead.)
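The expiry side could be handled locally: keep a record of what’s been pushed to Cloudflare and evict expired or oldest entries whenever the limit would be hit. A minimal sketch of that bookkeeping only - capacity and TTL values are placeholders, and the actual API calls to add/remove rules are left out:

```python
import time

class ExpiringBlocklist:
    """Track banned IPs locally, expiring entries so a remote provider's
    fixed rule limit (`capacity`) is never exceeded."""

    def __init__(self, capacity: int, ttl_seconds: float):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self._bans = {}  # ip -> timestamp of the ban

    def ban(self, ip, now=None):
        """Record a ban for `ip`; return the IPs whose remote rules should
        be deleted (expired entries, plus the oldest if still at capacity)."""
        now = time.time() if now is None else now
        evicted = [i for i, t in self._bans.items() if now - t >= self.ttl]
        for i in evicted:
            del self._bans[i]
        if len(self._bans) >= self.capacity:
            oldest = min(self._bans, key=self._bans.get)
            del self._bans[oldest]
            evicted.append(oldest)
        self._bans[ip] = now
        return evicted
```

Each returned IP would then get its Cloudflare rule deleted before the new block rule is created, keeping the remote list under the free-tier cap.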
It’s probably where I should go next.
And yes - you’re right about scripting. Automation is absolutely how I like to do things. But so many problems only become clear retrospectively.
I mean - I switched my attention to HAProxy. And yes, no argument there.
Obesity is increasingly a problem in low- and middle-income countries.
Isn’t that always going to be the case, regardless of ingredient adjustment? It feels like people who have had very little food will tend towards over-compensating during times of glut - perhaps not so much the generation directly affected, but through the care they give to the next generation.
As a vaguely related but less extreme example: I was born in 1970 in England to a lower-middle-class family. My parents were wartime and post-war babies who had experienced rationing, and as a result I have very strong recollections of being made to “clear your plate” before I could leave the table. (Ironically, given this topic, the “there are starving children in Africa who would like that” line was given quite often.)
Wasting food was the absolute highest sin I could commit and that’s stayed with me to this day.
Anyone else find themselves singing this headline to the tune of The House of the Rising Sun?
Fair point.
If the distro team is compromised, then that leaves all their users open too. I’d hope that didn’t happen, but you’re right, it’s possible.
I think bus factor would be a lot easier to cope with than a slowly progressing, semi-abandoned project and a White Knight saviour.
In the event of the complete loss of a sole maintainer, it should be possible to fork and continue a project. That does require a number of things, not least a reliable person who understands the codebase and is willing to undertake it. Then the distros need to approve and change potentially thousands of packages that rely on the project as a dependency.
Maybe, before a library or any software gets accepted into a distro, the distro should do more due diligence to ensure it’s a sustainable project and meets requirements like solid ownership?
The inherited debt from existing projects would be massive, and perhaps this is largely covered already - I’ve never tried to get a distro to accept my software.
Nothing I’ve seen would completely avoid risk. Blackmail upon an existing developer is not impossible to imagine. Even in this case, perhaps the new developer in xz started with pure intentions and they got personally compromised later? (I don’t seriously think that is the case here though - this feels very much state sponsored and very well planned)
It’s good we’re asking these questions. None of them are new, but the importance is ever increasing.
So you’re using Kodi as the OS on the TV itself? Not the Kodi App or Kodi backend?
I’m still struggling to understand how that would work, and still have Jellyfin in the mix - could you please explain exactly what you mean?