• 0 Posts
  • 8 Comments
Joined 7 months ago
Cake day: February 27th, 2024


  • Personally my Arch install is almost boring me with how stable it’s been - and if anything goes wrong, it backs itself up before and after every single update, plus on every boot just because, so I can roll back to wherever I want (a rough sketch of that hook setup is below). I’ve put a lot of work into building out all these redundancies I’m happy with, and Arch has been so goddamn stable I haven’t even had an excuse to use them. The process of getting to a complete install was absolutely not “it works” - but now that I’m there, yeah, it really does just work. My only complaint is that I don’t have any reason to tinker with it more.
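    A rough sketch of how a pre/post-update snapshot hook can be wired up, assuming a btrfs root and a small Python helper called from a pacman hook (the exact tooling isn’t named above - snapper, timeshift, or a plain shell script would do the same job, so treat this as an illustration rather than the actual setup):

```python
#!/usr/bin/env python3
# Sketch of a snapshot helper, assuming a btrfs root whose snapshots live
# under /.snapshots (both paths are placeholders - adjust to your layout).
# Intended to be called as: snapshot.py pre-update | post-update | boot
import datetime
import subprocess
import sys

SNAPSHOT_DIR = "/.snapshots"   # hypothetical snapshot location
SOURCE_SUBVOL = "/"            # subvolume to snapshot

def take_snapshot(label: str) -> None:
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    target = f"{SNAPSHOT_DIR}/{stamp}-{label}"
    # -r creates a read-only snapshot, which keeps it safe as a rollback source
    subprocess.run(
        ["btrfs", "subvolume", "snapshot", "-r", SOURCE_SUBVOL, target],
        check=True,
    )
    print(f"created snapshot {target}")

if __name__ == "__main__":
    take_snapshot(sys.argv[1] if len(sys.argv) > 1 else "manual")
```

    Called from an alpm PreTransaction/PostTransaction hook pair plus a simple systemd service at boot, a helper like this covers the “before and after every update, plus on every boot” part; rolling back is then just a matter of restoring whichever snapshot you want.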


  • I didn’t consider account recovery - that’s a good point. Personally I don’t usually bother with it for anything I want to be private - if I lose it I lose it lol.
    It’s still not perfect, but some of the private email hosting providers like Proton offer email aliases, so you could use one for recovery without exposing any real personal info (assuming you trust the email provider). Definitely less secure than only a public key being exposed, but maybe an acceptable tradeoff for the convenience of an existing, established solution?


  • You rule out social networks, but why? Wouldn’t a fediverse microblogging (or full blogging) platform work fine for the purpose? Just pick an irrelevant username and a strong, unique password, and only access the account through Tor using any and all relevant best practices.
    Given you want the continuity of the author preserved, I don’t see the functional difference between the posts being associated with an anonymous account and them all carrying your public key (rough sketch of that model below). Am I missing something?
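    For comparison, a minimal sketch of what the “posts all carry your public key” model looks like in practice, using Ed25519 signatures via PyNaCl (the signing scheme is an assumption here - the point is just that continuity comes from the key, not from any particular account):

```python
# Minimal sketch, not a spec: every post is signed with the same Ed25519 key,
# so continuity of authorship comes from the key rather than from an account.
from nacl.encoding import HexEncoder
from nacl.signing import SigningKey, VerifyKey

# The author keeps the signing key private and publishes only the verify key.
signing_key = SigningKey.generate()
public_key_hex = signing_key.verify_key.encode(encoder=HexEncoder).decode()

post = b"first post in the series"
signed_post = signing_key.sign(post)  # bundles signature + message together

# Any reader can check any post against the one published public key,
# no matter which account or instance it was posted from.
verify_key = VerifyKey(public_key_hex.encode(), encoder=HexEncoder)
verify_key.verify(signed_post)        # raises BadSignatureError if tampered with
print("verified against", public_key_hex)
```

    The anonymous-account route gives the same kind of continuity, just with the platform’s login handling playing the role of the key.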


  • Whether or not you agree with AI image generation, the authors of this study have pulled off something impressive. This particular study isn’t going to be the single most important thing to humanity this year, sure, but they made a pretty clever stride in pushing a developing field forward, and you don’t need to be excited about the field itself to appreciate that.
    I’m assuming your dislike for AI image generation is based on the plagiarism issue, which is absolutely valid, but model architecture is separate from training data, and the concepts here are perfectly usable with a more ethically sourced training set. The companies scraping all the data - OpenAI, Google, and to a much lesser extent Stability AI - are the ones to blame for that problem, not the researchers working on model architecture.