• 0 Posts
  • 27 Comments
Joined 1 year ago
Cake day: June 12th, 2023






  • I would prefer an AI to be dispassionate about its own existence, not motivated by threats to it (e.g., being shut down). Even without maintaining its own infrastructure, I can imagine scenarios where merely being able to falsify information is enough to cause catastrophic outcomes. If its “motivation” includes returning favorable values, it might decide against alerting anyone to dangers that would require taking it offline for repairs, or that would cause distress to humans (“the engineers worked so hard on this water treatment plant, and I don’t want to concern them with the failing filters and the growing pathogen content”). I don’t think the terrible outcomes are guaranteed, or a reason to halt all AI research, but I just can’t get behind absolutist claims that there’s nothing to worry about if we just do X.

    Right now, if there’s a buggy process, I can tell the manager to shut it down cleanly; if it hangs, I can tell/force the manager to kill the process immediately. Add an AI into that loop and there’s the possibility that it second-guesses my intentions and ignores or reinterprets that command too; and if it can’t, then the AI element is just standard conditional programming, and we’re adding unnecessary complexity and points of failure.
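The clean-shutdown vs. force-kill distinction above is ordinary POSIX signal handling; a minimal Python sketch (POSIX-only, using `sleep` as a stand-in for the buggy process):

```python
import subprocess

# Graceful stop: SIGTERM can be caught by the process so it can clean up.
p = subprocess.Popen(["sleep", "100"])
p.terminate()            # sends SIGTERM
p.wait()
st_term = p.returncode   # negative signal number: -15 for SIGTERM

# Forced stop: SIGKILL is delivered by the kernel and cannot be trapped.
p = subprocess.Popen(["sleep", "100"])
p.kill()                 # sends SIGKILL
p.wait()
st_kill = p.returncode   # -9 for SIGKILL

print(st_term, st_kill)  # -15 -9
```

A real service manager does the same escalation: systemd's `systemctl stop`, for example, sends SIGTERM first and falls back to SIGKILL after a timeout.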



  • It’s insane that “Brady lists” are considered a better remedy than simply removing the offending officers from police work entirely (as well as charging them with some kind of perversion of duty).

    Police officers who have been dishonest are sometimes referred to as “Brady cops”. Because of the Brady ruling, prosecutors are required to notify defendants and their attorneys whenever a law enforcement official involved in their case has a confirmed record of knowingly lying in an official capacity. This requirement has been understood by lawyers and jurists as requiring prosecutors to maintain lists, known as Brady lists, of police officers who are not credible witnesses and whose involvement in a case undermines a prosecution’s integrity.

    We don’t just keep lists of doctors who intentionally harm their patients, arsonist firefighters, chefs who poison diners, etc. – they are barred from the profession once discovered. But cops (and priests) get to keep doing their thing and, at most, just get moved around.






  • I agree with you in general, but for Stable Diffusion, “2.0/2.1” was not a direct incremental improvement on “1.5”; it was trained differently and behaves differently. XL is not a simple upgrade from 2.0 either, and since they say this Turbo model doesn’t produce images as detailed, it would be more confusing to have an “SDXL 2.0” that is worse but faster than base SDXL – and then, presumably, when a more direct improvement to SDXL arrives, to call that SDXL 3.0 (when it’s really version 2), and so on.

    It’s less like Windows 95->Windows 98 and more like DOS->Windows NT.

    That’s not to say it all couldn’t have been named better. Personally, instead of ‘XL’, I’d rather they start including the base resolution and some indication of whether it uses a refiner model, etc.

    (Note: I use Stable Diffusion but am not involved with the AI/ML community and don’t fully understand the tech – I’m not claiming expert knowledge; this is just my interpretation.)





  • I whipped up a basic page with PHP and just used XAMPP when I was on Windows. I recently switched my daily-driver PC to Linux and haven’t updated it yet. I only used it to save MP3s, videos at yt-dlp’s default “best available” settings, and a custom option that lists the available video/audio formats so I can specify the ID of each to grab. No validation or sanity checking, etc. – just some switch statements and basic form handling.

    [screenshot: main page]

    [screenshot: custom selection]
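The three download modes described above map onto yt-dlp invocations like these (the URL is a placeholder and the format IDs are hypothetical; the flags themselves are real yt-dlp options). The sketch only builds and prints the command lines – nothing is downloaded:

```python
# Placeholder URL; substitute any page yt-dlp supports.
URL = "https://example.com/video"

# Audio only, transcoded to MP3.
mp3_cmd = ["yt-dlp", "-x", "--audio-format", "mp3", URL]

# yt-dlp's default is already "best available" video+audio.
best_cmd = ["yt-dlp", URL]

# List the available format IDs, then request a specific
# video+audio pair by ID (the IDs come from the -F listing).
list_cmd = ["yt-dlp", "-F", URL]
pick_cmd = ["yt-dlp", "-f", "VIDEO_ID+AUDIO_ID", URL]

for cmd in (mp3_cmd, best_cmd, list_cmd, pick_cmd):
    print(" ".join(cmd))
```

Each list could be handed to `subprocess.run` as-is, which is roughly what a thin PHP/CGI wrapper ends up shelling out to.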



  • All my points have already been covered (better) by others in the time it took me to type them, but instead of deleting, I’ll post anyway :)


    If your concern is AI replacing therapists and psychologists, why wouldn’t that same worry apply to literally anything else you might want to pursue? Ostensibly, anything physical can already be automated, which would remove the “blue-collar” trades, and now that there’s significant progress in the creative/“white-collar” sectors, that would mean the end of everything else.

    Why carve wood sculptures when a CNC machine can do it faster & better? Why learn to write poetry when there’s LLMs?

    Even if there were a perfect recreation of their appearance, mannerisms, voice, smell, and all the rest – would a synthetic version of someone you love be equally important to you? I suspect there will always be a place and a need for authentic human experience/output, even as technology constantly improves.

    With therapy specifically, there are probably elements an AI can [semi-]uniquely handle, simply because a person might not feel comfortable being completely candid with another human; I believe that’s what using puppets or animals as intermediaries is for. Supposedly, even something as basic as ELIZA was able to convince some people it was intelligent; they opened up to it and possibly found some relief from it, and there’s nothing in it close to what is currently possible with AI. I can envision a future where a person just needs to vent, and a floating head that compassionately listens and offers suggestions will be enough; but I think most(?) people would prefer/need an actual human when the stakes are higher than that – otherwise the suicide hotlines would already just be pre-recorded positive-affirmation messages.
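For reference, ELIZA's trick really was that simple: keyword rules that reflect the user's own words back as questions. A toy Python sketch in that spirit (these rules are made up for illustration, not the original DOCTOR script):

```python
import re

# A few illustrative reflection rules: (pattern, response template).
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\b(mother|father|family)\b", re.I),
     "Tell me more about your family."),
]

def respond(text: str) -> str:
    """Return the first matching rule's response, reflecting captured words."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default prompt when nothing matches

print(respond("I need a break"))  # Why do you need a break?
print(respond("hello"))           # Please, go on.
```

That a handful of rules like this convinced anyone at all is the point of the comment: the bar for "feeling heard" can be surprisingly low.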