• Allonzee@lemmy.world
    3 months ago

    Humanity is surrounding itself with the harbingers of its own self-inflicted destruction.

    All in the name of not only tolerated avarice, but celebrated avarice.

Greed is a more effectively harmful human impulse than even hate. We’ve merely been propagandized to ignore greed — oh, I’m sorry, “rational self-interest” — as the personal failing and character deficit it is.

The widely accepted thought-terminating cliché of “it’s just business” should never have been allowed to propagate. Humans should never feel comfortable leaving their empathy and decency at the door in our interactions — not for groups they hate, and not for groups they wish to exploit for value. Cruelty is cruelty, and doing it to make moooaaaaaar money for yourself makes it significantly more damning, not less.

  • mansfield@lemmy.world
    3 months ago

    Don’t fall for this horseshit. The only danger here is unchecked greed from these sociopaths.

  • technocrit@lemmy.dbzer0.com
    3 months ago

    If these people actually cared about “saving humanity”, they would be attacking car dependency, pollution, waste, etc.

    Not building a shitty CliffsNotes machine.

  • Veedem@lemmy.world
    3 months ago

    I mean, is this stuff even really AI? It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out. I’m not sure this is the tech that will decide humanity is unnecessary.

    It’s just rebranded machine learning, IMO.
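    To make the “most probable next word” claim concrete, here’s a minimal sketch of that idea taken literally — a toy bigram model that counts word successors and always greedily picks the most frequent one. (This is a hypothetical illustration of pure surface statistics, not how a real transformer LLM works internally.)

    ```python
    # Toy "predict the most probable next word" model: count which word
    # follows which in a tiny corpus, then greedily pick the most
    # frequent successor. Hypothetical example for illustration only.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ran".split()

    # Tally successors for each word (bigram counts).
    successors = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        successors[prev][nxt] += 1

    def most_probable_next(word):
        """Greedy pick: the single most frequent follower seen in training."""
        return successors[word].most_common(1)[0][0]

    print(most_probable_next("the"))  # "cat" follows "the" twice, "mat" once
    ```

    A model like this really does have zero awareness — the dispute below is over whether large models are doing anything more than this.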

    • redcalcium@lemmy.institute
      3 months ago

      Supposedly they found a new method (Q*) that significantly improved their models — enough to make some key people revolt, trying to stop the company from monetizing it out of ethical concern. Those people have been pushed out, ofc.

    • kromem@lemmy.world
      3 months ago

      It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out.

      Neither of these things are true.

      It does create world models (see the Othello-GPT papers, Chess-GPT replication, and the Max Tegmark world model papers).

      And while it is trained on predicting the next token, that doesn’t mean it does so purely via surface statistics — picking the “most probable” word in a typical sentence, as your comment suggests.

      Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”

      And that was a toy model.

      • technocrit@lemmy.dbzer0.com
        3 months ago

        Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”

        AKA Othello-GPT chooses moves based on statistics.

        Ofc it’s going to use a virtual board in this process. Why would a computer ever use a real board?

        There’s zero awareness here.