We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • grabyourmotherskeys@lemmy.world · 10 months ago

    One of the things our sensory system and brain do is limit our input. The road to AGI might involve giving the system everything and letting it find the optimal set of filters, rather than pre-selecting the input and training up from that (see the sketch after this comment).

    You’d need the baseline set of systems (a “baby AGI”) and then turn it loose with goal-seeking.
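    A minimal sketch of that idea, assuming a PyTorch-style setup (all names here are illustrative, not from the thread or the paper): the network receives the full, unfiltered input and jointly learns per-channel gates under a sparsity penalty, so the “filters” emerge from training rather than being hand-picked up front.

    ```python
    import torch
    import torch.nn as nn

    class GatedInput(nn.Module):
        """Feed the model everything; let it learn what to attenuate."""

        def __init__(self, n_inputs: int, n_hidden: int, n_out: int):
            super().__init__()
            # One learnable gate logit per input channel.
            self.gate_logits = nn.Parameter(torch.zeros(n_inputs))
            self.net = nn.Sequential(
                nn.Linear(n_inputs, n_hidden),
                nn.ReLU(),
                nn.Linear(n_hidden, n_out),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            gates = torch.sigmoid(self.gate_logits)  # soft 0..1 filter per channel
            return self.net(x * gates)

        def sparsity_penalty(self) -> torch.Tensor:
            # Push most gates toward 0, so only useful channels stay open.
            return torch.sigmoid(self.gate_logits).sum()

    model = GatedInput(n_inputs=512, n_hidden=128, n_out=10)
    x = torch.randn(32, 512)           # "everything": the full, unfiltered input
    y = torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y) + 1e-3 * model.sparsity_penalty()
    loss.backward()                    # gates are learned jointly with the task
    ```

    The sparsity weight (1e-3 here) trades off task accuracy against how aggressively the input is filtered; it stands in for the “optimal set of filters” the comment speculates about.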

    • sunbeam60@lemmy.one · 10 months ago

      Yup, broadly agreed. I’m not saying “give it everything”. I’m sure regions would develop to simplify processing via filtering.