We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • NAS89@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    ·
    11 months ago

    thats the thing I hate about ChatGPT. I asked it last night to name me all inventors named Albert born in the 1800’s. It listed Albert Einstein (inventor isn’t the correct description) and Albert King. I asked what Albert King invented and it responded “Albert King did not invent anything, but he founded the King Radio Company”.

    When I asked why it listed Albert King as an inventor in the previous response, if he had no inventions, it responded telling me that based on the criteria I am now providing, it wouldn’t have listed him.

    Fucking gaslighting me.