• MotoAsh@lemmy.world · 3 months ago

      Not if it’s for inference only. What do you think the “AI accelerators” they’re putting in phones now are? Do you think they’d be as expensive or power hungry as an entire 3090 for performance if they were putting them in small devices?

• ShadowRam@fedia.io · 3 months ago

        Ok,

Show me a PCI-E board that can do inference calculations as fast as a 3090 but is less expensive than a 3090.

• RandomlyRight@sh.itjust.works (OP) · 3 months ago

Yeah, show me a phone with 48GB of RAM. Memory capacity is a big factor to consider. Actually, some people recommend a Mac Studio because you can get it with 128GB of RAM (or more), and it's shared with the AI/GPU accelerator. Very energy efficient, but it sucks as soon as you want to do literally anything other than inference.
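
For a rough sense of why the RAM figure dominates, here's a back-of-the-envelope sketch of the memory needed just to hold model weights for inference. The 1.2x overhead factor (for KV cache and activations) is an assumption for illustration, not a measured number:

```python
def inference_memory_gb(n_params: float, bits_per_weight: float,
                        overhead: float = 1.2) -> float:
    """Rough memory (GB) to run a model for inference.

    n_params: parameter count (e.g. 70e9 for a 70B model)
    bits_per_weight: precision after quantization (16 = fp16, 4 = 4-bit)
    overhead: fudge factor for KV cache/activations (assumed, not measured)
    """
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 70B model quantized to 4 bits is ~35 GB of weights alone,
# so with overhead it already crowds a 48 GB ceiling:
print(round(inference_memory_gb(70e9, 4), 1))
```

This is why unified-memory machines like the Mac Studio are attractive for inference: the GPU can address the full 128GB+ pool instead of being capped at a discrete card's VRAM.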