• doctorcherry@feddit.uk · 1 year ago

    This is going to be pretty interesting. Despite seeming far behind, Apple is very well positioned to benefit from the AI developments: it has an opportunity for deep integration of AI features into its operating systems, plus offline compute through specialised silicon design, in a way no other company really has.

  • redballooon@lemm.ee · 1 year ago

    Better late than never.

    But even more interesting than when is whether this will use local AI models, or whether it will once again become a data-protection trust sink.

      • redballooon@lemm.ee · 1 year ago

        We’ll see. To date there’s no locally runnable generative LLM that comes close to the gold standard, GPT-4. Even coming close to GPT-3.5-turbo counts as impressive.

        • kinttach@lemm.ee · 1 year ago

          We only recently got on-device Siri and it still isn’t always on-device if I understand correctly. So the same level of privacy that applies to in-the-cloud Siri could apply here.

          • BudgetBandit@sh.itjust.works · 1 year ago

            My on-device Siri, living in my Apple Watch Series 4, is definitely processing everything locally now. She’s gotten dumber than I am.

          • abhibeckert@lemmy.world · 1 year ago

            Apple has sold computers with local voice input and command processing for more than 20 years, and iPhones have pretty much always had that feature (it was called “Voice Control” before Siri existed, and it was 100% local).

            I’d argue that, for Apple, what they’ve started doing recently is processing commands in the cloud. The list of commands processed locally vs in the cloud has changed over time… they did move most of it to the cloud several years ago, when they bought a cloud-based smart-assistant startup and used it as the basis for a new and improved assistant on the iPhone. But every year they reduce that dependence, moving back toward how it used to be, with local processing. These days, even when a command is processed in the cloud, it’s often only one part of a multi-step process where the majority of the work was done on-device. And many everyday commands are handled entirely on-device.

            For example, if you ask it what the weather is, it’s an entirely on-device command except for actually fetching the latest weather report… and you can ask what the temperature is “inside”, which will check a sensor in your house and work entirely offline (if your home has a temperature sensor; there’s one built into Apple’s smart speakers, and in a small but growing number of third-party smart-home products).

        • abhibeckert@lemmy.world · 1 year ago

          To date there’s no local runnable generative LLM model that comes close to the gold standard GPT-4.

          True - but iPhones do now run a local language model as part of the keyboard. It’s definitely not GPT-4 quality, but that’s to be expected given that it runs on a tiny battery and executes on every single keyboard tap. Apple has proven that useful language models can run locally on the slowest hardware they sell. I don’t know of anyone else who’s done that.

          Even coming close to GPT-3.5-turbo counts as impressive.

          Llama 2 is roughly GPT-3.5-Turbo quality, and it runs well on modern Macs, which have a lot of very fast memory. Even their smallest fanless laptop can be configured with 24GB of memory, and it’s fast memory too: 800Gbps (100GB/s). That’s not quite enough to run the largest Llama 2 model, but it’s close. Their more expensive laptops have more, faster memory and can run the 70-billion-parameter Llama 2 without breaking a sweat.
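
As a back-of-envelope sketch (my own illustrative numbers, not from the comment above): the memory needed just to hold a model’s weights is roughly parameter count times bytes per weight, which is why quantization decides what fits on a given Mac.

```python
# Rough memory footprint of an LLM's weights at different quantization
# levels. Illustrative estimate only: it ignores KV cache and activation
# memory, which add a further chunk on top.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate gigabytes needed to hold the weights alone."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"Llama 2 70B at {bits}-bit: ~{weight_memory_gb(70, bits):.0f} GB")
# → 140 GB at 16-bit, 70 GB at 8-bit, 35 GB at 4-bit: a 24 GB machine
#   needs aggressive quantization of a much smaller model, while a larger
#   laptop configuration can hold a 4-bit 70B model.
```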

          And on desktops, Apple sells Macs with 192GB of memory that’s far faster still at 6.4Tbps (800GB/s). That’s slightly more memory, for a lot less money, than the most expensive data-center GPU NVIDIA sells (the NVIDIA unit is faster at compute, but LLMs are often limited by available memory, not compute speed).
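
To illustrate the “limited by memory, not compute” point, a hedged sketch (bandwidth and model-size figures are assumptions for illustration): generating each token requires streaming roughly all of the weights through the processor once, so bandwidth divided by model size gives an upper bound on tokens per second.

```python
# Upper bound on LLM generation speed when memory-bandwidth-bound:
# each new token reads (roughly) every weight once, so
# tokens/sec <= memory bandwidth / model size. Figures are illustrative.

def max_tokens_per_sec(bandwidth_gb_per_s: float, model_gb: float) -> float:
    """Bandwidth-limited ceiling on tokens generated per second."""
    return bandwidth_gb_per_s / model_gb

MODEL_GB = 35  # assumed size of a 4-bit-quantized 70B model
print(f"~100 GB/s laptop:  {max_tokens_per_sec(100, MODEL_GB):.1f} tok/s")
print(f"~800 GB/s desktop: {max_tokens_per_sec(800, MODEL_GB):.1f} tok/s")
```

This is why a machine with modest compute but very wide memory can still generate tokens at a usable rate.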

  • billwashere@lemmy.world · 1 year ago

    One thing Apple is good at is waiting until the market is ripe and then releasing a better product: MP3 players, phones, tablets, etc.

    • 68x@lemmy.world · 1 year ago

      Except Siri keeps getting worse every year, in actual usefulness, connectivity, and plain old voice recognition.

      • billwashere@lemmy.world · 1 year ago

        And it doesn’t seem to be just Siri. Every Alexa device in my house has gotten deafer and more stupid. But yes, I agree Siri has declined recently.

        • bamboo@lemm.ee · 1 year ago

          Add Google Assistant to the list. I remember it being great back in 2018, and now it struggles with basic things like “set a timer”.

        • fer0n@lemm.eeOP · 1 year ago

          All my Echos went out the window as soon as they started with product promotion.

      • fer0n@lemm.eeOP · 1 year ago

        In my experience it’s not really getting worse; it’s just not getting substantially better either.