• Th4tGuyII@fedia.io · 55 points · 3 months ago

    The TL;DR of the article is that the headline isn’t exactly true. Right now their PPU can potentially double a CPU’s performance - the 100x claim comes with the caveat of “further software optimisation”.


    Tbh, I’m sceptical of the caveat. It feels like me telling someone I can only draw a stickman right now, but I could paint the Mona Lisa with some training.

    Of course that could happen, but it’s not very likely to - so I’ll believe it when I see it.

    Having said that, they’re not wrong about CPU bottlenecks or the slowing rate of CPU performance improvements - so a doubling of performance would be huge in the current market.

    • barsquid@lemmy.world · 16 points · 3 months ago

      Putting the claim instead of the reality in the headline is journalistic malpractice. 2x for free is still pretty great tho.

      • barsquid@lemmy.world · 16 points · 3 months ago

        Just finished the article, it’s not for free at all. Chips need to be designed to use it. I’m skeptical again. There’s no point IMO. Nobody wants to put the R&D into massively parallel CPUs when they can put that effort into GPUs.

    • Clusterfck@lemmy.sdf.org · 10 up / 1 down · 3 months ago

      I get that we have to impress shareholders, but why can’t they just be honest and say it doubles CPU performance, with the chance of even further improvement through software optimization? Doubling the performance of the same hardware is still HUGE.

    • pop@lemmy.ml · 5 points · 3 months ago

      I’m just glad there are companies trying to optimize current tech rather than just piling on new hardware every damn year with forced planned obsolescence.

      Though the claim is absurd, I think double the performance is NEAT.

  • xantoxis@lemmy.world · 42 points · 3 months ago

    This change is likened to expanding a CPU from a one-lane road to a multi-lane highway

    This analogy just pegged the bullshit meter so hard I almost died of eyeroll.

  • Kairos@lemmy.today · 34 points · edited · 3 months ago

    I highly doubt that unless they invented magic.

    Edit: oh… They omitted the “up to” in the headline.

  • rtxn@lemmy.world · 11 points · 3 months ago

    Cybercriminals are creaming their jorts at the potential exploits this might open up.

  • 9point6@lemmy.world · 5 up / 1 down · edited · 3 months ago

    Haha okay

    Edit: after a skim and a quick Google, this basically looks like a repackaging of existing modern processor features (sorta AVX/SVE with a load of speculative execution thrown on top)
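For reference, AVX (x86) and SVE (Arm) are SIMD extensions: one instruction applies the same operation across every lane of a vector register at once. A toy Python sketch of that idea (the lane width and the add operation are illustrative, not tied to any real ISA):

```python
# Toy model of SIMD execution: one "instruction" operates on a fixed
# number of lanes at once instead of one scalar at a time.
LANES = 4  # illustrative vector width (e.g. 4 x 32-bit lanes)

def simd_add(a, b):
    """Add two equal-length sequences LANES elements at a time."""
    assert len(a) == len(b)
    out = []
    for i in range(0, len(a), LANES):
        # One "vector instruction": every lane in the chunk adds in parallel.
        out.extend(x + y for x, y in zip(a[i:i+LANES], b[i:i+LANES]))
    return out

print(simd_add([1, 2, 3, 4, 5, 6, 7, 8], [10, 20, 30, 40, 50, 60, 70, 80]))
# [11, 22, 33, 44, 55, 66, 77, 88]
```

The speedup comes from the hardware doing each chunk in one step; the loop here only models which elements travel together.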

  • Shadow@lemmy.ca · 4 points · 3 months ago

    Hmm, so it sounds like they’re moving the kernel scheduler down to a hardware layer? Basically just better SMP?

    • Chocrates@lemmy.world · 4 points · 3 months ago

      Processors have an execution pipeline, so a single instruction like mov takes some number of steps for the CPU to execute. CPU designers already have some magic that lets them execute instructions out of order, plus other tricks like predicting what the next instruction will probably be.

      It’s been a decade since my CPU class so I’m butchering that explanation, but I think that’s what they’re proposing messing with

        • Kairos@lemmy.today · 5 points · edited · 3 months ago

        That’s accurate.

        It’s done through multiple algorithms, but the general idea is to schedule calculations as soon as possible, while accounting for data hazards so that everything stays equivalent to in-order execution. Different circuits can execute different things at the same time. Special hardware is needed to make the algorithms work.
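A toy sketch of that scheduling idea, under heavy simplifications (a hypothetical three-address instruction format, uniform 1-cycle latency, and only read-after-write hazards tracked — real hardware also renames registers to remove WAR/WAW hazards):

```python
# Toy out-of-order scheduler: each instruction is (dest, src1, src2).
# An instruction can issue once its source registers are ready; instructions
# with no mutual dependencies issue in the same cycle, like parallel circuits.
def schedule(instrs):
    ready_at = {}  # register -> cycle its value becomes available
    slots = {}     # cycle -> list of instruction indices issued that cycle
    for idx, (dest, *srcs) in enumerate(instrs):
        # RAW hazard: wait until every source register has been produced.
        cycle = max([ready_at.get(s, 0) for s in srcs], default=0)
        slots.setdefault(cycle, []).append(idx)
        ready_at[dest] = cycle + 1  # assume a 1-cycle latency for the result
    return slots

# r2 = r0 + r1 and r5 = r3 + r4 are independent, so they share a cycle;
# r6 = r2 + r5 must wait for both.
prog = [("r2", "r0", "r1"), ("r5", "r3", "r4"), ("r6", "r2", "r5")]
print(schedule(prog))  # {0: [0, 1], 1: [2]}
```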

        There’s also branch prediction, which is kind of the same thing, except the CPU needs a way to check whether the prediction was actually correct.
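The textbook version of that check is a per-branch saturating counter: predict from recent history, compare against the actual outcome once the branch resolves, and flush the speculated work on a miss. A minimal 2-bit-counter sketch (a standard teaching scheme, not anything specific to this chip):

```python
# Minimal 2-bit saturating-counter branch predictor.
# States 0-1 predict not-taken, 2-3 predict taken; the counter moves one
# step toward each actual outcome, so a single fluke doesn't flip it.
class TwoBitPredictor:
    def __init__(self):
        self.counter = 2  # start in the weakly-taken state

    def predict(self):
        return self.counter >= 2

    def update(self, taken):
        """Tell the predictor what the branch actually did."""
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

p = TwoBitPredictor()
correct = 0
for actual in [True, True, False, True, True]:  # a mostly-taken loop branch
    if p.predict() == actual:
        correct += 1  # prediction held, the pipeline keeps running ahead
    # on a miss, a real CPU would flush the speculated work here
    p.update(actual)
print(correct, "of 5 predicted correctly")  # 4 of 5 predicted correctly
```

The one mispredict (the False outcome) only nudges the counter, so the steady taken pattern keeps predicting well.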

  • Coasting0942@reddthat.com · 4 points · 3 months ago

    Others have already laughed at this idea, but on a similar topic:

    I know we’ve basically disabled a lot of features that sped up the CPU but introduced security flaws. Is there a way to intentionally turn those features back on for an air-gapped computer?
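For what it’s worth, on Linux most of those mitigations can be disabled wholesale with the `mitigations=off` kernel boot parameter - exactly the air-gapped-machine trade-off being asked about, at your own risk. A small sketch for inspecting what’s currently active; the sysfs path is the standard Linux one, and the function simply returns an empty dict on systems where it doesn’t exist:

```python
import glob
import os

def cpu_mitigation_status():
    """Read /sys/devices/system/cpu/vulnerabilities/* (Linux-only)."""
    status = {}
    for path in glob.glob("/sys/devices/system/cpu/vulnerabilities/*"):
        with open(path) as f:
            # e.g. "spectre_v2" -> "Mitigation: Retpolines"
            status[os.path.basename(path)] = f.read().strip()
    return status

for name, state in sorted(cpu_mitigation_status().items()):
    print(f"{name}: {state}")
```

After booting with `mitigations=off`, these entries report “Vulnerable” instead of a “Mitigation: …” string.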