Elon Musk’s quest to wirelessly connect human brains with machines has run into a seemingly impossible obstacle, experts say. The company is now asking the public for help finding a solution.

Musk’s startup Neuralink, which is in the early stages of testing in human subjects, is pitched as a brain implant that will let people control computers and other devices using their thoughts. Some of Musk’s predictions for the technology include letting paralyzed people “walk again and use their arms normally.”

Turning brain signals into computer inputs means transmitting a lot of data very quickly. A problem for Neuralink is that the implant generates about 200 times more brain data per second than it can currently wirelessly transmit. Now, the company is seeking a new algorithm that can transmit this data in a smaller package — a process called compression — through a public challenge.

As a barebones web page announcing the Neuralink Compression Challenge, posted on Thursday, explains, “[greater than] 200x compression is needed.” The winning solution must also run in real time and at low power.

  • Luvs2Spuj@lemmy.world · 1 month ago

    He’s such a genius, why would he look for additional help? All these claims are such shit. Remember when Teslas were going to be fully self-driving and we would all be whizzing around in tunnels? Fuck this guy.

    • QuadratureSurfer@lemmy.world · 1 month ago

      A job interview! (I wish I was joking).

      The reward for developing this miraculous leap forward in technology? A job interview, according to Neuralink employee Bliss Chapman. There is no mention of monetary compensation on the web page.

    • Gsus4@mander.xyz · 1 month ago

      Nothing, but then you could patent it and license it to anyone but Elon :) Are you motivated yet?

    • AngryCommieKender@lemmy.world · 1 month ago

      You can have a free “flamethrower” cigarette lighter. The company is bankrupt, and Musk has a warehouse of the things he didn’t sell.

  • MonkderDritte@feddit.de · 1 month ago

    They want to add compression to the implant?

    And how does the brainwave data look? I’m sure they have some samples?

    • partial_accumen@lemmy.world · 1 month ago

      They want to add compression to the implant?

      They’re making their own silicon for their sensor so adding an on-die ASIC for a specific compression method sounds pretty attainable.

      • Cosmicomical@lemmy.world · 1 month ago

        What does this have to do with the question? Having samples of the data they want to compress is fundamental if you hope to find an algorithm to compress 200x.

        • partial_accumen@lemmy.world · 1 month ago

          What does this have to do with the question? Having samples of the data they want to compress is fundamental if you hope to find an algorithm to compress 200x.

          There were two questions asked. I answered for part of the first question. I have no information on the second question (samples). You’re welcome to do your own googling to see if you can find an answer.

  • drdiddlybadger@pawb.social · 1 month ago

    That isn’t their problem at all. Their problem is scar tissue buildup, which they haven’t even bothered addressing. Wtf are they doing talking about data compression when they can’t even maintain a connection?

    • Cocodapuf@lemmy.world · 1 month ago

      You really think they only have one problem to solve? If that were the case this would be relatively easy.

  • TheDudeV2@lemmy.ca (OP) · 1 month ago

    I’m not an Information Theory guy, but I am aware that, regardless of how clever one might hope to be, there is a theoretical limit on how much any given set of information can be compressed; and this is particularly true for the lossless compression demanded by this challenge.

    Quote from the article:

    The skepticism is well-founded, said Karl Martin, chief technology officer of data science company Integrate.ai. Martin’s PhD thesis at the University of Toronto focused on data compression and security.

    Neuralink’s brainwave signals are compressible at ratios of around 2 to 1 and up to 7 to 1, he said in an email. But 200 to 1 “is far beyond what we expect to be the fundamental limit of possibility.”
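To make the “fundamental limit” concrete: the empirical Shannon entropy of a stream lower-bounds how few bits per sample any lossless coder can use. A minimal sketch in Python (the 10-bit samples and the noise level are made-up assumptions for illustration, not Neuralink’s actual data):

```python
import math
import random
from collections import Counter

def entropy_bits_per_symbol(data):
    """Empirical Shannon entropy: a lower bound on the average number
    of bits per symbol that any lossless coder can achieve."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical 10-bit ADC samples dominated by Gaussian noise.
random.seed(0)
samples = [int(random.gauss(512, 40)) % 1024 for _ in range(100_000)]

h = entropy_bits_per_symbol(samples)
best_ratio = 10 / h  # raw bits per sample / entropy bits per sample
print(f"entropy ~ {h:.1f} bits/sample, best lossless ratio ~ {best_ratio:.1f}:1")
```

Real recordings have more structure than this toy noise, which is how Martin gets his 2:1 to 7:1 estimate; but nothing noise-dominated comes anywhere near 200:1 losslessly.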

    • Waldowal@lemmy.world · 1 month ago

      I’m no expert in this subject either, but a theoretical limit could be beyond 200x - depending on the data.

      For example, a basic compression approach is to use a lookup table that maps large values to smaller ids. Suppose the data can only contain two values: one is 10,000 letter ‘a’s, the other is 10,000 letter ‘b’s. We can map the first to the digit 1 and the second to the digit 2. With this lookup in place, a compressed value of “12211” would decompress to 50,000 characters: a 10,000x compression ratio. Extrapolate that example and there is no theoretical maximum to the compression ratio.

      But that’s when the data set is known and small. As the complexity grows, it does seem logical that a maximum limit would be introduced.

      So, it might be possible to achieve 200x compression, but only if the complexity of the data set is below some threshold I’m not smart enough to calculate.
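The two-value example above can be sketched as a toy dictionary coder (the names and block size are just for illustration):

```python
# Toy dictionary coder for the two-value example: each known
# 10,000-character block is replaced by a one-character id.
BLOCK = 10_000
table = {"a" * BLOCK: "1", "b" * BLOCK: "2"}
inverse = {v: k for k, v in table.items()}

def compress(msg):
    blocks = (msg[i:i + BLOCK] for i in range(0, len(msg), BLOCK))
    return "".join(table[b] for b in blocks)

def decompress(code):
    return "".join(inverse[c] for c in code)

original = "a" * BLOCK + "b" * BLOCK + "b" * BLOCK
packed = compress(original)          # three one-character ids
assert decompress(packed) == original
ratio = len(original) / len(packed)  # 10,000x, but only because the
                                     # alphabet is two pre-shared blocks
```

The catch, as the comment says, is that this only works because both sides already know the entire (tiny) set of possible messages in advance.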

      • QuadratureSurfer@lemmy.world · 1 month ago

        You also have to keep in mind that, the more you compress something, the more processing power you’re going to need.

        Whatever compression algorithm that is proposed will also need to be able to handle the data in real-time and at low-power.

        But you are correct that compression beyond 200x is absolutely achievable.

        A more visual example of compression could be something like one of the Stable Diffusion AI/ML models. The model may only be a few gigabytes, but you can generate an insane number of images that go well beyond that initial model size. And as long as someone else uses the same model, input, and seed, they can generate the exact same image as you. So instead of having to transmit the entire 4k image itself, you just have to tell them the prompt, along with a few variables (the seed, the CFG scale, the number of steps, etc.), and they can generate the same 4k image on their own machine.

        So basically, for only about a kilobyte, you can get 20+ MB worth of data transmitted this way. The drawback is that you need a powerful computer and a lot of energy to regenerate those images, which brings us back to the problem of conveying this data in real time at low power.

        Edit:

        Some quick napkin math:

        For transmitting the information to generate that image, you would need about 1 KB to allow for 1,000 characters in the prompt (if you even need that many), then about 2 bytes for the height, 2 for the width, 8 bytes for the seed, and less than a byte each for the CFG and the steps (but we’ll round up to 2 bytes). Then you would want something better than just a parity bit for ensuring the message is transmitted correctly, so let’s throw a 32 or 64 byte hash on the end.
        That still only puts us a little over 1 KB (1,078 bytes). So for generating a 4k image (.PNG file) we get ~24 MB worth of lossless decompression. That’s 24,000,000 bytes, which gives us a compression factor of roughly 20,000x.
        But of course, that’s still going to take time to decompress, plus a decent spike in power consumption for 30-60+ seconds (depending on hardware), which is far from anything “real-time”. And you could be generating 8k images instead of 4k images; I’m not stressing this idea to its full potential by any means.

        So in the end you get compression at a factor of more than 20,000x for using a method like this, but it won’t be for low power or anywhere near “real-time”.
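The napkin math above is easy to double-check in code (all figures are the comment’s own assumptions, not measured values):

```python
# Byte budget for sending "how to regenerate the image" instead of the image.
payload_bytes = {
    "prompt": 1000,        # up to 1,000 characters of prompt
    "width": 2,
    "height": 2,
    "seed": 8,
    "cfg_and_steps": 2,    # rounded up
    "hash": 64,            # integrity check on the reconstructed image
}
sent = sum(payload_bytes.values())  # bytes actually transmitted
image_bytes = 24_000_000            # assumed ~24 MB 4k PNG
ratio = image_bytes / sent
print(f"{sent} bytes sent, ratio ~ {ratio:,.0f}x")
```

Which lands on the ~1 KB payload and a ratio in the low tens of thousands, matching the estimate above.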

        • Cosmicomical@lemmy.world · 1 month ago

          just have to tell them the prompt, along with a few variables

          Before you can do that, you have to spend hours of computation to figure out a prompt and a set of variables that perfectly match the picture you want to transmit.

          • QuadratureSurfer@lemmy.world · 1 month ago

            Sure, but this is just a more visual example of how compression using an ML model can work.

            The time you spend reworking the prompt, or tweaking the steps/cfg/etc. is outside of the scope of this example.

            And if we’re really talking about creating a good pic, it helps to use tools like ControlNet/inpainting/etc., which could still be communicated to the receiving machine, but then you start to lose out on some of the compression, by about 1 KB for every additional time you need to run the model to get the correct picture.

    • orclev@lemmy.world · 1 month ago

      The implication of a 200 to 1 algorithm would be that the data they’re collecting is almost entirely noise. Specifically that 99.5% of all the data is noise. In theory if they had sufficient processing in the implant they could filter the data down before transmission thus reducing the bandwidth usage by 99.5%. It seems like it would be fairly trivial to prove that any such 200 to 1 compression algorithm would be indistinguishable in function from a noise filter on the raw data.

      It’s not quite the same situation, but this should show some of the issues with this: https://matt.might.net/articles/why-infinite-or-guaranteed-file-compression-is-impossible/
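One way to see the point: feed a general-purpose lossless compressor pure noise versus highly structured data (a rough illustration of the redundancy argument, not a model of neural signals):

```python
import os
import zlib

noise = os.urandom(100_000)       # incompressible by construction
structured = b"ab" * 50_000       # almost total redundancy

noise_ratio = len(noise) / len(zlib.compress(noise, 9))
struct_ratio = len(structured) / len(zlib.compress(structured, 9))

# Noise stays near 1:1; the structured stream compresses by orders of
# magnitude. A lossless 200:1 ratio requires the raw stream to be
# ~99.5% redundant, i.e. almost entirely predictable.
print(f"noise ~ {noise_ratio:.2f}:1, structured ~ {struct_ratio:.0f}:1")
```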

      • Death_Equity@lemmy.world · 1 month ago

        Absolutely, they need a better filter and on-board processing. It’s like they are just gathering everything and transmitting it for external processing, instead of cherry-picking the data that matches a previously trained action and sending only that as an output.

        I’m guessing they kept the processing power low because of heat or power availability; they wanted that quiet “sleek” puck instead of a brick with a fanned heatsink. Maybe they should consider a jaunty hat to hide the hardware.

        Gathering all the available data has future utility, but their data-transmission bottleneck makes that data-gathering capability worthless for now. They are trying to leap way too far ahead, prioritizing vanity too highly, and getting bit for it. About par for the course for an Elon project.