



  • SD? SD 3? The weights? All the above?

    Stable Diffusion is an open-source, image-generating machine learning model (similar to Midjourney).

    Stable Diffusion 3 is the next major version of the model, and in a lot of ways it looks better to work with than what we currently have. Until recently, though, we were wondering whether we’d even get the model, since Stability AI ran out of funding and is in the midst of being sold off.

    The “weights” are the numerical values that make up the neural network. By releasing the weights, they’re effectively making the model open-source, so the community can retrain or fine-tune it as much as we want.
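    To make that concrete: released weights are literally just named tensors of numbers. Here’s a minimal sketch using the safetensors library (the checkpoint filename is hypothetical):

    ```python
    # Minimal sketch: peek at the raw weight tensors inside a released checkpoint.
    # "sd3_checkpoint.safetensors" is a hypothetical filename; safetensors is the
    # format commonly used for distributing Stable Diffusion weights.
    from safetensors import safe_open

    with safe_open("sd3_checkpoint.safetensors", framework="pt") as f:
        for name in list(f.keys())[:5]:       # first few layers
            tensor = f.get_tensor(name)
            print(name, tuple(tensor.shape))  # each layer is just an array of floats
    ```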

    They made a waitlist for anyone interested in being notified once the model is released, and turned it into a pun by calling it a “weights list”.



  • Sure, but this is just a more visual example of how compression using an ML model can work.

    The time you spend reworking the prompt or tweaking the steps/CFG/etc. is outside the scope of this example.

    And if we’re really talking about creating a good pic, it helps to use tools like ControlNet/inpainting/etc., which could still be communicated to the receiving machine. But then you start losing some of the compression: roughly 1KB of extra payload for every additional time you need to run the model to get the correct picture (see the sketch below).
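    A rough sketch of how that overhead eats into the ratio, reusing the ~24MB image and ~1KB-per-run payload figures from the napkin math in the longer comment below (the run counts are illustrative):

    ```python
    # Illustrative only: if landing on the right image takes n model runs, and
    # each run needs ~1 KB of parameters (prompt tweaks, ControlNet inputs, etc.),
    # the effective compression ratio shrinks roughly linearly.
    image_bytes = 24_000_000       # ~24 MB 4k PNG, per the napkin math below
    payload_per_run = 1_078        # ~1 KB of parameters per model run

    for runs in (1, 2, 5, 10):
        ratio = image_bytes / (runs * payload_per_run)
        print(f"{runs} run(s): ~{ratio:,.0f}x compression")
    ```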



    You also have to keep in mind that the more you compress something, the more processing power you’re going to need.

    Whatever compression algorithm is proposed will also need to be able to handle the data in real time and at low power.

    But you are correct that compression beyond 200x is absolutely achievable.

    A more visual example of compression could be something like one of the Stable Diffusion AI/ML models. The model may only be a few gigabytes, but you can generate an insane number of images that go well beyond that initial model size. And as long as someone else is using the same model, inputs, and seed, they can generate the exact same image. So instead of having to transmit the entire 4k image itself, you just have to tell them the prompt, along with a few variables (the seed, the CFG scale, the number of steps, etc.), and they can generate the same 4k image on their own machine.

    So basically, for only about a kilobyte, you can get 20+MB worth of data transmitted this way. The drawback is that you need a powerful computer and a lot of energy to regenerate those images, which brings us back to the problem of conveying this data in real time while using low power.
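    Here’s a minimal sketch of the receiving side using Hugging Face’s diffusers library (the checkpoint, prompt, and parameters are illustrative). One caveat: in practice, bit-exact reproduction also depends on both machines matching hardware, drivers, and library versions:

    ```python
    # Minimal sketch: "decompress" an image from a ~1 KB description by
    # regenerating it locally. Checkpoint, prompt, and parameters are illustrative.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # assumed checkpoint; both sides must match
        torch_dtype=torch.float16,
    ).to("cuda")

    # The entire "compressed" message: everything the receiver needs.
    prompt = "a lighthouse at dusk, oil painting"
    seed, steps, cfg = 42, 30, 7.5

    # Same model + same inputs + same seed => the same image on both machines.
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, num_inference_steps=steps, guidance_scale=cfg,
                 generator=generator).images[0]
    image.save("received.png")
    ```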

    Edit:

    Some quick napkin math:

    To transmit the information needed to generate that image, you would need about 1KB to allow for 1,000 characters in the prompt (if you even need that many),
    about 2 bytes for the height,
    2 bytes for the width,
    8 bytes for the seed,
    and a byte each for the CFG scale and the step count (2 bytes total).
    Then you’d want something better than a single parity bit for ensuring the message is transmitted correctly, so let’s throw a 32- or 64-byte hash on the end…
    That still only puts us a little over 1KB (1,078 bytes with the 64-byte hash). So for generating a 4k image (a .PNG file) we get ~24MB worth of lossless decompression.
    That’s 24,000,000 bytes, which gives us a compression ratio of roughly 20,000x.
    But of course, that’s still going to take time to decompress, as well as a decent spike in power consumption for 30-60+ seconds (depending on hardware), which is far from anything “real-time”.
    Of course, you could also be generating 8k images instead of 4k images… I’m not stressing this idea to its full potential by any means.

    So in the end you get compression by a factor of more than 20,000x using a method like this, but it won’t be low-power or anywhere near “real-time”.
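    The same napkin math as a runnable check (the field sizes mirror the hypothetical payload layout above):

    ```python
    # Rough sketch of the payload layout described above; all sizes hypothetical.
    import struct

    prompt = b"a lighthouse at dusk, oil painting"  # up to 1,000 characters
    height, width = 2160, 3840   # 2 bytes each
    seed = 123456789             # 8 bytes
    cfg, steps = 7, 30           # 1 byte each
    digest = bytes(64)           # 64-byte hash for integrity checking

    payload = struct.pack("<1000sHHQBB64s", prompt, height, width,
                          seed, cfg, steps, digest)
    print(len(payload))          # 1078 bytes

    image_bytes = 24_000_000     # ~24 MB 4k PNG from the example
    print(f"~{image_bytes / len(payload):,.0f}x")  # ~22,263x compression
    ```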





    Sure, but don’t let that feed into the sentiment that AI = scams. “AI” is far too broad a term, covering a ton of different applications that already work, to be written off like that.

    And there are plenty of popular commercial AI products out there that work as well, so trying to say that “pretty much everything that’s commercial AI is a scam” is also inaccurate.

    We have:
    Suno’s music generation
    NVIDIA’s upscaling
    Midjourney’s image generation
    OpenAI’s ChatGPT
    Etc.

    So instead of trying to tear down anything and everything “AI”, we should probably just point out that startups leaning on buzzwords (like “AI”) should be treated with a healthy dose of skepticism until they can prove their product in a live environment.


  • If you think that “pretty much everything AI is a scam”, then you’re either setting your expectations way too high, or you’re only looking at startups trying to get the attention of investors.

    There are plenty of open-source AI models out there today that can be used for any number of purposes: generating images (Stable Diffusion), transcribing audio (Whisper), audio generation, object detection, upscaling, downscaling, etc.
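    As one concrete example, transcription with the open-source whisper package is only a few lines (the audio filename is hypothetical):

    ```python
    # Minimal sketch: transcribe an audio file with the open-source Whisper model.
    # "meeting.mp3" is a hypothetical input file.
    import whisper

    model = whisper.load_model("base")   # small, general-purpose checkpoint
    result = model.transcribe("meeting.mp3")
    print(result["text"])                # the recognized transcript
    ```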

    Part of the problem might be how you define AI… it’s a much broader term than what I think you’re trying to convey.