• 1 Post
  • 12 Comments
Joined 1 year ago
Cake day: June 18th, 2023


  • They’re possibly good for the short term. But longer term, people will be wary of getting in too deep with them and will seek out alternatives. A game engine like Unity thrives on a large number of skilled users and lots of games using the engine. One of those users or games could’ve been the next big win. Now that might go to Unreal instead.



  • This isn’t true of AI generators. You can absolutely draw a shitty stick figure in the pose you want and it’ll transform that into a proper artwork with the person in that pose (roughly the workflow sketched below). There are tons and tons of ways to manipulate the output.

    And again, we give copyright to artists who incorporate randomness into their art. If I throw darts at paint-filled balloons, I get to copyright the output. It would be absolutely impossible to replicate that piece, and I only have vague control over the results.
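    A minimal sketch of that kind of pose-guided generation, assuming the Hugging Face diffusers library and its public scribble ControlNet checkpoint; the input file name and the prompt are made up for illustration:

    ```python
    # Rough sketch: a crude scribble/stick-figure image steers the composition,
    # the text prompt steers the content. Model IDs are public diffusers
    # checkpoints; the input file is hypothetical.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    stick_figure = load_image("stick_figure_pose.png")  # the user's rough sketch

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # The output keeps the pose/composition of the sketch while rendering the prompt.
    image = pipe(
        "an oil painting of a knight resting on a hillside",
        image=stick_figure,
        num_inference_steps=30,
    ).images[0]
    image.save("posed_output.png")
    ```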


  • These are my thoughts as well. It seems obvious that putting in ‘cat with a silly hat’ as a prompt is basically the creative equivalent of googling for a picture.

    But, as you say, that sort of AI usage is just dumb, bottom-tier usage. There’s someday going to be a major, critical piece of art that heavily uses AI assistance in its creation, and people are going to be surprised that it’s somehow not copyrightable under the laws and rulings they’re working on now.

    I remember in the LOTR behind-the-scenes features they talked about how WETA built game-like software to simulate the massive battle scenes, giving each soldier a set of attacks, HP, etc. They then used this to build out the final CGI (a toy version of the idea is sketched below).

    Stuff like that has already been going on for ages, and it’s only going to get murkier what ‘AI art’ even means and how much human creativity and editing has to be added to the process to make something human-created rather than AI-created.
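    A toy illustration of that kind of agent-based battle simulation, purely as a sketch of the general idea (each soldier is an agent with hit points and an attack, and the large-scale outcome emerges from many small per-agent decisions); it is not how WETA’s actual tooling worked:

    ```python
    # Toy agent-based battle sketch: every soldier is a simple agent with HP and
    # an attack value; the aggregate battle emerges from per-agent decisions.
    import random
    from dataclasses import dataclass

    @dataclass
    class Soldier:
        hp: int = 10
        attack: int = 3

    def skirmish(army_a, army_b, max_rounds=200):
        for _ in range(max_rounds):
            alive_a = [s for s in army_a if s.hp > 0]
            alive_b = [s for s in army_b if s.hp > 0]
            if not alive_a or not alive_b:
                break
            # Each living soldier strikes a random living enemy this round.
            for soldier in alive_a:
                random.choice(alive_b).hp -= random.randint(1, soldier.attack)
            for soldier in alive_b:
                random.choice(alive_a).hp -= random.randint(1, soldier.attack)
        return sum(s.hp > 0 for s in army_a), sum(s.hp > 0 for s in army_b)

    survivors = skirmish([Soldier() for _ in range(5000)],
                         [Soldier(hp=12) for _ in range(3000)])
    print("survivors per side:", survivors)
    ```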





  • The claim is that AI art is derivative work and that the authors of the works used to train the model should have partial copyright over it too.

    To me this is a potential can of worms. Humans can study and mimic art from other humans. It’s a fundamental part of the learning process.

    My understanding of modern AI image generation is that it’s much more advanced than something like music sampling; it’s not just a cut-and-paste machine mashing artworks together. How would you ever determine how much of a particular artist’s training data was used in the output?

    If I create my own unique image in Jackson Pollock’s style I own the entirety of that copyright, with Pollock owning nothing, no matter that everyone would recognize the resemblance in style. Why is AI different?

    It feels like expanding the definition of derivative works is more likely to result in companies going after real artists who mimic or are otherwise inspired by Disney/Pixar/etc and attempting to claim partial copyright rather than protecting real artists from AI ripoffs.


  • A photographer does not give their camera prompts and then evaluate the output.

    I understand what you’re trying to say, but I think this will grow increasingly unclear as machines/software continue to play a larger and larger part in the creative process.

    I think you can argue that photographers issue commands to their camera and then evaluate the output. Modern digital cameras have made photography almost a statistical exercise rather than a careful creative process. Photographers take hundreds and hundreds of shots and then evaluate which one was best.

    Also, AI isn’t some binary on/off switch. Most major software will begin incorporating AI assistant tools that will further muddy the waters. Is something AI-generated if the artist added an extra inch of canvas to a photograph using Photoshop’s new generative fill feature so that the subject was better centered in the frame (roughly the operation sketched below)?
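    A minimal sketch of what that kind of canvas extension boils down to under the hood: pad the photo, mask the new empty strip, and let an inpainting model invent plausible pixels there. This assumes the Hugging Face diffusers library and a public inpainting checkpoint, not Photoshop’s actual implementation; the file names are made up:

    ```python
    # Outpainting sketch: extend the canvas, then inpaint the blank strip.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    photo = Image.open("portrait.jpg").convert("RGB")  # hypothetical input photo
    pad = 128                                          # extra canvas on the right

    # New, wider canvas with the original photo pasted on the left.
    extended = Image.new("RGB", (photo.width + pad, photo.height))
    extended.paste(photo, (0, 0))

    # Mask: white where the model should generate, black where the photo is kept.
    mask = Image.new("L", extended.size, 0)
    mask.paste(255, (photo.width, 0, extended.width, extended.height))

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    result = pipe(
        prompt="a seamless continuation of the background",
        image=extended,
        mask_image=mask,
    ).images[0]
    result.save("recentered.png")
    ```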



  • Companies aren’t run to earn profit from the goods and services they produce anymore. They’re investment vehicles for wealthy VCs to use and abuse until they run them into the ground, then jump to the next disposable company. Someday this will mean there are no effective companies left at all, but the investors don’t care.

    If governments were actually functioning they’d recognize this danger and crack down on this behavior because it weakens the country as a whole, but most of the politicians are already bought and paid for.


  • People are also waaay overestimating how close we are to the classical AI shown in media. They see ChatGPT and understand that it has problems, but they also know we went from dumb phones to super-fast smartphones really quickly, so they apply the same logic to AI, when it’s closer to the ‘bird in the picture’ xkcd comic. (Ironically, that particular problem can now be solved by ‘AI’, as the sketch below shows, but the point still stands.)

    End users are bad at estimating the complexity of a given task, and getting from something like our current AI models to something like Cortana from Halo is a completely unknown amount of time away, most likely decades if not centuries. The current approach to AI will most likely never work like that, because it has no true ability to learn and grow, at least not in the human sense.
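    A minimal sketch of the once-hard xkcd task (“tell me whether this photo contains a bird”), assuming an off-the-shelf torchvision classifier; the file name and the crude bird check over ImageNet labels are illustrative assumptions:

    ```python
    # Rough "is there a bird in this photo?" check with a pretrained classifier.
    import torch
    from PIL import Image
    from torchvision import models

    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        label = weights.meta["categories"][model(img).argmax().item()]

    # ImageNet's many bird classes (robin, jay, ostrich, ...) make a crude check easy.
    BIRD_LABELS = {"robin", "jay", "magpie", "ostrich", "goldfinch", "bald eagle"}
    print(label, "-> bird!" if label in BIRD_LABELS else "-> probably not a bird")
    ```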