• Echo Dot@feddit.uk
    9 hours ago

    Obviously I can’t really answer that question; it’s nuanced, and I can’t give a black and white answer.

    Notice I said chords; music isn’t just chords, though. I mentioned it because there have been copyright cases where people have tried to claim ownership of certain chords or chord progressions, and the courts have decided that isn’t the case. You can own the composition but not the progression.

    AI music is an entire piece, theoretically an original piece. You could of course make the argument that it’s just cutting up bits of pre-existing work and sticking them back together, but you could make that argument about a human as well. Copyright law isn’t really fit for the 21st century, and it certainly isn’t fit to deal with the existence of AI, but that’s nothing new. I can go online right now and find music that sounds like the Imperial March; is that a copyright violation? The courts don’t think so.

    • MountingSuspicion@reddthat.com
      4 hours ago

      I had initially written a different comment instead of asking about music, but since you brought it up I felt it would be a good gauge. I think we should treat all AI-devised work similarly: if we support AI code, we should support AI art. I do think a nuanced approach is understandable, though. The following is what I had initially written for my previous comment:

      I don’t think there is a “correct” way of doing a lot of the things AI is being asked to do. There are conventions which are followed, but plenty of people solve the same problems in different ways. Is an interaction a click, a long press, or a swipe? Is something a button and a popup, or a simple menu item? If there was a single correct way to solve coding questions, there would only be one operating system or one Lemmy app.

      I think a lot of people see code as a bunch of loops rather than a (hopefully) well-planned solution to a problem. AI as it stands cannot actually understand a problem, but it can guess what a solution might look like based on similar problems. It takes those people’s solutions and assumptions and applies them, and there is an assumption that the output is somehow inevitable, regardless of who or what was asked to solve the problem. It’s like saying there’s a “correct” photorealistic beach scene. Sure, plenty of people might have an idea of what that might look like, but everyone might have a different take. Are there palm trees? Are there birds? Where is the sun in the picture? I’m sure AI can generate a serviceable rendition of one, but its rendition is no more correct, and it certainly wasn’t done with an actual understanding of the elements of the scene.

      People aren’t asking AI to generate “if… else…”; they are having AI design applications whole cloth, and we end up seeing results like that site that had its user list exposed on its front end, or apps with buttons that go nowhere. If there was one correct solution, then AI either didn’t know it, didn’t understand it, or didn’t deliver it. Any of those options is a failure of the AI in that case, but in my opinion the true and scarier answer is that there is no correct way to do a lot of the things AI is generating code for, and that it is prone to mistakes for that reason.