• 0 Posts
  • 421 Comments
Joined 1 year ago
Cake day: June 22nd, 2023






  • They claim he made a threat. The article failed to print his side of the story for some curious reason. It doesn’t print any testimony from the bystanders, either.

    Fair enough. Supposedly they were wearing body cams, so hopefully some of what actually happened can be answered objectively; I’m just pointing out what the article said. If he didn’t make a threat or have a knife, then tasering him is a wild escalation. It’s just that if he did, the police can’t really just let him get on a train.

    Cops will often lie about the danger of a suspect in order to justify escalating their use of force. That said, they weren’t that concerned about the danger he posed when they deployed tasers into the crowd first. They didn’t switch to guns until they realized the tasers weren’t going to work.

    Again, assuming what the article says is true, which is a big assumption, it’s not that crazy to taser a guy who just got onto a train with a knife after threatening you. At that point you’re looking at a potential mass stabbing incident if you do nothing.

    Again, who knows, maybe the cops are blowing his behaviour wildly out of proportion, I’m just saying that, based on the article, it sounds like he wasn’t just gunned down for jumping a turnstile.



  • The work is reproduced in full when it’s downloaded to the server used to train the AI model, and the entirety of the reproduced work is used for training. Thus, they are using the entirety of the work.

    That’s objectively false. It’s downloaded to the server, but it should never be redistributed to anyone else in full. As a developer, for instance, it’s illegal for me to copy code I find in a Medium article and use it in our software. I’m perfectly allowed to read that Medium article, learn from it, and then write my own similar code.

    And that makes it better somehow? Aereo got sued out of existence because their model threatened the retransmission fees that broadcast TV stations were being paid by cable TV subscribers. There wasn’t any devaluation of broadcasters’ previous performances, the entire harm they presented was in terms of lost revenue in the future. But hey, thanks for agreeing with me?

    And Aereo should not have lost that suit. That’s an example of the US court system abjectly failing.

    And again, LLM training so egregiously fails two out of the four factors for judging a fair use claim that it would fail the test entirely. The only difference is that OpenAI is failing it worse than other LLMs.

    That’s what we’re debating, not a given.

    It’s even more absurd to claim something that is transformative automatically qualifies for fair use.

    Fair point, but it is objectively transformative.




  • You said open source. Open source is a type of licensure.

    The entire point of licensure is legal pedantry.

    No. Open source is a concept. That concept also has pedantic legal definitions, but the concept itself is not inherently pedantic.

    And as far as your metaphor is concerned, pre-trained models are closer to pre-compiled binaries, which are expressly not considered Open Source according to the OSD.

    No, they’re not. Which is why I didn’t use that metaphor.

    A binary is effectively a black box. There is nothing to learn from a binary unless you decompile it back into source code.

    In this case, literally all the source code is available. Any researcher can read through their model, learn from it, copy it, twist it, and build their own version of it wholesale. Not providing the training data is more like saying that Yuzu or any other emulator isn’t open source because it doesn’t ship the copyrighted games. The project provides literally every part of itself that it can open source, and then lets the user feed it whatever training data they are allowed access to.
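    To make that split concrete, here is a minimal sketch of what “open weights plus your own data” looks like in practice, assuming the Hugging Face transformers and datasets libraries; the model name and the corpus file are hypothetical placeholders, not anything referenced in this thread.

```python
# Minimal sketch: load openly published weights and fine-tune them on data
# the user supplies. "some-open-weight-model" and "my_corpus.txt" are
# hypothetical placeholders.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "some-open-weight-model"  # any openly licensed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The training data comes from the user, not from the model's publisher.
corpus = load_dataset("text", data_files={"train": "my_corpus.txt"})
tokenized = corpus["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # produces a derivative model without the original training set
```

    The sketch only illustrates the split described above: the published weights and code are fully inspectable and modifiable, while the training data is whatever the user is allowed to bring.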



  • LLMs use the entirety of a copyrighted work for their training, which fails the “amount and substantiality” factor.

    That factor is relative to what is reproduced, not to what is ingested. A company is allowed to scrape the web all it wants as long as it doesn’t republish it.

    By their very nature, LLMs would significantly devalue the work of every artist, author, journalist, and publishing organization, on an industry-wide scale, which fails the “Effect upon work’s value” factor.

    I would argue that LLMs devalue the author’s potential for future work, not the original work they were trained on.

    Those two alone would be enough for any sane judge to rule that training LLMs would not qualify as fair use, but then you also have OpenAI and other commercial AI companies offering the use of these models for commercial, for-profit purposes, which also fails the “Purpose and character of the use” factor.

    Again, that’s the practice of OpenAI, but not inherent to LLMs.

    You could maybe argue that training LLMs is transformative,

    It’s honestly absurd to try and argue that they’re not transformative.



  • I agree the title was a giveaway, but what do you mean more of a Sweeney Todd situation? She literally baked their blood into tea cakes that she served to the neighbourhood women and gave them soap that she made from boiling the fat of the last victim… Also, it’s a little pedantic, but I believe that technically she didn’t dissolve them in acid but in a strong base… If it was from soap making, it was most likely lye, which is extremely basic. Caustic just means it will dissolve you, but both acids and bases can do that.





  • Making a copy is free. Making the original is not.

    Yes, exactly. Do you see how that is different from the world of physical objects and energy? Copies aren’t free there: even once you design something and build a factory to produce it, the first item off the line takes the same amount of resources as the last one.

    Capitalism is based on the idea that things are scarce. If I have something, you can’t have it, and if you want it, then I have to give up my thing, so we end up trading. Information does not work that way. We can freely copy a piece of information as much as we want, which is why monopolies and capitalism are a bad way of rewarding creators. They inherently cause us to impose scarcity where there is no need for it, because in capitalism things that are abundant do not have value. Capitalism fundamentally fails to function when there is an abundance of resources, which is why copyright is a dumb system for the digital age. Rather than recognize that we now live in an age of information abundance, we spend billions of dollars trying to impose artificial scarcity.


  • they did NOT predict generative AI, and their graphics cards just HAPPEN to be better situated for SOME reason.

    This is the part that’s flawed. They have actively targeted neural network applications with hardware and driver support since 2012.

    Yes, they got lucky in that generative AI turned out to be massively popular and required massively parallel computing capabilities, but luck is one part opportunity and one part preparedness. The reason they were able to capitalize is that they had the best graphics cards on the market and then specifically targeted AI applications.
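    As a rough illustration of why that hardware and driver stack matters, here is a minimal sketch, assuming PyTorch on a machine with an NVIDIA GPU and CUDA drivers installed, that times the same matrix multiply on CPU and GPU; the sizes are arbitrary and the numbers will vary by machine.

```python
# Minimal sketch: the same matrix multiply on CPU and, if available, on an
# NVIDIA GPU through the CUDA backend. Sizes and timings are illustrative only.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
a @ b
print(f"CPU matmul: {time.time() - start:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()  # make sure the host-to-device copy has finished
    start = time.time()
    a_gpu @ b_gpu
    torch.cuda.synchronize()  # wait for the kernel to finish before timing
    print(f"GPU matmul: {time.time() - start:.3f}s")
```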