• 0 Posts
  • 780 Comments
Joined 1 year ago
Cake day: March 4th, 2024




  • No, the reality is that these [indecipherable] have simply gotten too powerful, and they continue to abuse our country for immense profit because the American public has allowed them to get away with it.

    I mean, it seems pretty clear that Luigi didn’t have a personal beef with Thompson directly, but more so with the political system that allowed Thompson to thrive. I’m not really sure why calling this politically motivated is so divisive.



  • Just for what it’s worth, you don’t need CSAM in the training material for a generative AI to produce CSAM. The models know what children look like, and what naked adults look like, so they can readily extrapolate from there.

    The fact that you don’t need to supply any real CSAM in the training material is the reasoning being offered in support of AI-generated CSAM. It’s gross, but it’s also hard to argue with. We already allow all kinds of illegal subjects to be depicted in porn: incest, rape, murder, etc. While most mainstream sites won’t host that kind of material, none of it is technically outlawed. That’s partly because of freedom of speech and artistic expression and yadda yadda, but also because it comes with the understanding that it’s a fake, made-for-film production in which nobody’s consent was violated; it’s okay because none of it was actually rape, incest, or murder. And if AI CSAM can be made without violating the consent of any real person, then what makes it different?

    I don’t know how I feel about it, myself. The idea of “ethically-sourced” CSAM doesn’t exactly sit right with me, but if it’s possible to make it in a truly victimless manner, then I find it hard to argue for outright banning something just because I don’t like it.



  • all data on plebbit is text-only, you cannot upload media.

    I worry this still puts the “host” of a community at risk. In some jurisdictions, storing functional links to CSAM on your device, even in text form, is treated effectively the same as saving the actual media file locally. That means a community admin would need some sort of system on their own machine to scan for and remove those links, and there doesn’t currently seem to be a mechanism in place to do that automatically.
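    For illustration, the simplest version of such a scan would just match outbound links in stored text against a local blocklist of known-bad hosts. This is a hypothetical sketch, not an existing plebbit feature; the blocklist entry below is a made-up placeholder, and a real setup would pull hash- or URL-based feeds from an organization like the IWF or NCMEC rather than hand-maintain a set:

    ```python
    import re

    # Placeholder blocklist; in practice this would be populated from an
    # external feed, not hard-coded.
    URL_BLOCKLIST = {"example-bad-domain.test"}

    # Capture the host portion of any http(s) URL in a block of text.
    URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

    def contains_blocked_link(text: str) -> bool:
        """Return True if any URL in the text points at a blocklisted host
        or one of its subdomains."""
        for match in URL_RE.finditer(text):
            host = match.group(1).lower()
            if host in URL_BLOCKLIST or any(
                host.endswith("." + domain) for domain in URL_BLOCKLIST
            ):
                return True
        return False
    ```

    An admin could run a check like this over incoming text before it is persisted, dropping anything that matches, though keeping the blocklist current is its own problem.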

    Right now, it seems like a lot more responsibility for the end user when creating a community, compared with the relatively consequence-free route of creating one on Lemmy or Reddit.