• 1 Post
  • 76 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • I’d say it is (was? It’s been about a year and a half since I used it consistently, but I’m guessing it hasn’t changed much since then) moderately left by US standards, but definitely not progressive left - you don’t have to go far to find thinly veiled sexism/racism/homophobia, though that might just be because a large portion of the people there are terminally online in a bad way. That said, there are also communities ranging from conservative to hardcore conservative, but I actively avoided those, so I didn’t really see them in my feeds. The same is true of progressive communities, though they tended to drift away from being actually progressive once they reached a certain size.

  • There were plenty of good shows in 2023, though? Even excluding Frieren and shonens (since I’m assuming, based on what you said, that you aren’t interested in them), there was also Apothecary Diaries, which aired during the same season. Oshi No Ko was pretty good too (the first episode is by far the best; imo the rest of the show is still pretty good). Those stood out the most to me, but I enjoyed a lot of the 2023 shows I watched.


  • Decentralized/OSS platforms >>> Multiple competing centralized platforms >>> One single centralized platform

    Bluesky and Threads are both bad, but having more options than Twitter/X is still a step in the right direction, especially given the direction Musk is taking it. As much as I like the fediverse (I won’t be using either Threads or Bluesky anytime soon), it still has a lot of problems around ease of use. Lemmy, Mastodon, Misskey, etc. would benefit a lot from streamlining the signup process so that the average user isn’t overwhelmed by picking an instance and understanding how federation works.


  • I’m not trying to say LLMs haven’t gotten better on a technical level, nor am I saying there should have been AGI by now. I’m saying that from a user perspective, ChatGPT, Google Gemini, etc. are about as useful as they were when they came out (i.e. not very). Context size might have grown, but what does that actually mean for a user? ChatGPT writing is still obviously identifiable, and it immediately discredits my view of someone when I see it. Same with AI-generated images. In my experience, ChatGPT, Gemini, and all the others still hallucinate, which makes them near-useless for learning new topics, since you can’t be sure what is real and what’s made up.

    Another thing I take issue with is “open source” models that are driven by VCs anyway. The weights of an LLM might be released openly, but is the LLM actually open source? IMO this is one of those things where the definitions haven’t caught up to actual usage. A set of numerical weights produced by training on millions of pieces of involuntarily taken data, under retroactively modified terms of service, doesn’t seem open source to me, even if the weights themselves are freely available. And AI companies have openly admitted that they would never have been able to make what they have if they had to ask for permission. When you say that “open source” LLMs have caught up, is that true, or are these the LLM equivalent of uploading a compiled binary to GitHub and calling that open source?

    ChatGPT still loses OpenAI hundreds of thousands of dollars per day. The only way for a user to be profitable to them is to pay for the subscription and never use it. The service was venture capital hype to begin with. The same applies to Copilot and Gemini, and probably to companies like Perplexity too.

    My issue with LLMs isn’t that they’re useless or immoral. It’s that they’re mostly useless and immoral, on top of driving higher emissions and making it harder to find real results as AI-generated slop combines with SEO. They’re also normalizing the collection of any and all user data for training, including private data from health tracking apps, personal emails, and direct messages. Half-baked AI features aren’t making computers better; they’re actively making computers worse.