• 0 Posts
  • 13 Comments
Joined 2 years ago
Cake day: June 16th, 2023




  • I agree, but I’m not sure it matters when it comes to the big questions, like “what separates us from the LLMs?” Answering that basically amounts to answering “what does it mean to be human?”, which has been stumping philosophers for millennia.

    It’s true that artificial neurons are significantly different from biological ones, but are biological neurons what make us human? I’d argue no. Animals have neurons, so are they human? Also, if we ever did create a brain simulation that perfectly replicated someone’s brain down to the cellular level, and that simulation behaved exactly like the original, I would characterize that as a human.

    It’s also true that LLMs can’t learn, but there are plenty of people with anterograde amnesia who can’t either.

    This feels similar to the debates about what separates us from other animal species. It used to be thought that humans were qualitatively different from other species by virtue of our use of tools, language, and culture. Then it was discovered that plenty of other animals use tools, have language, and have something resembling a culture. These discoveries were ridiculed by many throughout the 20th century, even by scientists, because they wanted to keep believing humans are special in some qualitative way. I see the same thing happening with LLMs.










  • The reason these models are being heavily censored is that big companies are hyper-sensitive to the reputational harm that comes from uncensored (or less-censored) models. This isn’t unique to AI; the same dynamic has played out countless times before. One example is content moderation on social media sites: big players like Facebook tend to be more heavy-handed about moderating than small players like Lemmy. The fact that small players don’t need to worry so much about reputational harm is a significant competitive advantage, since it means they have more freedom to take risks, so this situation is probably temporary.