

In a sense, AI is already fucking with everyone’s brain when it comes to mass-produced ads and propaganda.
Better than creating this culinary atrocity in real life.
I agree, but I’m not sure it matters when it comes to the big questions, like “what separates us from the LLMs?” Answering that basically amounts to answering “what does it mean to be human?”, which has been stumping philosophers for millennia.
It’s true that artificial neurons are significantly different from biological ones, but are biological neurons what make us human? I’d argue no. Animals have neurons, so are they human? Also, if we ever did create a brain simulation that perfectly replicated someone’s brain down to the cellular level, and that simulation behaved exactly like the original, I would characterize that as a human.
It’s also true LLMs can’t learn, but there are plenty of people with anterograde amnesia that can’t either.
This feels similar to the debates about what separates us from other animal species. It used to be thought that humans were qualitatively different from other species by virtue of our use of tools, language, and culture. Then it was discovered that plenty of other animals use tools, have language, and something resembling a culture. These discoveries were ridiculed by many throughout the 20th century, even by scientists, because they wanted to keep believing humans are special in some qualitative way. I see the same thing happening with LLMs.
I don’t know how I work. I couldn’t tell you much about neuroscience beyond “neurons are linked together and somehow that creates thoughts”. And even when it comes to complex thoughts, I sometimes can’t explain why. At my job, I often lean on intuition I’ve developed over a decade. I can look at a system and get an immediate sense if it’s going to work well, but actually explaining why or why not takes a lot more time and energy. Am I an LLM?
Ph’nglui mglw’nafh Kevin Rose Digg wgah’nagl fhtagn
I woke up screaming last night because I dreamed I went to grab my colored pencils and they said “colour” on the box. Almost as bad as that time I dreamed I had to take a driving test and all the speed signs were in KM.
I’ll remember that the next time I enter my PIN number at an ATM machine.
Thankfully, there’s an official standard for using the internet with just carrier pigeons: https://en.m.wikipedia.org/wiki/IP_over_Avian_Carriers
Let me guess, its favorite band is sssssslayer
That’s not what I’m saying at all. I’m saying the rich and powerful have a vested interest in not taking risks that jeopardize their power and wealth, because they have more to lose.
The reason these models are being heavily censored is because big companies are hyper-sensitive to the reputational harm that comes from uncensored (or less-censored) models. This isn’t unique to AI; this same dynamic has played out countless times before. One example is content moderation on social media sites: big players like Facebook tend to be more heavy-handed about moderating than small players like Lemmy. The fact small players don’t need to worry so much about reputational harm is a significant competitive advantage, since it means they have more freedom to take risks, so this situation is probably temporary.
I work at big tech (not MS) and yes, the comp package really is that good, though not as good as it used to be. I immediately doubled my total comp when I came here from my last job, and now it’s ~5x. I could retire right now if I wanted, so I don’t care about layoffs anymore.