

Another unpublished preprint that hasn’t been through peer review? Funny how that suddenly doesn’t matter when something seems to support your talking points. Too bad it doesn’t mean what you want it to mean.
“Logical operations and definitions” = Booleans and propositional-logic formalisms. You don’t do that either, because humans don’t think like that, but I’m not surprised you’d leave out that context and go for the kinda over-the-top, easy-to-misread conclusion.
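For anyone unclear on what that phrase actually covers: “propositional logic formalisms” means tasks like evaluating a formula such as (A AND NOT B) → C over explicit truth assignments. A minimal sketch of that kind of task (the formula and names are my own illustration, not taken from the preprint):

```python
from itertools import product

# Implication as a Boolean operation: p -> q is (not p) or q.
def implies(p, q):
    return (not p) or q

# The example formula (A AND NOT B) -> C.
def formula(a, b, c):
    return implies(a and not b, c)

# Build the full truth table: 8 rows for 3 Boolean variables.
table = {(a, b, c): formula(a, b, c)
         for a, b, c in product([False, True], repeat=3)}

# Find the assignments that falsify the formula.
falsifying = [row for row, val in table.items() if not val]
print(falsifying)  # [(True, False, False)]
```

That’s the register those benchmarks test in, which is exactly why “humans don’t think like that” is relevant context.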
It’s really interesting how you get people constantly doubling down on chatbots specifically being useless, citing random things from Google, while Palantir somehow finds great use for its AIs in mass surveillance and policing. What’s the talking point there, that they’re too dumb to operate and nobody should worry?
You’re less coherent than a broken LLM lol. You made the claim that transformer-based AIs are fundamentally incapable of reasoning, or something vague like that, based on gimmicky af “I tricked the chatbot into getting confused, therefore it can’t think” unpublished preprints (while demanding peer review from everyone else). Why would I need to prove anything? LLMs can write working code; that’s a demonstration of handling abstract logic that can’t be faked with surface-level probability, and it would be a complete waste of time to explain it to anyone who is either stuck in cognitive dissonance or, less often, intentionally spreading misinformation.
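To make the “writing code demonstrates abstract logic” point concrete: here’s the kind of prompt-sized task current LLMs handle reliably. Checking whether brackets are balanced requires tracking nesting state correctly across the whole input, not just producing statistically plausible tokens. (This is my own illustrative example, not from any cited source.)

```python
# Return True if every bracket in s is closed in the right order.
def balanced(s):
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)          # remember the open bracket
        elif ch in pairs:
            # a closer must match the most recent unmatched opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                  # no unmatched openers left

print(balanced("([]{})"))  # True
print(balanced("([)]"))    # False
```

Producing code like this on demand, and getting the edge cases right, is the behavior that’s hard to square with “fundamentally incapable of reasoning.”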
Are the AIs developed by Palantir “fundamentally incapable” of their demonstrated effectiveness or not? It’s a pretty valid question when we’re already being surveilled by them, yet people like you indirectly suggest this can’t be happening. Should people not care about predictive policing?
And what about the industrial-control AIs you “critics” never mention: do power-grid controllers fake it too? Someone should tell Siemens, since apparently they don’t know their deployed systems work. While we’re on that, should we not be concerned about monopolies controlling public infrastructure with closed-source AI models, on the grounds that those models are “fundamentally incapable” of operating?
I don’t know, maybe this “AI skepticism” thing is lowkey intentional industry misdirection and most of you fell for it?