We also didn't understand how the internet would change the world, but still went ahead with it. We didn't understand how computers would change the world, but still went ahead with it. We didn't understand how the steam engine would change the world… etc etc.
No one can know how a new invention will change things, but you are not going to be able to crush humans' innate creativity and drive to try new things. Sometimes those things are going to be a net negative, and that's bad, but the alternative is to insist nothing new is tried, which is (a) bad and (b) not possible.
People being economically displaced by productivity-increasing innovation is good, provided it happens at a reasonable pace and there is a sufficient social safety net to get those people back on their feet. Unfortunately those safety nets don't exist everywhere and have been under attack (in the West) for the past 40 years.
I don’t think that’s really a fair comparison. Babies exist with images and sounds for over a year before they begin to learn language, so it would make sense that they begin to understand the world in non-linguistic terms and then apply language to that. LLMs only exist in relation to language, so they couldn't understand a concept separately from language; it would be like asking a person to conceptualise radio waves before having heard of them.
Probably, given that LLMs only exist in the domain of language. Still, it's interesting that they seem to have a “conceptual” system that is commonly shared between languages.
Compare that to a human, who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I've used, except to make sure I'm following the rules of grammar.
Interesting that…
Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”
Translation apps would be the main one for LLM tech; LLMs largely came out of Google's research into machine translation.
No, you're not going crazy, you just understand economics and trade better than the President of the USA.
I wouldn't trust the words of a Palantir exec if they said the sky was blue, but even accepting what they say, it's just that the Hamas attacks gave the ban impetus to move forwards. By his own words, the ban already had bipartisan support and executive approval before that.
The headline that it was “about” Israel rather than China is a massive reach.
It's a very difficult subject; both sides have merit. I can see the argument that “CSAM created without abuse could be used in the treatment/management of people with these horrible urges”, but I can also see that “allowing people to create CSAM could normalise it and lead to more actual abuse”.
Sadly it's incredibly difficult for academics to study this subject and see which of those two effects is more prevalent.
It does not, unless you run weights that someone else has modified to remove the baked-in censorship. If you run the unmodified weights released by DeepSeek, it will refuse to answer most things that the CCP doesn't like being discussed.
Not the parent, but LLMs don't solve anything; they allow more work with less effort expended in some spaces. Just as a horse-drawn plough didn't solve any problem that couldn't be solved by people tilling the earth by hand.
As an example, my partner is an academic, and the first step in working on a project is often doing a literature search of existing publications. This can be a long process, and even more so if you are moving outside of your typical field into something adjacent (you have to learn what exactly you are looking for). I tried setting up a locally hosted, LLM-powered research tool: you ask it a question and it goes away, searches arXiv for relevant papers, refines its search query based on the abstracts it got back, and iterates. At the end you get summaries of what it thinks is the current SotA for the asked question, along with a list of links to papers it thinks are relevant.
It's not perfect, as you'd expect, but it turns a minute spent typing out a well-thought-out question into hours' worth of a head start on the research surrounding that question (and does it all without sending any data to OpenAI et al). Getting you over the initial hump of not knowing exactly where to start is where I see a lot of the value of LLMs.
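Roughly, the loop looks like the sketch below. To be clear, this is a minimal illustration rather than the actual tool: it assumes the `arxiv` pip package and a local Ollama server on its default port, and the model name, prompts, and round counts are placeholders.

```python
# Sketch of an arXiv search-and-refine loop driven by a local LLM.
# Assumes: `pip install arxiv requests` and an Ollama server running locally.
import arxiv
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # hypothetical local setup
MODEL = "llama3"  # placeholder model name


def ask_llm(prompt: str) -> str:
    """Send a prompt to the local model and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]


def research(question: str, rounds: int = 3, per_round: int = 10):
    """Iteratively search arXiv, refining the query from the abstracts found."""
    client = arxiv.Client()
    query = ask_llm(f"Write a short arXiv search query for: {question}")
    papers = {}

    for _ in range(rounds):
        search = arxiv.Search(query=query, max_results=per_round)
        batch = list(client.results(search))
        for p in batch:
            papers[p.entry_id] = p

        abstracts = "\n\n".join(f"{p.title}\n{p.summary}" for p in batch)
        # Let the model tighten the query based on what came back.
        query = ask_llm(
            f"Question: {question}\n\nAbstracts found so far:\n{abstracts}\n\n"
            "Suggest a refined arXiv search query (reply with the query only)."
        )

    summary = ask_llm(
        f"Question: {question}\n\n"
        + "\n\n".join(f"{p.title}\n{p.summary}" for p in papers.values())
        + "\n\nSummarise the current state of the art for the question."
    )
    return summary, list(papers.keys())


if __name__ == "__main__":
    summary, links = research("low-cost methods for detecting exoplanet atmospheres")
    print(summary)
    print("\n".join(links))
```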
Yeah, fair enough, I was referring to posts and comments, not other metadata, because that isn't publicly available just via a GET request (as far as I'm aware).
Everything on the Fediverse is almost certainly scraped, and will be scraped repeatedly. You can't “protect” content that is freely available on a public website.
So if I modify an LLM to have true randomness embedded within it (e.g. using a true random number generator based on radioactive decay), does that then have free will?
If viruses have free will, when they are machines made of RNA that just inject code into other cells to make copies of themselves, then the concept is meaningless (and it also applies to computer programs far simpler than LLMs).
So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will on this side, just mechanics and random chance on that side?
I just don't find it a particularly useful concept.
There’s a vast gulf between automated moderation systems deleting posts and calling the cops on someone.
Look, Reddit bad, AI bad. Engaging with anything more than the most surface-level reactions is hard, so why bother?
At a recent conference in Qatar, he said AI could even “unlock” a system where people use “sliders” to “choose their level of tolerance” about certain topics on social media.
That, combined with a level of human review for people who feel they have been unfairly auto-moderated, seems entirely reasonable to me.
Empty sequences being false goes back a lot further than Perl; it was already a thing in the first Lisp (in fact, the empty list was the canonical false).
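For anyone unsure what convention is being discussed, here's a quick illustration in Python (one of the languages that kept it); the specific values are just examples:

```python
# Empty sequences are "falsy" in a boolean context; non-empty ones are "truthy".
for value in ([], "", (), {}, [0], "0"):
    print(f"{value!r:>6} -> {bool(value)}")

# Output:
#     [] -> False
#     '' -> False
#     () -> False
#     {} -> False
#    [0] -> True
#    '0' -> True
```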