

It does not, unless you run weights that someone else has modified to remove the baked-in censorship. If you run the unmodified weights released by DeepSeek, it will refuse to answer most things that the CCP doesn't like being discussed.
Not the parent, but LLMs don't solve anything; they allow more work with less effort expended in some spaces, just as the horse-drawn plough didn't solve any problem that couldn't be solved by people tilling the earth by hand.
As an example, my partner is an academic, and the first step in working on a project is often a literature search of existing publications. This can be a long process, even more so if you are moving outside your typical field into something adjacent (you have to learn what exactly you are looking for). I tried setting up a locally hosted LLM-powered research tool: you ask it a question and it goes away, searches arXiv for relevant papers, refines its search query based on the abstracts it got back, and iterates. At the end you get summaries of what it thinks is the current SotA for the question you asked, along with a list of links to papers it thinks are relevant.
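The search-refine-iterate loop described above can be sketched roughly as follows. This is a minimal illustration, not the actual tool: `search_arxiv` and `refine_query` are hypothetical placeholders standing in for a real arXiv API call and a real LLM call respectively.

```python
# Sketch of the iterative literature-search loop. search_arxiv and
# refine_query are hypothetical stubs: a real tool would query the
# arXiv API and ask an LLM to sharpen the query from the abstracts.

def search_arxiv(query, max_results=5):
    # Placeholder: a real implementation would hit the arXiv API and
    # parse the returned Atom feed into titles and abstracts.
    return [{"title": f"Paper {i} on {query}",
             "abstract": f"Abstract {i} about {query}"}
            for i in range(max_results)]

def refine_query(query, abstracts):
    # Placeholder: a real tool would prompt the LLM with the abstracts
    # seen so far and ask for a more precise search query.
    return query + " (refined)"

def literature_search(question, iterations=3):
    query = question
    papers = []
    for _ in range(iterations):
        results = search_arxiv(query)
        papers.extend(results)
        query = refine_query(query, [p["abstract"] for p in results])
    return papers

papers = literature_search("sparse attention in transformers")
print(len(papers))  # 3 iterations of 5 results each
```

The final step, summarising the collected abstracts into a SotA overview, would be one more LLM call over `papers`.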
It's not perfect, as you'd expect, but it turns a minute spent typing out a well-thought-out question into an hours-long head start on getting into the research surrounding that question (and does it all without sending any data to OpenAI et al.). Getting you over the initial hump of not knowing exactly where to start is where I see a lot of the value of LLMs.
Yeah, fair enough. I was referring to posts and comments, not other metadata, because that isn't publicly available with just a GET request (as far as I'm aware).
Everything on the Fediverse is almost certainly scraped, and will be repeatedly. You can't “protect” content that is freely available on a public website.
So if I modify an LLM to have true randomness embedded within it (e.g. using a true random number generator based on radioactive decay), does that then have free will?
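For concreteness, the modification being imagined is small: token sampling in an LLM is normally driven by a pseudo-random generator, and you could feed it hardware entropy instead. A minimal sketch, using `os.urandom` as a stand-in for a radioactive-decay TRNG (which is an assumption; a real decay-based source would be external hardware):

```python
# Sketch: driving token sampling from an external entropy source
# instead of a seeded PRNG. os.urandom stands in for a hypothetical
# radioactive-decay-based TRNG.
import math
import os

def sample_token(logits, rng_bytes):
    # Softmax over the logits, then inverse-CDF sampling using the
    # externally supplied entropy as a uniform draw in [0, 1).
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    u = int.from_bytes(rng_bytes, "big") / 2 ** (8 * len(rng_bytes))
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if u < cum:
            return i
    return len(probs) - 1  # guard against floating-point rounding

token = sample_token([2.0, 1.0, 0.5], os.urandom(8))
```

The model's forward pass is unchanged; only the source of the uniform draw differs, which is what makes the question pointed.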
If viruses have free will, when they are machines made out of RNA which just inject code into other cells to make copies of themselves, then the concept is meaningless (and it also applies to computer programs far simpler than LLMs).
So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will on this side, just mechanics and random chance on the other?
I just don't find it a particularly useful concept.
There’s a vast gulf between automated moderation systems deleting posts and calling the cops on someone.
Look, Reddit bad, AI bad. Engaging with anything more than the most surface-level reactions is hard, so why bother?
At a recent conference in Qatar, he said AI could even “unlock” a system where people use “sliders” to “choose their level of tolerance” about certain topics on social media.
That combined with a level of human review for people who feel they have been unfairly auto-moderated seems entirely reasonable to me.
OK, but then you run into: why do billions of variables create free will in a human but not in a computer? Do they create free will in a pig? A slug? A bacterium?
Eh, the entirety of training GPT-4, plus the whole world using it for a year, turns out to be about 1% of the gasoline burnt just by the USA every single day. It's barely a rounding error when it comes to energy usage.
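A rough back-of-envelope check of that ratio. All inputs here are approximate, order-of-magnitude public estimates, not authoritative figures; the inference number in particular is an assumption:

```python
# Back-of-envelope sanity check. Inputs are rough estimates only.

US_GASOLINE_GALLONS_PER_DAY = 370e6   # ~8.8M barrels/day, EIA-scale estimate
KWH_PER_GALLON_GASOLINE = 33.7        # approximate energy content of gasoline
GPT4_TRAINING_GWH = 50                # widely circulated estimate, not official
INFERENCE_YEAR_GWH = 70               # assumed order-of-magnitude guess

gasoline_gwh_per_day = US_GASOLINE_GALLONS_PER_DAY * KWH_PER_GALLON_GASOLINE / 1e6
total_ai_gwh = GPT4_TRAINING_GWH + INFERENCE_YEAR_GWH

print(f"US gasoline energy per day: {gasoline_gwh_per_day:.0f} GWh")
print(f"Training + a year of inference: {total_ai_gwh:.0f} GWh")
print(f"Ratio: {total_ai_gwh / gasoline_gwh_per_day:.1%}")
```

With these inputs the ratio lands around 1%, consistent with the claim; swap in your own estimates to stress-test it.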
The article's point was that Markdown (or other similar UTF-8 text-based documents) is the best guarantee you have that the files will be usable into the indefinite future. The more you get into the complicated formats of things like word processors, the less likely that format will be meaningfully usable in 10, 20, or 50 years' time; good luck reading an obsolete word-processor file from the 80s today.
No I'm not; I'm just not assuming immigrants have zero buying power, which your post implicitly was. Yes, supply increases, but demand also increases. Beyond that you get into the realm of having to do empirical research as to which effect is larger (which is difficult).
More people also means more demand for things that require labour to create, however. Your position is referred to as the lump of labour fallacy.
So by going harder on blocking content than China? Because that's what they do, and most of the big providers get through after a day or two of downtime each time the government makes a change to block them.
It would be more productive if you said how you think I'm wrong. Just saying 'you're wrong' doesn't really add anything to the discussion.
It produces about the same power per cubic metre as compost does, which is pretty crazy when you think about it.
Inertial confinement doesn't produce a “stable reaction”; it is pulsed by its nature. Think of it like a single-cylinder internal combustion engine: periodic explosions harnessed to do useful work. So no, the laser energy is required every single time to detonate the fuel pellet.
NIF isn't really interested in fusion for power production; it's a weapons research facility that occasionally puts out puff pieces to make it seem like it has civilian applications.
It's a very difficult subject; both sides have merit. I can see “CSAM created without abuse could be used in the treatment/management of people with these horrible urges”, but I can also see “allowing people to create CSAM could normalise it and lead to more actual abuse”.
Sadly, it's incredibly difficult for academics to study this subject and see which of those two effects is more prevalent.