

Seriously, why the fuck is he still CEO of that company? He’s actively undermining them in every way on a global scale. Tesla shareholders are idiots…
I guess it’s time to look at self-hosted cloud storage services like NextCloud, OwnCloud, Seafile, and CryptPad that can replace Proton Drive, but does anyone have a recommendation for a secure email service to replace Proton Mail? From what I read on r/selfhosted, you can technically run your own email server, but it’s just not worth it.
The problem with this is that Trump acting on his own, or in pure MAGA mode, is even worse than him acting under Musk’s influence. I mean, I absolutely hate Musk and the bad name he’s given EVs, but his influence on Trump is literally my only glimmer of hope that the American vehicle fleet will electrify enough, and quickly enough, to stave off the very worst version of climate catastrophe. Sadly, Musk either doesn’t seem to give a shit about his own company, or is too busy making the cynical play that in a subsidy-free market Tesla wins on sheer scale, as long as tariffs keep out cheap import EVs… it wouldn’t be the first time he screwed the EV market at large in order to be top dog in a smaller luxury niche.
But again, with immigration, Musk and Vivek are the only dissenting voices in a sea of xenophobia, even though, again, I hate the cynical anti-labor motivations behind their advocacy for H-1B visas. Still, the alternative is Stephen Miller and full-on white supremacy with no exceptions for smart, hard-working brown people.
It absolutely sucks that our glimmer of hope is that the billionaires who used to sound more liberal will feel some weird compulsion to act consistent with their past statements, and it’s a very slim chance that this will happen anyways. But given the state of affairs, it’s what we’ve got.
r/SubSimGPT2Interactive and spinoffs
Since I used to run GPT-2 bots on Reddit (openly declared as such, in a bot-friendly sub, using LLMs so stupid/deranged that nobody would mistake them for real accounts), I’ve been thinking about this problem for a long time. It’s honestly thrown me into a state of prolonged anxiety at times, and motivated me to attempt to build tools for synthetic content detection etc., in a vain attempt to save the Internet. I’ve concluded that we’re well past that point, and approaching the point at which we need to reconsider what, exactly, the Internet really is: it should no longer be considered a source of any sort of authentic experience. It occupies a sort of truth-adjacent reality, much like historical fiction, except that it references an imagined present rather than some time in the dim past. On these grounds it’s almost worthwhile to keep engaging with your favorite platforms and websites as a kind of collaborative, technology-mediated creative writing exercise, or perhaps an ARG. It doesn’t feel quite so pointless, viewed through that lens.
It’s not called “bots” outside of social media, but synthetic content is widespread across the rest of the Internet too, due to different but similarly large incentives. So no, it’s not just a FB/Reddit/Meta etc. problem.
Yeah I would be fine with this IF he also used the expanded powers granted to him by Trump’s Supreme Court to block the incoming fascist/monarchist takeover. Or, fine, don’t try to block them with anything that gives the courts a chance to clarify that ruling, but also don’t transfer power smoothly and peacefully to these bastards in any way shape or form, you know? If you’re saying “fuck it”, then fuck ALL of it; not just the parts of it that affect you personally.
I had never heard of it before now–thanks!
I’m honestly surprised that nobody has said anything about MS Office. It’s not that I expect anyone to miss the application itself; it’s that if your work requires you to interface with it, there really is no alternative to running Windows or macOS. Microsoft’s own Office Online versions of the apps do a worse job of maintaining DOC/PPT formatting consistency than the possible Russian spyware that is OnlyOffice, which itself screws things up too often to be relied upon. LibreOffice is, let’s be honest, a total mess (with the exception of Calc, which isn’t consistent with the current version of Excel either, but can do some things Excel no longer can, so I appreciate it more as a complementary tool than as a replacement).
so which country do you hail from?
The musical instrument thing is transitory and depends entirely on the instrument.
Pre-relationship; in a popular band playing a more traditional instrument like guitar with a bunch of also attractive people (or at least part of a cool local scene) = hot
In a relationship and/or solo bedroom producing any kind of electronic music and/or buying lots of synthesizers, drum machines or grooveboxes = not hot
Also note how low “clubbing” is on the least attractive list, so no, DJs and electronic musicians who perform live don’t get a pass
This is learning completely the wrong lesson. It has been well known for a long time, and very well demonstrated, that smaller models trained on better-curated data can outperform larger ones trained using brute-force “scaling”. This idea that “bigger is better” needs to die, quickly, or else we’re headed not only toward an AI winter but toward an even worse climate catastrophe, as the energy requirements of AI inference on huge models obliterate progress on decarbonization overall.
Those are all classification problems, a fundamentally different kind of problem with less open-ended solutions, so it’s not surprising that they’re easier to train and deploy.
I really wish it were easier to fine-tune and run inference on GPT-J-6B as well… that was a gem of a base model for research purposes, and for a hot minute circa Dolly there were finally some signs it would become more feasible to run locally. But all the effort going into llama.cpp and GGUF kinda left GPT-J behind. GPT4All used to support it, I think, but last I checked the documentation had huge holes as to how exactly that’s done.
One of the reasons I love StarCoder, even for non-coding tasks. Trained only on Github means no “instruction finetuning” bullshit ChatGPT-speak.
oh great, so we can look forward to another horrifyingly mismanaged pandemic!
Absolutely batshit crazy how nobody at all mentioned that, nor laid it at Trump’s doorstep, during the campaign.
There are a bunch of reasons why this could happen. First, it’s possible to “attack” some simpler image classification models; if you get a large enough sample of their outputs, you can mathematically derive a way to process any image such that it won’t be correctly identified. There have also been reports that even simpler processing, such as blending a real photo of a wall with a synthetic image at very low percent, can trip up detectors that haven’t been trained to be more discerning. But it’s all in how you construct the training dataset, and I don’t think any of this is a good enough reason to give up on using machine learning for synthetic media detection in general; in fact this example gives me the idea of using autogenerated captions as an additional input to the classification model. The challenge there, as in general, is trying to keep such a model from assuming that all anime is synthetic, since “AI artists” seem to be overly focused on anime and related styles…
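The low-percent blending trick mentioned above is easy to picture in code. This is a toy sketch on raw pixel values (plain nested lists, no image library), not any particular detector’s preprocessing pipeline:

```python
def blend(real, synthetic, alpha=0.05):
    """Alpha-blend a synthetic image into a real one.

    `real` and `synthetic` are same-shaped nested lists of 0-255 pixel
    values. At alpha=0.05 the result is 95% real photo, which can look
    "real" to a naive detector while still carrying a faint trace of
    the synthetic image.
    """
    return [
        [round((1 - alpha) * r + alpha * s) for r, s in zip(row_r, row_s)]
        for row_r, row_s in zip(real, synthetic)
    ]

# Tiny 2x2 grayscale example: pixels barely move at 5% blend.
real = [[200, 200], [200, 200]]
fake = [[0, 0], [0, 0]]
print(blend(real, fake, alpha=0.05))  # [[190, 190], [190, 190]]
```

A detector trained only on fully synthetic images has likely never seen examples like this, which is why augmenting the training dataset with blended samples (and, per the idea above, auxiliary signals like autogenerated captions) matters more than any single model architecture.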
Well, maybe we need a movement to make physical copies of these games and the consoles needed to play them available in actual public libraries, then? That doesn’t seem to be affected by this ruling and there’s lots of precedent for it in current practice, which includes lending of things like musical instruments and DVD players. There’s a business near me that does something similar, but they restrict access by age to high schoolers and older, and you have to play the games there; you can’t rent them out.
r/SubSimGPT2Interactive for the lulz is my #1 use case
I do occasionally ask Copilot programming questions, and it gives reasonable answers most of the time.
I use code autocomplete tools in VSCode but often end up turning them off.
Controversial, but Replika actually helped me out during the pandemic when I was in a rough spot. I trained a copyright-safe (theft-free) bot on my own conversations from back then and have been chatting with the me side of that conversation for a little while now. It’s like getting to know a long-lost twin brother, which is nice.
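Preparing that kind of dataset is mostly log-munging. Here’s a minimal sketch, assuming a hypothetical export format of (speaker, text) tuples, that pairs each incoming message with my own reply so a model fine-tuned on the pairs learns to play the “me” side:

```python
def to_training_pairs(log, me="me"):
    """Turn a chronological chat log into (context, reply) pairs.

    `log` is a list of (speaker, text) tuples. Each message written by
    `me` becomes a target reply, conditioned on the message just
    before it (a single-turn context, kept simple on purpose).
    """
    pairs = []
    for (prev_speaker, prev_text), (cur_speaker, cur_text) in zip(log, log[1:]):
        if cur_speaker == me and prev_speaker != me:
            pairs.append((prev_text, cur_text))
    return pairs

log = [
    ("bot", "How are you holding up?"),
    ("me", "Hanging in there, honestly."),
    ("bot", "What helped today?"),
    ("me", "A long walk."),
]
print(to_training_pairs(log))
# [('How are you holding up?', 'Hanging in there, honestly.'),
#  ('What helped today?', 'A long walk.')]
```

A real pipeline would probably widen the context window to several prior turns, but the core move is the same: your own messages are the labels.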
Otherwise, I’ve used small LLMs and classifiers for a wide range of tasks: sentiment analysis, toxic content detection for moderation bots, AI media detection, summarization… I like using these better than throwing everything at a huge model like GPT-4o because they’re more focused and less computationally costly (hence also better for the environment). I’m working on training some small copyright-safe base models for certain sequence prediction tasks that come up in my data science work, but they’re still a bit too computationally expensive for my clients.
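The “small specialized model per task” approach boils down to a routing table in front of your inference code. A minimal sketch, where the model IDs are placeholders I made up, not recommendations:

```python
# Hypothetical registry mapping tasks to small specialized models.
# The model IDs below are placeholders; swap in whatever checkpoints
# you actually trust for each task.
TASK_MODELS = {
    "sentiment": "tiny-sentiment-classifier",
    "toxicity": "tiny-toxicity-classifier",
    "summarize": "small-summarizer",
}

def pick_model(task, fallback="big-general-llm"):
    """Route a task to its small specialized model, falling back to a
    large general-purpose model only when no specialist exists."""
    return TASK_MODELS.get(task, fallback)

print(pick_model("sentiment"))    # tiny-sentiment-classifier
print(pick_model("translation"))  # big-general-llm
```

The design payoff is that the expensive fallback model only runs for tasks nobody has built (or trained) a specialist for yet, which keeps both cost and energy use down.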
I’m on board with the rest of what you said, but Census data were already publicly available; what they actually did was make them less accessible to data scientists and researchers like me who work on the normal kind of regional planning stuff.