- You can look at manufacturers’ info pages and see what they support. Intel integrated chips usually list their capabilities, and you’ll want to double-check with your mini PC or motherboard manufacturer to make sure they support it too. I think any i5 or better from the past 5 years with integrated graphics should be able to play/decode 4K media (someone correct me if this sounds crazy). My Core Ultra 265 can, for sure. As far as codec support, I’m not familiar with the compatibilities, but I’m sure everything CAN be played on recent-ish hardware. Encoding is out of my wheelhouse. (There’s a quick way to check decode support on Linux after these bullets.)
- I’ve used HDMI 2.1 HDR 4K120 on Linux with Nvidia, AMD, and integrated Intel. AMD will be the best experience, especially on cards from the past 5 years. Nvidia, with proprietary drivers, on 3000-series or newer should be good for a few more years; I heard 2000-series will be dropped from support soon-ish. Intel HDMI 2.1 is a pain on Linux, and I’ve only been able to get HDR 4K120 using a special DP-to-HDMI cable. (See the sketch below for verifying what your link negotiated.)
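If you want to sanity-check a box yourself, here’s a rough sketch for Linux. Assumptions flagged: vainfo comes from the libva-utils package, and xrandr only works under X11 (Wayland setups would use wlr-randr or kscreen-doctor instead).

```bash
# 1) What the GPU can decode in hardware, via VA-API (Intel/AMD):
#    look for HEVC/AV1/VP9 profiles with a VLD (decode) entrypoint
vainfo 2>/dev/null | grep -Ei 'h264|hevc|vp9|av1'

# 2) What mode the display link actually negotiated (X11 only):
#    a 3840x2160 line showing 120.00 means the HDMI path is doing 4K120
xrandr --query | grep -A 8 'HDMI.* connected'
```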
afk_strats@lemmy.world to Lemmy Shitpost@lemmy.world • Negotiating with other family members (English)
5 · 1 month ago
afk_strats@lemmy.world to Ask Lemmy@lemmy.world • What's an objectively terrible movie that you love anyway? (English)
4 · 1 month ago
Proud
afk_strats@lemmy.world to Lemmy Shitpost@lemmy.world • Is it hot in here or what? 🥵 (English)
31 · 1 month ago
Everything reminds me of her
afk_strats@lemmy.world to Privacy@lemmy.ml • Alright, y'all were right, fuck Proton. This was the last straw for me. (English)
71 · 2 months ago
Bummer.
afk_strats@lemmy.world to Technology@lemmy.world • A Guide to the Circular Deals Underpinning the AI Boom | A web of interlinked investments raises the risk of cascading losses if AI falls short of its potential. (English)
162 · 2 months ago
Great spin, Bloomberg. You were very careful to only talk about “potential” and missing revenue targets when the real problem is that a bunch of grifters pretended they were on the absolute verge of AGI when, in fact, they were/are building advanced bullshit machines.
I will eat my words when a model can come up with an original thought
afk_strats@lemmy.world to Privacy@lemmy.ml • Alright, y'all were right, fuck Proton. This was the last straw for me. (English)
18 · 2 months ago
Howdy. For the clarity of users such as myself, can you please clarify which “Proton” you’re referring to?
afk_strats@lemmy.world to Technology@lemmy.world • Android won't kill sideloading after all, but new verification rules will make it harder (English)
246 · 2 months ago
This framing still sucks. Google is blocking apps THEY don’t approve on YOUR phone.
afk_strats@lemmy.world to Technology@lemmy.world • Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task – MIT Media Lab (English)
2 · 2 months ago
That’s a great observation!..
afk_strats@lemmy.world to Technology@lemmy.world • Majority of CEOs report zero payoff from AI splurge (English)
3 · 2 months ago
I think organizing labor is a useful skill. I just think doing it for the sole benefit of “shareholder value” is what’s killing us. Is that liberal of me? I can’t imagine a society where work isn’t done by people, and work needs some form of organization.
afk_strats@lemmy.world to Technology@lemmy.world • Majority of CEOs report zero payoff from AI splurge (English)
3 · 2 months ago
Here are some of the schools that I know set the pace for business education in the US. Feels like social responsibility is more than an afterthought.
Again, I’m not defending “the MBAs” running companies. I’m defending the schools.
https://www.hbs.edu/mba/academic-experience/curriculum
afk_strats@lemmy.world to Technology@lemmy.world • Majority of CEOs report zero payoff from AI splurge (English)
73 · 2 months ago
Broken systems elevate psychopathic leaders into positions of wealth and power, and people who want those things exploit the fastest path there by getting the degrees that put them on that track.
By this MBA logic, do we close CompSci programs over the poor code coming out of Microsoft, close law schools because social rights are being lost, or shutter engineering schools because infrastructure doesn’t meet current needs?
My point is to blame the CEOs and their shitty behaviour, not the schools, which, to my knowledge, try to teach reasonable policy, law, ethics, HR, etc.
Disclaimer: not an MBA
afk_strats@lemmy.world to Technology@lemmy.world • Researchers figured out how to run a 120-billion parameter model across four regular desktop PCs (English)
2 · 2 months ago
I still think AI is mostly a toy and a corporate inflation device. There are valid use cases, but I don’t think those make up the majority of the bubble.
- For my personal use, I used it to learn how models work from a compute perspective. I’ve been interested and involved with natural language processing and sentiment analysis since before LLMs became a thing. Modern models are an evolution of that.
- A small, consumer-grade model like GPT-OSS-20B is around 13GB and can run on a single mid-grade consumer GPU, maybe spilling into some RAM. It’s capable of parsing and summarizing text, troubleshooting computer issues, and some basic coding or code review for personal use (see the llama.cpp sketch after this list). I built some bash and Home Assistant automations for myself using these models as crutches. Also, there is software that can index text locally to help you have conversations with large documents. I use this with the documentation for my music keyboard, which is a nightmare to program, and with complex APIs.
- A mid-size model like Nemotron3 30B is around 20GB, can run on a larger consumer card (like my 7900 XTX with 24GB of VRAM, or two 5060 Tis with 16GB of VRAM each), and will have vaguely the same usability as the small commercial models, like Gemini Flash or Claude Haiku. These can write better, more complex code. I also use these to help me organize personal notes. I dump everything in my brain to text and have the model give it structure.
- A large model like GLM4.7 is around 150GB and can do all the things ChatGPT or Gemini Pro can do, given web access and a pretty wrapper. This requires a lot of RAM and some patience, or a lot of VRAM. There is software designed to run these larger models from RAM faster, namely ik_llama, but at this scale you’re throwing money at AI.
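As a concrete example of the “small model” bullet above, here’s roughly what a minimal llama.cpp setup looks like; the GGUF filename is a placeholder for whichever quant you actually download.

```bash
# llama.cpp's server: loads a quantized model and exposes an
# OpenAI-compatible API on localhost
#   -m    the quantized weights (a ~13GB GGUF for a 20B-class model)
#   -ngl  how many layers to offload to the GPU (99 = try to offload all)
llama-server -m ./gpt-oss-20b-Q4_K_M.gguf -ngl 99 --port 8080
```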
I played around with image creation and there isn’t anything there other than a toy for me. I take pictures with a camera.
afk_strats@lemmy.world to Technology@lemmy.world • Researchers figured out how to run a 120-billion parameter model across four regular desktop PCs (English)
71 · 2 months ago
I think you’re missing the point or not understanding. Let me see if I can clarify.
> What you’re talking about is just running a model on consumer hardware with a GUI
The article talks about running models on consumer hardware. I am making the point that this is not a new concept. The GUI is optional, but, as I mentioned, llama.cpp and other open source tools provide an OpenAI-compatible API just like the product described in the article.
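To make “OpenAI-compatible” concrete: any client that speaks the OpenAI API can just point at the local endpoint instead. A minimal sketch, assuming a llama-server listening on local port 8080:

```bash
# same request shape you'd send to api.openai.com, aimed at local hardware
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Summarize this paragraph: ..."}]}'
```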
> We’ve been running models for a decade like that.
No. LLMs as we know them aren’t that old, and they were harder to run, requiring some coding knowledge and environment setup, until 3-ish years ago, give or take, when these more polished tools started coming out.
> Llama is just a simplified framework for end users using LLMs.
Ollama matches that description. Llama is a model family from Facebook. Llama.cpp, which is what I was talking about, is an inference and quantization tool suite made for efficient deployment on a variety of hardware, including consumer hardware.
> The article is essentially describing a map reduce system over a number of machines for model workloads, meaning it’s batching the token work, distributing it up amongst a cluster, then combining the results into a coherent response.
Map reduce, in very simplified terms, means spreading compute work across many parallel workers. This is, conceptually, how all LLMs are already run at scale; you can’t map-reduce or parallelize LLMs much more than they already are. The article doesn’t imply map reduce beyond talking about using multiple computers.
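For anyone unfamiliar with the term, a toy shell sketch of the map-reduce idea; this is only to illustrate the concept and has nothing to do with how LLM inference actually shards work:

```bash
# "map": split the input into 4 chunks and word-count them in parallel
split -n l/4 input.txt chunk_
ls chunk_?? | xargs -P 4 -I {} sh -c 'wc -w < {} > {}.count'
# "reduce": combine the partial results into one answer
cat chunk_*.count | paste -sd+ - | bc
```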
> They aren’t talking about just running models as you’re describing.
They don’t talk about how the models are run in the article. But I know a tiny bit about how they’re run. LLMs require very simple and consistent math operations on extremely large matrices of numbers. The bottleneck is almost always data transfer, not compute. Basically, every LLM deployment tool already tries to use as much parallelism as possible while reducing data transfer as much as possible.
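A back-of-envelope way to see that data-transfer ceiling; the numbers below are illustrative guesses, and this applies to dense models (MoE models read fewer active weights per token, so they beat this bound):

```bash
# each generated token has to stream roughly the whole weight set once,
# so decode speed caps out near memory_bandwidth / model_size
model_gb=13     # a ~13GB quantized model
gpu_bw=800      # GB/s, VRAM bandwidth on a decent consumer card
ram_bw=60       # GB/s, dual-channel DDR5 system RAM
echo "in VRAM: ~$((gpu_bw / model_gb)) tok/s ceiling"
echo "in system RAM: ~$((ram_bw / model_gb)) tok/s ceiling"
```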
The article talks about gpt-oss-120b, so we aren’t talking about novel approaches to how the data is laid out or how the models are used. We’re talking about transformer models, and how they’re huge and require a lot of data transfer. So the preference is to keep your model on the fastest-transfer part of your machine. On consumer hardware, which was the key point of the article, you are best off keeping your model in your GPU’s memory. If you can’t, you’ll run into bottlenecks with PCIe, RAM, and network transfer speeds. But consumers don’t have GPUs with 63+ GB of VRAM, which is how big GPT-OSS 120B is, so they MUST contend with these speed bottlenecks. This article doesn’t address that. That’s what I’m talking about.
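This is also why partial-offload knobs exist. A hedged sketch with llama.cpp (the layer count is a made-up example you’d tune to your card): everything not offloaded streams over PCIe from system RAM, which is exactly the bottleneck described above.

```bash
# model is bigger than VRAM, so put ~20 transformer layers on the GPU
# and leave the rest in system RAM; generation speed drops accordingly
llama-server -m ./gpt-oss-120b.gguf -ngl 20 --port 8080
```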
afk_strats@lemmy.world to Technology@lemmy.world • Researchers figured out how to run a 120-billion parameter model across four regular desktop PCs (English)
202 · 2 months ago
This is basically meaningless. You can already run GPT-OSS 120B across consumer-grade machines. In fact, I’ve done it with open source software with a proper open source licence, offline, at my house. It’s called llama.cpp and it is one of the most popular projects on GitHub. It’s the basis of Ollama, which Facebook co-opted, and it’s the engine for LM Studio, a popular LLM app.
The only thing you need is around 64 gigs of free RAM, and you can serve gpt-oss-120b as an OpenAI-like API endpoint. VRAM is preferred, but llama.cpp can run in system RAM or on top of multiple different GPU-addressing technologies. It has a built-in server which allows it to pool resources from multiple machines…
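The pooling I’m referring to is llama.cpp’s RPC backend. A sketch, assuming a build with that backend enabled; hosts and ports below are placeholders:

```bash
# on each worker machine: expose its GPU/RAM over the network
rpc-server -H 0.0.0.0 -p 50052

# on the coordinating machine: shard the model across the workers
llama-server -m ./gpt-oss-120b.gguf \
  --rpc 192.168.1.10:50052,192.168.1.11:50052
```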
I bet you could even do it over a series of high-RAM phones on a network.
So I ask: is this novel, or is it an advertisement packaged as a press release?
- Cream Theater
- System of a Town
- Go:jira
afk_strats@lemmy.world to Ask Lemmy@lemmy.world • What is it about technology that fascinates you? (English)
8 · 2 months ago
If you know the right incantations, you can make sand do the things you want. I’M A FUCKING WIZARD, HARRY


Before reading your comment, I knew IN MY SOUL that this was from the shittier part of Florida