I don’t think it matters too much, but I’m not sure. I just stuck to using the dedicated extension and it’s working well.
Oh yeah, I heard about this and saw that Mutahar (SomeOrdinaryGamers) did it once on Windows with a 4090. I would love to do that on my GPU and then split it between my host and my VM.
Wonderful, thank you so much!
I need that wallpaper! Is there a way you could share it?
Don’t get me started on the Linux kernel 😂 I hate the “but it’s old” argument for software. If it’s still maintained, and maybe even actively developed, that’s much better than a brand-new project.
Probably, or the AI itself. If I had to guess, the backend is using something like LocalAI, koboldcpp, or llama.cpp.
I used llama.cpp with OpenCL, but a couple of months ago they added ROCm support, which is even faster.
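For reference, the build looked roughly like this (a sketch, not exact instructions: flag names have changed between llama.cpp versions, and the model path is a placeholder):

```shell
# Hypothetical build of llama.cpp with ROCm (HIP) acceleration.
# Older releases exposed this as the LLAMA_HIPBLAS make variable;
# newer releases use CMake flags instead, so check the README for your version.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_HIPBLAS=1

# Offload layers to the GPU when running a model (path is a placeholder):
./main -m ./models/model.gguf --n-gpu-layers 32 -p "Hello"
```

The `--n-gpu-layers` part is what actually moves work onto the GPU; without it the weights stay in system RAM.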
Just want to piggyback on this. You will probably need more than 6 GB of VRAM to run good-enough models at an acceptable speed with coherent output, but the more the better.
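A back-of-the-envelope calculation shows why 6 GB is already tight (all the numbers here are rough assumptions, not measurements):

```shell
# Weights take roughly params * bits/8 bytes, plus headroom for the
# KV cache and runtime buffers. Example: a 7B model at 4-bit quantization.
params_b=7        # model size in billions of parameters
bits=4            # quantization width (e.g. a Q4 quant)
overhead_mb=1500  # rough allowance for KV cache + runtime buffers

weights_mb=$(( params_b * bits * 1000 / 8 ))  # 7B @ 4-bit -> 3500 MB
total_mb=$(( weights_mb + overhead_mb ))
echo "estimated VRAM: ${total_mb} MB"         # ~5 GB, so 6 GB leaves little room
```

Bump `params_b` to 13 and even an aggressive quant no longer fits in 6 GB.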
Is Kagi a metasearch engine? Or does it have its own crawler and index?
FYI, searx is deprecated, or at least unmaintained and discontinued. Switch to SearXNG, which is an actively developed fork, and it is really good :D
I agree, it does suck in general. One thing I tried is using a metasearch engine, and for the most part I find the results better and way more customizable (for reference, I am self-hosting SearXNG).
I think qcow2 images have a fixed virtual size, but the file itself is sparse and only grows as data is written (though I could be wrong on that). I also saw some threads explaining how you can enlarge a qcow2 image relatively easily :)
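For what it’s worth, growing the image is a one-liner with qemu-img (paths and sizes below are placeholders; shrinking is riskier and needs the guest filesystem shrunk first):

```shell
# Grow the virtual size of a qcow2 image by 10 GiB. The file is sparse,
# so this does not immediately consume 10 GiB on the host.
qemu-img resize /path/to/disk.qcow2 +10G

# Compare "virtual size" vs. "disk size" (actual space used on the host):
qemu-img info /path/to/disk.qcow2

# The guest still has to grow its own partition/filesystem afterwards,
# e.g. with growpart and resize2fs inside the VM.
```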
The “BTW”
The source code is available on GitHub, but it requires you to have your phone nearby, e.g. connected via USB or on the same network so you can connect via IP.
Yeah, I think so, but with extra privacy-hardening features and, especially useful, screen sharing on Wayland! I don’t know if there is an alternative to it for screen sharing on Wayland.
It heavily depends on which filter lists you use, obviously. I never had these issues, and neither has my family.
Well yeah, that approach would work if you train it on one model, but that doesn’t mean it would work on another model. For the average user who just uses ChatGPT, though, it is probably enough to detect it at least 80–90% of the time.
I think there is something like that, but it’s really not popular, and I’m not even sure it’s still maintained.
I know about that, and I love that it runs locally. I just hope that they will keep this mindset; I would love to have such a tool available at all times. Edit: Personally, I also agree it’s a bit weird that it will be a native feature; maybe an official extension would be more appropriate.
They probably also run OCR on that and then let something else check whether the text makes sense (basically letting another AI grade the output, which is commonly done to judge what makes a good dataset and what doesn’t), and then feed it back into the AI. These days there is a shortage of training data because the internet is too small (yes, I know that sounds crazy), so I wouldn’t be surprised if they actually tried using pictures and OCR to gather a bit more usable data.