Getting Keycloak and Headscale working together.
It took me three weeks, but I did it.
I captured my efforts in a set of interdependent Ansible roles so I never have to do it again.
The Android version of the app still has the zoom/cursor offset bug with software keyboards that it picked up when they sunset RDP 8. That has been a severe usability bug for over three years now.
I’m also going to put forward Tilda, which has been my preferred one for a while because of how minimal the UI is.
We all mess up! I hope that helps - let me know if you see improvements!
I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)
https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but it looks like you don’t want to install the Nvidia drivers inside WSL, only the cuda-toolkit metapackage. I’d follow the instructions from that link closely.
You may also run into performance issues within WSL due to the virtual machine overhead.
Good luck! I’m definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)
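If it helps, here’s the quick sanity check I’d run first once the toolkit is installed - a minimal sketch, assuming you have a CUDA-enabled PyTorch build inside WSL (any CUDA-aware library would do the same job):

```python
# Quick check that the GPU is visible from inside WSL.
# Assumes PyTorch was installed with CUDA support; this is just a convenient probe.
import torch

if torch.cuda.is_available():
    # Driver passthrough from Windows plus the cuda-toolkit install are working.
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB VRAM")
else:
    # Usually means the Windows-side driver or the WSL CUDA stack isn't set up yet.
    print("No CUDA device visible from inside WSL.")
```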
It should be split between VRAM and regular RAM, at least if it’s a GGUF model. Maybe it isn’t being split, and that’s what’s wrong?
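If you’re loading it through llama.cpp (or the llama-cpp-python bindings), the split is controlled by how many layers you offload to the GPU. Rough sketch of what I mean - the model path is a placeholder and the layer count depends on your VRAM, and it assumes the bindings were built with CUDA support:

```python
# Hedged sketch: assumes llama-cpp-python built with CUDA/cuBLAS support.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-70b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=40,  # layers offloaded to VRAM; the rest stay in regular RAM
    n_ctx=4096,
)

out = llm("Q: Why is the sky blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

If I remember right, n_gpu_layers defaults to 0, which would mean everything runs on the CPU.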
Ok, so using my “older” 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)
I’m fairly certain that you’re using your CPU or having another issue. Would you like to try and debug your configuration together?
Unfortunately, I don’t expect it to remain free forever.
No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.
On my RTX 3060, I generally get responses in seconds.
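One easy way to tell is to watch the GPU while a response is generating - running nvidia-smi in another terminal works, or something like this rough sketch using the nvidia-ml-py (pynvml) bindings. If utilization sits near 0% the whole time, it’s the CPU doing the work:

```python
# Hedged sketch using nvidia-ml-py (pynvml): sample GPU utilization and VRAM
# usage once per second while a generation is running.
import time
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetUtilizationRates, nvmlDeviceGetMemoryInfo,
)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)  # first GPU
try:
    for _ in range(30):  # sample for ~30 seconds
        util = nvmlDeviceGetUtilizationRates(handle)
        mem = nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {util.gpu:3d}%  VRAM {mem.used / 2**30:.1f} GiB used")
        time.sleep(1)
finally:
    nvmlShutdown()
```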
Or maybe just let me focus on who I choose to follow? I’m not there for content discovery, though I know that’s why most people are.
I was reflecting on this myself the other day. For all my criticisms of Zuckerberg/Meta (which are very valid), they really didn’t have to release any of the LLaMA work. They’re practically the only reason we have viable open source weights/models and an engine.
That’s the funny thing about UI/UX - sometimes changing non-functional colors can hurt things.
At some point you lose productivity; reduced work weeks have even been shown to increase productivity in some cases.
I mean, sysvinit was just a bunch of root-executed bash scripts. I’m not sure if systemd is really much worse.
Systemd was created to allow parallel initialization, which other init systems lacked. And if you want proof that one processor core is slower than 1 + n cores, you don’t need to compare init systems to show it.
With UI decisions like the shortcut bar, they really don’t. I switched to another SMS app because I couldn’t stand it.
On Android, it moved SMS messages out of the shared SMS store upon receipt and into Signal’s own database, which was more secure.
I…do not miss XP, but I understand the nostalgia attached to it.
I learned a lot of technical skills on XP, but that’s what made me appreciate the architectural decisions behind UNIX-likes all the more.
They do, actually!