• 13 Posts
  • 832 Comments
Joined 2 years ago
Cake day: June 24th, 2023

  • fmstrat@lemmy.nowsci.com to Privacy@lemmy.ml · Free AI chat bots
    edited 6 days ago

    I’m trying to switch to this from Ollama after seeing the benchmarks; it’s so much faster. But it has given me nothing but CUDA incompatibility issues, while Ollama runs smooth as butter. Hopefully I get some feedback on my repo discussion. It’s the same Docker setup as my working Ollama install, but Ollama’s docs are a lot more detailed.
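    For reference, this is roughly the kind of compose setup I mean for the working Ollama side — a minimal sketch based on Ollama’s published Docker instructions, assuming the NVIDIA Container Toolkit is installed; my actual file may differ:

    ```yaml
    # Hypothetical minimal docker-compose.yml for Ollama with GPU access.
    # Assumes the NVIDIA Container Toolkit is set up on the host.
    services:
      ollama:
        image: ollama/ollama
        ports:
          - "11434:11434"          # default Ollama API port
        volumes:
          - ollama:/root/.ollama   # persist downloaded models
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: all
                  capabilities: [gpu]
    volumes:
      ollama:
    ```

    The GPU reservation block is the part that keeps tripping up the other runtime for me.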

    Ignore that, I thought you said LMDeploy.