Avieshek@lemmy.world to Technology@lemmy.world · English · 3 months ago

Edward Snowden slams Nvidia's RTX 50-series 'F-tier value,' whistleblows on lackluster VRAM capacity

www.tomshardware.com · 281 points · 113 comments
Blackwell consumer GPUs offer 'F-tier value for S-tier prices,' moans the naturalized Russian.
  • TeamAssimilation@infosec.pub · 399 points · 3 months ago

    Edward Snowden doing GPU reviews? This timeline is becoming weirder every day.

    • Winged_Hussar@lemmy.world · 91 points · 3 months ago

      Legitimately thought this was a hard-drive.net post

    • GamingChairModel@lemmy.world · 57 points · 3 months ago

      “Whistleblows” as if he’s some kind of NVIDIA insider.

      • 0x0@programming.dev · 1 point · 3 months ago

        Intel Insider now that would’ve made for great whistleblowing headlines.

    • Eager Eagle@lemmy.world · 49 points · 3 months ago

      I bet he just wants a card to self host models and not give companies his data, but the amount of vram is indeed ridiculous.

      • Jeena@piefed.jeena.net · 25 points · 3 months ago

        Exactly, I’m in the same situation now, and the 8GB in those cheaper cards doesn’t even let you run a 13B model. I’m trying to research whether I can run a 13B one on a 3060 with 12 GB.
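        A rough sanity check on whether a 13B model fits in 12 GB: at roughly 4.5 bits per weight (a typical 4-bit quantization once metadata is included; the figure is an assumption, not from the thread) the weights alone come to about 7 GiB, which leaves headroom on a 12 GB card but not on an 8 GB one. Real usage adds KV cache and runtime overhead on top:

        ```python
        # Back-of-envelope VRAM estimate for an LLM's weights.
        # bits_per_weight is an assumed average; actual usage also
        # includes KV cache and runtime overhead.

        def weight_vram_gib(params_billions: float, bits_per_weight: float) -> float:
            """Approximate GiB needed just to hold the model weights."""
            total_bytes = params_billions * 1e9 * bits_per_weight / 8
            return total_bytes / 1024**3

        # 13B at ~4.5 bits/weight (q4-style) vs. unquantized fp16
        print(f"13B @ q4:   {weight_vram_gib(13, 4.5):.1f} GiB")
        print(f"13B @ fp16: {weight_vram_gib(13, 16):.1f} GiB")
        ```
        
        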

        • The Hobbyist@lemmy.zip · 15 points · 3 months ago

          You can. I’m running a 14B deepseek model on mine. It achieves 28 t/s.

          • Jeena@piefed.jeena.net · 6 points · 3 months ago

            Oh nice, that’s faster than I imagined.

          • levzzz@lemmy.world · 4 points · 3 months ago

            You need a pretty large context window to fit all the reasoning tokens; Ollama forces 2048 by default, and a larger window uses more memory.
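            That memory growth comes mostly from the KV cache, which scales linearly with context length. A minimal sketch (the layer/head/dimension numbers below are hypothetical placeholders for a ~14B model with grouped-query attention, not confirmed values for this one):

            ```python
            # Rough KV-cache size vs. context length. Architecture numbers
            # (n_layers, n_kv_heads, head_dim) are assumed placeholders.

            def kv_cache_gib(ctx_len: int, n_layers: int = 48, n_kv_heads: int = 8,
                             head_dim: int = 128, bytes_per_elem: int = 2) -> float:
                # Factor of 2 covers both keys and values, one entry
                # per layer per token.
                total = 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem
                return total / 1024**3

            for ctx in (2048, 8192, 32768):
                print(f"ctx={ctx}: {kv_cache_gib(ctx):.2f} GiB")
            ```

            The point is the linear scaling: quadrupling the context quadruples the cache, so a generous window can eat the headroom a 12 GB card has left after the weights.
            
            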

          • Viri4thus@feddit.org · 2 points · 3 months ago

            I also have a 3060. Can you detail which framework (SGLang, Ollama, etc.) you are using and how you got that speed? I’m having trouble reaching that level of performance. Thx

            • The Hobbyist@lemmy.zip · 4 points · edited · 3 months ago

              Ollama, latest version. I have it set up with Open-WebUI (though that shouldn’t matter). The 14B is around 9GB, which easily fits in the 12GB.

              I’m repeating the 28 t/s from memory, but even if I’m wrong it’s easily above 20.

              Specifically, I’m running this model: https://ollama.com/library/deepseek-r1:14b-qwen-distill-q4_K_M

              Edit: I confirmed I do get 27.9 t/s, using default ollama settings.
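              For reference, Ollama’s default context window can be raised per request through its HTTP API via the `num_ctx` option. A minimal sketch that just builds the request body (the prompt text is an example; sending it requires a running Ollama server at the default port):

              ```python
              # Build a request for Ollama's /api/generate endpoint with a
              # larger context window. Constructing only; not sending.
              import json

              payload = {
                  "model": "deepseek-r1:14b-qwen-distill-q4_K_M",
                  "prompt": "Explain KV caching in one paragraph.",  # example prompt
                  "options": {"num_ctx": 8192},  # larger window -> more VRAM
                  "stream": False,
              }
              body = json.dumps(payload).encode()

              # To actually send it (needs a local Ollama server):
              # import urllib.request
              # urllib.request.urlopen("http://localhost:11434/api/generate", data=body)
              ```
              
              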

              • Jeena@piefed.jeena.net · 2 points · 3 months ago

                Thanks for the additional information; it helped me decide to get the 3060 12G instead of the 4060 8G. They’re almost the same price, but for my use cases the 3060 12G seems the better fit even though it’s a generation older: the memory bus is wider and it has more VRAM. Both video editing and the smaller LLMs should work well enough.

              • Viri4thus@feddit.org · 2 points · 3 months ago

                Ty. I’ll try Ollama with the q4_K_M quantization. I wouldn’t expect to see a difference between Ollama and SGLang.

        • manicdave@feddit.uk · 4 points · 3 months ago

          I’m running deepseek-r1:14b on a 12GB rx6700. It just about fits in memory and is pretty fast.

    • secret300@lemmy.sdf.org · 11 points · 3 months ago

      Swear next he’s gonna review hentai games

      Oh wait… https://www.youtube.com/watch?v=fAf1Syz17JE

      • newcockroach@lemmy.world · 8 points · 3 months ago

        “Some hentai games are good” -Edward Snowden

        • Siegfried@lemmy.world · 2 points · 3 months ago

          Note that this is from 2003

    • ඞmir@lemmy.ml · 9 points · 3 months ago

      I’ll keep believing this is a The Onion post.

    • Simulation6@sopuli.xyz · 1 point · 3 months ago

      Does he work for Nvidia? Seems out of character for him.

Technology@lemmy.world
This is a most excellent place for technology news and articles.


Our Rules
  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots
  • @L4s@lemmy.world
  • @autotldr@lemmings.world
  • @PipedLinkBot@feddit.rocks
  • @wikibot@lemmy.world