I’m currently getting a lot of timeout errors and delays processing the analysis. What GPU can I add to this? Please advise.

  • grue@lemmy.world · 5 days ago

    I’m glad you posted this because I need similar advice. I want a GPU for Jellyfin transcoding and running Ollama (for a local conversation agent for Home Assistant), splitting access to the single GPU between two VMs in Proxmox.

    I would also prefer it to be AMD as a first choice or Intel as a second, because I’m still not a fan of Nvidia for their hostile attitude towards Linux and for proprietary CUDA.

    (The sad thing is that I probably could have accomplished the transcoding part with just integrated graphics, but my AMD CPU isn’t an APU.)

    • BaroqueInMind@lemmy.one · 4 days ago

      The problem with AMD graphics cards is that the performance the CUDA, xFormers, and PyTorch stack delivers on Nvidia cards blows anything AMD has away by a wide margin.

      I have no idea why AMD GPUs lag so badly at anything involving generative AI/LLMs, upscaling (FSR vs. DLSS), Jellyfin transcoding, or even ray tracing; I would recommend waiting for their upcoming GPU announcements.

      • sith@lemmy.zip · 4 days ago

        Is that still true, though? My impression is that AMD works just fine for inference with ROCm and llama.cpp nowadays, and you get much more VRAM per dollar, which means you can fit a bigger model. You might get fewer tokens per second than on a comparable Nvidia card, but I believe that shouldn’t really matter for a home assistant. Even an Arc A770 should work with IPEX-LLM. Buy two Arc or Radeon cards with 16 GB VRAM each and you can fit a Llama 3.2 11B or a Pixtral 12B without any quantization. Just make sure ROCm supports that specific Radeon card if you go for team red.
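        Rough back-of-the-envelope math behind the "two 16 GB cards" claim (a sketch, not a guarantee: the 2 bytes/param FP16 figure is standard, but the 1.2× overhead factor for KV cache and activations is my assumption, and real usage varies with context length and runtime):

        ```python
        # Estimate VRAM needed to hold model weights at a given precision,
        # padded by a guessed overhead factor for KV cache and activations.
        def vram_needed_gb(params_billion: float,
                           bytes_per_param: float = 2.0,   # FP16; Q4 quant is ~0.5
                           overhead: float = 1.2) -> float:
            return params_billion * bytes_per_param * overhead

        for name, size_b in [("Llama 3.2 11B", 11), ("Pixtral 12B", 12)]:
            need = vram_needed_gb(size_b)
            # Two 16 GB cards = 32 GB total (assuming the runtime can split layers)
            print(f"{name}: ~{need:.1f} GB at FP16, fits in 2x16 GB: {need <= 32}")
        ```

        Both land around 26–29 GB at FP16, so they squeeze into 32 GB unquantized, while anything much bigger (e.g. a 70B model) clearly would not.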

        • OpossumOnKeyboard@lemmy.world · 4 days ago

          I’m also curious. I’ve also heard good things this past year about AMD and ROCm. Obviously not as close to Nvidia yet (or maybe ever), but considering the price, I’ve been thinking about trying it.