• 2 Posts
  • 9 Comments
Joined 1 month ago
Cake day: November 25th, 2024




  • I’m an active user who posts and comments regularly, and I’d say the experience is very similar to Reddit, except with fewer ads and smaller numbers on the main/All page. The experience is probably very different if you’re mainly a passive consumer of content.

    Though I’ve never been active in “large” subreddits and I tend to block them from my feed, so I guess I don’t know what I’m missing.



  • Is that still true, though? My impression is that AMD works just fine for inference with ROCm and llama.cpp nowadays, and you get much more VRAM per dollar, which means you can fit a bigger model. You might get fewer tokens per second than on a comparable Nvidia card, but that shouldn’t really be a problem for a home assistant, I believe. Even an Arc A770 should work with IPEX-LLM. Buy two Arc or Radeon cards with 16 GB of VRAM each and you can fit a Llama 3.2 11B or a Pixtral 12B without any quantization. Just make sure ROCm supports that specific Radeon card if you go for team red.
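    The VRAM claim above is back-of-the-envelope math, which can be sketched like this (rough numbers only; weights dominate, but real usage adds KV cache and runtime overhead on top):

```python
# Rough VRAM estimate for dense-LLM inference: weight memory ≈ params × bytes/param.
# These are ballpark figures, not exact requirements for any specific runtime.
GIB = 1024 ** 3

def weight_gib(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB at a given precision."""
    return params_billion * 1e9 * bytes_per_param / GIB

# Llama 3.2 11B: fp16 (2 bytes/param) vs ~4-bit quantization (~0.5 bytes/param)
fp16 = weight_gib(11, 2.0)  # ~20.5 GiB: too big for one 16 GB card, fits across two
q4   = weight_gib(11, 0.5)  # ~5.1 GiB: fits easily on a single card

print(f"fp16: {fp16:.1f} GiB, 4-bit: {q4:.1f} GiB")
```

    So unquantized fp16 weights overflow a single 16 GB card but fit comfortably in the combined 32 GB of two of them, which is the scenario described above.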




  • Yes, but all of that is true for Facebook, Reddit and whatever. It’s still nice to have this feature in the “reference” implementation of Lemmy, I think. It will also make it easier for instance owners and moderators to follow any local laws that require this.

    I don’t know if this is already in the ActivityPub protocol, but it would be nice if all instances that have a copy of some content deleted it once it has been marked “request for deletion” by the creator or by the owner of the instance where it was first posted. There will always be actors that specifically archive all posts marked for deletion, but I still think this is preferable.
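    For what it’s worth, ActivityPub does define a `Delete` activity, where the deleted object is conventionally replaced by a `Tombstone`. A minimal sketch of such an activity as a Python dict (all URLs here are made-up placeholders, not real Lemmy identifiers):

```python
import json

# Hypothetical ActivityPub Delete activity: an actor deletes a post, and
# the object is represented by a Tombstone afterwards. IDs are illustrative.
delete_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://example.instance/activities/123",  # made-up activity URL
    "type": "Delete",
    "actor": "https://example.instance/u/alice",      # made-up actor URL
    "object": {
        "id": "https://example.instance/post/42",     # the deleted post
        "type": "Tombstone",                          # marker left after deletion
    },
}

print(json.dumps(delete_activity, indent=2))
```

    Whether every receiving instance honors such an activity is up to its implementation, which is exactly the caveat about bad actors above.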