

Lolwut?
You think that’s why Canonical and RedHat make money, huh? 🤣🤣🤣


Symlinks are just pointers to the real file. Unless you’re setting specific flags, you’re copying the real files along with everything else. I’d run a dedupe script on your copied files and see if you didn’t happen to double up on some things.
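If you want to check, fdupes is the usual tool for this (assuming your distro packages it; the path is a placeholder):

    fdupes -r /path/to/copied/files

And for future copies, cp -a or rsync -a will carry symlinks over as symlinks instead of following them.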


Depends on how the code is using it. You could look deeper, but that’s not what OP is asking for help with.
It’s not about how big they are really, it’s about how many can be open at a time. Without sane limits, anything is a ticking time bomb.
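For context, you can see the caps your shell is working with (soft and hard nofile limits):

    ulimit -n
    ulimit -Hn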


Reduce the number of active connections, or the total number of transfers active at once, and that will bring the open-file count down.
If you’re POSITIVE your memory situation is in good shape (meaning you’re not running out of memory), then you can increase the max number of open files allowed for your user, or globally: https://www.howtogeek.com/805629/too-many-open-files-linux/
Again: if you do this, you will likely start hitting OOMkill situations, which is going to be worse. The file limits set right now are preventing that from happening.
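If you do go that route anyway, the usual knobs look something like this (username and number are placeholders; limits.conf entries take effect on your next login):

    ulimit -n 65536    # current shell only

    # /etc/security/limits.conf
    youruser soft nofile 65536
    youruser hard nofile 65536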


You have a process holding open a bunch of FDs. Instead of just blindly increasing the system limits, try to find the culprit with something like: lsof | awk '{print $1}' | sort | uniq -c | sort -nr
That will give you a count of open descriptors per process name. See which are the worst offenders and try to fix the issue.
You COULD just increase the fd open max, but then you will more than likely run into OOMkill issues because you aren’t fixing the problematic process.
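Once you’ve spotted a suspect, you can watch its descriptor count directly (swap in the real PID):

    ls /proc/<PID>/fd | wc -l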


Back up a bit because you’re conflating a number of things, so let me try to break it down:
Run dmesg to see what live changes your hardware controllers might be making.

Your power restrictions are preventing the higher power settings.
If you’re using Gnome or KDE, you can use the applets to just turn it up a notch and get more brightness.
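If you want to poke at it below the applet level, the raw knob is usually in sysfs (the device name varies by hardware, and writing needs root):

    cat /sys/class/backlight/*/max_brightness
    cat /sys/class/backlight/*/brightness
    echo 200 | sudo tee /sys/class/backlight/*/brightness   # any value up to max_brightness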


PXE is unnecessary unless you’re going to be creating a reusable boot image. Just faster to use LiveUSB.
What did you get laid off from, and what are you trying to apply to? It’d help to understand what you’re trying to learn.
For your own sanity, just install Talos on the 3 machines, understand how to join them to a cluster, then deploy some stuff around the cluster (rough flow sketched below). Get a feel for the basics before you get into the mess of trying to do it all in VMs.
I’d also check some comparisons of the various flavors of kube stacks: k3s, microk8s, KubeEdge…etc. There’s so many now it’s hard to track.
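The Talos flow is roughly this, from memory, so check the current docs before running anything (IPs and file names are placeholders):

    talosctl gen config my-cluster https://<control-plane-ip>:6443
    talosctl apply-config --insecure --nodes <control-plane-ip> --file controlplane.yaml
    talosctl apply-config --insecure --nodes <worker-ip> --file worker.yaml
    talosctl bootstrap --nodes <control-plane-ip>
    talosctl kubeconfig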


Because you’re being rate limited. Don’t let these tools constantly hammer GitHub’s API in massive fits and starts, or you may get a backoff from GH.
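You can confirm it by checking your remaining quota (this endpoint doesn’t count against the limit):

    curl -s https://api.github.com/rate_limit

Anonymous calls get 60 requests/hour; send an Authorization header with a token and you get the much higher authenticated quota.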


And I got downvoted into oblivion for bringing it up 🤣


Then why, if you aren’t familiar, would you comment that you didn’t see anything?
Do you randomly walk into other people’s jobs with zero proficiency and speak to how they’re doing at it?


Here’s a very simple list of issues that any Node dev would immediately recognize as generated and never cleaned up:
I mean I can keep going, but if you even glanced at this and didn’t IMMEDIATELY get it, you are bad at your job.


Lolwut??? Did you check the GitHub at all?


This is so vibecoded 🤣 Nawthx


Logs


And literally everything else can do it better.
But at least you got that.
😘


No. Just…no.
Okay, soooooo…basically disregarding the entire point and benefit of Gentoo? The entire reason you’d want to build from source on a specific machine or architecture is for the compiler optimizations tuned to that hardware. Just shipping binaries around is what every other distro already does, so I’m not getting what the point is here.
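For anyone following along, that per-machine optimization is the classic /etc/portage/make.conf bit:

    COMMON_FLAGS="-march=native -O2 -pipe"
    CFLAGS="${COMMON_FLAGS}"
    CXXFLAGS="${COMMON_FLAGS}"

-march=native tunes the build for the exact CPU doing the compiling, which is exactly what you give up when you ship prebuilt binaries around.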