

Why wrap it in a function at all? Why not just put the if at the top of the file?


While true, and some will do it for that reason, I bet most do it simply because the friction to forking is so low.
Some might have an intention to work on it but then don't, or might start looking at it in detail and then give up, get too busy or lose interest.
Others might just click it to save it for later.
And don’t forget all the people that click it by accident.
It’s not like it is a big investment to click the button.
Probably something to do with their main package deleting the pacman lock file so it can run a nested pacman update command… which means two pacman instances running at the same time, and nothing stopping other instances once that nested one has completed.
I have more than once given up on pressing up and hit ctrl + c to reset, only to see the command I wanted briefly flash up as I am hitting ctrl + c.


Important part from the article, not all VPNs are equal:
Instead, the authors recommend using paid VPNs, which are generally considered to be more reliable and secure. For example, no serious privacy or security issues were found with Lantern, Psiphon, ProtonVPN or Mullvad.
VPNs are not a magic bullet. They are at best a transfer of trust from your ISP - which in a lot of cases you have little to no control over and very little trust in - to the VPN provider.
Picking a well-trusted VPN can improve things; picking a bad one can make your situation worse. And there are more important things you need to worry about before a VPN will really help at all.


You never want build artifacts to be committed, and you don't want everyone working on your project to have to set up their own gitignore for it. So it makes sense to have a common committed gitignore for files the project produces that should never be tracked by git.
I dislike it when people put editor files in the project gitignore though. People should set up global ignore files for their local tooling.
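Something like this (the build paths are made up - adjust to whatever your project actually produces):

```
# Committed .gitignore: only files the project's own build produces
build/
dist/
*.o

# Editor/IDE files (.vscode/, .idea/, *.swp, ...) belong in a personal
# global ignore file instead, e.g. one configured with:
#   git config --global core.excludesFile ~/.gitignore_global
```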


Yes, they can. But they do not mix well with required checks. From GitHub's own documentation:
If a workflow is skipped due to path filtering, branch filtering or a commit message, then checks associated with that workflow will remain in a “Pending” state. A pull request that requires those checks to be successful will be blocked from merging.
If, however, a job within a workflow is skipped due to a conditional, it will report its status as “Success”. For more information, see Using conditions to control job execution.
So even with GitHub Actions you cannot mix a required check with path/branch (or any other) filtering on a workflow, as the check will stay pending forever and you will never be able to merge the branch in. You can do one or the other, but not both at once, and for larger complex projects you tend to want to do both. Instead you need complex workflows, or workflows that always start and do internal checks to detect whether they actually need to run. And this is with GitHub Actions - it is worse for external CI/CD tooling.


If you have folderA and folderB, each with their own set of tests, you don't need folderA's tests to run for a change to folderB. Most CI/CD systems can do this easily enough with two different reports. But you cannot mark them both as required, as they both won't always run. Instead you need complicated fan-out pipelines in your CI/CD system so that only one report goes back to GH, or you need to always spawn a job for both folders and have the ones that don't need to run report success. Neither of these is very good, and both become very complex when you are working with large monorepos.
It would be much better if the CI/CD system, which knows which pipelines it needs to run for a given PR, could tell GH which checks are required for that particular PR, and if you could configure GH to wait for that report from the CI/CD system. Or, at the very least, if auto-merge was blocked by any failed check and only the manual merge button was limited to required checks.
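Roughly what the "always spawn the job and have it decide internally" workaround looks like - just a sketch, the branch name, folder layout and test command are placeholders, not what any particular CI system mandates:

```python
# always_run_check.py - sketch of a job that always starts, so the required
# check always reports, but only runs the real tests when folderA/ changed.
# Assumes the PR is diffed against its merge base with origin/main.
import subprocess
import sys

changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

if any(path.startswith("folderA/") for path in changed):
    # Hypothetical test entry point; whatever actually runs folderA's tests.
    sys.exit(subprocess.call(["python", "-m", "pytest", "folderA/tests"]))

print("No changes under folderA/, reporting success without running its tests.")
```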


One problem is GH's "auto-merge when ready" button. It will merge while there are still tests running, unless they are required. It would be much better if auto-merge took into account all checks and not just required ones.


Yeah, there are ways to run partial tests on modified code only, but they interact poorly with GH required checks. https://github.com/orgs/community/discussions/44490 goes into a lot more detail on similar problems people are having with GH Actions - though our problem is with external CI/CD tools that report back to GH. It does look like they have updated the docs linked in that discussion, so maybe something has recently changed with GH Actions - but I bet the problem still exists for external tooling.


We have a few non-required checks here and there - mostly because you need an admin to list a check as required and that can be annoying to do. And we still get code merged in occasionally that fails those checks. Hell, I have merged in code that fails them. Sometimes checks take a while to run, and there is this nice "merge when ready" button in GH. But it will gladly merge your code in once all the required checks have passed, ignoring any non-required checks.
And it is such a useful button to have, especially in a large codebase with lots of developers - just merge in the code when it is ready, and avoid forgetting about it for a few hours and possibly having to rebase and run all the checks again because of some minor merge conflict…
But GH required checks are just broken for large codebases as well. We don't always want to run every check on every code change; we don't need to run all unit tests when only documentation has changed. But required checks are all or nothing. They need to return something or else you cannot merge at all (though this might apply more to external checks than GH Actions). I really wish there was a "require all checks that run to pass" plus an "at least one check must run" option. Or if external checks could tell GH whether they are required or not. Either way, there is a lot of room for improvement in GH PR checks.
Yeah, I don't like this either. There are so many chances for a mistake - being in the wrong dir, a misspelled file, something not cloned correctly, or anything else not set up the way you think it is - and suddenly the package manager does something you don't expect (like trying to install globally rather than in a project, or vice versa).
For a lot of things I would rather have something web based than app based. I hate having to download some random app from some random company just to interact with something one time. Why do all restaurants, car parking places, etc. require apps rather than just having a simple site? Not everything should be native first IMO.
If the package is popular then it is very likely already packaged by your distro, and you should go there first if you care that much. If the package is not popular enough to be packaged by a distro, then how does another centralized approach help? Either it is fully curated like a distro package list and likely also won't contain some random small project, or it is open for anyone to upload scripts to and so becomes vulnerable to malicious scripts. Worse yet, people would be able to upload scripts for projects they don't control, since the developers of said projects likely won't.
Basically, it is not really any safer than separate dev-owned websites if it is open, nor does it offer better package support than distro repos if it is curated.
Maybe the server was hacked and the script was changed?
The same thing can happen to any system though. What happens if your servers for this service are hacked? Being a central point makes you a bigger target, and with more people able to change things (assuming you are not going to be the only one curating packages) you have a bigger attack surface. And once hacked, it can compromise far more downloads than a single package.
Your solution does not improve security - it just shuffles it around a bit. It sounds nice on paper, but when you look at it in more detail there are a lot more things you need to consider to create a system that is actually more secure than what we currently have.
Then how would you trust the scripts in a central repo? It seems to add no real value or safety over dev-managed scripts if you are not willing to go down the path of becoming yet another distro packaging system.
There is also no way to verify that the software being installed is not going to do anything bad. If you trust the software, then why not trust the installation scripts by the same authors? What would a third-party location bring to improve security?
And generally, what you are describing is a software repo - you know, the one that comes with your distro.


Random programming certificates are generally worthless. The course to get one might teach you a lot and be worthwhile, but the certificate at the end is worthless. If it is free then it does not matter too much either way - it might be a good way to test yourself. But I would not rely on it to get you a job at all. For that you need other ways to prove you can do the job - typically the ability to talk about the subject and having written some real-world-like application. Which a course might help you do, too.


YAML is not a good format for this. But any line-based or streamable format would be good enough for log data like this. It is really easy to parse with any language or even directly with shell scripts. No need to even know SQL; any text processing would work fine.


CSV would be fine. The big problem with the data as presented is that it is a YAML list, so the whole file needs to be read into memory and decoded before you can get any values out of it. Any line-based encoding would be vastly better and would allow line-based processing. CSV, JSON objects encoded one per line, some other streamable binary format - it does not make much difference overall, as long as it is line based or at least streamable.
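For example, with one JSON object per line (the file name and field names here are made up), you can filter the log without ever holding the whole thing in memory:

```python
# Stream a log file with one JSON object per line; only one event is in
# memory at a time, unlike a YAML list that must be parsed as a whole.
import json

with open("app.log") as f:
    for line in f:
        event = json.loads(line)
        if event.get("level") == "error":
            print(event.get("msg"))
```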
I find most people don't create good unit tests. They make them too small, which results in them being fragile to change or near useless. They end up being a trap for future you, not a love letter.
The number of times I have refactored some code and broken a whole bunch of unit tests is unreal. These types of tests are less than useless. If you cannot refactor code, then what is the point of a unit test? I don't need to know that the code under test doesn't break when I don't change it… I need to know that when I change code my assumptions still hold true.
We should be testing modules as a whole through their external API (that is, external to the module, not necessarily to the project as a whole), not individual functions. Most people seem to call these integration tests though…
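Something like this is what I mean - `orders` is a completely made-up module, and the point is that the test only touches its public functions, so internal refactors don't break it as long as the behaviour holds:

```python
# Hypothetical module-level test: everything goes through the public API of
# the (made-up) orders module, never through its internal helper functions.
import orders

def test_discount_is_applied_to_the_total():
    basket = orders.new_basket()
    orders.add_item(basket, sku="ABC", price=100, quantity=2)
    orders.apply_discount(basket, code="SAVE10")  # assume a 10% discount code
    assert orders.total(basket) == 180
```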