• 0 Posts
  • 50 Comments
Joined 2 years ago
Cake day: June 17th, 2023


  • Yes. They can. But they do not mix well with required checks. From GitHub's own documentation:

    If a workflow is skipped due to path filtering, branch filtering or a commit message, then checks associated with that workflow will remain in a “Pending” state. A pull request that requires those checks to be successful will be blocked from merging.

    If, however, a job within a workflow is skipped due to a conditional, it will report its status as “Success”. For more information, see Using conditions to control job execution.

    So even with GitHub Actions you cannot mix a required check with path/branch (or any other) filtering on a workflow, as the jobs will hang forever and you will never be able to merge the branch in. You can do either, but not both at once, and for larger, complex projects you tend to want to do both. Instead you need complex workflows, or workflows that always start and do internal checks to detect whether they actually need to run or not. And this is with GitHub Actions - it is worse for external CI/CD tooling.
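    A rough sketch of that "always start, then check internally" pattern, assuming a Python step inside a job that GitHub always runs (the relevant paths, base branch, and pytest command are placeholders, and the base ref has to be fetched in CI for the diff to work):

    ```python
    # decide_and_run.py - run inside a check that always starts, so the
    # required status is always reported even for irrelevant changes.
    import subprocess
    import sys

    # Paths this check actually cares about (assumed for illustration).
    RELEVANT_PREFIXES = ("src/", "tests/")
    BASE_REF = "origin/main"  # assumed base branch, must be fetched in CI

    def changed_files() -> list[str]:
        # Files changed between the base branch and the PR head.
        out = subprocess.run(
            ["git", "diff", "--name-only", f"{BASE_REF}...HEAD"],
            capture_output=True, text=True, check=True,
        )
        return [line for line in out.stdout.splitlines() if line]

    def main() -> int:
        files = changed_files()
        if not any(f.startswith(RELEVANT_PREFIXES) for f in files):
            # Nothing relevant changed: report success without doing the
            # work, so the required check never sits in "Pending".
            print("No relevant changes, skipping tests.")
            return 0
        # Otherwise do the real work (test runner assumed to be pytest).
        return subprocess.run(["pytest", "tests/"]).returncode

    if __name__ == "__main__":
        sys.exit(main())
    ```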


  • If you have folderA and folderB, each with their own set of tests, you don’t need folderA’s tests to run for a change to folderB. Most CI/CD systems can do this easily enough with two different reports. But you cannot mark them both as required, because they both won’t always run. Instead you need complicated fan-out pipelines in your CI/CD system so that only one report goes back to GH, or you need to always spawn a job for both folders and have the ones that don’t need to run return successful (roughly the pattern sketched below). Neither of these is very good, and it becomes very complex when you are working with large monorepos.

    It would be much better if the CI/CD system, which knows which pipelines it needs to run for a given PR, could tell GH which tests are required for that particular PR, and if you could configure GH to wait for that report from the CI/CD system. Or at the very least, if auto-merge was blocked by any failed check while the manual merge button was only blocked on required checks.
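    Roughly what the "spawn both, let the untouched one return successful" option looks like when folded into a single job that reports one status back to GH - a hedged sketch, with placeholder folder names and test commands:

    ```python
    # fan_out.py - decide, per folder, whether its suite needs to run
    # for this PR, and report a single combined result.
    import subprocess
    import sys

    # Folder -> test command mapping (placeholders for illustration).
    SUITES = {
        "folderA": ["pytest", "folderA/tests"],
        "folderB": ["pytest", "folderB/tests"],
    }

    def changed_files(base: str = "origin/main") -> list[str]:
        out = subprocess.run(
            ["git", "diff", "--name-only", f"{base}...HEAD"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.splitlines()

    def main() -> int:
        files = changed_files()
        failed = False
        for folder, cmd in SUITES.items():
            if not any(f.startswith(folder + "/") for f in files):
                # Untouched folder: treat as success so one required
                # check can still cover both suites.
                print(f"{folder}: no changes, skipping")
                continue
            print(f"{folder}: running {' '.join(cmd)}")
            if subprocess.run(cmd).returncode != 0:
                failed = True
        return 1 if failed else 0

    if __name__ == "__main__":
        sys.exit(main())
    ```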




  • We have a few non-required checks here and there - mostly because you need an admin to list a check as required, and that can be annoying to do. And we still get code merged in occasionally that fails those checks. Hell, I have merged in code that fails the checks. Sometimes checks take a while to run, and there is this nice merge-when-ready button in GH. But it will gladly merge your code in once all the required checks have passed, ignoring any non-required checks.

    And it is such a useful button to have, especially in a large codebase with lots of developers - just merge in the code when it is ready, instead of forgetting about it for a few hours and possibly having to rebase and run all the checks again because of some minor merge conflict…

    But GH required checks are just broken for large codebases as well. We don’t always want to run every check on every code change. We don’t need to run all the unit tests when only documentation has changed. But required checks are all or nothing. They need to return something or else you cannot merge at all (though this might apply to external checks more than GH Actions). I really wish there was a “require all checks that run to pass, and at least one check must run” option. Or if external checks could tell GH when they are required or not. Either way there is a lot of room for improvement in GH PR checks.



  • nous@programming.dev to Programming@programming.dev · Everything web based

    For a lot of things I would rather have something web based than app based. I hate having to download some random app from some random company just to interact with something one time. Why do all restaurants, car parking places, etc. require apps rather than just having a simple site? Not everything should be native-first IMO.


  • If the package is popular then it is very likely already packaged by your distro. You should always go there first if you care that much. If the package is not popular enough to be packaged by a distro, then how does another centralized approach help? Either it is fully curated like a distro package list, and so likely also won’t contain some random small project, or it is open for anyone to upload scripts to, and so will become vulnerable to malicious scripts. Worse yet, people would be able to upload scripts for projects they don’t control, as the developers of said projects likely won’t.

    Basically, it is not really any safer than separate dev-owned websites if it is open, nor does it offer better package support than distro repos if it is curated.

    Maybe the server was hacked and the script was changed?

    The same thing can happen to any system though. What happens if the servers for this service are hacked? Being a central point makes you a bigger target, and with more people able to change things (assuming you are not going to be the only one to curate packages) you have a bigger attack surface. And once hacked, it can compromise far more downloads than a single package.

    Your solution does not improve security - it just shuffles it around a bit. It sounds nice on paper, but when you look at it in more detail there are a lot more things you need to consider to create a system that is actually more secure than what we currently have.



  • There is also no way to verify that the software being installed is not going to do anything bad. If you trust the software then why not trust the installation scripts by the same authors? What would a third-party location bring to improve security?

    And generally, what you are describing is a software repo - you know, the one that comes with your distro.


  • Random programming certificates are generally worthless. The course to get them might teach you a lot and be worthwhile, but the certificate at the end is worthless. If it is free then it does not matter too much either way; it might be a good way to test yourself. But I would not rely on it to get you a job at all. For that you need other ways to prove you can do the job - typically the ability to talk about the subject and having written some real-world-like application. Which a course might help you do too.




  • Never said it had to be a text file. There are many binary serialization formats that could be used. But in a lot of situations the overhead you save is not worth the debugging effort of working with binary data. For something like this, which is likely not going to be more than a GB or so (probably much less), it really does not matter that much whether you use a binary or text format. This is an export format that will likely just have one batch-processing layer on top. This type of thing is generally easiest for more people to work with in a plain text format. If you really need efficient querying of the data then it is trivial and quick to load it into a DB of your choice, rather than being stuck with sqlite.
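    As an illustration of that last point, loading a line-oriented text export into SQLite for ad-hoc querying is only a few lines - a sketch assuming a JSON Lines file with made-up field names:

    ```python
    # load_export.py - pull a JSON Lines export into SQLite for querying.
    import json
    import sqlite3

    con = sqlite3.connect("tracking.db")
    con.execute("CREATE TABLE IF NOT EXISTS events (ts TEXT, name TEXT, value REAL)")

    # "export.jsonl" and the ts/name/value fields are assumptions; any
    # line-oriented text format works the same way.
    with open("export.jsonl") as f:
        rows = (
            (rec["ts"], rec["name"], rec["value"])
            for rec in (json.loads(line) for line in f if line.strip())
        )
        con.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

    con.commit()
    print(con.execute("SELECT COUNT(*) FROM events").fetchone()[0], "rows loaded")
    ```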


  • export tracking data to analyze later on

    That is essentially log data, or its equivalent. Log data does not have to be human readable; it is just a series of events that happen over time. Most log data, even what you would think of as traditional messages from a program, is not parsed by humans manually but analyzed by code later on. It is really not that hard or slow to process log data line by line. I have done this with TBs of data before, which does require a lot more effort. A simple file like this would take seconds to process at most, even if you were not very efficient about it. I also never said it needed to be stored as text, just that a simple file is enough - no need for a full database. That file could be binary if you really need it to be, but text serialization would also be good enough. Most of the web world is processed via text serialization.

    The biggest problem with YAML like in the OP is the need to decode the whole file at once, since it is a single list. Line-by-line processing would be a lot easier to work with. But even then, if it is only a few hundred MBs, loading it all into memory at once and analyzing it there would not take long at all - it just does not scale very well.
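    A small sketch of what line-by-line processing looks like when each record sits on its own line (JSON Lines assumed here; the field names are placeholders):

    ```python
    # summarize.py - stream a line-per-record export without loading it all.
    import json
    from collections import defaultdict

    totals = defaultdict(float)

    # One record per line means constant memory use, however large the file.
    with open("export.jsonl") as f:
        for line in f:
            if not line.strip():
                continue
            rec = json.loads(line)
            totals[rec["name"]] += rec.get("duration", 0.0)

    for name, total in sorted(totals.items()):
        print(f"{name}: {total:.2f}")
    ```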



  • The attack is known as the evil maid attack. It requires repeated access to the device. Basically, if you can compromise the bootloader you can inject a keylogger to sniff out the encryption key the next time someone unlocks the device. This is what secure boot is meant to help protect against (though I believe that has also been compromised).

    But realistically very few people need to worry about that type of attack. Encryption is good enough for most people. And if you don’t have your system encrypted then it does not matter what bootloader you use, as anyone can boot any live USB to read your data.



  • It does not matter if the battery is plugged in or not. Far more important is the state of the battery. All LiPo batteries degrade over time, but they can degrade faster or slower depending on the state they are stored in. They degrade faster at higher charge levels, when stored in hotter environments, or when they go through more charge/discharge cycles. Older battery technology also degraded faster in general; newer batteries tend to last longer in sub-optimal conditions.

    Apart from newer battery technology itself, battery monitoring and charging technology has also improved. A lot of modern laptops have smarter charging circuitry that lets them stop charging before the battery is at 100%, sometimes configurable in the BIOS, sometimes controllable via the OS. This can help a lot to preserve the battery life for longer, especially if you leave it plugged in, as it spends less time at 100% charge. Older devices also tended to run hotter for longer periods of time, even when idle. Both of these, combined with worse battery technology, would lead to batteries degrading quite a lot faster if you left them plugged in all the time - hence where the advice came from (note that removing the battery at 100% charge was also not great for it - better to store LiPo batteries at 40-60% charge - but it did still save it from the heat of the device). But when set up correctly, modern devices suffer from this a lot less, so it is much less important to remove the battery at all - I doubt you would really notice the difference overall on modern systems.