• 0 Posts
  • 12 Comments
Joined 2 months ago
Cake day: December 4th, 2025

  • True, but I have two problems with that line of thought:

    1. I don’t want any outdated dependencies in my network. They might contain a critical bug, and if I back up the images, I keep those bugs around. That seems pretty silly.
    2. If an application breaks because you updated its dependencies, you either have to update the application as well, or you’ve got abandonware on your hands, in which case it’s probably time to find a replacement.

  • I’m kinda confused by all of the people here doing that tbh.

    The entire point of Dockerfiles is that they produce the same image over and over again. Meaning, I can take the Dockerfile, spin it up on any machine on God’s green earth and have it run there in the exact same state as anywhere else, apart from any configs or files that need to be mounted.

    Now, if I’m worried about an image disappearing from a remote registry, I just download the Dockerfile and store it locally somewhere. But backing up the entire image seems seriously weird to me and kinda goes against the spirit of Docker.
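
    Rebuilding from a stored Dockerfile instead of keeping image backups is also easy to script. A minimal sketch, assuming the Python Docker SDK (docker-py) is installed - the path and tag are made up:

```python
import docker  # pip install docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Rebuild the image from a locally stored Dockerfile.
# Path and tag are hypothetical - point them at wherever you keep the file.
image, build_logs = client.images.build(
    path="/srv/backups/dockerfiles/myapp",  # directory containing the Dockerfile
    tag="myapp:rebuilt",
)

# Print the build output as it streams by.
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

print(f"Rebuilt image id: {image.id}")
```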






  • So, first of all, I barely ever had to work with D-Bus directly - the few times I did use it, it was fine to work with (there’s a minimal sketch of what a direct call looks like at the end of this comment).

    Without any well-defined standards, a protocol is essentially useless and/or lawless

    When I look up “D-Bus Specification”, I get this: https://dbus.freedesktop.org/doc/dbus-specification.html. That LOOKS like proper documentation of the standard to me.

    the general lax nature of how endpoints are intended to be defined … is a significant factor for why many applications are the way they are

    I feel like this is the same complaint people have about other things, PHP for example. They see shitty PHP code (like WordPress) and go: “Oh my god, PHP is such a shitty language because this application is written like shit.” But I don’t blame a language, a framework or a protocol for the failures of its users. I also don’t feel like a component that close to the system core has to be absolutely “dummy proof”. At some point, we should just expect that people know what they’re doing, and if they don’t, we should blame them, not the underlying technology.
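
    For reference, this is roughly what working with D-Bus directly looks like from a script - a minimal sketch, assuming the dbus-python bindings are installed, that sends a desktop notification over the session bus:

```python
import dbus  # dbus-python bindings

# Connect to the per-user session bus.
bus = dbus.SessionBus()

# Get a proxy for the well-known notification service and its interface.
proxy = bus.get_object("org.freedesktop.Notifications",
                       "/org/freedesktop/Notifications")
notifications = dbus.Interface(proxy,
                               dbus_interface="org.freedesktop.Notifications")

# Notify(app_name, replaces_id, app_icon, summary, body, actions, hints, timeout_ms)
notifications.Notify("demo", 0, "", "Hello", "Sent over D-Bus", [], {}, 5000)
```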


  • realitaetsverlust@piefed.zip to Linux@programming.dev · *Permanently Deleted* · 2 months ago

    Honestly, 80% of the article is ranting about developers not writing proper documentation or following specs, which is not the fault of D-Bus. The only point I agree with is the lack of security features, but that really wasn’t a thing back then - half of the shit that was developed was completely insecure. Not saying that’s a good thing, btw. But that can be fixed.



  • How do you notify yourself about the status of a container?

    I usually notice that a container or application is down because it results in something in my house not working. Sounds stupid, but I’m not hosting a highly available cluster at home.

    Is there a “quick” way to know if a container has healthcheck as a feature.

    Check the documentation

    Does healthcheck feature simply depend on the developer of each app, or the person building the container?

    If the developer ships a healthcheck, you should use that. If there is none, you can always build one yourself. If it’s a web app, a simple HTTP request does the trick - just validate the response: if the status code is 200 and the output contains a certain string, the app seems to be up. If it’s not a web app but, say, a database, a simple SELECT 1 against it tells you whether it’s reachable or not.
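
    A minimal sketch of such a hand-rolled check in Python, assuming the requests package is installed - the URL, the expected string and the database path are made up, and sqlite3 stands in for whatever database you actually run:

```python
import requests  # pip install requests
import sqlite3   # stand-in for your actual database driver


def web_app_healthy(url: str, expected: str) -> bool:
    """Healthy if the app answers with HTTP 200 and the page contains the expected string."""
    try:
        resp = requests.get(url, timeout=5)
        return resp.status_code == 200 and expected in resp.text
    except requests.RequestException:
        return False


def database_healthy(path: str) -> bool:
    """Healthy if a trivial SELECT 1 succeeds."""
    try:
        with sqlite3.connect(path) as conn:
            conn.execute("SELECT 1")
        return True
    except sqlite3.Error:
        return False


if __name__ == "__main__":
    # Hypothetical targets - replace with whatever you actually host.
    print(web_app_healthy("http://localhost:8080/", "Login"))
    print(database_healthy("/var/lib/myapp/app.db"))
```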

    Is it better to simply monitor the http(s) request to each service? (I believe this in my case would make Caddy a single point of failure for this kind of monitor).

    If you only run a bunch of web services that you use on demand, monitoring the HTTP requests to each service is more than enough. Caddy being a single point of failure is not a problem, because Caddy being dead still means the service is unusable. And you will immediately know whether Caddy died or the service behind it, because the error looks different: if the upstream is dead, Caddy returns a 502; if Caddy itself is dead, you’ll get a “Connection timed out”.
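
    That difference is easy to check programmatically too - a minimal sketch, again assuming the requests package and a made-up URL:

```python
import requests  # pip install requests


def probe(url: str) -> str:
    """Tell apart 'upstream dead' (reverse proxy answers 502) from 'proxy dead' (no answer)."""
    try:
        resp = requests.get(url, timeout=5)
        if resp.status_code == 502:
            return "Caddy is up, the upstream service is down"
        return f"up (HTTP {resp.status_code})"
    except (requests.ConnectionError, requests.Timeout):
        return "Caddy itself is unreachable"


print(probe("https://service.example.home"))  # hypothetical URL
```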