• 0 Posts
  • 34 Comments
Joined 3 years ago
Cake day: June 20th, 2023

  • Well, yeah, that’s what Scrum is. From the guide, which takes maybe 10 minutes to read:

    Scrum Teams are cross-functional, meaning the members have all the skills necessary to create value each Sprint. They are also self-managing, meaning they internally decide who does what, when, and how.

    That’s not a throwaway sentence - it is fundamental to how scrum works and that is reinforced throughout the scrum guide.

    In every conversation about Agile and/or Scrum being “the worst”, it turns out after some prodding that the company refused to read or implement one or several of the fundamental principles, often without even being aware that it was an essential requirement. You’re baking a cake and you decided not to use any butter - that’s on you, champ, don’t blame the fucking recipe.

    The biggest valid criticism of scrum is that the thing that makes it so great - its structural empowerment of individual teams - is also what makes it structurally incompatible with any traditional top-down management style. The company must fundamentally be (re-)organized to have a flat corporate structure within its R&D department - most are simply incapable of mustering the necessary changes, if only because too many middle managers’ jobs are at stake. So they call their middle managers “POs” or “Scrum Masters” and wonder why their version of Scrum sucks.




  • Ideally you’d use the docker executor with a dind service instead of running docker commands in a shell executor. You’ll get better isolation (e.g. no conflicts from open port forwards) and better forward compatibility (the pipeline won’t break every time a major upgrade is applied to the runner, because the docker CLI - especially compose - is unstable).
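    A minimal .gitlab-ci.yml sketch of that setup, assuming a recent runner configured with the docker executor; the job name, image tags, and build command are illustrative:

```yaml
# Hypothetical job: build an image against a docker:dind sidecar
# instead of the runner host's docker daemon.
build-image:
  image: docker:27          # docker CLI inside the job container (tag is an example)
  services:
    - docker:27-dind        # isolated docker daemon, torn down after the job
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker build -t my-image:$CI_COMMIT_SHORT_SHA .
```

    The DOCKER_HOST/DOCKER_TLS_CERTDIR variables follow GitLab’s documented dind setup; since the daemon is a throwaway sidecar, ports, networks, and leftover images don’t leak between jobs.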


  • For GitLab this is only correct with a shell executor, which should be avoided in the general case in favor of a docker or k8s executor for isolation and repeatability.

    Those you can actually run locally with gitlab-runner, but then you won’t have all your GitLab instance’s CI variables, so it’s a PITA if you need a CI token - which you probably do if you make decent use of GitLab’s features.

    In most cases I just end up committing my changes to avoid the headache. :!git commit --amend --no-edit && git push -f goes pretty dang fast, and 60% of the time, third time’s the charm.



  • A 5 kW peak stovetop is already more power than anyone can reasonably use with the amount of space available on a standard stove. Literally the only useful thing you can do at full power is bring water to a boil, because no actual cooking can happen at full power unless your diet is carbonized food. I have a 3.5 kW stovetop and it’s perfectly adequate.

    After the first 15-20 minutes of cooking (bringing water to a boil while preparing some onions/garlic/sauce/seasonings) it gets very hard to keep using 1 kW. By that point you’ll be leaving things on medium heat at most. I can’t think of a single home-cooked meal that would require continuously drawing a full 2 kW from the stove for multiple hours, that’s a truly crazy amount of energy. Even an oven at full blast won’t use anywhere near 2 kW once it has reached 250 °C.
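    To put a number on the boiling-water case, a quick back-of-the-envelope check (textbook water figures, not numbers from the comment):

```python
# Energy to heat 2 kg (about 2 L) of water from 20 °C to 100 °C,
# then the time a 3.5 kW stovetop needs to deliver it (losses ignored).
SPECIFIC_HEAT_WATER = 4186  # J/(kg*K)
mass_kg = 2.0
delta_t_k = 100 - 20

energy_j = mass_kg * SPECIFIC_HEAT_WATER * delta_t_k   # ~670 kJ
seconds_at_3_5_kw = energy_j / 3500                    # ~191 s

print(f"{energy_j / 1000:.0f} kJ, {seconds_at_3_5_kw / 60:.1f} min")
```

    A few minutes to a rolling boil at 3.5 kW is consistent with the claim that the extra peak power mostly buys you nothing outside that initial burst.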



  • I don’t disagree with the point being made but I think the author is underselling the value of opentelemetry tracing here.

    OTEL tracing is not mere plumbing. The SDKs are opinionated and do provide very useful context out of the box (related spans/requests, thrown exceptions, built-in support for common libraries). The data model is easy to use and contextful by default.

    It’s more useful if the application developer properly sets attributes, as the article demonstrates, but even a half-assed tracing implementation is still an incredibly valuable addition to logging for production use.




  • you’re at risk of becoming dependent, and not building the understanding you’ll need to make something that matters

    Could be – and has been – said about literally any abstraction. Some people still haven’t gotten over the fact that assembly is no longer the default systems programming language.

    For me vibe coding is more akin to vbscript or MS Access. It’s for people who neither know nor care about the “how” and don’t give a shit about quality, performance, security, or maintainability (i.e. people who have no interest in software development). It’s a market that’s looked down upon for many good reasons, but it does exist and it does serve a purpose for small-scale and low-stakes DIY automation. Unfortunately that purpose is a lot narrower than the marketing pitch, nevermind all the other deleterious problems with the AI industry as it exists today.




  • Real answer: it depends.

    • Deleting a file in use: no problemo. The file is removed from the directory immediately, but it exists on disk until the last program that had it open closes it. Everyone wins! (Unless you’re trying to free up space by deleting a huge file that a program is holding open and you don’t understand why the filesystem usage didn’t go down.)
    • Unmounting a hard drive in use: will error out, similarly to Windows. lsof can tell you which process has which files open. There’s nuance with lazy unmounts and whatnot, but those should not be used in most cases.
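    The delete-while-open behavior is easy to demonstrate; a minimal sketch in Python (the file contents are arbitrary):

```python
import os
import tempfile

# Create a file, keep it open, then unlink it.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w+") as f:
    f.write("still here")
    f.flush()
    os.unlink(path)                  # the name is gone from the directory...
    assert not os.path.exists(path)
    f.seek(0)
    data = f.read()                  # ...but the open handle still works
print(data)  # -> still here
```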

    Now in practice you should be wary of one very important thing that changes compared to Windows: writes are asynchronous on Linux. The kernel first writes to RAM, then flushes to disk at a later time for performance reasons (this is one of the reasons why writing a bunch of small files is many times faster on Linux than on Windows). The upshot is that just because your file copy says it’s “done” doesn’t mean you can yank the USB cable. Always safely unmount before unplugging a storage device on Linux.
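    If a script needs a write to actually be on the device before it reports success, it can request the flush explicitly instead of waiting for the kernel; a sketch, where the path is a placeholder:

```python
import os

def write_durably(path: str, data: bytes) -> None:
    """Write data and block until it has reached the physical device."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # push Python's userspace buffer to the kernel
        os.fsync(f.fileno())  # ask the kernel to flush the file to the device
```

    This is the same guarantee a safe unmount provides, just per-file: fsync blocks until the kernel’s cached pages for that file are on disk.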




  • Counterpoint: Yes, parse don’t validate, but CLIs should not be dealing with dependency management.

    I love Python’s argparse because:

    • It’s “Parse, don’t validate” (even supports FileType as a target)
    • It enforces or strongly encourages good CLI design
      • Required arguments should in most situations be positional arguments, not flags. It’s curl <URL> not curl --url <URL>.
      • Flags should not depend on each other. That usually indicates spaghetti CLI design. Don’t do server --serve --port 8080 and server --reload with rules for mix-and-matching those, do server serve --port 8080 and server reload with two separate subparsers.
      • Mutually exclusive flags sometimes make sense but usually don’t. Don’t do --xml --json, do -f [xml|json].
      • This or( pattern of yours should IMO always be replaced by a subparser (which can use inheritance!). As a user, the options’ data model should be immediately intuitive when I look at the --help output, and mutually exclusive flags force the user to do the extra work of dependency management. Don’t do server --env prod --auth abc --ssl, do server serve prod --auth abc --ssl, where prod is its own subparser inheriting from AbstractServeParser or whatever.

    Thinking of CLI flags as a direct mapping to runtime variables is the fundamental mistake here, I think. A CLI should be a mapping to the set(s) of behavior(s) of your application. A good CLI may have mandatory positional arguments, but it has zero mandatory flags and zero mutually exclusive flags, and if it implements multiple separate behaviors it should be a tree of subparsers. Any mandatory or mutually exclusive flag should be an immediate warning that your CLI design is not being very UNIX-y.
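    A minimal argparse sketch of that layout, using the hypothetical server/serve/reload names from above:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="server")
    sub = parser.add_subparsers(dest="command", required=True)

    # `server serve --port 8080`: one behavior, one subparser,
    # with its own optional flags and no cross-flag rules.
    serve = sub.add_parser("serve", help="start the server")
    serve.add_argument("--port", type=int, default=8080)

    # `server reload`: a separate behavior, a separate subparser.
    sub.add_parser("reload", help="reload configuration")
    return parser

args = build_parser().parse_args(["serve", "--port", "8080"])
print(args.command, args.port)  # -> serve 8080
```

    The "inheritance" mentioned above maps to argparse’s parents= parameter: shared flags go in a common parent parser that each subparser lists in add_parser(..., parents=[...]).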


  • I’ve been using AI to help me with some beginner-level Rust compilation errors recently.

    I never once got an accurate solution, but half the time it gave me a decent enough keyword to google or a broken pattern to fix myself. The other half of the time it just gave me back my own code, proudly telling me it had fixed it.

    Don’t worry though, AGI is right around the corner. Just one more trillion dollars bro. One trillion and we’ll provide untold value to the shareholders bro. Trust me bro.


  • Did you even read the article? Even under the VERY GENEROUS interpretation of contract law that contracts can’t be predatory (which is not a particularly popular philosophical stance outside of cyberpunk fiction), AWS MENA fell short of even their typical termination procedures because they accidentally nuked the account while doing a dry run.

    I don’t know where you work but if we did that to a paying customer, even IF there was a technicality through which we could deny responsibility, we would be trying to make it right.