• 3 Posts
  • 105 Comments
Joined 2 years ago
Cake day: June 18th, 2023



  • I was planning to look into Zig for this year’s Advent of Code. Haven’t really looked at it yet, but I’ve heard good things about it. Nowadays I mostly write in C# or Python for smaller scripts, so I kind of expect getting back to C-style code might have some friction, but it’s about time to refresh my memory. I had a pretty good time with Rust for AoC in the previous years (not that I ever used it for anything else), but I guess it’s time to try something else.




  • By the way, if you use Gmail for email, have files stored on GDrive, OneDrive (on Windows, Documents are in OneDrive by default) or iCloud, or use Messenger, WhatsApp, Skype, Snapchat, Xbox or Instagram to communicate, your files and messages have already been scanned for the last 5 years, since 2021.

    ChatControl was already voluntary, and the products I mentioned willingly joined and are already doing it. For most of the people suddenly complaining, not much actually changes. They could have done something about it for the past 5 years - not use the apps that do it - but “I don’t want to install another chat app, I have everyone on Messenger” has been forcing people like me to choose between privacy and having a way to contact friends and family. And I’m 90% sure that most of them wouldn’t switch even if this new law did not pass.

    Anyway, if you haven’t already, look up the “Matrix Ansible project” - it’s an extremely easy way to set up a server, with awesome guides and an actually very robust implementation. It will save you a lot of time. I’m just paying $6 a month for Hetzner cloud, and setting it up took an hour tops.

    Self-hosted open source solutions will always be an alternative; the major problem is that they will soon ban side-loading apps on phones, so you won’t be able to install a FOSS messenger that connects to your solution, or a browser that doesn’t scan you, unless you run something like GrapheneOS.


  • I was doing cybersecurity for a few years before I moved to gamedev, and I vaguely remember that at least the older versions of GUID were definitely not safe, and could be “easily” guessed.

    I had to look it up, in case anyone’s interested. From a quick glance at the UUID RFC, it depends on the version used, but if I’m reading it right, 6 bits out of the 128 are used for the version and variant fields, and then, based on the version, the rest is some kind of timestamp - either from UTC time or derived from a namespace (I didn’t really read through the details) - plus a clock sequence, which makes it a lot more guessable. I wonder how different the odds would be for the different UUID versions, but I’m too tired to actually understand the spec well enough to tell.

    However, for GUID version 4, both the timestamp and clock sequence fields are instead randomly generated, which gives you 122 bits of entropy. It of course depends on the implementation and what kind of random generator was used when generating it, but I’d say it may be good enough for some uses.
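    A quick Python sketch of where those fixed bits live in a v4 UUID (this just inspects the layout from the standard library’s `uuid` module - the 122-bit figure is the remainder after the version and variant bits):

```python
import uuid

u = uuid.uuid4()

# Version field: the high nibble of byte 6 holds the 4 version bits (0b0100 for v4).
assert u.bytes[6] >> 4 == 4
# Variant field: the top two bits of byte 8 are fixed to 0b10.
assert u.bytes[8] >> 6 == 0b10
# Everything else (128 - 4 - 2 = 122 bits) should come from the library's RNG.
print(u)
```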

    The spec also says that you specifically should not use it for auth tokens and the like, so there’s that.



  • Aren’t neural networks AI by definition, if we take the academic definition into account?

    I know that a thermostat is an AI, because it reacts to a stimulus (current temperature) and takes an action (starts heating) based on its state. Which is the formal AI definition.

    Wait. That actually means transformers are not AI by definition. Hmm, I need to look into it some more.

    EDIT: I was confusing things, that’s the definition of AI Agent. I’ll go research the AI definition some more :D
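    The thermostat example above - the textbook agent definition the comment is circling around (percept plus internal state mapped to an action) - can be sketched in a few lines of Python; the class and names here are just illustrative:

```python
# Minimal sketch of a stateful reflex agent: it maps a percept (current
# temperature) and internal state (the setpoint) to an action.
class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint  # internal state

    def act(self, temperature):
        # percept -> action
        return "heat" if temperature < self.setpoint else "idle"

t = Thermostat(setpoint=21.0)
print(t.act(18.5))  # heat
print(t.act(23.0))  # idle
```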


  • The major advantage of Matrix (not sure if DeltaChat can do the same) is the support for a lot of bridges, and how easily you can host it.

    Matrix has a really good and robust Ansible project, with which you can set up your own server in like an hour, assuming you have somewhere to host it (I use Hetzner for like $7 a month) and a domain. Adding bridges and configuring the Ansible setup only needed changing like 5 config lines at most, and it’s very well documented. It’s also super easy to maintain: I “just update” every few weeks, and it’s so robustly written that it lets me know what changed and what config I need to update. I’ve never had an issue with it in the two or three years I’ve been using it.

    And then the bridges - I did not need to convince others to switch, because I run Discord, WhatsApp, Telegram, Signal and Messenger bridges on my Matrix server, which bridges all of those apps into my Matrix server. Sure, they still get your conversation data, but at least you don’t have to have their spyware installed on your phone/PC, and you have it all consolidated into one Matrix app. I can also slowly convince people to switch to more secure messengers like Signal, but I don’t have to drop contact if they decide not to.
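    For a sense of scale, enabling a bridge in that Ansible project is roughly a one-line toggle per bridge in your `vars.yml` - the variable names below are from memory and may differ, so check the project’s own docs:

```yaml
# vars.yml (illustrative - verify exact variable names against the playbook docs)
matrix_mautrix_whatsapp_enabled: true
matrix_mautrix_telegram_enabled: true
matrix_mautrix_discord_enabled: true
matrix_mautrix_signal_enabled: true
# then re-run the playbook, e.g.:
#   ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start
```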


  • I second this. I only started slowly switching to nvim a few months ago, and I already feel slightly annoyed when I have to take my hands off the keyboard to reach for the mouse, or when I’m editing text in e.g. a browser, want to make an edit a few words back, and have to spam keys like a madman instead of just jumping where I need to be.

    It’s addicting and extremely comfortable, having good keyboard navigation controls.

    I really need to look into tiling window managers and a keyboard-driven browser.


  • I do also like all the Alt and Ctrl combinations with arrow keys to move lines and blocks and to jump over words.

    That’s what I love the most about vim: it has dozens of little tricks like these. Need to jump over a word? Jump to the next occurrence of the letter L? Jump five words? Jump to the second parameter of a function definition? Jump to the matching bracket? There’s a motion for all of that, and more - including “go to definition” or “go to references”, if you set up your vim correctly.
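    The motions mentioned above, roughly - the last two assume an LSP client (or tags) is configured, and the `gr` mapping in particular varies between configs:

```vim
w        " jump forward over one word
5w       " jump forward five words
fL       " jump to the next occurrence of 'L' on the current line
%        " jump to the matching bracket
gd       " go to definition (LSP or tags)
gr       " go to references (LSP mapping in many configs, e.g. LazyVim)
```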

    I don’t even know where to start to make vim or neovim do all that.

    What I did was simply install IdeaVim in my Rider, so I can start learning the motions while keeping the features of the IDE I’m used to. More importantly, I also installed LazyVim, which is a pre-made config for nvim that can do most of that by default, and has a simple addon menu (LazyExtras) that automatically downloads and installs the plugins relevant to the language you are working in. E.g. I need to work in Zig: I just open the LazyExtras menu, find zig-lang, and it installs the LSP, debugger, linter, etc. specific to that language.



  • Definitely, but the issue is that even the security companies that actually do the assessments also seem to be transitioning heavily towards AI.

    To be fair, in some cases ML is actually really good (e.g. in EDRs - bypassing an ML-trained EDR is really annoying, since you can’t easily see what triggered the detection, and that’s good), and that will carry most of the prevention and compensate for the vulnerable and buggy software. A good EDR and WAF can stop a lot. That is, assuming you can afford such an EDR; an AV won’t do shit - but unless we get another WannaCry, no-one cares that a few dozen people got hacked through a random game/app, “it’s probably their fault for installing random crap anyway”.

    I’ve also already seen a lot of people either writing reports with AI, or building whole tools that run “agentic penetration tests”. So, instead of a Nessus scan, or an actual Red Teamer building a scenario themselves, you get an LLM to write and decide on a random course of action, and they just trust the results.

    Most of the cybersecurity SaaS corporations didn’t care about the quality of the work before, just like the companies actually buying the services didn’t care (but had to check a checkbox). There’s not really an incentive for them to. Worst case, you get into a finger-pointing scenario (“We did have it pentested” -> “But our contract says we can’t find 100% of everything, and this wasn’t found because XYZ… Here’s a report with our methodology showing we did everything right”), or the modern equivalent, “it was the AI’s fault”, and maybe a slap on the wrist. I think the field will not get more important - just way, way more depressing than it already was three years ago.

    I’d estimate it will take around a decade of unusable software and dozens of extremely major security breaches before any of the large corporations (on any side) concedes that AI was a really, really stupid idea. And by then they’ll probably also realize that they can just get away with shipping buggy, vulnerable software and not care, since breaches will be commonplace and probably won’t affect larger companies with good (and expensive) frontline mitigation tools.