• 0 Posts
  • 141 Comments
Joined 3 years ago
Cake day: June 30th, 2023


  • Though, if you prefer, you can also move your hand to the mouse. With the scroll wheel and good hand-eye coordination, you can get pretty close to the speed of a true vim exper–haha jk, they finished converting the entire source file from Python to Rust using a specially crafted regex by the time your hand reached the mouse, and implemented a matrix view by the time you scrolled to the line you wanted.

    And when you say that falling green symbols aren’t that impressive, they look at you in confusion for a moment before realizing what you meant and handing you a VR plug to show you what “matrix view” really means.



  • The reason that “25” number came up is that’s how old the cohort in the brain-development study was when the funding got cut. There’s no reason not to believe that brains keep developing all our lives, and even if that study had found a “cut-off point”, there’s no reason it would be the same from person to person.



  • Just realized that even if there is no mechanism to get the exact date out of any of these age-tracking systems, they’ll be able to infer the exact dates just by watching when the user/device transitions to the next bracket. Then they’ll know the birthday that starts that bracket falls somewhere between the last check and the current one.

    Though maybe that data can be poisoned by making the bracket transition backwards occasionally, so it looks like the user is editing their age up and back down or something. But, on the other hand, a lack of data or poisoned data is going to be a flag on its own at some point (if not already).
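    To make the inference concrete, here’s a minimal sketch of how an observer could do it. Everything here is invented for illustration (the bracket edges, the function names, the observation format); it just shows that two coarse bracket readings straddling a transition pin the birthday to the gap between them.

    ```python
    from datetime import date, timedelta

    def infer_birthday_window(observations):
        """Hypothetical sketch: observations is a date-sorted list of
        (check_date, bracket_index) pairs, where bracket_index is the coarse
        age bracket the system reported at that check. Returns the (earliest,
        latest) dates the bracket-crossing birthday could fall on, or None if
        no transition was observed yet."""
        for (d0, b0), (d1, b1) in zip(observations, observations[1:]):
            if b1 > b0:
                # The birthday that crossed the bracket edge must lie in the
                # gap between these two consecutive checks.
                return (d0 + timedelta(days=1), d1)
        return None

    # Example: the device reports bracket 0 twice, then bracket 1. The
    # observer now knows the birthday is between March 2 and June 1.
    checks = [
        (date(2024, 1, 1), 0),
        (date(2024, 3, 1), 0),
        (date(2024, 6, 1), 1),
    ]
    window = infer_birthday_window(checks)
    ```

    The more often the checks happen, the narrower that window gets, so frequent “harmless” bracket checks leak nearly the exact date over time.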



  • It’s not even a junior dev. It might “understand” a wider and deeper set of things than a junior dev does, but at least junior devs might have a sense of coherency to everything they build.

    I use gen AI at work (because they want me to) and holy shit is it “deceptive”. In quotes because it has no intent at all, but it is just good enough to make it seem like it mostly did what was asked, but you look closer and you’ll see it isn’t following any kind of paradigms, it’s still just predicting text.

    The amount of context it can include in those predictions is impressive, don’t get me wrong, but it has zero actual problem solving capability. What it appears to “solve” is just pattern matching the current problem to a previous one. Same thing with analysis, brainstorming, whatever activity can be labelled as “intelligent”.

    Hallucinations are just cases where it matches a pattern that isn’t based on truth (either mispredicting or predicting a lie). But it also goes the other way, where it misses patterns that are there, which is horrible for programming if you care at all about efficiency and accuracy.

    It’ll do things like write a great helper function that it uses once but never again, maybe even writing a second copy of it the next time it would use it. Or forgetting instructions (in a context window of 200k, a few lines can easily get drowned out).
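    As a hypothetical illustration of that duplicated-helper failure mode (both functions and names are invented, not from any real codebase): the model writes a perfectly reasonable helper for the first call site, then later re-implements the same logic inline instead of calling it.

    ```python
    # Invented example of the failure mode: a helper written once, then
    # duplicated inline at the next call site instead of being reused.

    def normalize_email(addr: str) -> str:
        """Helper the model wrote for the first call site."""
        return addr.strip().lower()

    def register_user(addr: str) -> str:
        # First call site: the helper actually gets used.
        return f"registered {normalize_email(addr)}"

    def invite_user(addr: str) -> str:
        # Second call site, generated later: the same logic reappears inline,
        # ready to drift out of sync the moment either copy changes.
        return f"invited {addr.strip().lower()}"
    ```

    Both functions behave identically today, which is exactly why the duplication slips through review until the two copies diverge.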

    Code quality is going to suffer as AI gets adopted more and more. And I believe the problem is fundamental to the way LLMs work. The LLM-based patches I’ve seen so far aren’t going to fix it.

    Also, as much as it’s nice to not have to write a whole lot of code, my software dev skills aren’t being used very well. It’s like I’m babysitting an expert programmer with Alzheimer’s who thinks they’re still in their prime and doesn’t realize they’ve forgotten what they did 5 minutes ago. Meanwhile my company pays them big money, gets upset if we don’t use their expertise, and probably intends to use my AI chat logs to train my replacement, since everything I know can be parsed out of those conversations.