• 0 Posts
  • 32 Comments
Joined 2 years ago
Cake day: July 1st, 2023





• saltesc@lemmy.world to ADHD memes@lemmy.dbzer0.com · Not my strongest area
    18 days ago

    Okay. Well, the good thing about science is that there are lots of correlation figures you can go check out at any time instead.

    But there are some simpler ways to see it, like how obviously no one was known to have ADHD before it was identified, and how more people have it as its definition continues evolving to be more detailed and broad. That's normal behaviour for conditions in medical science. We must know of a condition's existence before anyone can have it, and more people tend to have it as our understanding of the thing improves by leaps and bounds.




• saltesc@lemmy.world to ADHD memes@lemmy.dbzer0.com · Not my strongest area
    20 days ago

    I honestly feel the modern take on ADHD is just a label for people who don't fit into how society wants them to.

    For tens of thousands of years, we nurtured those who picked up on the butterfly. Those minds will always be critical to human survival. But only recently has incompatibility with the standard, anything that demands more effort, turned into "Well, they must be broken."

    Nope. They never were. We just got real bad at bringing out the best in everyone. At this rate of classification down the path we’re going, it’ll be weird to not have ADHD.






• saltesc@lemmy.world to ADHD memes@lemmy.dbzer0.com · true
    2 months ago

    I’m constantly in situations where someone needs to know but no one tries, so I take it on and level up.

    It’s the “doer” classification. Certainly not the best person for the job, but no one else is doing it, so gotta try. Over time, a new skill is built.

    Patience and persistence are beneficial traits to have. I don't think they're natural; I think they're forced, but they become natural-like after you repeatedly experience their inevitably rewarding outcomes.


  • Light debugging I actually use an LLM for. Yes, I know, I know. But when you know it's a syntax issue or something simple, and a quick skim-through produces no results, AI be like, "You used a single quote instead of a double quote on line 154, so it's indirectly using a string instead of referencing a value. Also, there's a typo in the source name on line 93, because you spelled it differently everywhere else."

    By design, LLMs do be good for syntax, whether a natural language or a digital one.

    Nothing worse than going through line by line, only to catch the obvious mistake on the third “Am I losing my sanity?!” run through.
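    The single-vs-double quote mix-up described above is easy to reproduce in standard SQL, where single quotes build string literals and double quotes reference identifiers. A minimal sketch using Python's sqlite3 (table and column names are made up for illustration):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('Ada'), ('Grace')")

    # Intended: double quotes reference the "name" column, so each row's value comes back.
    rows_ok = conn.execute('SELECT "name" FROM users').fetchall()

    # Bug: single quotes make the string literal 'name', repeated once per row.
    rows_bug = conn.execute("SELECT 'name' FROM users").fetchall()

    print(rows_ok)   # [('Ada',), ('Grace',)]
    print(rows_bug)  # [('name',), ('name',)]
    ```

    Both queries run without errors, which is exactly why this class of bug survives a quick skim: the result is silently wrong rather than a syntax failure.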





  • We can, but it’s a lot of effort and time. Good AI requires a lot of patience and specificity.

    I've sort of accepted the gimmick of LLMs as a bit of a plateau in training. It has always been that we teach AI to learn, but currently the public has been exposed to what it perceives to be magic, and that's "good enough". Like, being wrong so often due to bad information, bad interpretation of information, and bias within information is acceptable now, apparently. So teaching AI to learn isn't a high mainstream priority compared to throwing in mass information instead; it's far less exciting working on infrastructure.

    But here's the cool thing about AI: it's pretty fucking easy to learn. If you have the patience and creativity to put toward training, you can do what you want. Give it a crack! But always be working on refining it. I'm sure out there right now someone's been inspired enough to do what you're talking about, and after a few years of tears and insane electricity bills, there'll be a viable model.


  • Yeah, get too far in or give it too much to start with, and it can't handle it. You can see this with visual generators. "Where's the lollipop in its hand? Try again… Okay, now you forgot about the top hat."

    Have to treat them like simple interns that will do anything to please rather than admit the task is too complex or they’ve forgotten what they were meant to do.


• saltesc@lemmy.world to Programmer Humor@programming.dev · Efficiency
    3 months ago

    I use Claude for SQL and PowerQuery whenever I brain fart.

    There's more usefulness in reading its explanation than its code, though. It's like bouncing ideas off someone, except you're the one who can actually code them. Never bother copying its code unless it's a really basic request that's quicker to type than to code.

    Bad quality and mass quantity in is obviously much quicker for LLMs, and people who don't understand the tech behind AI don't understand that this is actually what's going on, so it's "magic". A GPT is fundamentally quite simple and produces simple results full of potential issues; combine that with poor training quality and "gross". There are minimal check iterations it can do, and how would it even do them when its knowledge base is more bullshit than it is quality?

    Truth is, it will be years before AI can reliably code. Training for that requires building a large knowledge base of refined, working solutions covering many scenarios, with explanations, to train from. It'd take even longer for AI to self-learn these without significant input from the trainer.

    Right now you can prompt the same thing six times and hope it manages a valid solution in one. Or just code it yourself.