

Linux gamers receive significant attention as Proton is now enabled by default
Significant attention = A 0 became a 1, saving you a few clicks after installing Steam.
I’ve stopped ordering breakfast dishes that say they come with sourdough. Apparently no one knows it should only be very lightly toasted and instead I get petrified bread that cuts up my mouth and is hard to cut with a knife.
Australia pats self on back. Leaves coal dust handprints.
Okay. Well, the good thing about science is that there are lots of correlation figures you can go check out at any time instead.
But there are some simpler ways to see it, like how obviously no one was known to have ADHD before it was identified, and how more people have it as its definition continues evolving to be more detailed and broad. That’s normal behaviour for conditions in medical science. We must know of its existence before anyone can have it; more people tend to have it as our understanding of the thing improves by leaps and bounds.
There’s nothing to indicate that’s true. If anything, the leaps in medical science have increased the likelihood of being born “normal”.
It also means we have identified or created many “issues” and continue to increase how well they’re diagnosed. Everyone’s got multiple things now.
Yep, “classification”.
The proportion will always be the same whether classified or not.
I honestly feel the modern take on ADHD is just a label for people who don’t fit into how society wants them to be.
For tens of thousands of years, we nurtured those who picked up on the butterfly. Those minds will always be critical to human survival. But it’s only recently that incompatibility with the standard, anything that calls for more effort, gets met with “Well, they must be broken”.
Nope. They never were. We just got real bad at bringing out the best in everyone. At the rate of classification we’re going, it’ll be weird to not have ADHD.
We…needed a study for this?
Sometimes the internet makes me feel like the smartest person alive, but then I bump into a random stranger on the street and remember I’m normal.
Edit: And, yes, I apologise to them.
I wish I could steal code. Everything I do is so situation-specific. I save my own snippets “for next time” but they never seem to come up again lol
Nothing. The abortion was self-administered, the family became concerned for her wellbeing and went to the police, the police looked for their numberplate in the nationwide network.
You can read the original article linked in this one. They are vastly different.
50% of the posts here are. Echo chambers rely on most users never reading the article.
I’m constantly in situations where someone needs to know but no one tries, so I take it on and level up.
It’s the “doer” classification. Certainly not the best person for the job, but no one else is doing it, so gotta try. Over time, a new skill is built.
Patience and persistence are beneficial traits to have. I don’t think they’re natural; I think they’re forced, but they become natural-like after repeatedly experiencing their inevitably rewarding outcomes.
Light debugging I actually use an LLM for. Yes, I know, I know. But when you know it’s a syntax issue or something simple, and a quick skim through produces no results, the AI be like, “Used a single quote instead of a double quote on line 154, so it’s indirectly using a string instead of calling a value. Also, there’s a typo in the source name on line 93, because you spelled it like this everywhere else.” (Quick sketch of that quote trap below.)
By design, LLMs do be good for syntax, whether a natural language or a digital one.
Nothing worse than going through line by line, only to catch the obvious mistake on the third “Am I losing my sanity?!” run through.
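For anyone who hasn’t hit the single-vs-double-quote trap, a minimal SQL sketch of what that line-154 mistake looks like; the table and column names here are made up purely for illustration:

```sql
-- Standard SQL: double quotes mark identifiers (columns), single quotes mark string literals.
-- Hypothetical table and column names, for illustration only.

-- Intended: pull each order's status from the column.
SELECT "order_id", "order_status"
FROM orders;

-- The slip: 'order_status' is now a string literal, so every row returns the
-- literal text 'order_status' instead of the column's value, and nothing
-- errors out to warn you.
SELECT "order_id", 'order_status'
FROM orders;
```

Exact quoting rules shift by dialect (MySQL defaults to backticks for identifiers, for instance), which is part of why it’s such an easy one to skim straight past.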
I don’t even know if I pronounce it right. Is it like libré, or is it more like a lobster-tiger hybrid? Or a lemur-cobra hybrid?
Also, if they’re watching, the themes still make the UI all funky unless it’s on the default—at least for Calc, anyway.
I put Spotify on to block out whatever it is that slightly squeaks in the back every time I drive over a bump.
does mental gymnastics
Wait, I think I got it…
OP is saying we can reduce emissions by reducing aircraft traffic by pointing out the bathrooms are not gender specific.
Edit: No, that can’t be it. Because we want people to use trains more over cars, so…
We can, but it’s a lot of effort and time. Good AI requires a lot of patience and specificity.
I’ve sort of accepted that the gimmick of LLMs has caused a bit of a plateau in training. It has always been that we teach AI to learn, but currently the public has been exposed to what they perceive to be magic, and that’s “good enough”. Like, being wrong so often due to bad information, bad interpretation of information, and bias within information is acceptable now, apparently. So teaching to learn isn’t a high mainstream priority compared to throwing in mass information; it’s far less exciting working on infrastructure.
But here’s the cool thing about AI: it’s pretty fucking easy to learn. If you have the patience and creativity to put toward training, you can do what you want. Give it a crack! But always be working on refining it. I’m sure someone out there right now has been inspired enough to do what you’re talking about, and after a few years of tears and insane electricity bills, there’ll be a viable model.
Yeah, get too far in or give it too much to start with and it can’t handle it. You can see this with visual generators. “Where’s the lollipop in its hand? Try again… Okay, now you forgot about the top hat.”
Have to treat them like simple interns that will do anything to please rather than admit the task is too complex or they’ve forgotten what they were meant to do.
I use Claude for SQL and PowerQuery whenever I brain fart.
There’s more usefulness in reading its explanation than its code, though. It’s like bouncing ideas off someone, except you’re the one who can actually code them. Never bother copying its code unless it’s a really basic request that’s quicker to type out as a prompt than to code yourself.
Bad quality and mass quantity in is obviously much quicker for LLMs, and people who don’t understand the tech behind AI don’t realise that’s what’s actually going on, so it’s “magic”. A GPT is fundamentally quite simple and produces simple results full of potential issues; combine that with poor training quality and “gross”. There are minimal check iterations it can do, and how would it even do them when its knowledge base is more bullshit than it is quality?
Truth is, it will be years before AI can reliably code. Training for that requires building a large knowledge base of refined, working solutions covering many scenarios, with explanations, to train off. It’d take even longer for AI to self-learn these without significant input from the trainer.
Right now you can prompt the same thing six times and hope it manages a valid solution in one. Or just code it yourself.
Damn. So it turns out I can feel empathy but it takes something as tragic as this. Poor Balrog 😔