

they’re just a radical left communist
God I wish that was remotely true
made you look
JXL is effectively two separate image formats stuck together: an improved version of JPEG that can also losslessly and reversibly recode most existing JPEG files at a smaller size, and a PNG-like format (evolved from FLIF/FUIF) that can do lossless or lossy encoding.
“VarDCT” (the improved JPEG) turns out to be good enough that “Modular” mode (the FLIF/FUIF-like one) isn’t needed much outside of lossless encoding. One neat feature of Modular mode, though, is that it progressively encodes the image at different sizes: if you decode the stream as the bytes come in, you start with a small version of the image and get progressively larger output until you end up with the original.
Why is that useful? Well, you can encode a single high-DPI image (e.g. 2x scale), and clients on 1x displays can just stop decoding at a certain point and get a half-sized image out of it. You don’t need separate per-DPI variants.
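Purely as an illustration of that client-side loop: the `ProgressiveDecoder` object and its methods below are hypothetical stand-ins, not a real libjxl binding; the point is just the shape of feed-bytes-then-check-size.

```python
# Hypothetical decoder object: stands in for whatever progressive-decode API
# your JXL library exposes. None of these names are real.
def decode_until_big_enough(chunks, target_width):
    dec = ProgressiveDecoder()              # hypothetical
    for chunk in chunks:                    # e.g. chunks of an HTTP response
        dec.feed(chunk)                     # hypothetical: hand over more bytes
        img = dec.best_image_so_far()       # hypothetical: current (small) pass
        if img is not None and img.width >= target_width:
            return img                      # big enough for this display, stop reading
    return dec.best_image_so_far()          # read everything: the full-size original
```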
IIRC the main reason for QOI was to have a simple format, on the theory that “complexity is slow”: by stripping out everything the author didn’t consider important, the resulting image format was supposed to be quicker and smaller than something like PNG or WebP.
Not sure how well that held up in practice. A lot of that complexity is actually necessary for a lot of use cases (e.g. you need colour profiles unless you’re only ever dealing with sRGB), and I remember a bunch of low-hanging-fruit optimisations for PNG encoders at the time that improved encoding speed by quite a bit.
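To be fair, the format really is tiny. Here’s a rough decoder sketch written from my reading of the QOI spec, just to show how little machinery is involved compared to PNG (illustrative, not a reference implementation; it skips validation of the end marker):

```python
import struct

def qoi_decode(data: bytes):
    # 14-byte header: magic, width, height, channels, colourspace.
    magic, width, height, channels, colorspace = struct.unpack(">4sIIBB", data[:14])
    assert magic == b"qoif"
    pixels = []
    index = [(0, 0, 0, 0)] * 64            # running 64-entry colour table
    r, g, b, a = 0, 0, 0, 255              # previous pixel starts as opaque black
    pos = 14
    while len(pixels) < width * height:
        byte = data[pos]; pos += 1
        if byte == 0xFE:                   # QOI_OP_RGB: literal RGB
            r, g, b = data[pos], data[pos + 1], data[pos + 2]; pos += 3
        elif byte == 0xFF:                 # QOI_OP_RGBA: literal RGBA
            r, g, b, a = data[pos], data[pos + 1], data[pos + 2], data[pos + 3]; pos += 4
        elif byte >> 6 == 0b00:            # QOI_OP_INDEX: reuse a recently seen colour
            r, g, b, a = index[byte & 0x3F]
        elif byte >> 6 == 0b01:            # QOI_OP_DIFF: small per-channel delta (bias 2)
            r = (r + ((byte >> 4) & 0x03) - 2) & 0xFF
            g = (g + ((byte >> 2) & 0x03) - 2) & 0xFF
            b = (b + (byte & 0x03) - 2) & 0xFF
        elif byte >> 6 == 0b10:            # QOI_OP_LUMA: green delta plus red/blue offsets
            dg = (byte & 0x3F) - 32
            nxt = data[pos]; pos += 1
            r = (r + dg + ((nxt >> 4) & 0x0F) - 8) & 0xFF
            g = (g + dg) & 0xFF
            b = (b + dg + (nxt & 0x0F) - 8) & 0xFF
        else:                              # QOI_OP_RUN: repeat the previous pixel
            run = (byte & 0x3F) + 1
            pixels.extend([(r, g, b, a)] * (run - 1))
        index[(r * 3 + g * 5 + b * 7 + a * 11) % 64] = (r, g, b, a)
        pixels.append((r, g, b, a))
    # channels/colorspace are informational; pixels are tracked as RGBA regardless.
    return width, height, pixels
```

That’s the whole decode path: no DEFLATE, no filters, no chunk parsing. Which is also exactly where the missing colour-profile and metadata support comes from.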
AVIF is funny because they kept the worst aspect of WebP (lossy, video-derived encoding) while removing the best (the lossless mode). There was an attempt at WebP2, using AV1 and a proper lossless mode, but Google killed that off as well.
But hey, now that they’re releasing AV2 soon, we’ll eventually have an incompatible AVIF2 to deal with. Good thing they didn’t support JPEG-XL; it’d just be too confusing to have to deal with multiple formats.
Lossless is fine; lossy is worse than JPEG.
That’d just be worse overall: it’d never be smaller than a comparable JPEG image, and it wouldn’t allow for any compression/quality benefits.
Businesses require VPNs to function. Banning them would decimate Michigan’s economy. The only thing these people truly value is money.
I mean, it’s not hard to see them carving out an exception for business use and allowing VPNs only on business-grade ISP plans. Tech won’t stump these people, because they don’t care about it when they can just force people to play along.
They can just make it a legal requirement to allow MITM via a government root certificate, like Kazakhstan tried back in 2015. If every ISP requires you to have that cert installed before you can get online, you don’t have many options.
Yep, their frontend used a shared caller that would return the parsed JSON response if the request was successful and error out otherwise, and the code that called it would then use the returned object directly.
So I assume most of the backend did actually surface error codes via the HTTP layer and it was just this one endpoint that didn’t (which then broke the client-side code when it tried to access non-existent properties of the response object); otherwise basic testing would have caught it.
That’s also another reason to use the HTTP status codes: by storing the error in the response body, you now need extra code between the function doing the API call and the function handling a successful result, just to examine the body and see whether there was actually an error, all based on an ad-hoc per-endpoint format.
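A quick sketch of the difference in Python with `requests` (the `status`/`message` envelope in the second helper is made up, which is kind of the point):

```python
import requests

# With real HTTP status codes, one shared helper covers every endpoint:
def call_api(url, **kwargs):
    resp = requests.post(url, **kwargs)
    resp.raise_for_status()        # 4xx/5xx turn into exceptions here, once
    return resp.json()             # callers only ever see successful payloads

# With "200 OK but the body says error", every endpoint needs its own check,
# in whatever ad-hoc envelope that endpoint happens to use:
def call_api_body_errors(url, **kwargs):
    body = requests.post(url, **kwargs).json()
    if body.get("status") == "error":          # hypothetical envelope format
        raise RuntimeError(body.get("message"))
    return body
```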
Ehh, that really feels like “but other people do it wrong too” to me. Half the 4xx codes are application-layer errors, for example (404 ain’t a transport-layer error, and neither is 403, 415, 422 or 451).
It also complicates actually processing the response, as you’ve got to duplicate error handling between “request failed” and “request succeeded but actually failed”. My local cinema’s site actually hits that: the web frontend expects the backend to return errors, but the backend lies and says everything was successful, and then certain things break in the UI.
Well no, the HTTP error codes are about the entire request, not just whether or not the actual header part was received and processed right.
Take HTTP 403: HTTP only has a basic form of authentication built in, so anything else needs the server to handle it itself (e.g. via session cookies). It wouldn’t make sense to send “HTTP 200” in response to an attempt to access a resource without being logged in just because the request was well formed.
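A minimal sketch of that in Flask, assuming a session-cookie login (the route and field names are made up):

```python
from flask import Flask, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"        # needed for session cookies

@app.get("/account")
def account():
    if "user" not in session:      # auth handled by the application, not by HTTP itself
        # ...but the outcome still belongs in the status code:
        return jsonify(error="not logged in"), 403
    return jsonify(user=session["user"]), 200
```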
Sudo is worth redoing regardless of language.
Or move away from it entirely, e.g. to something like doas, which OpenBSD migrated to a decade ago.
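The whole configuration fits in a couple of lines, something roughly like this (from memory, so check doas.conf(5) before copying):

```
permit persist :wheel as root      # wheel group may run anything as root;
                                   # "persist" caches the password briefly
permit nopass :wheel cmd reboot    # no password needed for this one command
```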
And unsurprisingly, a majority of the comments on that post are complaining about systemd.
I want my NKRO.
Which can be done over USB; cheap keyboards just aren’t wired for it.
Well that’s disappointing.
The funny thing is that for the longest time Intel actually had the majority share of GPUs, just by counting the integrated ones in laptop motherboards and the like. No idea if that’s still the case, or if Nvidia or AMD have been eating into it with their newer models (e.g. what powers the Steam Deck).
They’ve tried to break into the discrete market a few times, most recently with their Arc cards, but the way they approach things is just so odd. It’s like they assume the first attempt will be a smash hit and dominate, and when it doesn’t they just flounder? The Arc cards launched to a lot of fanfare and then there was just silence and delays from Intel.
Bad management, bad luck, and the usual market stuff. They’re going to do anything they can to cut costs.
Their R&D for new fab work is falling behind competitors (technically better doesn’t matter if nobody is buying it), they’ve had a bunch of bad CPU releases with hardware failures, and they’ve got next to no market presence in GPUs, which are currently making money hand over fist (mostly for dumb AI reasons, which is going to bite Nvidia hard when the bubble pops, because their new datacenter hardware is hyper-tuned for LLMs at the expense of general compute, unlike AMD’s).
I mean yeah, there’s extra stuff layered on top of the underlying protocols that is badly designed. Docker was built with a hard dependency on IPv4, and so was the Dat protocol. If these things had been designed properly from the start, we wouldn’t be having these issues.
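For application code the fix has been boring for ages: resolve with getaddrinfo and use whatever address family comes back, instead of hardcoding IPv4. A small Python sketch (host and port are placeholders):

```python
import socket

def connect(host: str, port: int) -> socket.socket:
    last_err = None
    # getaddrinfo returns AAAA and/or A results; try them in order instead of
    # assuming AF_INET the way a lot of older software does.
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)          # same call for v6 and v4 addresses
            return sock
        except OSError as err:
            last_err = err
    raise last_err or OSError("no usable address")

# The stdlib already wraps this loop for you: socket.create_connection((host, port))
```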
Apple was smart here: they mandate that iOS apps work on single-stack, IPv6-only networks and do functional testing of that as part of the App Store process. Devs can’t get away with pretending it’s not necessary and never wiring up support for it.
IPv6 is too complex, error-prone and unsupported to deploy without shooting yourself in the foot, even now, a few decades after its introduction.
Which is purely down to people not testing things before releasing them, because the support is there but there are layers of unnecessary stuff put in the way. Like, I had an old ISP-provided router that ran Linux, but the management UI was only ever tested against v4 networks, so none of the v6 stuff was actually hooked up correctly.
Support in desktops and mobile devices is effectively 100%, and even embedded hardware often has full support; it’s just not enabled correctly or tested.
The headline makes this sound a lot worse than the article does.
From the article, the law basically has a list of exemptions describing who doesn’t need to follow it (e.g. an online booking site for doctor’s visits); everybody else needs to check the rules to see whether they do. And if they do, they then need to follow extra child-safety rules (e.g. Roblox is opting under-16s out of open DMs by default).
GitHub can quite rightly say they don’t fall under the restrictions of the law, and that could be the end of it. The simple fact that it doesn’t have any form of private messaging feature is probably enough.