• 0 Posts
  • 36 Comments
Joined 2 years ago
Cake day: July 5th, 2023

  • Still a pretty limited palette, with everyone wearing the same color shirts.

    PNG tends to fail hard with textures. For example, my preferred theme in my chess app, which has some wood-grain textures, generates huge screenshots (around 2 MB), whereas the default theme’s screenshots might be less than 10% as large. Similarly, when I screenshot this image, the file size jumps to 2 MB for a 0.8-megapixel image.

    Rendered textured scenes can easily overwhelm PNG compression to the point where the files are huge, and since Discord is historically associated with gaming, one can imagine certain video game screenshots blasting past that 40 MB limit.
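
    For a concrete demonstration, here’s a minimal sketch with Pillow and NumPy; the 1000×800 size is arbitrary, picked to match a 0.8-megapixel screenshot:

    ```python
    # Compare PNG output sizes for a flat UI-like image versus random
    # noise (a stand-in for wood grain or rendered game textures).
    import io

    import numpy as np
    from PIL import Image

    def png_size(arr):
        buf = io.BytesIO()
        Image.fromarray(arr, "RGB").save(buf, format="PNG")
        return buf.getbuffer().nbytes

    flat = np.full((1000, 800, 3), 200, dtype=np.uint8)
    noise = np.random.randint(0, 256, (1000, 800, 3), dtype=np.uint8)

    print(f"flat:  {png_size(flat):,} bytes")   # a few KB: PNG filters + DEFLATE love runs
    print(f"noise: {png_size(noise):,} bytes")  # ~2.4 MB: nothing left to compress
    ```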


  • I think HEIC plays nicely with how they store Live Photos: a container holding both a still image and a video of the surrounding moment. Using HEIC for the still and HEVC for the video probably makes the best use of hardware acceleration for fast, low-power processing of both parts of the data, and it allows a higher-quality extraction of an alternative still from a different part of the video.

    And maybe they want more third-party support in place before they set JXL as a default. All the power and space savings in the world at capture time might not mean much if the phone has to do the work of exporting a JPEG or HEIC every time that file interfaces with an app or the browser or whatever.
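
    As a rough illustration of that per-file export tax, a minimal sketch assuming the third-party pillow-heif package, with photo.heic as a placeholder name:

    ```python
    # Sketch of the HEIC -> JPEG export step a device has to run whenever
    # a consumer of the file doesn't understand HEIC. Assumes the
    # third-party pillow-heif package; photo.heic is a placeholder.
    from PIL import Image
    from pillow_heif import register_heif_opener

    register_heif_opener()  # teaches Pillow to open .heic files

    with Image.open("photo.heic") as im:
        # Re-encoding burns CPU time (and battery) on every export:
        # the compatibility tax described above.
        im.convert("RGB").save("photo.jpg", quality=90)
    ```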



  • Google didn’t kill JPEG XL. It might have set browser support back some, but there’s still a place for JPEG XL to take over.

    All the modern video-derived formats (WebP, HEIF/HEIC, AVIF) tend to be optimized for screen resolutions. But for print photography (including plain old regular photography that wants to keep open the option of eventually printing some of the images), the higher resolutions and higher quality stretch the limits of where those codecs actually perform well (in terms of file size, perceived quality, and the computational cost of encoding and decoding).

    JPEG XL knocks the other modern formats out of the water at those print resolutions, color spaces, and quality levels. It’s not just for photography, either: medical imaging, archiving, printing, etc. all use much higher resolutions than what is supported on any screen.
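
    To put rough numbers on “print resolution,” a quick back-of-the-envelope sketch (300 DPI is a common print target; the print sizes are arbitrary examples):

    ```python
    # Pixel counts needed for common print sizes at 300 DPI, versus a
    # 4K screen. The print dimensions (inches) are illustrative examples.
    DPI = 300
    prints = {"8x10": (8, 10), "13x19": (13, 19), "24x36 poster": (24, 36)}

    for name, (w, h) in prints.items():
        print(f"{name}: {w * DPI}x{h * DPI} = {w * DPI * h * DPI / 1e6:.1f} MP")

    print(f"4K screen: 3840x2160 = {3840 * 2160 / 1e6:.1f} MP")
    # An 8x10 alone needs ~7.2 MP; a 24x36 poster needs ~77.8 MP,
    # nearly 10x what a 4K screen can even display.
    ```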

    And perhaps most importantly for future support, the iPhone now supports taking images in JPEG XL. If that becomes a dominant format for photographic workflows, to replace stuff like DNG and other raw formats, browser support won’t hold back the format’s adoption.


  • “And if you already have compression artifacts, what use is lossless?”

    To further reduce file size without further reducing quality.

    There are probably billions of images out there already encoded as lossy JPEG, with no corresponding higher-quality version available (e.g., a camera that captures an image and immediately saves it as JPEG). We shouldn’t simply accept that those file sizes are stuck that way forever; we can design codecs that further compress those files losslessly from there.
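
    This is exactly what JPEG XL’s lossless JPEG recompression path does. A minimal sketch, assuming the cjxl tool from libjxl is installed and on PATH, with photo.jpg as a placeholder:

    ```python
    # Losslessly repack an existing JPEG as JPEG XL by calling the cjxl
    # tool from libjxl (assumed installed). --lossless_jpeg=1 keeps the
    # original DCT coefficients, so the exact original JPEG file can be
    # reconstructed later with djxl.
    import os
    import subprocess

    src, dst = "photo.jpg", "photo.jxl"
    subprocess.run(["cjxl", "--lossless_jpeg=1", src, dst], check=True)

    print(f"{src}: {os.path.getsize(src):,} bytes")
    print(f"{dst}: {os.path.getsize(dst):,} bytes")  # typically ~20% smaller
    ```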


  • “It was the Joint Photographic Experts Group that invented it, so Google had no ownership over it, unlike WebP.”

    No, the JPEG committee called for proposals to define the new standard, and Google submitted its own PIK format, which provided much of the basis for what would become the JPEG XL standard (the other primary contribution being Cloudinary’s FUIF).

    Ultimately, I think most of the discussion around browser support thinks too small. Image formats are used for web display, sure, but they’re also used for so many other things. Digital imaging is used in medicine (where TIFF dominates), print, photography, video, etc.

    I’m excited about JPEG XL as a replacement for TIFF and raw photography sensor data, including for printing and medical imaging. WebP, AVIF, HEIF, etc. really aim only at replacing web-distributed images on a screen.


  • If you screenshot computer/phone interfaces (text, buttons, lots of flat regions where adjacent pixels are exactly the same color), the default PNG algorithm does a great job of keeping the file size small. If you screenshot a photograph, though, PNG makes the file size huge, because its filters are poorly suited to photographic content, including images that were originally JPEGs.
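
    An easy way to see it yourself, as a minimal sketch with Pillow (photo.jpg stands in for any photo you have on hand):

    ```python
    # Re-save a JPEG's pixels losslessly as PNG and compare file sizes.
    import io
    import os

    from PIL import Image

    with Image.open("photo.jpg") as im:
        buf = io.BytesIO()
        im.save(buf, format="PNG")

    print(f"JPEG: {os.path.getsize('photo.jpg'):,} bytes")
    print(f"PNG:  {buf.getbuffer().nbytes:,} bytes")  # often 5-10x larger
    ```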


  • That’s kinda always been how technology changes jobs, though: by slowly making the job one of supervising the technology. I’m no longer carving a piece of wood myself, but I’m running the CNC machine, making sure it’s doing things properly and has everything it needs to work properly. I’m not physically stabbing the needle through the fabric myself each time, but I am guiding the sewing machine’s path on that fabric. I’m not feeding fuel into the oven to maintain a particular temperature, but I am relying on the thermocouple to turn the heating element on and off to hold the assigned equilibrium that I’ll use to bake food.

    Many jobs are best done as a team effort between human and machine. Offloading the tedious tasks to the machine so that you can focus on the bigger picture is basically what technology is for. And as technology changes, we need to always be able to recalibrate which tasks are the tedious ones that machines do better, and which are the higher level decisions best left to humans.





  • JavaScript seems like the wrong tool for this. The HTTP server itself can usually be configured to serve alternative versions of an image (including different formats) to supporting browsers: serve JXL if supported, fall back to WebP if not, and fall back to JPEG if WebP isn’t supported either.
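
    A minimal sketch of that negotiation, using Flask for illustration (real deployments usually configure this in nginx or Apache instead; the photo.* filenames are placeholders):

    ```python
    # Serve the best image format the browser advertises in its Accept
    # header: JXL, then WebP, then JPEG. Assumes photo.jxl / photo.webp /
    # photo.jpg sit next to this script.
    from flask import Flask, request, send_file

    app = Flask(__name__)

    @app.route("/photo")
    def photo():
        accept = request.headers.get("Accept", "")
        if "image/jxl" in accept:
            resp = send_file("photo.jxl", mimetype="image/jxl")
        elif "image/webp" in accept:
            resp = send_file("photo.webp", mimetype="image/webp")
        else:
            resp = send_file("photo.jpg", mimetype="image/jpeg")
        resp.headers["Vary"] = "Accept"  # caches must key on Accept
        return resp
    ```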

    And increased server-side adoption of JXL can run up the stats that might encourage the Chromium team to resume support for JXL, and encourage the Firefox team to move support out from behind a flag in nightly builds, especially because one of the most popular competing browsers (Safari on Apple devices) already supports JXL.


  • It’s not too late.

    The current standard on the web is JPEG for photographic images. Everyone agrees that it’s an inefficient standard in terms of quality per file size, that its 8-bit RGB limit isn’t enough for higher dynamic range, and that it has no transparency support. So the different stakeholders have been exploring new modern formats for different things:

    WebP is open source and royalty free, has wide support, especially from Google (which controls a major image search engine and the dominant web browser), and is more efficient than JPEG and PNG in both lossy and lossless compression. But it’s 15 years old and showing its age as cameras capture better dynamic range than its 8-bit limit (or JPEG’s, for that matter) can represent. It’s still being updated, so things like transparency have been added (but aren’t supported by all WebP software).

    AVIF supports HDR and has even better file-size efficiency than WebP. It’s also open source and royalty free, and is maintained by the Linux Foundation (for those who prefer a format controlled by a nonprofit). It supports transparency and animation out of the box, so it doesn’t run into the same partial-support issues as WebP. One drawback is that AVIF requires a bit more computational power to encode and decode.

    HEIC is more efficient than JPEG and supports high bit depths and transparency, but it’s encumbered by patents, so support requires royalty payments. The only reason it’s in the conversation is that it has extensive hardware-acceleration support by virtue of its reliance on the HEVC/H.265 codec, and that it’s Apple’s default format for new pictures taken with iPhone/iPad cameras.

    JPEG XL has the best of all possible worlds. It supports higher bit depths, transparency, animation, and lossless compression. It’s open source and royalty free. And most importantly, it has a dedicated compression path for taking existing JPEG images and losslessly shrinking their file size. That really matters for the vast majority of digitally stored images, because people tend to have only the compressed JPEG version. Encoding and decoding are less computationally intensive than WebP or AVIF. And it’s a robust enough standard not just for web images, but for raw camera captures (potentially replacing DNG and similar formats), raw document scans and other captured imagery (replacing TIFF), and large-scale printing (where TIFF is still often in the workflow).

    So even as WebP and AVIF and HEIC show up in more and more places, the constant push forward still allows JXL to compete on its own merits. If nothing else, JXL is the only drop-in replacement where web servers can silently serve the JXL version of a file when supported, even if the “original” image uploaded to the site was a JPEG, with basically zero drawbacks. But beyond the web, the technical advantages might support JXL workflows end to end, from capture to processing to printing.



  • “From a business perspective it makes sense to throw all the rendering to the devices to save cost.”

    Not just to save cost. It’s basically OS-agnostic from the user’s point of view. The web app works fine on desktop Linux, macOS, or Windows. In other words, when I’m on Linux I can have a solid user experience with apps designed by people who have never thought about Linux in their lives.

    Meanwhile, porting native programs between OSes often means someone’s gotta maintain the libraries that call the right desktop/windowing APIs and behaviors across each version of Windows, macOS, and the various Linux windowing systems, not all of which work in expected or consistent ways.



  • Yeah, from what I remember, Web 2.0 meant services that could be interactive in the browser window, without loading a whole new page each time the user submitted information through an HTTP POST. “Ajax” was a hot buzzword among web/tech companies.

    Flickr was mind-blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google Maps was impressive in that you could drag the map around and zoom within the window while it fetched the necessary graphical elements on demand.

    Or maybe Web 2.0 included the ability to layer state onto the stateless HTTP protocol. You could log into a page and it would show you just the new/unread items for you personally, rather than showing literally every visitor the exact same thing for the exact same URL.
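
    A minimal sketch of that state-over-stateless trick, using Flask’s signed session cookie; the user store and unread data are made up for illustration:

    ```python
    # Same URL, different response per user: the session cookie carries
    # the state that HTTP itself doesn't have. Data store is hypothetical.
    from flask import Flask, request, session

    app = Flask(__name__)
    app.secret_key = "change-me"  # signs the session cookie

    UNREAD = {"alice": ["post 42", "post 43"]}  # made-up per-user data

    @app.route("/login", methods=["POST"])
    def login():
        session["user"] = request.form["user"]  # real apps verify a password
        return "logged in"

    @app.route("/inbox")
    def inbox():
        user = session.get("user")
        if user is None:
            return "same page for every anonymous visitor"
        return "\n".join(UNREAD.get(user, []))  # personalized view
    ```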

    Social networking became possible with Web 2.0 technologies, but I wouldn’t define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected user to user through its design was kinda beside the point.


  • Yeah, you’re describing an algorithm that incorporates data about the user’s previous likes. I’m saying that any decent user experience will include prioritization and weighting of different posts on a user-by-user basis, so the provider has no choice but to put together a ranking/recommendation algorithm that does more than simply sort all available items in chronological order.
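
    A minimal sketch of what such a per-user scoring function might look like; the weights and decay constant are arbitrary illustrative choices:

    ```python
    # Rank posts by blending recency with an affinity weight derived from
    # the user's previous likes, instead of sorting by date alone.
    import math

    def score(post_age_seconds: float, author_affinity: float) -> float:
        """Higher is better. Affinity is e.g. the fraction of this
        author's posts the user has previously liked (0.0 to 1.0)."""
        recency = math.exp(-post_age_seconds / 36_000)  # 10-hour decay constant
        return 0.6 * recency + 0.4 * author_affinity

    posts = [
        {"id": 1, "age": 600, "affinity": 0.1},     # fresh, unfamiliar author
        {"id": 2, "age": 40_000, "affinity": 0.9},  # older, well-liked author
    ]
    ranked = sorted(posts, key=lambda p: score(p["age"], p["affinity"]), reverse=True)
    print([p["id"] for p in ranked])
    ```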