Ruaidhrigh

I AM THE LAW.
Ruaidhrigh featherstonehaugh

Furthermore, Lemmy needs reactions

  • 4 Posts
  • 303 Comments
Joined 3 years ago
Cake day: August 26th, 2022

  • Wait… vfat supports Unicode? The filesystem that craps out if the file path length is longer than a couple hundred characters; that is an extension of a filesystem that couldn’t handle file names longer than 8.3 characters; that doesn’t have any concept of file permissions, much less ACLs; the one that partitioned filenames in 13 character hunks in directories to support filenames longer than 12 characters… that isn’t case sensitive, except in all the wrong ways - this filesystem can handle Unicode?

    I greatly doubt that. FAT doesn’t even support 8-bit ASCII, does it? 7-bit only. Unless you mean FAT32, which can optionally have UTF-16 support enabled. And it’s far easier to manage case changes in UTF-16 than in UTF-8, using case mapping as MS does (roughly the case-folding comparison sketched at the end of this comment). The API handles all of this for you; it keeps track of what the user calls the files, but uses its own internal name for each. And ne’er the two shall meet, lest there be trouble.

    I do think it’s sloppy and lazy; it’s very easy to avoid the actual work of thinking about the problem and just bang out some hack solution. In the end, far more work gets done, but for the wrong reasons.

    I don’t know what Apple’s excuse is, except maybe DNA. The Apple ][ wasn’t just case insensitive; it didn’t have lowercase characters at all. There was only one case, and maybe those engineers carried that mindset forward to the Lisa, and then the Mac. How it got into Darwin… is Darwin really case insensitive? I’m pretty sure that, at least at the filesystem level, it is.
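
    A minimal sketch of the case-mapping idea above, in Python (my language choice, not anything FAT or NTFS actually ships): keep the name the user gave, but compare case-folded forms. The remember/lookup helpers and the sample names are hypothetical, and real filesystems use their own case tables, so edge cases like ‘ß’ can behave differently there.

      # Case-insensitive, case-preserving lookup sketch: store the name the
      # user gave, but compare with Unicode case folding (str.casefold()).
      stored = {}  # case-folded name -> name exactly as the user wrote it

      def remember(name: str) -> None:
          stored[name.casefold()] = name

      def lookup(name: str):
          # Any case variant of the name finds the same entry.
          return stored.get(name.casefold())

      remember("Straße.txt")
      print(lookup("STRASSE.TXT"))   # Straße.txt ('ß' case-folds to 'ss')
      print(lookup("straße.TXT"))    # Straße.txt
      print(lookup("missing.txt"))   # None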


  • And most people don’t complain about computers being slow anymore. When they do, it’s usually because of memory, disk, or network speeds; it’s almost never because of CPU cycles. The people complaining about cycle-bound performance are usually complaining about GPU processing.

    It’s almost never a CPU power issue anymore, unless you’re a developer or scientist and you’re actually trying to compute something. I have two beefy computers in my house: my desktop, for coding, and my media server, because Jellyfin insists on transcoding everything. The rest are all ARM, mostly old ARM, and they’re all perfectly capable of doing their jobs. RISC-V would be, too.


  • It’d be more space efficient to store a QCOW2 image of Linux with a minimal desktop and basically only DarkTable on it. The VM image format hasn’t changed in decades.

    Shoot. A bootable disc containing Linux and the software you need to access the images; on a separate track, a QCOW2 image of the same; and on a third, just DarkTable. Best case, you pop the disc in a drive and run DarkTable. Or you fire up a VM with the images. Worst case, you boot into Linux. This may be the way I go, although - again - the source images are the important part.

    > I’d be careful with using SSDs for long-term, offline storage.

    What I meant was: keep the master sidecar on SSD for regular use, and back it up occasionally to a RW disc, probably with a simple cp -r to a dated directory. This works for me because my sources don’t change, except to add data, which is usually stored in date directories anyway.

    You’re also wanting to archive the exported files, and sometimes those change? Surely that’s much less data? If you’re like me, I’ll shoot 128xB and end up using a tiny fraction of the shots. I’m not sure what I’d do for that - probably BD-RW. The longevity isn’t great, but it’s by definition mutable data, and in any case the most recent version can easily enough be regenerated as long as I have the sidecar and source image secured.

    Burning the sidecar to disc is less about storage and more about backup, because that data is mutable. I suppose appending a backup snapshot to an M-Disc periodically would be belt and suspenders, and frankly the sidecar data is so tiny I could probably append such snapshots to a single disc for years before it fills up. Although… sidecar data would compress well. Probably simply tgz, then, since tar and gzip have always existed and always will, even if gzip has been superseded by better algorithms.

    BTW, I just learned about the b3 (BLAKE3) hashing algorithm (about which I’m chagrined, because I thought I kept an eye on the topic of compression and hashing). It’s astonishingly fast; it’s the verification part I’m suggesting it for (a sketch of that snapshot-and-verify step follows below).
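
    A minimal sketch of the dated snapshot plus checksum step, assuming Python and the third-party blake3 package (with a stdlib BLAKE2 fallback if it isn’t installed); the sidecar and snapshot paths are placeholders, not anything DarkTable defines.

      # Dated, compressed snapshot of a sidecar directory plus a checksum
      # manifest, so the copy can be verified after it's burned to disc.
      import datetime
      import hashlib
      import tarfile
      from pathlib import Path

      try:
          from blake3 import blake3 as hasher   # third-party: pip install blake3
      except ImportError:
          hasher = hashlib.blake2b               # stdlib fallback

      SIDECAR_DIR = Path("~/photos/sidecars").expanduser()             # placeholder
      SNAPSHOT_DIR = Path("~/archive/sidecar-snapshots").expanduser()  # placeholder

      def snapshot() -> Path:
          SNAPSHOT_DIR.mkdir(parents=True, exist_ok=True)
          stamp = datetime.date.today().isoformat()
          archive = SNAPSHOT_DIR / f"sidecars-{stamp}.tar.gz"
          with tarfile.open(archive, "w:gz") as tar:
              tar.add(SIDECAR_DIR, arcname=f"sidecars-{stamp}")
          # Sidecar data is tiny, so hashing the whole file in memory is fine.
          digest = hasher(archive.read_bytes()).hexdigest()
          manifest = archive.with_name(archive.name + ".b3")
          manifest.write_text(f"{digest}  {archive.name}\n")
          return archive

      if __name__ == "__main__":
          print(snapshot())

    If the manifest was written with BLAKE3 (not the fallback), it should also be checkable later with the b3sum command-line tool’s --check mode, so verification doesn’t depend on this script surviving.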


  • The densities I’m seeing on M-Discs - 100GB, $5 per, a couple years ago - seemed acceptable to me. $50 for a TB? How big is your archive? Mine still fits on a 2TB disk.

    > Copying files directly would work, but my library is real big and that sounds tedious.

    I mean, putting it in an archive isn’t going to make it any smaller. Compression doesn’t often help, even on images that are already losslessly compressed.

    And we’re talking about 100GB discs. Is squeezing that last 10MB out of a disc by splitting an image across two discs worth it? (The packing sketch at the end of this comment shows the simpler alternative: accept a little slack and never split a file.)

    The metadata is a different matter. I’d have to think about how to handle the sidecar data… but you could almost keep that on a DVD-RW, because there’s no way it’s going to be anywhere near as large as the photos themselves. Is your photo editor’s DB bigger than 4GB?

    I never change the originals. When I tag and edit, that information is kept separate from the source images, so I never have multiple versions of pictures unless I export them for printing or something, and those are ephemeral and can be re-exported by the editor from the original and the sidecar. For both music and photos, I always keep the originals isolated from the application.

    This is good, though; it’s helping me clarify how I want to archive this stuff. Right now mine is just backed up on multiple disks and once in B2, but I’ve been thinking about how to archive for long-term storage.

    I think I’m going to go the M-Disc route, with sidecar data on SSD and backed up to Blu-ray RW. The trick will be letting DarkTable know that the source images are on different media, but I’m pretty sure I saw an option for that. For sure, we’re not the first people to approach this problem.

    The whole static binary thing - I’m going that route with an encrypted share for financial and account info, in case I die, but that’s another topic.
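
    A minimal sketch of the “don’t split an image across discs” point above, in Python: greedy first-fit-decreasing packing of files into 100GB disc groups, accepting a little slack per disc instead of ever splitting a file. The photo directory and disc capacity are placeholders.

      # Greedy first-fit-decreasing packing of files into disc-sized groups,
      # never splitting a file across discs. Path and capacity are placeholders.
      from pathlib import Path

      DISC_CAPACITY = 100 * 10**9                        # ~100GB M-Disc, in bytes
      PHOTO_DIR = Path("~/photos/masters").expanduser()  # placeholder

      def pack(files, capacity=DISC_CAPACITY):
          discs = []   # list of file lists, one per planned disc
          free = []    # remaining bytes on each planned disc
          # Largest files first, so they claim fresh discs early.
          for f in sorted(files, key=lambda p: p.stat().st_size, reverse=True):
              size = f.stat().st_size
              for i, room in enumerate(free):
                  if size <= room:
                      discs[i].append(f)
                      free[i] -= size
                      break
              else:
                  discs.append([f])
                  free.append(capacity - size)
          return discs

      if __name__ == "__main__":
          groups = pack([p for p in PHOTO_DIR.rglob("*") if p.is_file()])
          for n, group in enumerate(groups, 1):
              used = sum(p.stat().st_size for p in group)
              print(f"disc {n}: {len(group)} files, {used / 1e9:.1f} GB")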


  • This is an interesting problem for the same use case, which I’ve been thinking about lately.

    Are you using standard Blu-ray, or M-Discs?

    My plan was to simply copy files. These are photos, and IME they don’t benefit from compression (I stopped shooting raw when I switched to Fujifilm, because the JPGs coming out of the camera were better than anything I could produce from raw in Darktable). Without compression, putting them in tarballs only adds another level of indirection; I can just checksum images directly after writing them, and access them directly when I need to. I was going to use the smallest M-Disc for an index, copy and modify it when it changed, and version that (a sketch of such an index follows this comment).

    I tend not to change photos after they’ve been through my workflow, so in my case I’m not as concerned with the “most recent version” of an image. In any case, the index would record which disc the latest version of an image lives on, if something did change.

    For the years I did shoot raw, I’m archiving those as DNG.

    For the sensitive photos, I have a Rube Goldberg plan that will hopefully result in anyone with the passkey being able to mount that image. There aren’t many of those, and that set hasn’t been added to in years, so it’ll go on one disc with the software necessary to mount it.

    My main objective is accessibility after I’m gone, so having as few tools in the way as possible trumps other concerns. I see no value in creating tarballs - attach the device, pop in the index (if necessary), find the disc with the file, pop that in, and view the image.

    Key to this is:

    • the data doesn’t change over time
    • the data is already compressed in the file format, and does not benefit from extra compression
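
    A minimal sketch of that index, in Python: one CSV row per archived file, recording which disc it lives on and a checksum taken right after the burn. The paths, the disc label, and the use of stdlib SHA-256 are my placeholders, not anything from the comment above.

      # Build/extend a disc index: file path -> disc label + checksum.
      import csv
      import hashlib
      from pathlib import Path

      INDEX = Path("photo-index.csv")   # lives on the small "index" disc

      def checksum(path: Path) -> str:
          h = hashlib.sha256()
          with path.open("rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def add_disc(disc_label: str, mount_point: Path) -> None:
          """Append every file on a freshly burned disc to the index."""
          rows = [
              (p.relative_to(mount_point).as_posix(), disc_label, checksum(p))
              for p in sorted(mount_point.rglob("*")) if p.is_file()
          ]
          new_file = not INDEX.exists()
          with INDEX.open("a", newline="") as f:
              writer = csv.writer(f)
              if new_file:
                  writer.writerow(["file", "disc", "sha256"])
              writer.writerows(rows)

      # Usage, after burning and re-mounting a disc (paths are hypothetical):
      # add_disc("photos-2023-disc-01", Path("/media/bluray"))

    Versioning the index is then just burning the newest photo-index.csv to the small index disc each time it changes.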