That’s actually doable. Thanks for that friend.
Hell, 50 bucks and you can get a decent SSD.
If only it were that easy. I would have already thrown a spare 2.5" into the system, but it’s only got a single NVMe slot for local storage.
That’s really good to know. Do you ever have issues writing database files on those disks? Database files on NFS mounts have been the bane of my existence.
Interesting, it was running on this system, so it may actually have been wear that killed the drive. I’ll have to look into that config and see if it’s worth getting a new NVMe to throw into the cook box.
Thanks for that info!
Yeah, pretty much what I guessed. The drive came with a cooling pad, but it didn’t do much at all.
Yeah, NFS bind mounts aren’t an issue. The issue I run into is database lock errors when I try to write a database file to the NFS share. I’ve got 1G networking as well and haven’t seen issues accessing regular files from my containers via the bind mounts.
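For what it’s worth, those lock errors usually come down to the NFS mount not providing working POSIX/flock locks, which embedded databases like SQLite depend on. A quick hypothetical sanity check with flock(1), assuming util-linux is installed (the MOUNTPOINT variable and .locktest filename are made up; it defaults to /tmp just for illustration):

```shell
# Hypothetical check: can we take an exclusive lock on a file on this mount?
# Point MOUNTPOINT at the NFS share, e.g. MOUNTPOINT=/mnt/share.
mountpoint_dir="${MOUNTPOINT:-/tmp}"
testfile="$mountpoint_dir/.locktest"

# flock -n fails immediately instead of blocking if the lock can't be taken.
if flock -n "$testfile" -c true; then
    echo "locking works on $mountpoint_dir"
else
    echo "locking failed on $mountpoint_dir -- expect database lock errors"
fi
rm -f "$testfile"
```

If this fails on the share but works locally, the database errors are a mount problem, not an app problem.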
Yeah, I didn’t think that was a realistic possibility. Given that it was a little fanless NUC-style system, I’m leaning more toward a heat death, as I originally surmised.
E: though another person suggested a Frigate misconfig could have worn the drive out early.
I docker’d all of my systems a few years ago, and I’m so glad I did. So much easier to manage, and when I lost a system I was able to get most of my services back up and running with minimal configuration on a VM same day.
As for hardware, you might check and see if you’ve got a local reseller of retired business equipment. Before I moved, there was a place near my work that took the shit we were getting rid of, disposed of the junk, and resold the stuff that was still good at a bargain. I got more than one HP tower from a few years previous that ran (and still runs) like a champ. Felt like night and day when I upgraded to that from my Pi setup, and they were only like $35 each.
My ideal is something more like a netboot-able image that I can modify/recreate and have it pull on next boot. But those options aren’t a bad thought either. I’d just need to have the bootable image configured with the info needed to bootstrap it. I’ve got another VM that’s got a different automation platform running (PowerShell Universal), but it would give me an excuse to learn another well known automation platform.
I might be able to hook it up to a USB NVMe reader, but when I initially tried I barely got any recognition of the drive from the OS. My primary system is Windows, so I might get more info from one of my Linux systems, just haven’t had the fucks to give to the dead drive. As for a replacement drive, funds are scarce and time/learning is (comparatively) free. Someone else suggested kubernetes, so I might look into that to see if that can accomplish what I’m looking for.
I’m leery about using a USB for long term persistent OS storage due to lifespan issues I’ve seen when just running a hypervisor from one. A ‘real’ usermode OS is probably going to have a worse lifespan than what I was seeing at work.
I don’t want to use a USB for storage, because those aren’t going to have a great lifespan in my experience. I’ve used them as the install media for something like ESX, but I’d rather not run a ‘real’ OS from one because I wasn’t impressed with the overall lifespan on some of the systems we managed at work.
Realistically, I just want a system that can act as the hardware endpoint for a Coral processor to do image recognition. I don’t need to write a lot on demand, and what was being written previously was all going to the NAS (other than the app’s database).
I’m actually not 100% sure what killed the drive. It could have been the drive wearing out, but my services didn’t write much locally and it wasn’t super old, so I assume it’s a heat issue with a fanless micro system. I try to write everything important to my NASs so I don’t have to worry about random hardware failures, but this one didn’t have backups configured before it failed. Other than the drive issue, it’s been solid for 1.5-2 years of near constant uptime.
So I’ll amend that: you don’t need it to be stateful. You could have an image like you talked about that gets loaded every time (that’s essentially what kubernetes does), but you’ll still need space somewhere as a scratch drive: a place where docker can put images and temporary file systems while it’s running.
Putting the image somewhere is easy. I’ve got TBs of space available on my NAS drives, especially right now with not acquiring any additional linux ISOs.
For state, check out docker’s volume backings here: https://docs.docker.com/engine/storage/volumes/. You could use NFS to another server, for example, to back your volumes. Your volumes would never need to be on your “app server”; they could instead be loaded via NFS from your storage server.
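A compose file can declare an NFS-backed named volume directly through the local driver’s options. A hypothetical fragment, where the server address, export path, and image name are all placeholders:

```yaml
# Hypothetical compose fragment: a named volume backed by an NFS export.
# 192.168.1.10 and /export/appdata stand in for your NAS and its share.
services:
  app:
    image: example/app:latest   # placeholder image
    volumes:
      - appdata:/data

volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.10,rw,nfsvers=4
      device: ":/export/appdata"
```

Docker mounts the share itself when the container starts, so the app server never needs the data on local disk.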
I’ll check that out. If that allows me to actually write databases to disk on the NFS backing volume, that would be amazing. That’s the biggest issue I run into (regularly).
This is all nearing kubernetes territory though. If you’re thinking about netboot and automatically starting containers, handling stateless volumes, and storing volumes in a way that’s synced with a storage server… it might be time for kubernetes.
I don’t think I’ve ever looked into kubernetes. I’ll have to look into that at some point… Any good beginner resources?
Thanks for sharing! ByteStash and Bezel look like interesting projects, I’ll have to check them out at some point.
Is it not the same guy from DS9, or am I just racist against Cardassians?
Probably just a holdover from when I was first learning. I had issues with a couple services not actually updating without it, so I just do it to be absolutely sure. Also, I only ever run one app per compose file, so that forces a “reboot” of the whole stack when I update.
But rebuilding your container is pretty trivial from the command line all said and done. I have something like this alias’d in my .bashrc to smooth it along:
docker compose pull; docker compose down; docker compose up -d
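As a function in ~/.bashrc it might look like this (a hypothetical sketch assuming bash and the compose v2 CLI plugin; the function name is made up):

```shell
# Hypothetical ~/.bashrc helper: pull fresh images, tear the stack down,
# then bring it back up detached. Run it from the directory that holds
# the service's docker-compose.yml.
dcup() {
    docker compose pull && \
    docker compose down && \
    docker compose up -d
}
```

Chaining with && instead of ; means a failed pull stops the sequence before the teardown, so the old stack keeps running if the registry is unreachable.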
I regularly check on my systems, go through my docker dirs, and run my alias to update everything fairly simply. Add in periodic scheduled image cleanups and it has been humming along for a couple years for the most part (aside from the odd software issue and hardware failure).
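The periodic cleanup can be a one-line cron job. A hypothetical root crontab entry (the schedule is arbitrary):

```shell
# Hypothetical crontab line: every Sunday at 03:00, remove all unused
# images (-a) without a confirmation prompt (-f).
0 3 * * 0  docker image prune -af
```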
How often are there issues with dependencies? Is that a problem with a lot of software these days?
I started using docker 3-4 years ago specifically because I kept having issues with dependencies of one app breaking others, but I also tend to run a lot of services per VM. Honestly, the overhead of container management is infinitely preferable to the overhead that comes with managing OS level stuff. But I’m also not a Linux expert, so take that for what you will.
Do you have any info on the custom setup? Sounds like a fun project/learning experience.
And do you mean OCI like Oracle Cloud?