Hello all, I know this seems like a stupid question to ask in this community, but I am serious.
About me: I am generally comfortable around computers and have tried a few simple projects (like a music streamer based on a Raspberry Pi), but I have no real education in this - all I do is follow the documentation and then google for troubleshooting. I am also (kinda) privacy focused and really annoyed at all the enshittification observable everywhere, so I installed ReVanced, and when I bought a new laptop about half a year ago I set up my old ThinkPad as a Proxmox server. It is running my Home Assistant instance in one VM and has another VM running Ubuntu for my Docker containers (paperless-ngx and Immich). I really like the services these provide, but to be honest I feel uncomfortable entrusting my data to them, as I am constantly worried I will break something and corrupt the data. I also think I underestimated the amount of updates and maintenance that accumulates.
I am also not really willing to spend too much time learning all this from the ground up - my day job already keeps me in front of a computer for 8 hours, so I don't want to spend my whole evening or weekend there as well.
I guess what I am really searching for is either a service I can just trust and pay for myself OR a very user-friendly suite of self-hosted apps.
Services I need would be:

  • general cloud storage
  • document organization (a la paperless-ngx)
  • photos

I would also like:

  • some kind of shared notes
  • a media suite (like Plex)

I am fine with Home Assistant, as that has no real consequences should I really mess it up badly.

Thank you for any suggestions on how to move on in this matter.

  • atzanteol@sh.itjust.works · 45 points · 5 days ago

    I feel uncomfortable entrusting my data to them, as I am constantly worried I will break something and corrupt the data

    Backups. If you’re not willing to set up and test proper backups then no - you should not self-host.

    • litron3000@feddit.org (OP) · 5 points · 5 days ago

      Currently I am doing them manually every time I change something. I have had to restore from them before (successfully), but if I keep going down the self-hosting route I am planning to set up Syncthing to a friend's NAS.

      • pe1uca@lemmy.pe1uca.dev · 14 points · 5 days ago

        I’d say Syncthing is not really a backup solution.
        If for some reason something happens to a file on one side, it’ll also happen to the file on the other side, so you’ll lose your “backup”.
        Plus, what assurance do you have that your friend won’t go snooping around or make their own copies of your data?
        Use proper backup software to send your data offsite (restic, borg, duplicati, etc.), which will send it encrypted - use a password manager to set a strong, unique password for each backup. (A rough sketch is at the end of this comment.)

        And follow the 3-2-1 rule MangoPenguin mentioned.
        Remember, this rule is just for data you can’t find anywhere else: your photos, your own generated files, the databases of the services you self-host, stuff like that. If you really want, you could back up hard-to-find media, but if you already have the torrent file, don’t bother backing that media up.
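
        As a rough illustration (the repository location, paths and passphrase are made up - adjust to your own setup), a first restic backup to a friend's NAS over SFTP looks roughly like this:

          # hypothetical encrypted repository on a friend's NAS, reachable over SSH
          export RESTIC_REPOSITORY="sftp:backup@friends-nas:/backups/litron"
          export RESTIC_PASSWORD="long-unique-passphrase-from-your-password-manager"

          restic init                                  # create the encrypted repository (run once)
          restic backup /srv/paperless /srv/immich     # later runs only upload what changed
          restic snapshots                             # list the snapshots you have
          restic check                                 # verify the repository is healthy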

        • litron3000@feddit.org (OP) · 4 points · 4 days ago

          Thank you for actually explaining why it’s not suitable.
          I will look into those should I decide to keep this setup running.

          • lemmy_get_my_coat@lemmy.world · 3 points · 4 days ago

            I use Backblaze, which by consensus seemed to be the best bang-for-buck option when I looked into this a few months ago.

            I pay in the ballpark of $1 AUD a month to host my backups on Backblaze - they currently sit at around the 200 GB mark. Duplicacy does the actual backup and encryption and then sends everything over to Backblaze.
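
            If it helps as orientation, the duplicacy CLI flow is roughly the following (bucket and snapshot names are placeholders, and the exact credential prompts come from the duplicacy docs, so double-check there):

              cd /srv/data                                  # the directory you want backed up
              duplicacy init -e my-backups b2://my-bucket   # -e enables client-side encryption; prompts for B2 keys and a password
              duplicacy backup -stats                       # upload an incremental snapshot
              duplicacy list                                # show existing revisions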

      • MangoPenguin@lemmy.blahaj.zone · 4 points · 5 days ago

        The general rule is the 3-2-1 rule, so 3 copies of your data, 2 different storage types, and 1 of them offsite.

        Make sure you run backups of your data at least daily, and keep a month or so of incremental snapshots.

        Restic + Backblaze B2 is great for an offsite backup.
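
        For the "daily plus about a month of snapshots" part, a cron entry along these lines would do it (repository, paths and the env file are placeholders and assume restic is already set up):

          # /etc/cron.d/restic-backup - hypothetical example, adjust paths and repository
          0 3 * * * root . /root/restic.env && restic backup /srv/data && restic forget --keep-daily 30 --prune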

      • Justin@lemmy.jlh.name · 1 point · edited · 5 days ago

        You should keep your Docker/Kubernetes configuration saved in git, and then have something like rclone take daily backups of all your data to something like a Hetzner storage box. That is the setup I have (a plain-cron version of the idea is sketched below the links).

        My entire kubernetes configuration: https://codeberg.org/jlh/h5b/src/branch/main/argo/custom_applications

        My backup cronjob: https://codeberg.org/jlh/h5b/src/branch/main/argo/custom_applications/backups/rclone-velero.yaml

        With something like this, your entire setup could crash and burn, and you would still have everything you need to restore safely stored offsite.
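
        The linked cronjob is Kubernetes-specific, but as a rough standalone sketch of the same idea (remote name, paths and schedule are placeholders):

          # one-time: create an SFTP remote named "storagebox" pointing at the Hetzner box
          rclone config

          # crontab entry: mirror the app data directory offsite every night at 02:00
          0 2 * * * rclone sync /srv/appdata storagebox:backups/appdata --log-file /var/log/rclone-backup.log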

  • schizo@forum.uncomfortable.business · 14 points · 5 days ago

    You can find reasonably stable and easy to manage software for everything you listed.

    I know this is horribly unpopular around here, but if you want to go this route, you should look at Nextcloud. It’s a monolithic mess of PHP, but it’s also stable, tested, used and trusted in production, and doesn’t have a history of lighting user data on fire.

    It also doesn’t really change dramatically, because again, it’s used by actual businesses in actual production, so changes are slow (maybe too slow) and methodical.

    The common complaints around performance and the mobile clients are all valid, but if neither of those really causes you issues, then it’s a really easy way to handle cloud document storage, organization, photos, notes, calendars, contacts, etc. It’s essentially (with a little tweaking) the entire gSuite, but self-hosted. (There’s a one-command way to try it at the end of this comment.)

    That said, you still need to babysit it, and babysit your data. Backups are a must, and you’re responsible for making them and testing them. That last part is actually important: a backup that isn’t regularly tested to confirm it can be restored isn’t a backup, it’s just thoughts and prayers sitting somewhere.
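
    If you want to test-drive Nextcloud before committing, a throwaway instance is one command with the official container image (container name, port and volume are arbitrary; a real install should get a proper database and a reverse proxy):

      # minimal trial instance; falls back to SQLite, data lives in the named volume
      docker run -d --name nextcloud-test \
        -p 8080:80 \
        -v nextcloud-data:/var/www/html \
        nextcloud

      # then open http://<server-ip>:8080 and create the admin account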

    • litron3000@feddit.org (OP) · 5 points · 5 days ago

      Thank you.
      A friend has a Nextcloud setup; I will ask him about it / try his setup to gauge the performance.

      • LordKitsuna@lemmy.world · 1 point · 4 days ago

        Seafile is another one to check out. It’s focused entirely on the file storage aspect and doesn’t really do all the other stuff Nextcloud does, but I greatly prefer it.

  • DichotoDeezNutz@lemmy.world · 7 up, 1 down · 5 days ago

    You could buy a NAS, or build one and install Unraid on it. It should take a day or so of tinkering to get it working, and you could always sync your data to a cloud backup provider.

    • litron3000@feddit.org (OP) · 2 points · 5 days ago

      That would only handle the backups (and possible data loss), right?
      All the tinkering with the specific services is the same on Unraid, isn’t it?

          • BartyDeCanter@lemmy.sdf.org · 4 points · 5 days ago

            As a point of reference, I built a 32 TB Synology NAS last year. It took me an afternoon to get it done, plus set up Plex Media Server, all the *arrs and friends, a backup server and a couple of other things. Since then, maintenance has consisted of remembering to hit the “update containers” button once a month or so. I should probably automate that part but just haven’t bothered yet.

      • iAmTheTot@sh.itjust.works · 2 points · 5 days ago

        Unraid is pretty simple to use. It has made setting up my services easy, with just a little bit of googling for troubleshooting.

      • Justin@lemmy.jlh.name · 1 point · 5 days ago

        Yeah, Unraid is the same; it just adds a GUI to make it easier to learn. The downside is that Unraid is very non-standard and is basically impossible to back up or manage in source control like vanilla Docker or Kubernetes.

  • haui@lemmy.giftedmc.com · 4 points · 5 days ago

    I only had to read three or four sentences to arrive at the conclusion: yes, you should absolutely self-host, and you're already pretty far along.

    Depending on your location, I suggest you first visit your local hackspace, if one exists. I had a similar journey and it took me years to arrive at my current state. Had I learned about "hacker communities" earlier, I would have taken a very different and less stressful path.

    You can check my personal setup at https://forge.giftedmc.com/haui/Setup

  • vzq@lemmy.world · 5 up, 2 down · 5 days ago

    There is no compelling reason to run Home Assistant in a VM instead of in a container.

    I’d get rid of the virtualization layer in your specific case and save yourself a lot of hassle, especially on smaller systems.

      • litron3000@feddit.org (OP) · 3 points · 5 days ago

        That was my reasoning as well: with HA OS it is as little trouble for me to maintain as possible - at least that was my line of thinking.

      • vzq@lemmy.world · 2 up, 1 down · 5 days ago

        Yeah, just set them up in another container.

        Add-ons are just software.

        I only run Mosquitto right now, but it’s not exactly hard, complicated or time-consuming to set up (a rough sketch follows below).
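
        For context, "just another container" looks something like this with the official images (config paths and timezone are examples, adjust to taste):

          # Home Assistant in a container, config stored on the host
          docker run -d --name homeassistant \
            --restart unless-stopped \
            -e TZ=Europe/Berlin \
            -v /opt/homeassistant:/config \
            --network host \
            ghcr.io/home-assistant/home-assistant:stable

          # Mosquitto MQTT broker alongside it
          docker run -d --name mosquitto \
            --restart unless-stopped \
            -p 1883:1883 \
            -v /opt/mosquitto/config:/mosquitto/config \
            eclipse-mosquitto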

        • MangoPenguin@lemmy.blahaj.zone · 5 points · 5 days ago

          Oh, I know - it’s just more of a pain. There’s very little overhead from a VM running Linux, so it’s well worth doing that IMO.

          • vzq@lemmy.world · 3 up, 3 down · 5 days ago

            There is little performance overhead from running a VM. However, there is substantial administrative overhead for keeping a virtualization system running just for Home Assistant.

            If this is a hobby to you, sure, you do you. Time you enjoyed wasting is not wasted. But it’s way less effort to just run everything you need in a container stack - especially if you are already running containers for other things.

            • MangoPenguin@lemmy.blahaj.zone · 5 up, 1 down · 5 days ago

              substantial administrative overhead for keeping a virtualization system running just for Home Assistant

              I’m not sure what you mean? HA OS in a VM doesn’t have any administrative overhead.

    • Justin@lemmy.jlh.name · 3 points · 5 days ago

      Yeah, full VMs are pretty old school; there are a lot more management options and automation available with containers, not to mention the compute overhead.

      Red Hat doesn’t even recommend that businesses use VMs anymore, and they offer a virtualization tool that runs VMs inside a container for legacy apps. It’s called OpenShift Virtualization.

  • kata1yst@sh.itjust.works · 3 points · 5 days ago

    The best thing you can do to increase your confidence in your data’s reliability is to invest in backups AND at least a RAID-1 mirror on a reliable checksumming filesystem like ZFS, which Proxmox supports easily out of the box (a short sketch is at the end of this comment).

    I have ZFS and cloud based backups and I’ve never lost or corrupted data in over 10 years.

    And personally, I don’t back up my movies/TV shows. The volume is too high to bother, and with ZFS snapshots and reliability, and the high seas being what they are, most everything is (eventually) recoverable.
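
    For reference (pool name and disk paths are placeholders - creating a pool wipes the disks), a two-disk mirror plus the routine checks look like this:

      # create a mirrored pool on two whole disks
      zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc

      zpool status tank                  # shows both disks and any checksum errors
      zpool scrub tank                   # periodically re-verify every block against its checksum
      zfs snapshot tank@before-update    # cheap point-in-time snapshot to roll back to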

    • litron3000@feddit.org (OP) · 1 point · 5 days ago

      Thank you.
      That still leaves the hassle of maintaining every service on its own, though.

      • pe1uca@lemmy.pe1uca.dev · 1 point · 5 days ago

        I’m assuming you mean updating every service, right?
        If you don’t need anything new from a service, you can stay on the version you’re using for as long as you like, as long as your services are not public.
        You could just install Tailscale and connect everything inside the tailnet.
        From there you’ll only need to update Tailscale and probably your firewall, Docker, and OS, or whenever one of the services you use receives a security update.

        I’ve lagged several versions behind on Immich because I don’t have time to monitor the updates and handle the breaking changes, so I just stay on a version until I have free time.
        Then it’s just an afternoon of reading through the breaking changes, updating the docker file and config, and running docker compose pull && docker compose up -d (see the sketch at the end of this comment).
        In theory there could be issues here - that’s where your backups come into play - but I’ve never had any.

        The rest of the 20+ services I have are just running there, because I don’t need anything new from them. Or I can just mindlessly run the same compose commands to update them.

        There were only one or two times when I had to actually go into some kind of emergency mode because a service suddenly broke and I had to spend a day or two figuring out what happened.
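
        To make that concrete: the Tailscale one-liner below is from their install docs, and IMMICH_VERSION is the variable the stock Immich compose setup uses to pin a release (the version number here is just an example - read the release notes first):

          # join the machine to your tailnet
          curl -fsSL https://tailscale.com/install.sh | sh
          sudo tailscale up

          # later, when you have an afternoon: bump the pinned version and update
          sed -i 's/^IMMICH_VERSION=.*/IMMICH_VERSION=v1.119.0/' .env
          docker compose pull && docker compose up -d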

        • litron3000@feddit.org (OP) · 1 point · 4 days ago

          Yes, updating is part of it, but at least on my current setup (using just the old laptop with limited storage) it often includes stuff like extending the partition for a VM (sketched below). I understand that this is probably not a problem when using a NAS with several TB of storage, though.
          The thing is, I don’t want to invest in one of those while I am still unsure about which route to go.
          Also, stuff like connecting to a service from outside my home network isn’t configured right now due to security concerns (stemming from the actually very little knowledge I have from setting all this up). I will look into Tailscale though; maybe having that feature will give me a motivation boost :D
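
          For what it’s worth, growing a VM disk on Proxmox is usually just two steps (the VM id, disk name, partition and ext4 filesystem here are assumptions for illustration):

            # on the Proxmox host: grow the virtual disk by 20 GB
            qm resize 101 scsi0 +20G

            # inside the VM: grow the partition, then the filesystem
            sudo growpart /dev/sda 1     # from the cloud-guest-utils package
            sudo resize2fs /dev/sda1     # for ext4; use xfs_growfs for XFS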

      • kata1yst@sh.itjust.works · 1 point · 5 days ago

        Maintenance is easier than you think. I “maintain” 40+ services by simply updating them automatically, and if that makes one stop working (very rare), I get an alert and fix it when I get time.

        I use Ansible for that, though you could accomplish much the same more easily with Watchtower (a quick example follows).
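
        For the Watchtower route, something like this auto-updates running containers and removes old images (the schedule is just an example):

          docker run -d --name watchtower \
            --restart unless-stopped \
            -v /var/run/docker.sock:/var/run/docker.sock \
            containrrr/watchtower \
            --cleanup \
            --schedule "0 0 4 * * *"     # every day at 04:00 (6-field cron, seconds first)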