

You aren't the only one. I had such a painful onboarding process with Nextcloud, from the Docker setup to the speed of it to the UI, that I just gave up and decided to use a combination of Immich and Syncthing instead.
Just your normal everyday casual software dev. Nothing to see here.
People can share differing opinions without immediately being on the opposite side. Avoid looking at things as black and white. You can like both waffles and pancakes, just like you can hate both waffles and pancakes.


You aren't the only one. I had such a painful onboarding process with Nextcloud, from the Docker setup to the speed of it to the UI, that I just gave up and decided to use a combination of Immich and Syncthing instead.

Honestly, this seems to be a recurring trait.


This entire thing has made me really rethink whether I want to switch to the new repo or not.
Why was there no communication about it? The gplay repo maintainer wasn't informed of anything, no public notice was given to anyone, just a transfer of the repo and a status issue here explaining it.
Obviously the act is genuine, since they were able to keep the original keys, but the entire process seemed really sketchy.
I'm also not happy that the first thing they seem to have added was the removal of checksums, but that might be a temporary thing.
I also just noticed that it looks like they removed the entire public key, which shouldn't have been necessary: if they have the original private keys, continuing to use the existing public key shouldn't be an issue, right?


One of my drives crippled itself a few days back; not sure what caused it. It couldn't be resolved without a host restart, which was unfortunate. SMART isn't failing and the drive has been working fine since, so I'm chalking it up to a weird Proxmox bug or something.
I fully expected I was going to need to roll back an entire drive after that restart, though. I still may have to if it happens again.
I have Proxmox Backup Server backing up to an external drive nightly, and then about every 2 or 3 weeks I also back up to cold storage, which I store offsite. (This is bad practice, I know, but I have enough redundancy in place for personal data that I'm OK with it.)
For critical info like my personal data I have Syncthing syncing to 3 devices, so for personal info I have roughly 4 copies (across different devices) + the PBS backup + a potentially dated offsite copy.


Despite the recommendations, I run PBS alongside the standard server, bare-metal. I don't store the backups on the same system; they go to an external drive (which gets an offline copy every once in a while). But I don't like the idea of having PBS in a virtual environment; it's just another layer that could go wrong in a restore process.


More like me wanting to add something but they are taking way too long to make their point and I don’t want my squirrel brain to forget what I wanted to add.
The implication of that is weird to me. I'm not saying the horse is wrong, but that's such a non-standard solution. That's implementing a CGNAT restriction without the benefits of CGNAT. They would need to only allow internal-to-external connections unless the connection was already established. How would standard communication still function that way? I know it would break connectionless protocols like plain UDP, since that's fire-and-forget with no handshake for the firewall to track.
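As a rough sketch of what that kind of restriction looks like in practice, a stateful firewall can drop everything inbound except traffic belonging to a flow a device inside already opened. In nftables syntax it's something like the fragment below (the table/chain layout is a minimal assumption, not anyone's actual config); note that conntrack even fakes a "state" for UDP flows keyed on address/port pairs, which is how DNS replies still get back in:

```
table inet filter {
    chain input {
        # default: drop anything inbound we can't account for
        type filter hook input priority 0; policy drop;

        # loopback traffic is always fine
        iifname "lo" accept

        # allow replies to connections initiated from inside
        ct state established,related accept
    }
    chain output {
        # internal-to-external is unrestricted
        type filter hook output priority 0; policy accept;
    }
}
```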
This might be my next project. I need uptime management for my services; my VPN likes to randomly kill itself.
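For what it's worth, the poor man's version of that uptime check is a tiny script on a timer: ping something through the tunnel, and restart the VPN service after a few consecutive failures. A sketch below; the target IP, the interval/threshold values, and the `wg-quick@wg0` unit name are all assumptions to swap for your own setup:

```python
import subprocess
import time

CHECK_INTERVAL = 60    # seconds between checks (example value)
FAILURE_THRESHOLD = 3  # restart only after this many consecutive failures

def tunnel_is_up(target="10.8.0.1"):
    """Ping a host that is only reachable through the tunnel.
    The target IP is a placeholder for something on the VPN side."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", target],
        capture_output=True,
    )
    return result.returncode == 0

def should_restart(consecutive_failures, threshold=FAILURE_THRESHOLD):
    """One dropped ping shouldn't bounce the service, so only
    restart after several failures in a row."""
    return consecutive_failures >= threshold

def watch():
    failures = 0
    while True:
        failures = 0 if tunnel_is_up() else failures + 1
        if should_restart(failures):
            # hypothetical unit name; substitute whatever runs your VPN
            subprocess.run(["systemctl", "restart", "wg-quick@wg0"])
            failures = 0
        time.sleep(CHECK_INTERVAL)
```

Run `watch()` from a systemd service (or cron the single check) and the tunnel comes back on its own.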


I haven't used a guide aside from the official getting-started page for Syncthing.
It should be similar to these steps, though; I'll use your desktop as the origin device.
Some things you may want to keep in mind: Syncthing only syncs when two or more devices are online. If you're getting into self-hosting a server, I'd recommend having the server act as the middle man. If you go that route, these steps stay more or less the same; instead of sharing with the phone, you share with the server, then go to the server's Syncthing page and share with the mobile. That way both devices use the server instead of trying to connect to each other.
If you do go that route, I also recommend setting your remote devices on the server's Syncthing instance to "auto accept". That way, when you share a folder to the server from one of your devices, the server automatically approves it and creates a share, named after the shared folder, in Syncthing's data directory. (E.g. if your folder was named "documents" and you shared it to the server, it would create a share named "documents" wherever you have Syncthing configured to store data.) You would still need to log in to the server instance if you wanted to share those files on to /another/ device, but if your intent was only to back a folder up to the server, it removes a step.
Another benefit of the server-middleman approach is that if you ever replace a device later down the road, you only have to add one remote device to the server instance, instead of adding the new device to every Syncthing instance that needs access to it.
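If it helps, auto-accept is a per-device flag on the server's Syncthing instance: in the web UI it lives in the remote device's settings, and in the server's config.xml it ends up looking roughly like this (the device ID and name here are placeholders):

```xml
<device id="AAAAAAA-AAAAAAA-AAAAAAA-AAAAAAA-AAAAAAA-AAAAAAA-AAAAAAA-AAAAAAA" name="laptop">
    <!-- auto-create a share whenever this device offers a folder -->
    <autoAcceptFolders>true</autoAcceptFolders>
</device>
```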
Additionally, if you already have the built-in structure set up but it doesn't seem to be working, here are some standard troubleshooting steps I've found helpful:
It's missing the third panel: my jacket is in my locked house because I forgot it before closing the door - Panik
This is actually one of the most accurate descriptions of the ADD part of ADHD I've seen yet. You hear everything but nothing at once, and you don't even realize it till you get prompted about something you're listening to and notice you don't recall the last 2 or 3 minutes, period.


KeePass is a great way of managing passwords; I use KeePass as well. I also use Syncthing to sync my password database across all devices, and I have the server acting as the "always on" device so I have access to all passwords at all times. It works amazingly, because Syncthing can also be set up so that when a file is modified by another device, it makes a backup of the original file and moves it to a dedicated folder (with retention settings so old backups get cleaned up every so often). Life is so much easier.
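The backup-on-modify behavior is Syncthing's file versioning, set per folder. In config.xml it comes out something like the fragment below, using the "staggered" type that thins out old versions over time (the folder id, path, and intervals are example values, not anyone's real config):

```xml
<folder id="passwords" path="/data/passwords">
    <versioning type="staggered">
        <!-- how often to scan for versions to clean up (seconds) -->
        <param key="cleanInterval" val="3600"/>
        <!-- drop versions older than 30 days (seconds) -->
        <param key="maxAge" val="2592000"/>
    </versioning>
</folder>
```

By default the old versions land in a `.stversions` folder inside the synced directory.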
For photo access you can look into Immich. It's a little more of an advanced setup, but I have Immich pointed at my photos folder in Syncthing on the server and using that location as the source. This lets me use one directory for both photo hosting and backup/sync.
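If you try the same arrangement, the gist is to mount the Syncthing photo folder into the Immich container (read-only, so Immich can index but never modify it) and then add that path as an external library in Immich's admin UI. A docker-compose sketch, with placeholder host paths:

```yaml
services:
  immich-server:
    # ...rest of the standard Immich compose config...
    volumes:
      # Syncthing's photo folder on the host, mounted read-only
      - /data/syncthing/photos:/mnt/photos:ro
```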


I hard agree with this. I would NEVER have wanted to start with containerized setups. I know how I am; I would have given up before I made it past the second LXC. Starting with a generalized one-server-does-everything setup and then learning as you go is so much better for beginners. Worst case, they can run Docker later on as the containerized setup and migrate to it. Or they can do what I did: start with a single-server setup, move everything onto a few drives a few years later once comfortable with how it all works, nuke the main server, install Proxmox, and hate life for 2 or 3 weeks while learning how it works.
Do I regret that change? No way in hell, but there's also no way I would recommend a fully compartmentalized or containerized setup to someone just starting out. It adds so many layers of complexity.


15% off a Logitech device purchase for the complete removal of a $100 smart switch. That's a slap in the face: "Thanks for being a customer, here's a coupon you can only use if you continue being a customer."


I wanted to add this discussion post to the mix because I saw it when reading about the related article.
It seems that RustDesk Server Pro is the parent of the non-Pro edition, so it's not that it uses code from the open-source project; it's that the open-source project uses code from the closed-source project. That means the licensing restriction doesn't apply to that project, since it's actually the original source, and the open-source project was made based off the closed-source one.


Woah, you separated it already? That's insane. Defo checking it out! Cheers!


Honestly, this is a really innovative project. I wish it came as an extension, because I feel that's likely your biggest bottleneck for getting people to try it. I don't think many are going to build a browser from source and then port all their stuff over strictly for the integration. Plus, it looks like a primary selling point is that integration, but it also disables a lot of the QoL features FF has that some people have no problem with. The fact that Sync is removed entirely is a major dealbreaker for me, as I do like the feature and I'm not concerned about the privacy aspects of having it on.
If an extension version ever releases for the lemmy integration though, I would for sure be looking at that!
I think my only real complaint about the deployment of this is from a security standpoint. The password for the GitLab Runner container is hardcoded as "changeme", and when it's run from an automated script like this, the script itself doesn't make the user aware of that. The script mentions that you should move credentials.txt, but it never makes you aware of the hardcoded password.
It would be nice if it prompted for a password, or used a randomly generated one instead of the hardcoded value.
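As a sketch of the randomly-generated option, the setup script could do something like this instead of shipping a fixed string (the variable names are mine, not the script's):

```python
import secrets

def generate_password(n_bytes=24):
    """URL-safe random password; 24 bytes of entropy comes out
    as a string of roughly 32 characters."""
    return secrets.token_urlsafe(n_bytes)

runner_password = generate_password()
# surface it once at install time instead of burying it,
# e.g. alongside the existing credentials.txt warning
print(f"GitLab Runner password: {runner_password}")
```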
Not only that, there's also an increasing number of anti-cheats that support Linux; however, they have it intentionally disabled, so when you run the game it either blocks you or, in the worst case, bans your account as a whole.