

See my other response. This is quite normal.


See my other response. This is quite normal.


Yes, that’s called Round-Robin Load Balancing.
To get more specific: your DNS provider runs a large number of DNS resolvers out in the world on a CDN network, which resolves clients to the most geographically convenient server(s) at any point in time based on the GeoIP info of your public IP.
Once you’ve resolved a set of addresses, the response gets cached, so the next time you ask those DNS servers for the same name you get an answer right back as fast as possible.
Constantly checking is just going to show this. It’s quite normal.
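The rotation itself is simple to picture. Here’s a minimal Python sketch of how a round-robin answer list gets rotated one position per query (the IPs are made-up documentation addresses, not anyone’s real servers):

```python
from itertools import cycle

# Hypothetical A records for one hostname; a round-robin resolver
# rotates the order of the answer list on each successive query,
# spreading new connections across the servers.
records = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]

def round_robin(addresses):
    """Yield the address list rotated by one position per query."""
    n = len(addresses)
    for start in cycle(range(n)):
        yield addresses[start:] + addresses[:start]

answers = round_robin(records)
print(next(answers))  # ['203.0.113.10', '203.0.113.11', '203.0.113.12']
print(next(answers))  # ['203.0.113.11', '203.0.113.12', '203.0.113.10']
```

So the “first” IP you see keeps changing even though the pool behind it hasn’t.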


You’re going out of your way to prove some unnecessary point with this solution, though.
Only the RPi5 has PCIe, first of all, and the older boards would need a slow USB interface for any type of larger storage. Then you have longevity and reliability questions because of the age of the boards… it’s just not worth it.
OP wants a simple solution. RPi of any kind just ain’t it once you sit down and make a simple pros-vs-cons list.


Your public IP is assigned by DHCP. It changes from time to time. Nothing weird about that.
Any of the other IPs in the DNS Servers list changing is just what you get pointed to when resolving, based on your GeoIP location.


SigNoz or Uptrace are alternatives to something like DataDog, which is the route you want to go versus checking each individual machine.
You could also just use Prometheus + Grafana and build your own monitoring dashboards and alerts that way, but it will be a bit more manual at first.
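If you go the Prometheus route, the scrape config really is small. A sketch of a minimal `prometheus.yml`, assuming node_exporter is running on each machine (the `.lan` hostnames are placeholders):

```yaml
# prometheus.yml - minimal scrape config for a handful of machines
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "node"
    static_configs:
      - targets:
          - "server1.lan:9100"   # node_exporter default port
          - "server2.lan:9100"
```

Point Grafana at Prometheus as a data source and you have dashboards for every box in one place.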


I might be misunderstanding, but you’re checking what exactly for DNS leaks?
If the IPs are changing, that’s not uncommon. The HOST changing would be, though, like if you got swapped from what you expected back to Comcast or something.
Get better control of your local network so you don’t have to be paranoid about this: static reservations for long-lived hosts, a router setting to prevent internal hosts (like guests) from sending out-of-band DNS requests (most routers have one), and any sort of VPN stack should offer the same.
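On the “host changing” part: that’s the one thing worth actually checking. A small Python sketch, with Cloudflare’s well-known public resolver addresses hardcoded as the expected set (swap in whatever your own provider publishes):

```python
import ipaddress

# Addresses you EXPECT your DNS queries to land on. These are
# Cloudflare's public resolvers; substitute your provider's list.
EXPECTED = [
    ipaddress.ip_network("1.1.1.1/32"),
    ipaddress.ip_network("1.0.0.1/32"),
]

def is_expected(resolver_ip: str) -> bool:
    """True if the observed resolver IP belongs to the expected provider."""
    ip = ipaddress.ip_address(resolver_ip)
    return any(ip in net for net in EXPECTED)

print(is_expected("1.1.1.1"))      # True  -> same host, nothing to worry about
print(is_expected("75.75.75.75"))  # False -> that's Comcast's resolver, i.e. a leak
```

IPs rotating inside your provider’s ranges: normal. An IP that falls outside every expected range: that’s the leak worth chasing.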


The downvotes here are from people who have no idea what in the world actually works best, and just FEEL a certain way about things 🤣
Kinda the mantra of this entire sub.
I’m honestly not even talking about a mini PC. I’m talking about a cheap-ass dual-bay NAS. Let’s do a price breakdown:
So at the bare minimum that’s going to be $460, or $510 for the 1TB variant, per device. Then you need to fuck with the whole software side of things as well.
$400 and you’re done. All the software is ready to go, you’ll have automatic rebuilds of your array if you need to swap drives, and a simple interface to work with everything in.
I’m not even here simping for Synology, because QNAP and others have similarly priced solutions. I’m here pushing for SIMPLICITY and cost effective solutions.


If you’re going for reliability and you just want things to be simple, honestly, you probably just want to spend the money on two cheap NAS boxes. There are some caveats that come with RPis, and if you’re unfamiliar with them, the NAS route is: 1) going to cost about the same, 2) simpler to manage and upgrade, and 3) easier to repair disk volumes on when the time comes.
Even if you’re just looking to make these redundant to each other, just make it simple and easy.


HA is definitely the most widely adopted. OpenHAB is probably more geared toward developers, but has a more concise and powerful automation system.
As for hardware to run it on: get a cheap N100 mini PC and be done with it. It uses 6-12W, and it’s going to be miles more efficient for this use than a regular PC.
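The power math is easy to check yourself. A quick Python sketch, assuming $0.15/kWh and a ~60W desktop for comparison (both numbers are illustrative, not measurements):

```python
# Annual electricity cost: watts -> kWh/year -> dollars
RATE = 0.15  # $/kWh, illustrative

def yearly_cost(watts: float) -> float:
    """Cost of running a device 24/7 for one year at RATE per kWh."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * RATE

print(round(yearly_cost(10), 2))  # N100 mini PC at ~10 W -> 13.14
print(round(yearly_cost(60), 2))  # typical desktop at ~60 W -> 78.84
```

That delta alone pays for the mini PC in a couple of years.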


When talking about media streaming, there are a number of other things that cause problems. Bandwidth, meaning the total amount of information you can send overall, is less likely to be a problem than jitter, packet loss, and latency spikes.
For this purpose, if OP tuned both the server and the clients to cache further ahead, or to send smaller packets, it could possibly be a good workaround.
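To put rough numbers on the cache-ahead idea, a Python sketch with made-up figures for bitrate and worst-case stall length:

```python
# How much buffered playback is needed to ride out a network stall.
# All numbers are illustrative, not measurements.

bitrate_mbps = 8.0    # e.g. a 1080p stream
stall_seconds = 5.0   # worst-case jitter / loss-recovery gap

# Buffer at least the stall duration's worth of video, plus headroom.
buffer_seconds = stall_seconds * 2            # 2x headroom
buffer_megabits = buffer_seconds * bitrate_mbps
buffer_megabytes = buffer_megabits / 8

print(buffer_seconds)    # 10.0 -> seconds of video cached ahead
print(buffer_megabytes)  # 10.0 -> MB of client-side buffer
```

A few MB of client buffer absorbs the jitter and loss spikes that no amount of raw bandwidth fixes.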
Spending an insane amount of money putting what I’m guessing is illegally obtained content on a CDN distribution is crazypants.


Bandwidth does not degrade over distance. That’s not how that works…
Again, I’m confused on what you’re suggesting the actual issue is here.


Uplink is exactly the problem. Not sure why you think otherwise. The internet doesn’t work by multicast.
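Quick arithmetic on why uplink is the wall: every remote viewer costs a full unicast copy of the stream. The numbers here are illustrative:

```python
# Each remote viewer gets their own unicast copy; the uplink
# has to carry all of those copies simultaneously.
uplink_mbps = 20.0   # typical residential upload
stream_mbps = 8.0    # one 1080p stream

max_viewers = int(uplink_mbps // stream_mbps)
print(max_viewers)  # 2 -> a third viewer saturates the uplink
```

No tuning on the client side changes that ceiling; only a fatter uplink (or a CDN) does.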


You’re describing a CDN. You can’t afford it.
I’d look more into boosting whatever your uplink is versus trying to distribute to localized users.


Mkay. So you’re just some person out here on the Internet who has zero concept of how this works as well, I’m assuming?
Feel free to dispute any single point I’ve made.
Well, let me break it down for you since you don’t seem to work in this space:
A Roadmap is a strategic timeline of targeted goals estimated for completion within a specific timeframe that is NOT nebulous. It’s done this way to give consumers of a product some knowledge of where the product is going, to entice them to buy in to said product, and to let them estimate their own commitments to the project and its adoption.
A backlog is NOT a Roadmap. A planned orchestration of tickets is a Roadmap. We create it to assure users that the problems they are experiencing will be resolved, and in what order to expect them to be resolved. This works both for for-profit engineering and for FOSS projects. A great example is the Roadmaps provided by distros used by Enterprise customers.
Your comment about “inflexible commitment” seems to say you don’t understand the above points. If you’re pushing a product which you want people to adopt, and you’re communicating to them why they should adopt it, the last thing you would want to do is say “Hey, we’re kiiiiinda going this way, but maybe not. We’ll see.”
Programming DOES work like an assembly line in a sense. That’s why you have tickets, tags, classification, triage, status, and…backlog. What gets thrown on the floor is what I’m talking about.
Regardless of how you feel about the pace of the project, it’s absurd to throw out a bunch of ideas as tickets and expect them to all get done without a commitment. Or, dare I say, a roadmap.


Helm sucks. You don’t even need it for what you’re trying to do.


Mint is for desktops. Hands down.
Run something pared down for servers. Fedora Server or plain Debian are fine; CoreOS or Talos if you’re trying out some k8s stuff.
Yes, it’s mostly just package selection, but you don’t need to sift through the cruft and clean up all the desktop shit running that you don’t need.


It’s a wishlist of Open tickets. I wouldn’t necessarily even call this a commitment to a roadmap. 75% of Open tickets will never get resolved anyway.


Docs say you can choose what to sync, and disable syncing entirely where you don’t want it: https://docs.nextcloud.com/server/latest/user_manual/en/desktop/usage.html


Then something has changed about the local deployment and concentration of the network near you. Don’t know what to tell ya 🤷


As long as the provider is the same, and your instances are properly using DoH or DoT, you have nothing to worry about.
If you’re super concerned, though, I’d be using Mullvad over Cloudflare. Just saying.