• 0 Posts
  • 18 Comments
Joined 2 years ago
Cake day: June 10th, 2023



  • Tailscale/Headscale/WireGuard is different from a normal VPN setup.

    VPN: you tunnel into a remote network and all your connections flow through it as if you’re on that remote network.

    Tailscale: your devices each run the daemon and basically create a separate, encrypted, dedicated overlay network between them, no matter where they are or what network they’re on. You can make an exit node (or advertise a subnet route) so traffic can leave the overlay for a local network’s CIDR, but without that, the only devices on the network are the ones connected to the overlay. I can set up a set of servers to live only on the Tailscale overlay, serving data only to the other devices on it, and they can be distributed anywhere without any crazy router configuration or port forwarding or NAT or whatever.
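
    For example, the node that bridges the overlay to a local subnet just advertises it (the CIDR below is a made-up example); everything else stays overlay-only:

      # Advertise a local subnet into the tailnet (subnet router):
      tailscale up --advertise-routes=192.168.1.0/24

      # Or offer this node as a full exit node for all traffic:
      tailscale up --advertise-exit-node

      # Either way, the route has to be approved in the admin console
      # (or via Headscale) before other devices can use it.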



  • You’ll want to look into “keepalived” to set up a shared IP across all worker nodes in the cluster, and then either forward directly or set up haproxy on each node to forward from that keepalived IP to the ingresses.

    I’m running 6 kube nodes (running Talos) in a 3-node Proxmox cluster. Both haproxy and keepalived run on the 3 Proxmox nodes to manage the IP and route traffic to the appropriate backend. Haproxy just lets me migrate nodes and still have traffic hit an ingress kube node.

    Keepalived decides which node is active (and therefore listens on the IP), based on communication between the nodes and a simple local script that catches when a node can’t serve traffic.
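
    A minimal sketch of the two pieces; interface names, IPs, and paths below are made-up placeholders:

      # /etc/keepalived/keepalived.conf -- hypothetical values throughout
      vrrp_script chk_ingress {
          script "/usr/local/bin/chk_ingress.sh"  # local check: exit non-zero when this node can't serve
          interval 2
          fall 2
      }

      vrrp_instance VI_1 {
          state BACKUP              # let priority elect the active node
          interface eth0            # adjust to your NIC
          virtual_router_id 51
          priority 100
          advert_int 1
          virtual_ipaddress {
              192.168.1.240/24      # the shared IP clients point at
          }
          track_script {
              chk_ingress
          }
      }

      # /etc/haproxy/haproxy.cfg (fragment) -- forwards the shared IP to the ingress nodes
      frontend https_in
          bind 192.168.1.240:443    # standby nodes need net.ipv4.ip_nonlocal_bind to bind a VIP they don't hold
          mode tcp
          default_backend kube_ingress

      backend kube_ingress
          mode tcp
          balance roundrobin
          server node1 10.0.0.11:443 check
          server node2 10.0.0.12:443 check
          server node3 10.0.0.13:443 check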



  • I completely agree, but every week or two is too long. At one point we had ours running builds + automated regression testing => release twice or more a day. Along with automatic changelogs and monitoring, it was so nice. Tiny updates are far easier to test, and at that cadence you know exactly what, where, and how a failure (or a positive change) occurred. The devs loved it, QA loved it, and as the DevOps person, I loved it. We were even able to do A/B testing and rolling updates.

    It only got worse when management changed hands and some people decided to go agile in a “Scrum-but” way; it’s been a drag that sprints are 3 weeks long. Now releases take longer, have a larger impact for better or worse, and regression testing is much more complex, so I have to be more involved in releasing new code. At the faster cadence, releases happened so often that the pipeline was fully automated, and I didn’t even know when most went out unless I was watching a dashboard.
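
    For reference, that kind of pipeline can be as simple as this hypothetical GitHub Actions sketch (the make targets are placeholders, not our actual setup):

      # .github/workflows/release.yml -- every merge to main builds, runs
      # the regression suite, and cuts a release automatically.
      name: build-test-release
      on:
        push:
          branches: [main]
      jobs:
        build-and-test:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - run: make build             # placeholder build step
            - run: make regression-test   # automated regression suite gates the release
        release:
          needs: build-and-test
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - run: make release           # tag, changelog, deploy; rollout strategy lives in the platform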




  • Current homelab+desktop+laptop host count here is 22, all anime characters or references. It’s a fairly large pool to pull from, so it’s worked for me for 20+ years now. Mobile devices (phones, tablets, etc.) and game consoles don’t get names that are as clever, though.

    They’re all in a Pi-hole DNS rather than hosts files, which keeps them easy to track. Services mostly have names that just say what they are, with CNAMEs pointing at the host that runs them (or the load balancer, whatever).
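
    In Pi-hole that’s just a couple of files (the hostnames and IPs below are made-up examples, not my actual naming):

      # /etc/pihole/custom.list -- local A records for the hosts
      192.168.1.10 rei.lan
      192.168.1.11 asuka.lan

      # /etc/dnsmasq.d/05-pihole-custom-cname.conf -- service names as CNAMEs
      cname=jellyfin.lan,rei.lan
      cname=grafana.lan,asuka.lan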







  • Nice, we’ll all look out for an update in a year!

    I try to mix brands and lots (buy a few from one retailer and some from another). I used to work for a storage/NAS company, and we had many incidents where a 12- or 24-drive RAID filled with drives from the same order had multiple drives die within hours of each other, which usually isn’t enough time for replacement/resilvering.


  • Mine are 3x 27k and 1x 47k power-on hours. I just started replacing them… not because they’re old or have any issues, just because they’re becoming too small. Going from 4 TB to 8 TB disks and transferring the old ones to an external RAID enclosure for backups.

    Actually brings up a question I had… what do people think about refurbished drives for a NAS?


  • Finish my migration to my local Kubernetes cluster. Tired of running a mix of VMs, Docker, and bare metal. I’ve got the cluster set up and a few things running; just have to power through the rest.

    I also need to bump the drive size in my NAS, as I’m running low on space and want to leverage it more, not less. (Pods use PVs hosted on the NAS over NFS or iSCSI.)
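
    That pattern looks roughly like this (server address, path, and sizes are placeholders):

      # Static NFS-backed PersistentVolume plus a claim that binds to it.
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: media-pv
      spec:
        capacity:
          storage: 500Gi
        accessModes: [ReadWriteMany]
        nfs:
          server: 192.168.1.20        # the NAS
          path: /volume1/k8s/media
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: media-pvc
      spec:
        accessModes: [ReadWriteMany]
        storageClassName: ""          # skip dynamic provisioning; bind the static PV
        volumeName: media-pv
        resources:
          requests:
            storage: 500Gi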

    And get my offsite backups going again. I had to move last year, which put a real damper on my goals, so there’s a lot of “got the stuff, just have to make it work.”

    Edit: the UDM Pro is pretty nice. That, a rack, and a 2.5G enterprise switch were last year’s acquisitions.