
  • https://ipv6now.com.au/primers/IPv6Reasons.php

    Basically, Legacy IP (v4) is a dead end. Under the original allocation scheme, it should have run out in the early 1990s. But the Internet explosion meant TCP/IP(v4) was locked in, so NAT was introduced to stave off address exhaustion. That has caused problems that persist to this day, like mismanaged firewalls and the need for port-forwarding. It also broke end-to-end connectivity, requiring further workarounds like STUN/TURN that continue to plague gamers and video-conferencing software.

    And because of that scarcity, it’s become a land grab where rich companies and countries hoard the limited addresses in circulation, creating haves (North America, Europe) and have-nots (Africa, China, India).

    The case for v6 is technical, moral, and even economic: one cannot escape Big Tech or American hegemony while still having to buy IPv4 space on the open market. Czechia and Vietnam are case studies in pushing for all-IPv6, both to bolster domestic technological expertise and to escape the broader problems of business as usual.

    Accordingly, there are now three classes of Internet users: v4-only, dual-v4-and-v6, and v6-only. Perhaps surprisingly, v6-only is now very common on mobile networks in countries that never had many v4 addresses. And Apple requires all App Store apps to function correctly in a v6-only environment. At a minimum, everyone should have access to dual-stack networks, so they can reach services that might be v4-only or v6-only.
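
    For illustration, a quick way to see which of those classes can reach a given service is to check what it publishes in DNS. A rough Python sketch (standard library only; the hostname is just an example):

        import socket

        def stacks_published(host, port=443):
            """Return the v4 and v6 addresses a hostname resolves to."""
            out = {}
            for family, label in ((socket.AF_INET, "v4"), (socket.AF_INET6, "v6")):
                try:
                    infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
                    out[label] = sorted({info[4][0] for info in infos})
                except socket.gaierror:
                    out[label] = []  # nothing published for this family
            return out

        # A v4-only result here means v6-only users need NAT64/DNS64 to reach it.
        print(stacks_published("example.com"))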

    In due course, the unstoppable march of time will leave v4-only users in the past.


  • You might also try asking on !ipv6@lemmy.world .

    Be advised that even if a VPN offers IPv6, it may not offer it sensibly. For example, some might only give you a single address (ie a routed /128). That might work for basic web fetching, but it's wholly inadequate if you want the VPN to also hand out addresses to any VMs, or if you want each outbound connection to use a unique IP. And that's a fair ask, because a normal v6 network can usually do that, even though a typical Legacy IP network can't.
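
    To make the difference concrete, here's a rough Python sketch of the sort of thing a routed /64 enables: a fresh source address for every outbound connection. The prefix is a documentation placeholder, and it assumes the host can bind arbitrary addresses within the prefix (eg via Linux's AnyIP trick: ip -6 route add local <prefix>::/64 dev lo):

        import socket

        PREFIX = "2001:db8:1234:5678"  # placeholder for the routed /64 from the VPN

        def connect_from(host, port, iid):
            """Open a TCP connection whose source is a chosen address in our /64."""
            s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
            s.bind((f"{PREFIX}::{iid}", 0))  # unique source address per connection
            s.connect((host, port))
            return s

        # Two connections, two distinct public source addresses:
        a = connect_from("example.com", 443, "1")
        b = connect_from("example.com", 443, "2")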

    Some VPNs will offer you a /64 subnet, but their software might not check whether your SLAAC-assigned address is leaking your physical MAC address. Your OS should have privacy extensions (RFC 4941) enabled to prevent this, but good VPN software would explicitly check for that. Not all software does.
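
    For the curious, spotting an EUI-64-derived (ie MAC-leaking) address is easy, because SLAAC builds the interface ID by stuffing 0xfffe into the middle of the MAC. A minimal Python check (addresses are documentation examples):

        import ipaddress

        def looks_like_eui64(addr):
            """Heuristic: does this IPv6 address embed a MAC via modified EUI-64?"""
            b = ipaddress.IPv6Address(addr).packed
            return b[11] == 0xFF and b[12] == 0xFE  # the telltale fffe filler

        print(looks_like_eui64("2001:db8::5054:ff:fe12:3456"))  # True (MAC 52:54:00:12:34:56)
        print(looks_like_eui64("2001:db8::1"))                  # False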


  • Connection tracking might not be strictly necessary for a reverse proxy, but it's worth discussing what happens if connection tracking is disabled or the known-connections table runs out of room. For a well-behaved protocol like HTTP(S), which uses TCP and a fixed inbound port (eg 80 or 443), tracking a connection means being aware of TCP connection state, which the destination OS already has to do. And since a reverse proxy terminates the TCP connection anyway, the added effort of connection tracking is minimal.

    For a poorly-behaved protocol like FTP, which receives initial packets on a fixed inbound port but then spawns a separate port for the data connection, connection tracking means setting up the firewall to allow that ongoing (ie established) traffic to pass in.

    But these are the happy cases. In the event of a network issue that affects an HTTP payload sent from your reverse proxy toward the requesting client, a midway router will send back to your machine an ICMP packet describing the problem. If your firewall is not configured to let all ICMP packets through, the only way in is for conntrack to look up the connection in its table and admit the ICMP packet as “related” traffic. This is not dissimilar to the FTP case above, except that rather than a different port number, it's an entirely different protocol.

    And then there's UDP tracking, which is relevant to QUIC. When hosting a service, UDP is connectionless, so for any inbound packet received on port XYZ, conntrack will permit an outbound reply from port XYZ. But that's redundant, since we presumably had to explicitly allow inbound port XYZ to expose the service in the first place. In the opposite case, where we want to access UDP resources on the network, an outbound packet to port ABC means conntrack will keep an entry to permit the inbound reply from port ABC. If you are doing lots of DNS lookups (typically over UDP), that alone could swamp the conntrack table: https://kb.isc.org/docs/aa-01183

    It may behoove you to first look at what's filling conntrack's table before looking to disable it outright. It may be possible to specifically skip connection tracking for anything already explicitly permitted through the firewall (eg 80/443). Or if the issue is lots of DNS resolution requests from trying to look up spam source IPs, then perhaps the logs should not do synchronous DNS lookups, or you can skip connection tracking for DNS as well.
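
    As a starting point, here's a rough Python sketch for tallying what's in the table. It assumes the legacy /proc/net/nf_conntrack interface is exposed (some systems need a module/sysctl for that; otherwise, conntrack -L output can be parsed much the same way):

        from collections import Counter

        # Count conntrack entries by L4 protocol and destination port.
        tally = Counter()
        with open("/proc/net/nf_conntrack") as f:
            for line in f:
                fields = line.split()
                proto = fields[2]  # eg "tcp", "udp", "icmp"
                dport = next((x[6:] for x in fields if x.startswith("dport=")), "-")
                tally[(proto, dport)] += 1

        # If "udp dport 53" dominates, DNS lookups are what's swamping the table.
        for (proto, dport), count in tally.most_common(10):
            print(f"{proto} dport {dport}: {count}")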


  • https://github.com/Overv/vramfs

    Oh, it's a userspace (FUSE) driver. I was rather hoping it was an out-of-tree Linux kernel driver, since using FUSE will: 1) always bounce through userspace, which costs performance, and 2) destroy any possibility of DMA-enabled memory operations (DPDK is a possible exception). I suppose if the only objective was to store files in VRAM, this does technically meet that, but it's leaving quite a lot on the table, IMO.

    If this were a kernel module, the filesystem performance would presumably improve, limited by how the VRAM is exposed by OpenCL (ie very fast if it's all just mapped into PCIe). And if it were basically offering VRAM as PCIe memory, then the VRAM could potentially serve certain niche RAM use cases, like hugepages: some applications need large quantities of memory, a guarantee that it won't be evicted from RAM, and physical addresses that can be resolved from userspace (eg DPDK, high-performance compute). If such a driver could offer special hugepages backed by VRAM, those applications could benefit.
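
    To make that last requirement concrete, here's a sketch of resolving a physical address from userspace via /proc/self/pagemap, using an ordinary RAM page (needs root; unprivileged reads return a zeroed PFN on modern kernels). A VRAM-backed hugepage driver would have to satisfy the same contract:

        import ctypes, mmap, os, struct

        PAGE = os.sysconf("SC_PAGE_SIZE")

        def virt_to_phys(vaddr):
            """Translate a virtual address via /proc/self/pagemap (root only)."""
            with open("/proc/self/pagemap", "rb") as f:
                f.seek((vaddr // PAGE) * 8)  # one 64-bit entry per virtual page
                entry = struct.unpack("<Q", f.read(8))[0]
            if not (entry >> 63) & 1:        # bit 63: page present in RAM
                return None
            pfn = entry & ((1 << 55) - 1)    # bits 0-54: page frame number
            return pfn * PAGE + (vaddr % PAGE)

        buf = mmap.mmap(-1, PAGE)  # one anonymous page
        buf[0] = 1                 # touch it so it is actually mapped
        vaddr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
        print(hex(vaddr), "->", virt_to_phys(vaddr))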

    And at that point, on systems where the PCIe address space is unified with the system address space (eg x86), it's entirely plausible to use VRAM as if it were hot-inserted memory, because both RAM and VRAM would occupy known regions within the system memory address space, and the existing MMU would control which processes can access which parts of the PCIe-mapped VRAM.

    Is it worth re-engineering the Linux kernel memory subsystem to support RAM over PCIe? Uh, who knows. Though I've always liked the thought of DDR on PCIe cards. I think it was someone from Level1Techs who said that all technologies are doomed to reinvent PCIe.


  • For my own networks, I’ve been using IPv6 subnets for years now, and have NAT64 translation for when they need to access Legacy IP (aka IPv4) resources on the public Internet.
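
    As an aside, the address mapping NAT64 (and DNS64) performs is purely mechanical, per RFC 6052; with the well-known /96 prefix it's nearly a one-liner in Python:

        import ipaddress

        def nat64(v4, prefix="64:ff9b::/96"):
            """Embed an IPv4 address into a NAT64 prefix (RFC 6052, /96 case)."""
            return ipaddress.IPv6Network(prefix)[int(ipaddress.IPv4Address(v4))]

        print(nat64("192.0.2.1"))  # 64:ff9b::c000:201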

    Between your two options, I'm more inclined to recommend the second, because although it requires renumbering existing containers into the new subnet, you would still have one subnet for all your containers, just bigger now. Whereas the first solution would either: A) preclude containers on the first bridge from directly talking to containers on the second bridge, or B) require some sort of awful NAT44 translation to make the two bridges work together.

    So if IPv6 and its massive, essentially-unlimited ULA subnets are not an option, then I’d still go with the second solution, which is a bigger-but-still-singular subnet.


  • I like vimdiff, since it's fairly quick to collapse and expand code chunks if you know the keyboard shortcuts. Actually, since it's vim, knowing the keyboard shortcuts is the entire game lol.

    I usually have vimdiff open in a horizontal pane in tmux, then use the other pane to look at other code that the change references. Could I optimize and have everything in a single vim session? Sure, but at that point I'd also want cscope set up to find references within vim, and then I'm only trivial steps away from a full IDE in vim.

    … which people do have, and more power to them. But alas, I don’t have the luxury of fastidious optimization of my workflow to that degree.


  • In my personal workflow, I fork GitHub and Codeberg repos so that my local machine's “origin” points to my fork, not to the main project. And then I also add an “upstream” remote pointing to the main project. I do this as a matter of course, before even looking at the code on my local machine.

    Why? Because if I do decide to draft a change in the future, I want my workflow to be as smooth as possible. And since the norm is to push to one's own fork and then open a PR from there against upstream, it makes sense to set my “origin” to my fork; most established repos won't let outside contributors push new topic branches directly.

    If I decide there's nothing to commit after all, I'll still leave the fork around, because it's basically zero-cost.

    TL;DR: I fork in preparation of an efficient workflow.


  • For a link of 5.5 km with clear LoS, I would reach for regular 802.11 WiFi, since the extra range of 802.11ah HaLow wouldn't necessarily be needed. For reference, many WISPs use Ubiquiti 5 GHz point-to-point APs for their backhaul links over much greater distances.

    The question would be what your RF link conditions look like, whether 5 GHz is clear in your environment, and what worst-case bandwidth you can accept. With a clear Fresnel zone, you could probably push something like 50 Mbps symmetrical, if properly aimed and configured.

    Ubiquiti’s website has a neat tool for roughly calculating terrain and RF losses.
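
    For a rough sanity check before reaching for such tools, free-space path loss and the first Fresnel zone follow simple formulas. A Python back-of-the-envelope for this 5.5 km link, assuming a 5.8 GHz channel:

        import math

        def fspl_db(d_km, f_mhz):
            """Free-space path loss in dB."""
            return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

        def first_fresnel_radius_m(d_km, f_ghz):
            """Maximum (mid-path) radius of the first Fresnel zone, in metres."""
            return 8.657 * math.sqrt(d_km / f_ghz)

        print(f"path loss: {fspl_db(5.5, 5800):.1f} dB")                    # ~122.5 dB
        print(f"Fresnel radius: {first_fresnel_radius_m(5.5, 5.8):.1f} m")  # ~8.4 m

    So the beam midpoint would want roughly 8 m of clearance (or at least 60% of that, per the usual rule of thumb) for the link to perform as calculated.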


  • Tbf, can’t the other party mess it up with signal too?

    Yes, but this is where threat modeling comes into play. Grossly simplified, developing a threat model means to assess what sort of attackers you reasonably expect to make an attempt on you. For some people, their greatest concern is their conservative parents finding out that they’re on birth control. For others, they might be a journalist trying to maintain confidentiality of an informant from a rogue sheriff’s department in rural America. Yet others face the risk of a nation-state’s intelligence service trying to find their location while in exile.

    For each of these users, there are different potential attackers. Signal is well suited for the first two, but only alright against the third. After all, if the CIA or Mossad is following someone around IRL, there are other ways to crack their communications.

    What Signal specifically offers is confidentiality in transit, meaning that all ISPs, WiFi networks, CDNs, VPNs, script kiddies with Wireshark, and network admins in the path of a Signal convo cannot see the contents of those messages.

    Can the messages be captured at the endpoints? Yes! Someone could be standing right behind you, taking photos of your screen. Can the size or metadata of each message reveal the type of message (eg text, photo, video)? Yes, but that’s akin to feeling the shape of an envelope. Only through additional context can the contents be known (eg a parcel in the shape of a guitar case).

    Signal also benefits from the network effect: someone trying to get away from an abusive SO has plausible deniability when they download Signal on their phone (“all my friends are on Signal” or “the doctor said it's more secure than email”). Or a whistleblower can send a message to a journalist who printed their Signal username in a newspaper. The best place to hide a tree is in a forest. We protect us.

    My main issue for signal is (mostly iPhone users) download it “just for protests” (ffs) and then delete it, but don’t relinquish their acct, so when I text them using signal it dies in limbo as they either deleted the app or never check it and don’t allow notifs

    Alas, this is an issue with all messaging apps when people delete the app without closing their account. I'm not sure there's anything Signal can do about it, but the base guarantees still hold: either the message is securely delivered to their app, or it is never seen. Confidentiality is maintained either way.

    I’m glossing over a lot of cryptographic guarantees, but for one-to-one or small-group private messaging, Signal is the best mainstream app at the moment. For secure group messaging, like organizing hundreds of people for a protest, that is still up for grabs, because even if an app was 100% secure, any one of those persons can leak the message to an attacker. More participants means more potential for leaks.


  • Having previously been on the reviewing side of job applications, if you have GitHub/Codeberg repos with your work, please, please, please include those links somewhere on the resume, ideally spelled out and also clickable in the PDF. It’s a neat trick to showcase more work than what fits on a page.

    Although the non-technical recruiters might gloss over links, the technical reviewers very much look at your code examples. Why? Because seeing your coding style and hygiene, Git workflow and commit messages, documentation, and overall approach to iterative improvement of a codebase is far more revealing than anything that AI-nonsense coding tests can show.

    So while this won't necessarily get your resume past the first gate, always be thinking about the different audiences your resume might be passed around to within the prospective organization.


  • I use LibreOffice as my word processor, with no substantial automation to speak of. And each time I intend to submit a resume, I save off a new copy and tailor it specifically for the recipient employer. After all, what's relevant and worth highlighting (not literally!) for one employer won't be the same as for another.

    Yes, I’m aware that a lot of recruiters/reviewers use LLMs as a first-pass filter, but that’s precisely why my submission should be crafted by hand each time: if it’s an LLM, then I want its checkbox exercises to be easily met, and if it’s a human, I want to put my best foot forward.

    In days of yore, when paper resumes were circulated by hand to prospective employers at career fairs, having a bespoke resume for each would have been difficult to pull off. But with PDF submissions, there's no reason not to gear your submission to exactly the skills a company is looking for.

    To be clear, tailoring a resume does not mean adding fake or hallucinated qualifications that you do not possess. Rather, it means copyediting the resume so that your relevant skills are readily apparent. If you already listed an example project from a prior employer or internship, but a different project would better align with the prospective employer, consider swapping out the example for maximum appeal. Bullet points are particularly easy to rearrange: if you have web-dev skills and the employer wants them, those should move up the list. And so on.

    Although resumes are now mostly PDFs, the custom remains that one's resume should fit on a single sheet of US Letter or A4 paper, both as an informal fairness criterion between applicants and because anything longer is simply more to read. The exceptions are unique cases like professors with long lists of published papers, or systems architects who hold patents. And so the optimization problem is how to most effectively use the space on that one sheet of digital paper.


  • Let me make sure I understand everything correctly. You have an OpenWRT router which terminates a WireGuard tunnel, which your phone will connect to from somewhere on the Internet. When the WireGuard tunnel lands within the router in the new subnet 192.168.2.0/24, you have iptables rules that will:

    • Reject all packets on the INPUT chain (from subnet to OpenWRT)
    • Reject all packets on the OUTPUT chain (from OpenWRT to subnet)
    • Route packets from phone to service on TCP port 8080, on the FORWARD chain
    • Allow established connections, on the FORWARD chain
    • Reject all other packets on the FORWARD chain

    So far, this seems alright. But where does the service run? Is it on your LAN subnet or the isolated 192.168.2.0/24 subnet? The diagram you included suggests that the service runs on an existing machine on your LAN, so that would imply that the router must also do address translation from the isolated subnet to your LAN subnet.

    That’s doable, but ideally the service would be homed onto the isolated subnet. But perhaps I misunderstood part of the configuration.