I take my shitposts very seriously.

  • 1 Post
  • 119 Comments
Joined 2 years ago
Cake day: June 24th, 2023

  • How much experience do you have with networking, exactly?

    The DNS record points to a private (RFC 1918) IPv4 address (10.0.0.41), which cannot be accessed from the internet for multiple reasons, the first of which is that it’s almost certainly behind a NAT gateway.

    Your internet provider has given you a single publicly routable IPv4 address and assigned it to the WAN interface on your modem or router. If you want to access a host on the LAN, you’ll first have to configure port mapping or port forwarding on the router. Then you’ll have to open holes in your firewall and accept the fact that every bad actor will try to break into that host unless you know how to set up network security.
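
    For what it’s worth, a couple of lines with Python’s standard ipaddress module will confirm that 10.0.0.41 sits in private address space. This is just a minimal sketch; the commented-out line shows how you’d resolve a real record, and the domain in it is a made-up placeholder:

    ```python
    import ipaddress
    # import socket
    # addr_str = socket.gethostbyname("your.domain.example")  # resolving a real record

    # The address the DNS record in question points to.
    addr = ipaddress.ip_address("10.0.0.41")

    # RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) are private
    # and are never routed across the public internet.
    print(addr.is_private)  # True: reachable only from inside the LAN
    print(addr.is_global)   # False: no ISP will deliver packets to it
    ```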


  • Linux has two different kinds of “used” memory. One is memory allocated for or by running processes that cannot be reclaimed or reallocated to another process. This memory is unavailable. The other kind is memory used for caching (ZFS, write-back cache, etc.) that can be reclaimed and allocated for other things as needed. Memory that is not allocated in any way is free. Memory that is either free or allocated to cache is available.

    It looks like htop shows only unavailable memory as “used”, while Proxmox shows the sum of unavailable and cached memory. Proxmox “uses” 11 GB, but it’s not running out of memory because most of it is “available”.
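
    If you want to see the two numbers side by side on the host itself, the kernel exposes both in /proc/meminfo. Here’s a rough sketch; the “cache excluded” and “cache included” labels are my approximation of how each tool derives its figure, not their exact formulas, and ZFS ARC muddies the picture a little because it isn’t ordinary page cache:

    ```python
    # Read /proc/meminfo (values are in kB) and compare the two notions of "used".
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            fields[key] = int(rest.strip().split()[0])

    total     = fields["MemTotal"]
    free      = fields["MemFree"]        # not allocated to anything
    available = fields["MemAvailable"]   # free + reclaimable cache

    used_excl_cache = total - available  # roughly what htop calls "used"
    used_incl_cache = total - free       # roughly what the Proxmox gauge shows

    gib = 1024 ** 2  # kB -> GiB
    print(f"total:                {total / gib:5.1f} GiB")
    print(f"used, cache excluded: {used_excl_cache / gib:5.1f} GiB")
    print(f"used, cache included: {used_incl_cache / gib:5.1f} GiB")
    print(f"available:            {available / gib:5.1f} GiB")
    ```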



  • As a university sysadmin who spent half a fucking hour yesterday trying to log someone out of a classroom computer’s MS Office software (the “sign out” button did fuck all, go figure): fuck Microsoft, fuck Office, fuck Outlook, fuck OneDrive, fuck their SSO, and their mother too. Next semester I’m sanitizing the computers. Students will use LibreOffice and they’ll like it.

    I might be a little angry.



  • Proxmox is a great starting point. I use it on my home server and at work. It’s built on Debian, with a web interface to manage your virtual machines and containers, the virtual network (trivial unless you need advanced features), virtual disks, and installer images. There are advanced options like clustering and high availability, but you really don’t have to interact with those unless you need them.


  • Well, that’s not true. I live in a Soviet-era house that had an entire second floor built on top of it. We’ve had to drill through the brick walls to replace the natural gas pipes with ones that run outside the walls, we’ve had to dig under the foundation when we got connected to the city’s sewer system (again, Soviet-built), and again when the main water pipe burst and threatened to wash out the foundation. If the load-bearing walls had been constructed to the same “it works” standard as the things we’ve had to fix, we wouldn’t have a house anymore.


  • THEN (and this is the part you don’t seem to understand) the client process has to either waste time solving the challenge or cancel the request. Issuing the challenge is, by the way, orders of magnitude lighter on the server than serving the actual meaningful content. If a new request is sent in the meantime, it will also have to waste time solving its own challenge. The scraper will get through eventually, but the challenge delays the response and reduces the load on the server: while the scrapers are busy computing, it doesn’t have to serve meaningful content to them.
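
    Sticking with the factoring example from this thread (Anubis itself uses a hash-based proof of work, as far as I know, but the economics are identical), here’s a toy sketch of that asymmetry. The primes are arbitrary small values so the script finishes in well under a second; a real challenge would be scaled up:

    ```python
    import time

    # Two arbitrary primes; a real deployment would use much larger ones.
    P, Q = 1_000_003, 1_000_033

    def make_challenge() -> int:
        # Server side: building the challenge is a single multiplication.
        return P * Q

    def solve_challenge(n: int) -> tuple[int, int]:
        # Client side: naive trial division, deliberately slow.
        if n % 2 == 0:
            return 2, n // 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 2
        return n, 1  # n was prime; not a valid challenge anyway

    n = make_challenge()
    start = time.perf_counter()
    p, q = solve_challenge(n)
    print(f"factored {n} into {p} x {q} in {time.perf_counter() - start:.3f}s")
    ```

    Building the challenge costs the server one multiplication; recovering the factors costs the client roughly half a million divisions here, and the gap only widens as the primes grow.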


  • It’s not client-side because validation happens on the server side. The content won’t be displayed until and unless the server receives a valid response, and the challenge is formulated in such a way that calculating a valid answer will always take a long time. It can’t be spoofed because the server will know that the answer is bullshit. In my example, the server will know that the prime factors returned by the client are wrong because their product won’t be equal to the original semiprime. Delegating to a sub-process won’t work either, because what’s the parent process supposed to do? Move on to another piece of content that is also protected by Anubis?

    The point is to waste the client’s time and thus reduce the number of requests the server has to handle, not to prevent scraping altogether.
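
    Continuing the same toy factoring example, a sketch of that validation step: the server keeps the semiprime it handed out, so checking an answer is a single multiplication and a bogus response gets rejected immediately:

    ```python
    def verify(challenge: int, p: int, q: int) -> bool:
        # Reject trivial factors and anything that doesn't reproduce the
        # original semiprime. Checking is effectively free for the server.
        return p > 1 and q > 1 and p * q == challenge

    challenge = 1_000_003 * 1_000_033  # the semiprime the server handed out

    print(verify(challenge, 1_000_003, 1_000_033))  # True: serve the content
    print(verify(challenge, 123_456, 789_012))      # False: spoofed answer, reject
    print(verify(challenge, 1, challenge))          # False: trivial "factorization"
    ```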