Admiral Patrick

I’m surprisingly level-headed for being a walking knot of anxiety.

Ask me anything.

Special skills include: Knowing all the “na na na nah nah nah na” parts of the Three’s Company theme.

I also develop Tesseract UI for Lemmy/Sublinks

Avatar by @SatyrSack@feddit.org

  • 109 Posts
  • 518 Comments
Joined 3 years ago
Cake day: June 6th, 2023

  • Basically, the only things you want to present with a challenge are the paths/virtual hosts for the web frontends.

    Anything /api/v3/ is client-to-server API (i.e. how your clients talk to your instance) and needs to be obstruction-free. Otherwise, clients/apps won’t be able to use the API. Same for /pictrs, since that proxies through Lemmy and is a de facto API endpoint (even though it’s a separate component).

    Federation traffic also needs to be exempt, but it’s matched not by route but by the HTTP Accept request header and the request method.

    Looking at the Nginx proxy config, there’s this mapping which tells Nginx how to route inbound requests:

    nginx_internal.conf: https://raw.githubusercontent.com/LemmyNet/lemmy-ansible/main/templates/nginx_internal.conf

        map "$request_method:$http_accept" $proxpass {
            # If no explicit matches exists below, send traffic to lemmy-ui
            default "http://lemmy-ui:1234/";
    
            # GET/HEAD requests that accepts ActivityPub or Linked Data JSON should go to lemmy.
            #
            # These requests are used by Mastodon and other fediverse instances to look up profile information,
            # discover site information and so on.
            "~^(?:GET|HEAD):.*?application\/(?:activity|ld)\+json" "http://lemmy:8536/";
    
            # All non-GET/HEAD requests should go to lemmy
            #
            # Rather than calling out POST, PUT, DELETE, PATCH, CONNECT and all the verbs manually
            # we simply negate the GET|HEAD pattern from above and accept all possibly $http_accept values
            "~^(?!(GET|HEAD)).*:" "http://lemmy:8536/";
    

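    As a quick sanity check, you can see both routing paths from that map with curl (a sketch; example.com stands in for your own instance’s domain): a plain GET comes back as HTML from lemmy-ui, while the same path requested with an ActivityPub Accept header is answered with JSON by the lemmy backend.

        # Browser-style GET -> no explicit match in the map -> default -> lemmy-ui (HTML)
        curl -s https://example.com/c/news | head -c 200

        # Same path, but asking for ActivityPub JSON -> routed to lemmy:8536
        curl -s -H 'Accept: application/activity+json' https://example.com/c/news | head -c 500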

  • Granted, I don’t think the instance level URL filters were meant to be used for the domains of other instances like I was doing here. They’re more for blocking spam domains, etc.

    e.g. I also have those spam sites you see in c/News every so often in that block list (dvdfab [dot] cn, digital-escape-tools [dot] phi [dot] vercel [dot] app, etc.), so I never see/report them because they’re rejected immediately.

    During one of the many, many spam storms here, admins wanted those filters to stop anything that matched from federating in at all, instead of just changing the text to “removed” on the frontend. So it is a good feature to have. Just maybe applied too widely.

    Though I think if a user edited their own description to include a widely-blocked URL (no URLs are blocked by default), they’d just be soft-banning themselves from everywhere that has that domain blocked.

    If a malicious community mod edited their communities’ descriptions to include a widely-blocked URL, then yeah, that could cut off new posts coming in to any instance that has that domain blocked (old posts and the community itself would still be available).

    All of those would require instances to have certain URLs blocked. The list of blocked URLs for an instance is publicly available from the info in getSite API call, so it wouldn’t be hard to game if someone really wanted to. Fortunately, most people are too busy gaming the “delete account” feature right now 🙄.
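    For example, anyone can pull that list straight from the API (a sketch; example.com is a placeholder instance, and the blocked_urls field name is from recent 0.19.x releases, so it may differ on older versions):

        # The instance's URL block list is part of the public site info response
        curl -s https://example.com/api/v3/site | jq '.blocked_urls'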


  • Admiral Patrick@dubvee.org to Fediverse@lemmy.world · Ghost of Lemm.ee?

    The person who cross-posted it was almost certainly from your local instance.

    You only ever interact with your local instance’s copy of any community, even remote ones. If the community belongs to a remote instance that is either offline or has since been de-federated, there’s nothing that prohibits you from interacting with it*. Because lemm.ee is no longer there to federate out the post/comments to any of the community’s subscribers, only people local to your instance will see it.

    *Admins can remove the community and, prior to it going offline, mods can lock it. But if an instance just disappears, you can still locally interact with any of its communities on your instance; the content just won’t federate outside your instance.





  • I haven’t been to Odysee for a good while, but is it still Rumble-lite?

    I only learned of Odysee because I saw a video linked to it here and went directly to the video. When I saw it had embed code, I added support in Tesseract UI so the videos would play from the post. Then I went to the main site and saw the front page full of rightwing nutjob rants and vaccine skepticism and was like “nope”. Had I seen that beforehand, I wouldn’t have added embed support, but the work was already done so I left it in. That’s basically why I refuse to add embed support for Rumble.

    Wondering if ownership/leadership/policies have changed since about 2 years ago when I wrote the embed components for it and last interacted with it.





  • I also run (well, ran) a local registry. It ended up being more trouble than it was worth.

    Would you have to docker load them all when rebuilding a host?

    Only if you want to ensure you bring the replacement stack back up with the exact same version of everything or need to bring it up while you’re offline. I’m bad about using the :latest tag so this is my way of version-controlling. I’ve had things break (cough Authelia cough) when I moved it to another server and it pulled a newer image that had breaking config changes.

    For me, it’s about having everything I need on hand in order to quickly move a service or restore it from a backup. It also depends on what your needs are and the challenges you’re trying to overcome. For instance, when I started doing this style of deployment, I had slow, unreliable, and heavily data-capped internet. Even if my connection was up, pulling a bunch of images was time-consuming and ate away at my measly satellite internet data cap. Having the ability to rebuild stuff offline was a hard requirement when I started doing things this way. That’s no longer a limitation, but I like the way this works so I’ve stuck with it.

    Everything a service (or stack of services) needs is all in my deploy directory which looks like this:

    /apps/{app_name}/
        docker-compose.yml
        .env
        build/
            Dockerfile
            {build assets}
        data/
            {app_name}
            {app2_name}  # If there are multiple applications in the stack
            ...
        conf/                   # If separate from the app data
            {app_name}
            {app2_name}
            ...
        images/
            {app_name}-{tag}-{arch}.tar.gz
            {app2_name}-{tag}-{arch}.tar.gz
    

    When I run backups, I tar.gz the whole base {app_name} folder, which includes the deploy file, data, config, and dumps of its services’ images, and pipe that over SSH to my backup server (rsync also works for this). The only ones I do differently are ones with in-stack databases that need a consistent snapshot.
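    The backup itself is just a pipe, something like this (a sketch with placeholder host and paths; stacks with in-stack databases get a proper dump first, as noted above):

        APP=myapp
        # Stream the whole app folder (compose file, data, config, image dumps) to the backup box
        tar -czf - -C /apps "$APP" | ssh backup.example.com "cat > /backups/${APP}-$(date +%F).tar.gz"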

    When I pull new images to update the stack, I move the old images and docker save the now current ones. The old images get deleted after the update is considered successful (so usually within 3-5 days).

    A local registry would work, but you would have to re-tag all of the pre-made images to your registry (e.g. docker tag library/nginx docker.example.com/nginx) in order to push them to it. That makes updates more involved and was a frequent cause of me running 2+ year old versions of some images.
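    For reference, that extra re-tag/push step looks roughly like this for each upstream image (a sketch; docker.example.com and the nginx tag are just placeholders):

        docker pull nginx:1.27
        docker tag nginx:1.27 docker.example.com/nginx:1.27
        docker push docker.example.com/nginx:1.27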

    Plus, you’d need the registry server and any infrastructure it needs (DNS, file server, reverse proxy, etc.) before you could bootstrap anything else. Or if you’re deploying your stack to a different environment outside your own, your registry server might not be available.

    Bottom line is I am a big fan of using Docker to make my complex stacks easy to port around, back up, and restore. There are many ways to do that, but this is what works best for me.


  • Yep. I’ve got a bunch of apps that work offline, so I back up the currently deployed version of the image in case of hardware or other failure that requires me to re-deploy it. I also have quite a few custom-built images that take a while to build, so having a backup of the built image is convenient.

    I structure my Docker-based apps into dedicated folders with all of their config and data directories inside a main container directory so everything is kept together. I also make an images directory which holds backup dumps of the images for the stack.

    • Backup: docker save {image}:{tag} | gzip -9 > ./images/{image}-{tag}-{arch}.tar.gz
    • Restore: docker load < ./images/{image}-{tag}-{arch}.tar.gz

    It will back up/restore with the image name and tag used during the save step. The load step will accept a gzipped tar, so you don’t even need to decompress it first. My older stuff doesn’t have the architecture in the filename, but I’ve started adding that lately now that I have a mix of amd64 and arm64.
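    If you want to dump everything a compose stack uses in one go, something like this works (a sketch; it assumes Compose v2’s docker compose config --images and that every image reference includes a tag):

        arch=$(uname -m)   # prints x86_64/aarch64; map to amd64/arm64 if you prefer those names
        mkdir -p ./images
        for img in $(docker compose config --images); do
            name=$(basename "${img%%:*}")   # repo name without any registry path
            tag=${img##*:}                  # tag portion of the reference
            docker save "$img" | gzip -9 > "./images/${name}-${tag}-${arch}.tar.gz"
        done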