

Let people post whatever they want.


I’ve known that this sort of thing was on the way but… god this is terrifying.
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    container_name: qbittorrent
    environment:
      - PUID=888
      - PGID=888
      - TZ=Australia/Perth
      - WEBUI_PORT=8080
    volumes:
      - ./config:/config
      - /srv/downloads:/downloads
    restart: unless-stopped
    network_mode: "container:wg_out"
This is my compose.yml for a qbittorrent instance.
The part you’re interested in is the final line. There’s another container running the wireguard instance, called “wg_out”. This network mode attaches the qbittorrent container to that wireguard container’s network stack.
I’d seen gluetun mentioned but didn’t know what it was for until a moment ago.
I’ve heard of tailscale and at least know what that does but never used it.
I personally have a mullvad subscription. I have a container connected to that with wireguard, and then for services I want to use that VPN I just configure them to use the network stack from that container.
I’m not suggesting that my way is the best but it’s worked well for several years now.
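For reference, a minimal sketch of what the “wg_out” container’s compose might look like. The image, paths, and published port are assumptions (I’m using the common linuxserver wireguard image as an example); the important parts are the container_name matching the `container:wg_out` reference, and that any ports (like qbittorrent’s WebUI) must be published here, since the other container shares this network stack:

```yaml
services:
  wg_out:
    image: lscr.io/linuxserver/wireguard   # assumed image; any wireguard container works
    container_name: wg_out                 # must match network_mode: "container:wg_out"
    cap_add:
      - NET_ADMIN
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    environment:
      - PUID=888
      - PGID=888
      - TZ=Australia/Perth
    volumes:
      - ./wg-config:/config   # hypothetical path; your Mullvad wg0.conf goes here
    ports:
      - 8080:8080             # qbittorrent's WebUI, exposed via this shared stack
    restart: unless-stopped
```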


continuity of the star trek universe
Actually I think this is a fairly low priority if you want to gather more viewers.


Sorry I’m still not really sure what you’re asking for.
I use Open Web UI, which is the worst name ever, but it’s a web ui for interacting with chat format gen AI models.
You can install that locally and point it at any of the models hosted remotely by an inference provider.
So you host the UI but someone else is doing the GPU intensive “inference”.
There seem to be some models for this task available on huggingface, like this one:
https://huggingface.co/fakespot-ai/roberta-base-ai-text-detection-v1
The difficulty may be finding a model which is hosted by an inference provider. Most of the models available on huggingface are just the binary model which you can download and run locally. The popular ones are hosted by inference providers so you can just point a query at their API and get a response.
As an aside, it’s possible or likely that you know more about how Gen AI works than I do, but I think this type of “probability table for the next token” is from the earlier generations. Or, this type of probability inference might be a foundational concept, but there’s a lot more sophistication layered on top now. I genuinely don’t know. I’m super interested in these technologies but there’s a lot to learn.
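To illustrate what I mean by a “probability table for the next token”: here’s a toy bigram model in Python. This is nothing like a modern transformer (which learns far richer representations), but the foundational idea of producing a probability distribution over the next token is the same. The corpus and function names are made up for the example:

```python
from collections import Counter

# Toy corpus: a "language model" reduced to counting which word follows which
corpus = "the cat sat on the mat the cat ate".split()

# Count bigram transitions (current word -> next word)
transitions = Counter(zip(corpus, corpus[1:]))

def next_token_probs(word):
    """Probability distribution over the next token, given the current one."""
    counts = {b: c for (a, b), c in transitions.items() if a == word}
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

# After "cat", the corpus shows "sat" once and "ate" once -> 0.5 each
probs = next_token_probs("cat")
```

A real model replaces the counting with a learned neural network, but generation is still “sample a next token from a distribution, append it, repeat”.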


There are no decent GPT-detection tools.
If there were they would be locally hosted language models, and you’d need a reasonable GPU.
Confirmation bias.


The rules about insider trading only exist to make poor people feel better.

I feel like you must have slipped through a rift in the time / space continuum and are visiting us from another reality.
The first step would be to educate people with anarchist literature.
In 2025, we’re completely unable to expect “people” at large to engage in any kind of reasoning.
You can’t just propose to “educate people with anarchist literature” like that’s some kind of solution.

How do you propose to do that given the current slide into fascism?

Well yeah any potential solution is going to mean reduced potential profit, which you could describe as degrowth, but that would be the worst possible way to describe it.
The reason why we haven’t gotten anywhere with carbon reduction is because wealthy people block any efforts.
Telling them that degrowth is the solution is unlikely to motivate them.


The console for quick and dirty understanding, but the inspector for more complex fixes.


Honestly, I don’t really have any idea how a laser printer works beyond the basics.
However, someone has invested the time to create an opensource inkjet printer. It’s a fair assumption that firstly, they know more about printers and hardware than either of us and secondly, they also know everyone prefers laser printers.
Those two assumptions lead me to the conclusion that there’s a significant barrier to producing an opensource laser printer of which you’re not aware.
My comment, although unnecessarily douchey, was an allusion to the age old refrain of open source enthusiasts everywhere: if the project isn’t good enough for you, fork it and make your own.


Ok, well… we’re all looking forward to you publishing the repo for an opensource laser printer then I guess.


Probably not that much I guess.
I mean if you could net $200 or so per hour of turd sifting I’d be game with the economy the way it is and all.




Everyone knows that, but the comment you replied to explains why anything else just isn’t feasible.
I’ve never used tailscale but use wireguard extensively.
There’s not much of a learning curve for you as the administrator. You have to discard some misconceptions you might bring from other VPNs but really after 30 minutes of looking at configs you’ll get it.
I use wireguard for my small team of 5 people to access self hosted services. You install wireguard, load the config, and then it just works.
The trick, if it can be called that, is using public dns for private services.
On your server, suppose you have service-a, service-b, and service-c in containers with IP addresses in the 10.0.2.0/24 range. Then you’d have a reverse proxy like traefik at 10.0.2.1. You’d also create a wireguard container with an IP in that same 10.0.2.0/24 range, and configure its wireguard adapter to be 10.0.12.1 or something, so you have “2” for the containers and “12” for the wireguard clients.
Then in the wireguard client configurations you direct all traffic for 10.0.2.0/24 through the tunnel, but everything else just uses the device’s normal internet connection.
Finally, create a public DNS record pointing to the reverse proxy, like *.mydomain.com > 10.0.12.1.
Now whatever.mydomain.com will resolve to your reverse proxy, but it’s still only reachable from devices connected to the wireguard container on your server.
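Putting the client side together, a minimal sketch of a client wireguard config under those assumptions. The keys, endpoint, and port are placeholders, and I’ve added the 10.0.12.0/24 range to AllowedIPs so the client can also reach the server’s tunnel address that the DNS record points at:

```
[Interface]
# Client address in the "12" range reserved for wireguard peers
Address = 10.0.12.2/32
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.mydomain.com:51820   # placeholder endpoint
# Route only the container and tunnel subnets through the tunnel;
# everything else uses the device's normal connection
AllowedIPs = 10.0.2.0/24, 10.0.12.0/24
```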