

SHNORLE developer working to reimplement SSH in the Zorp programming language


Idk of any but I’m interested, commenting to add traffic 👍


The Rust hype is funny because it rests entirely on the fact that a leading cause of security vulnerabilities in all of these mature, secure projects is memory bugs. That's very true, but it misses that memory bugs are the leading cause precisely because these projects are so mature, with highly skilled developers who have already fixed everything simpler.
So you get new Rust projects, sometimes written by people without the experience of those C/C++ devs, who are so confident in the memory safety that they forget about the much simpler security issues.
This is ironic, right? Because the word "Effort" is literally an item in the list. I think this is a whoosh.


Buying new: basically any of the integrated-memory machines like Macs and AMD's new AI chips; after that, any modern (last five years) GPU, focusing only on VRAM (currently Nvidia is better supported in SOME tools).
Buying second hand: you're unlikely to find any of the integrated-memory stuff, so look for any GPU from the last decade that's still officially supported, again focusing on VRAM.
8 GB is enough to run basic small models, 20+ GB for pretty capable 20-30B models, 50+ GB for the 70B ones, and 100-200+ GB for full-sized models.
These are rough estimates, do your own research as well.
For the most part, with LLMs for a single user you really only care about VRAM and storage speed (SSD). Any GPU will generate faster than you can read for anything that fully fits in its VRAM, so the GPU itself only matters if you intend to run large models at extreme speeds (for automation tasks, etc.). Storage is only a bottleneck at model load, so depending on your needs it might not be a big issue for you, but for example with a 30 GB model you can expect to wait 2-10 minutes for it to load into VRAM from an HDD, about 1 minute with a SATA SSD, and about 4-30 seconds with an NVMe drive.
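The load-time math above is just model size divided by disk throughput. A minimal sketch, with ballpark throughput assumptions (not measurements, and real-world numbers vary a lot by drive):

```python
# Rough model-load-time estimator. Throughput values are ballpark
# assumptions for each storage class, in MB/s of sequential read.
THROUGHPUT_MBPS = {
    "hdd": 150,       # spinning disk
    "sata_ssd": 500,  # SATA SSD
    "nvme": 3000,     # mid-range NVMe
}

def load_time_seconds(model_gb: float, storage: str) -> float:
    """Estimate seconds to stream a model of `model_gb` GB from disk."""
    return model_gb * 1024 / THROUGHPUT_MBPS[storage]

for storage in THROUGHPUT_MBPS:
    print(f"30 GB model from {storage}: ~{load_time_seconds(30, storage):.0f} s")
```

With these assumed speeds, a 30 GB model lands at roughly 3.5 minutes from an HDD, about a minute from a SATA SSD, and around 10 seconds from NVMe, which is in line with the ranges above.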


You can sniff the network and see if the TV is connecting anywhere.


It's very, very unlikely that your TV and the device connected to it both support and enable Ethernet over HDMI by default. But if you're unsure, you can test it by connecting it and checking whether the TV gets a network connection.
Personally I also opened my TV and disconnected the wifi card since in theory the TV could also just try to connect to any open wifi in the area without me knowing, but to each their own threat model.


Anything exposed to the internet gets a daily/weekly update, depending on how exposed it is, how stable the updates are, and how critical a breach would be. For example, nginx would be a daily update.
Anything behind a VPN gets a more random update schedule, mostly based on when I feel like it (probably around once a month or every other month).
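That cadence maps onto a couple of crontab entries; a hypothetical sketch (the times and the plain `apt-get` calls are illustrative, not my actual setup):

```
# m h dom mon dow   command
# Exposed host: update daily at 03:00
0 3 * * *   apt-get update && apt-get -y upgrade
# VPN-only host: update monthly, on the 1st at 04:00
0 4 1 * *   apt-get update && apt-get -y upgrade
```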
Tip: look at second-hand sites / FB Marketplace (I know 😒), you can find great deals.
Low risk high tension social interaction with an ego pushing you to show off = easy dopamine


Ollama + Open WebUI + Tailscale/Netbird
Open WebUI provides a fully functional Docker image bundled with Ollama, so just find the section that applies to you (AMD, Nvidia, etc.): https://github.com/open-webui/open-webui?tab=readme-ov-file#quick-start-with-docker-
On that host install Netbird or Tailscale, and install the same on your phone. In Tailscale you need to enable MagicDNS; Netbird, I think, provides DNS by default.
Once the container is running and both your server and phone are connected to the VPN (Netbird or Tailscale), you just type the DNS name of your server into your phone's browser (in Netbird it would be "yourserver.netbird.cloud" and in Tailscale "yourserver.yourtsnet.ts.net").
Check out NetworkChuck on YouTube; he has a lot of simple tutorials.
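As a sketch, the bundled Open WebUI + Ollama image from that README can be started with something like this (the `--gpus=all` flag assumes an Nvidia setup; check the linked quick-start section for the AMD and CPU-only variants, as the exact flags may differ):

```shell
# Open WebUI with Ollama bundled, persisting models and chat data
# in named volumes. Adjust the port and GPU flags to your hardware.
docker run -d \
  -p 3000:8080 \
  --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:ollama
```

After that, the UI should be reachable at the server's VPN DNS name on the published port.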
There are a few reasons someone might use Proxmox. It doesn't have to be just security; it can also be network architectures that don't map well onto Docker, or simply greater control over the services, which is less comfortable in Docker since it's built around pre-built images that run ephemerally. There are also services that either don't ship a pre-built Docker image, where someone might not want to bother building their own image and infrastructure around it, or that rely on technologies that aren't well supported or well executed in Docker.
There's also the fact that Proxmox is meant for production use, which means it's more stable (than some casual Docker install running on whatever distro they have) and has very low overhead. Even if you do use containers, you can run them inside Proxmox, and it gives you a lot of capabilities that add stability and manageability.
Generally speaking if your threat model is very small, you’re running this within your private network, and it’s not exposed to the internet or anything large like that, then it doesn’t really make a big difference and you should probably just use whatever is comfortable for you.
I personally moved to Proxmox for three reasons: security, customizability, and stability. Within Docker I found it a lot more annoying to pull images, write my own Dockerfiles, and rebuild them on every update. I find it easier to have my own server with its dedicated service, one that I built from scratch and know how to update and modify properly. There's also the advantage that I can use whatever OS fits each situation. I personally use Linux exclusively, but even within that I can run different distros, with all kinds of services running without interfering with one another in any way, and in extreme cases I can have a Windows VM.
And another major factor for me was that I just wanted to learn how to do it. I think it's cool, it was interesting, and I had already used Docker to the point that I was comfortable with it; it was time to move on and expand my horizons.
Tip, if you have the room for it, looking for second hand servers (as in actual servers with server hardware) is often really useful.
As you start hosting more stuff, you realize that RAM and CPU cores are very limited in consumer hardware. With a shitty second-hand server you can have more cores and more RAM than anything in the consumer category, and you can stick an old GPU in it if you want better media performance.
But if you truly believe you won't spread out and that 64 GB of RAM and 8 cores will suffice, just go ahead and build it however you want; it's no different from a regular build. Get a nice SSD and a wired Ethernet connection and you're about 90% of the way there.
Edit: everyone else is giving much better advice; ignore my overkill here. For media and simple game servers with a low energy-consumption target, you're probably better off with a mini PC with an integrated GPU, or, if you want to future-proof a bit, maybe one of those unified-memory machines where your RAM is also the VRAM and can deliver pretty good performance.

Don’t be evil*
*definition of evil may be changed at any point and has no relation to the official dictionary.


Google has stopped releasing essential files, like device trees and driver binaries, with the Android 16 source code. These files are critical for developers to build and maintain custom ROMs, especially for Pixel devices. The new reference device for AOSP is now the virtual "Cuttlefish," limiting real-device support.


Only Tailscale for VPN and Backblaze for backup
This implies that my code lives outside of modern tech. I write pure assembly
My system is to do or forget, I mostly forget but at least I don’t have an archive!
GNU