Is 2026 going to be the year of the Linux handheld? /s
(No, Android is no more a Linux OS than the PlayStation is a BSD fork.)
I take my shitposts very seriously.


Open config.php and look for the entry named trusted_domains. Make sure it contains both the domain name and the local IP address:
'trusted_domains' => array(
  0 => 'nextcloud.your.domain', // the public FQDN
  1 => '172.22.?.?',            // the local IP address
  2 => '...',                   // other addresses, e.g. if you're using a VPN
),
If the web app is opened using an address or DNS name that isn’t included in this list, the browser will connect, but Nextcloud will refuse to serve the app (you get the “untrusted domain” error page).
Never mind, I completely overlooked that the service is Opencloud, not Nextcloud. Nevertheless, it’s worth investigating whether Opencloud has an equivalent config variable.


PocketOS founder blames ‘Cursor running Anthropic’s flagship Claude Opus 4.6’
Fuck that. I’m blaming the PocketOS founder and every person in the chain of decisions that led to a clanker being given this level of unrestricted access to the database and the backups.


https://tailscale.com/docs/how-to/set-up-https-certificates#machine-names-in-the-public-ledger
Your machine names and tailnet domain name will be added to a publicly accessible list when a new certificate is issued to one of your machines. Certificate Transparency (CT) is meant to verify, through one or more third parties, that a certificate was issued to a particular DNS name. This isn’t unique to Tailscale: all other CAs do this, and modern browsers will refuse to connect to websites if they can’t verify the certificate through at least one CT log.
This doesn’t expose your systems any more than getting a DNS entry and a certificate from other sources. If you don’t want your tailnet and machine names out in the public, you’ll have to use self-signed certs and self-hosted HTTPS-capable servers or proxies.


Right at this moment, I’m rebuilding my homelab after a double HDD failure earlier this year.
The previous build had a RAID 5 array of three 1TB Seagate Barracudas that I picked out of the scrap pile at work. I knew what I was getting into and only kept replaceable files on it. When one of the drives started doing the death rattle, I decided to yank some harder-to-acquire files to my 3TB desktop HDD before trying to resilver the entire array. Guess which device was the next to fail. I could mount and read it, but every operation took 2-5 minutes. SMART showed a reallocation count in the thousands. That drive contained some important files that I couldn’t replace, which were backed up to the (now dead) server. Fortunately ddrescue managed to recover damn near everything and I only lost 80 kilobytes out of the entire disk. That was a very expensive lesson that I’ve learned very cheaply.
The new setup has a RAIDz1 pool of 3x 4TB Ironwolf disks (constrained by the available SATA sockets on the motherboard), plus a new SSD for the OS and 16GB RAM (upgraded from literally the first SSD I ever bought and 10GB of mismatched DDR3).
Mounting it was a bit of a dilemma. The previous array was simply mounted to the filesystem from fstab and bind-mounted to the containers. I definitely wanted the storage to be managed from Proxmox’s web UI and to be able to create VDs and LXC volumes on it. Some community members helped me choose ZFS over LVM-on-RAID5. Setting up the correct permissions wasn’t as much of a headache as last time. I’ve just managed to get a Samba+NFS+HTTP file server and Jellyfin running and talking to each other. Forgejo and Nextcloud will be next.


Looks like the ejector switch. Imagine trying to scratch your balls mid-mission and immediately shooting out of the plane pulling break-your-fucking-spine Gs.


Just install linux bro, it’s not that difficult. You’ll have to compile the F-35 drivers from source, but that’s just the cost of having a reliable system.


ZFS uses RAM intensively for caching (the ARC), way more than traditional filesystems. The recommended cache size is 2 GB plus 1 GB per terabyte of capacity. For my server, that would be most of the RAM dedicated entirely to the filesystem.
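Plugging this build's numbers (3x 4 TB disks, 16 GB RAM) into that rule of thumb as a quick sanity check; whether you count raw or usable RAIDz1 capacity changes the answer a bit:

```python
# ZFS ARC sizing rule of thumb: 2 GB base + 1 GB per TB of pool capacity.
BASE_GB = 2
raw_tb = 3 * 4                    # three 4 TB disks
usable_tb = raw_tb - 4            # RAIDz1: one disk's worth of space goes to parity

rec_raw = BASE_GB + raw_tb        # counting raw capacity
rec_usable = BASE_GB + usable_tb  # counting usable capacity

ram_gb = 16
print(rec_raw, rec_usable)        # 14 10
print(rec_usable / ram_gb, rec_raw / ram_gb)  # fraction of this box's RAM
```

So somewhere between 10 and 14 GB of the 16 GB, depending on how you count. In practice the ARC shrinks under memory pressure, so this is a comfort target rather than a hard requirement.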
NilePink: Making my own estradiol from dairy products
(According to one random comment under hbomberguy’s soy diet video, milk is full of mammalian sex hormones. I’m certain Nigel would be able to separate them, if that is true.)


Read my comment again, it has the answer. Most VPN services do not provide end-to-end tunnelling. If the exit node is located outside Russia, then what enters the Russian internet will be simple HTTPS traffic.


Been running it from Russia where stock WireGuard stopped working mid-2025.
Sounds like the issue is ISPs within Russia blocking outgoing WireGuard traffic from customers.
If the traffic exits the tunnel without hitting a Russian ISP (e.g. a Mullvad exit node in Sweden that routes the unencrypted traffic to the destination), you won’t be affected. If the exit node is behind a Russian ISP, it might get filtered by DPI depending on which direction is subject to the filter.
The person behind their twitter account is a notorious shitter.
It’s problematic, but possible: https://jamesguthrie.ch/blog/multi-tailnet-unlocking-access-to-multiple-tailscale-networks/
If the other person has a Tailscale account, it sounds like the most expedient method is to simply invite them to the tailnet as a non-admin user with strict access control.
You could share a node with an outside user, but I don’t know how much the quarantine would affect its functionality. You could also use Funnel to expose the node to the internet (essentially a reverse proxy), but that comes with obvious security considerations.
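For the invited-user route, the “strict access control” part lives in the tailnet policy file. A minimal sketch, assuming a single service to share; the user, hostname, and port are hypothetical placeholders:

```
// Tailnet policy file (HuJSON); names below are made up for illustration
{
  "acls": [
    // Let the guest reach exactly one service on one machine, nothing else
    {
      "action": "accept",
      "src":    ["guest@example.com"],
      "dst":    ["my-fileserver:443"]
    }
  ]
}
```

Since ACLs are default-deny once you define them, the guest gets that one host:port and nothing else on the tailnet.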


The trekkie in me wants BookData.
(edit) This made me remember The Measure Of A Man and now I’m fucking depressed. They had such high hopes for the future.
Fuck, I’m an idiot. I really shouldn’t be giving advice when I’m sleep-deprived like this. I completely forgot that when I used RDP, I did it through an SSH tunnel.
Removed.
deleted by creator
Three important factors:
Mine is using a network share to transfer files faster than any USB device we have at home.
Realistically, I don’t think any year will be the year Linux for mobile takes off, ironically for the same reason Windows Phone failed: app availability, a.k.a. the “will my banking app work on it?” problem.