

You’re saying targeting people who are taking steps to improve their privacy and security is ethical? Or do you just believe that there’s no such thing as ethics in CIS?
🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍
You know how popular VPNs are, right? And how they improve privacy and security for the people who use them? And you’re blocking anyone who’s exercising a basic privacy right?
It’s not an ethically sound position.
That’s not what I’m complaining about. I’m unable to access the site because they’re blocking anyone coming through a VPN. I would need to lower my security and turn off my VPN to read their blog. That’s my issue.
They block VPN exit nodes. Why bother hosting a web site if you don’t want anyone to read your content?
Fuck that noise. My privacy is more important to me than your blog.
This is a really strong argument for not depending on non-federated, centrally controlled services. It doesn’t matter which country or company is behind Your Favorite Service™: they can be legally compelled by an Oppressive Regime (“it could never happen in my country!”), or they could just be arbitrary assholes.
I don’t care why Microsoft did it. I moved off GitHub when MS acquired them, although in this case it probably wouldn’t have made a difference. Regardless, what it proves is that you cannot rely on a monopoly.
DS9 was a lot of good, some really spectacularly great (Garak, the character and the arc!), and some really lousy.
I am really confused. Every shell I’ve ever used has had a command history. zsh takes it to the next level, with optional history syncing between running terminals. Histories are always persistent unless you unset HISTFILE. I’ve got my HISTSIZE set to an absurd 10,000 commands, with uniqueness enabled. Ctrl-R gives you type-ahead search through history. Tools like fzf can make it marginally better, but ^r has nearly always been enough. You have `!?` history searching, `!nnnn` for referring to commands by number, and the `history` command to print out all of your history, which you can then grep.
Most of this has been available in shells at least since the mid-’90s, when I really started using Unix in earnest.
So I’m really confused about why you think shells haven’t had persistent histories. What’s new or different about this thing to which you refer?
`man bash` or `man zshall`, then `/HIST`, and that’ll get you started.
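For reference, a minimal zsh setup covering the features mentioned above might look like this (the values are illustrative, not a recommendation):

```shell
# ~/.zshrc -- illustrative zsh history settings
HISTFILE="$HOME/.zsh_history"   # unset this and history stops persisting
HISTSIZE=10000                  # commands kept in memory
SAVEHIST=10000                  # commands written out to HISTFILE
setopt SHARE_HISTORY            # sync history between running terminals
setopt HIST_IGNORE_ALL_DUPS     # the "uniqueness" mentioned above
```

With `SHARE_HISTORY` set, Ctrl-R in one terminal will find commands typed in another.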
Canonically, moving during transport is pre-TNG, even. It first appeared in one of the TOS-cast feature films - maybe even the first, “The Motion Picture”; I don’t recall off the top of my head. But pattern buffers themselves I think weren’t “invented” until after the TOS generation. Probably introduced to address the horrible transporter accidents that could occur, as in the films. Anyway, I don’t think they had the buffer technology before TNG, and I don’t think it affected one way or t’other whether people could move and talk during transport. I think that was more a result of the capabilities of period FX.
I can second tmsu. However, it does require some thoughtful use.
I usually compare it to GMail, which pioneered eliminating folders. notmuch also does a good job here. tmsu isn’t quite as effortless - granted, it has a harder job in having to support multiple file types, not all of which are automatically indexable - but for tmsu to be effective I find I have to make extra (non-trivial) effort to manually tag files, unlike, say, GMail and notmuch, where I only care about tags when I’m searching. buku is somewhere in between: you can get by without manually tagging, but it isn’t perfect, and manually tagging is still better and isn’t much extra work.
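For anyone curious, the manual-tagging workflow looks roughly like this (the file names are made up; the commands are standard tmsu):

```shell
# one-time, in the directory tree you want tagged
tmsu init

# the manual effort: each file gets tagged by hand
tmsu tag thesis-draft.pdf writing 2024 important
tmsu tag vacation.jpg photos travel

# the payoff: searching is cheap once the tagging is done
tmsu files writing and 2024
tmsu tags thesis-draft.pdf
```

The queries support `and`/`or`/`not`, which is where it starts to feel GMail-like - but only if you kept up with the tagging.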
Yeah, but I’m a little sad about Gowron’s arc. I liked the character, and I thought his reaction to Martok, and his ending, were… not honorable. The writers did him dirty. Just IMHO.
If Jekyll isn’t your jam, then Hugo probably won’t be, either.
I have a simple workflow based on a script on my desktop called “blog”. I invoke it with “blog Some blog title” and it looks in a directory for a file named `some_blog_title.md`; if it finds it, it opens it in my editor, and if it doesn’t, it creates it from a `template.md` that has some front matter filled in by the script. When I exit the editor, the script tests the modtime, updates the `changed` front matter, and then rsyncs the whole blog directory to my server, where Hugo picks it up and regenerates the site if anything changed.
My script is 133 lines of bash, mostly file name sanitization and front matter rewriting; it’s just a big convenience function that could be replaced by three lines of typing, a little thought, and a little more editing of the template.
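A stripped-down sketch of that kind of wrapper - assuming a hypothetical layout under `~/blog` and a `template.md` with a `{{title}}` placeholder; this is not the actual script, just the shape of it:

```shell
#!/bin/sh
# Hypothetical sketch of the "blog" wrapper described above.

# The "file name sanitization" step: lowercase, squeeze everything
# that isn't alphanumeric down to single underscores.
slugify() {
    printf '%s' "$1" | tr '[:upper:]' '[:lower:]' \
        | tr -cs 'a-z0-9' '_' | sed 's/_$//'
}

if [ -n "${1-}" ]; then
    blogdir="$HOME/blog/content"            # assumed layout
    post="$blogdir/$(slugify "$*").md"
    # create from template on first use, filling in front matter
    [ -f "$post" ] || sed "s/{{title}}/$*/" "$blogdir/template.md" > "$post"
    before=$(stat -c %Y "$post")
    ${EDITOR:-vi} "$post"
    # only rewrite the 'changed' field and re-sync if the file was edited
    if [ "$(stat -c %Y "$post")" -gt "$before" ]; then
        sed -i "s/^changed:.*/changed: $(date -I)/" "$post"
        rsync -az "$HOME/blog/" example.com:blog/  # Hugo rebuilds server-side
    fi
fi
```

The modtime check is what keeps a no-op edit from triggering a pointless rsync and rebuild.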
There’s no federation, though. I’m not sure what a “federated blog” would look like, anyway; probably something like Lemmy, where you create a community called “YourName”. What’s the value of a federated blog?
Edit: Oh, I forgot until I just checked it: the script also does some markdown editing to create gem files for the Gemini mirror; that’s at least a third to a half of the script (yeah, 60 LOC without the Gemini stuff), which you don’t need if you’re not trying to support a network that never caught on and that no-one uses.
I thought the buffer was more like stasis than a VR. I only really saw it referenced a lot in Strange New Worlds, where it was pretty clearly static.
Probably because of misunderstanding the project, and thinking it has something to do with Go (the language). Or, maybe not. But programmers can be really uncreative when naming projects; <language><function> is a pretty common naming scheme.
You have Frank Sinatra at your house‽
I miss Lower Decks so much. As much as TOS, and when I was growing up, TOS was all we had.
@Xanza’s suggestion is a good one. For me, it’s sufficient to fuse mount the backup and check a few files. It’s not comprehensive, but if a few files I know changed look good, I figure they all probably are.
Oh, hell yeah. That was TOS, which is still, in my opinion, the best series, and absolutely the best time line.
I do not like the current time line, where Starfleet has black ops and performs CIA-level “diplomacy.”
I’m not the person who brought git up.
Then I apologize. All I can offer is that it’s a weakness of my client that it’s difficult and outside the inbox workflow to see any history other than the comment to which you’re replying. Not an excuse; just an explanation.
Work is the thing you’re complaining about, not the proof.
If given the option, I’d prefer all computing to have zero cost, sure. But no, I’m not complaining about the work. I’ll complain about inefficient work, but the real issue is work for work’s sake - in particular, systems designed so that the only important fact is proving that someone burned X pounds of coal to get a result. Because, while exaggerated and hyperbolically stated, that’s exactly what Proof-of-Work systems are. All PoW systems care about is that the client provably consumed a certain amount of CPU power. The result of the work is irrelevant for anything but proving that someone did work.
With exceptions like BOINC, the work itself from PoW systems provides no other value.
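To make that concrete, here’s a toy PoW round in shell - low difficulty, made-up challenge string; note that the hash found has no use beyond proving the loop ran:

```shell
# Grind nonces until sha256(challenge + nonce) starts with "00"
# (~256 attempts on average). Nothing about the answer is useful;
# only the CPU time burned finding it "counts".
challenge="made-up-challenge"
nonce=0
while :; do
    hash=$(printf '%s%s' "$challenge" "$nonce" | sha256sum | cut -d' ' -f1)
    case $hash in 00*) break ;; esac
    nonce=$((nonce + 1))
done
echo "nonce=$nonce hash=$hash"
```

Real systems just crank the required prefix length up until the grind costs whatever they want it to cost.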
Compare this to endlessh.
This is probably wrong, because you’re using the salesman idea.
It’s not. Computer networks can open only so many sockets at a time; threading on a single computer is finite, and programmers normally limit the amount of concurrency because high concurrency can itself cause performance issues.
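The kind of limit meant here, sketched with xargs (the hosts are placeholders): no matter how long the input list gets, only four jobs ever run at once.

```shell
# Run at most 4 "fetches" concurrently; xargs queues the rest.
hosts="host1 host2 host3 host4 host5 host6"
out=$(printf '%s\n' $hosts | xargs -P4 -I{} echo "fetched {}")
echo "$out"
```

A scraper that holds a connection open per target hits exactly this kind of ceiling, which is why slow responders tie up real capacity.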
If they’re going to use the energy anyway, we might as well make them get less value.
They’re going to get their value anyway, right? This doesn’t stop them; it just makes each call more expensive. In the end, they do the work and get the data; it just costs them - and the environment - more.
Do you think this will stop scrapers? Or is it more of a “fuck you”, but with a cost to the planet?
Honeypots are a better solution; they’re far more energy efficient, and have the opportunity to poison the data. Poisoned data is more like what you suggest: they’re burning the energy anyway, but are instead getting results that harm their models. Projects like Nepenthes go in the right direction. PoW systems are harmful - straight up harmful. They’re harmful by preventing access to people who don’t use JavaScript, and they’re harmful in exactly the same way crypto mining is.
It’s a rant, for sure.
first of all, bitcoin in its original form was meant to be used as a transaction log between banks.
Satoshi Nakamoto, the guy who invented Bitcoin, was motivated by a desire to circumvent banks. Bitcoin is the exact opposite of what you claim:
A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. … Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments. … What is needed is an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party.
https://www.bitcoin.com/satoshi-archive/whitepaper/
My comment is a rant, because I constantly see these strongly held opinions about systems by people who not only know nothing about the topic, but who believe utterly false things.
cryptocurrencies result in a centralisation of power by default, whether they use proof of work or proof of stake, because they are built so that people with more resources outside the network can more easily get sway over the system
Ok, now I have to wonder if you’re just trolling.
Bitcoin, in particular, has proven to be resilient against such takeovers. They’ve been attempted in the past several times, and successfully resisted.
Interesting. The most common setup I encounter is when the VPN is implemented in the home router - that’s the way it is in my house. If you’re connected to my WiFi, you’re going through my VPN.
I have a second VPN, which is how my private servers are connected; that’s a bespoke peer-to-peer subnet set up in each machine, but it handles almost no outbound traffic.
My phone detects when it isn’t connected to my home WiFi and automatically turns on the VPN service for all phone data; that’s probably less common. I used to just leave it on all the time, but VPN over VPN seemed a little excessive.
It sounds like you were the victim of a DoS attack - not distributed, though. It could have just been done directly; what about it coming through a VPN made it worse?