• 0 Posts
  • 44 Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • no matter how much you “love” your AI girlfriend she will never truly love you back because she can’t think or feel, and fundamentally isn’t real.

    On one hand, yeah, current generative AIs don’t have anything that approximates that as a mechanism. I would expect that to start being built in the future, though.

    Of course, even then, one could always assert that any feelings in any mental model, no matter how sophisticated, aren’t “real”. I think that Dijkstra had a point as to the pointlessness of our arguments about the semantics of mechanisms of the mind, that it’s more-interesting to focus on the outcomes:

    “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”

    Edsger Dijkstra


  • Will more VRAM solve the problem of not retaining context?

    IIRC — I ran KoboldAI with 24GB of VRAM, so wasn’t super-constrained — there are some limits on the number of tokens that can be sent as a prompt imposed by VRAM, which I did not hit. However, there are also some imposed by the software; you can only increase the number of tokens that get fed in so far, regardless of VRAM. More VRAM does let you use larger, more “knowledgeable” models, as well as putting more layers on a given GPU.

    I’m not sure whether those are purely-arbitrary, to try to keep performance reasonable, or if there are other technical issues with very large prompts.

    It definitely isn’t capable of keeping the entire previous conversation (once you get one of any length) as an input to generating a new response, though.

    EDIT: I think that last I looked at KoboldAI — I haven’t run it recently — the highest token count per prompt one could use was 2048, and this seems to mesh with that:

    https://www.reddit.com/r/KoboldAI/comments/yo31hj/can_i_get_some_clarification_on_some_things_that/

    The 2048 token limit of KoboldAI is set by pyTorch, and not system memory or vram or the model itself

    So basically, each response is being generated looking at a maximum of 2048 tokens — roughly 1,500 words of English — for knowledge about the conversation and your characters and world. Other knowledge has to come from the model, which can be trained on a ton of — for sex chatbots — erotic text and literature, but that’s unchanging; it doesn’t bring any more knowledge as regards your particular conversation or environment or characters that you’ve created.
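    To make the failure mode concrete, here’s a minimal sketch in Python of what a frontend has to do with a fixed prompt window. Everything here is illustrative; the whitespace “tokenizer” is a stand-in, since real frontends count tokens with the model’s own subword tokenizer.

    ```python
    # Naive sketch: trim the oldest turns so the prompt fits a fixed token budget.
    # Whitespace "tokens" for illustration only; real frontends use the model's tokenizer.

    def count_tokens(text: str) -> int:
        return len(text.split())

    def build_prompt(turns: list[str], budget: int = 2048) -> str:
        kept: list[str] = []
        used = 0
        # Walk backwards from the newest turn, keeping as much as fits.
        for turn in reversed(turns):
            cost = count_tokens(turn)
            if used + cost > budget:
                break  # everything older than this point is simply forgotten
            kept.append(turn)
            used += cost
        return "\n".join(reversed(kept))
    ```

    Anything that falls off the front of the window is invisible to the model, which is exactly why long conversations loop and drop facts.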


  • I’ve run KoboldAI on local hardware, and it has some erotic models. From my fairly quick skim of character.ai’s syntax, I think that KoboldAI has more-powerful options for creating worlds and triggers. KoboldAI can split layers across all available GPUs and your CPU, so if you’ve got the electricity and the power supply and the room cooling and are willing to blow the requisite money on multiple GPUs, you can probably make it respond about as arbitrarily-quickly as you want.

    But more-broadly, I’m not particularly impressed with what I’ve seen of sex chatbots in 2025. They have limited ability to use tokens from earlier in the conversation when generating each new message, which means that as a conversation progresses, they increasingly fail to take earlier content into account. It’s possible to get into loops, or to forget facts about characters or the environment that were established earlier in a conversation.

    Maybe someone could make some kind of system to try to summarize and condense material from earlier in the conversation or something, but…meh.

    As generating pornography goes, I think that image generation is a lot more viable.

    EDIT:

    KoboldAI has the ability to prefix the current prompt with a given sentence if the prompt contains a term that matches, which permits dumping information about a character into each prompt. For example, for the prompt “I asked Jessica to go to the store”, one could have a trigger that matches on “Jessica” and injects “Jessica is a 35-year-old policewoman”. That’d permit providing static context about the world. I think that maybe what would need to happen is to have a second automated process trying in the background to summarize and condense information from earlier in the conversation about important prompt words, and then writing new triggers attached to important prompt terms, so that each prompt is sent with a bunch of relevant information. Manually-writing static data to add context faces some fundamental limits.
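    A minimal sketch of that trigger mechanism in Python, with made-up keywords and facts. KoboldAI’s actual world-info format differs; this just shows the keyword-to-context idea:

    ```python
    # Sketch of world-info-style triggers: if a keyword appears in the prompt,
    # prepend its attached context line. All names and facts here are made up.

    TRIGGERS = {
        "jessica": "Jessica is a 35-year-old policewoman.",
        "store": "The store is a small corner grocery on Elm Street.",
    }

    def apply_triggers(prompt: str, triggers: dict[str, str]) -> str:
        lowered = prompt.lower()
        # Collect every fact whose keyword appears anywhere in the prompt.
        context = [fact for key, fact in triggers.items() if key in lowered]
        return "\n".join(context + [prompt])
    ```

    A background summarizer could then write new entries into that table as the conversation progresses, so later prompts carry condensed history instead of nothing.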


  • I can’t imagine running a non-local sex chatbot unless you’ve got a private off-site server somewhere that you’re using. I mean, forget governments, the company operating the thing is going to be harvesting what it can. Do you really want to be sending a log of your sex chats to some company to make whatever money they can with the thing?

    EDIT: Well, maybe if they had some kind of subscription service, as an alternate way to make money, and a no-log, no-profile policy.


  • There’s not really enough here to give a conclusive answer from “it’s not reachable”. All I can do is tell you what I’d probably do to try to troubleshoot further.

    My first steps in troubleshooting connectivity would probably be something like this:

    • Fire up something on the HTTP server (I’m assuming it’s running Linux) like sudo tcpdump port 80. That should let you see any packets that are reaching the HTTP server.

    • From a Linux machine on an outside network — a tethered cell phone might make a reasonable machine, if you don’t have another machine you control out there somewhere in the ether — run something like mtr --tcp -P 80 <hostname>. That’ll tell you, at an IP-hop-by-IP-hop level, whether there’s anything obstructing reaching the machine. It could be that your ISP blocks 80 inbound, for example.

    • So the next step is probably to see whether you can get regular ol’ HTTP through. Also from an outside network, run curl --verbose http://<hostname>/. That’ll let you see what’s happening at the HTTP level.

    I’m guessing that you’re probably going to have something along here break. It could be that the packets are being blackholed at a hop prior to reaching your router, in which case your ISP may be firewalling inbound on that port. It may be that they’re reaching your router, but that your router is trying to forward to the wrong machine. It may be that you have some kind of firewall on the HTTP server that’s blocking connections that aren’t coming from localhost or from the WireGuard side. But at least it’ll probably give you a better idea as to how far it’s getting.

    Once you’ve got that up and running, you can look at HTTPS:

    • If that’s working and you want to test the TLS certificate handshaking and see if there are any issues, again from an outside network: openssl s_client -connect <hostname>:443 -prexit. That’ll let you see the TLS handshake and any issues that happen during it.

    • Also from an outside network, run curl --verbose https://<hostname>/. That’ll let you see what’s happening at the HTTPS level.

    EDIT: Oh, yeah, and someone else pointing out confirming that the DNS resolution is what you expect is probably also a good first step. host <hostname> from an outside network.
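    If you’d rather script the first-pass check, here’s a minimal Python sketch of the “can I complete a TCP handshake at all?” test. It doesn’t replace tcpdump/mtr/curl, since it can’t tell you where the packets die, only whether the port answers:

    ```python
    import socket

    # First-pass reachability probe: can we complete a TCP handshake at all?
    # A False result means "refused, filtered, or unreachable", not which one;
    # tcpdump and mtr are what tell you where along the path it's failing.

    def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False
    ```

    Run it from an outside network against ports 80 and 443 before bothering with the HTTP-level tools.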


  • I would like to start managing ebooks and manga properly.

    I guess my question is how is everyone using these services for their own library :)

    I moved away from dedicated readers. They’re nice, but I have a tablet, a phone, and a laptop. I don’t need a fourth device with me.

    For me, the major selling point for dedicated readers is eInk: the insane battery life, and how well it works in sunlight or otherwise brightly-lit conditions, so you can read outside.

    For comics — I don’t know if you’re only viewing black-and-white manga — my understanding is that color eInk displays have limited contrast compared to the black-and-white ones. I think that if I were viewing anything in color, I’d probably want to use some kind of LED or LCD display.

    I will occasionally read content on my Android phone with fbreader. The phone isn’t really a great platform for reading books — just kind of small — but it does a good job of filling the “I’m waiting in a line and need to kill a few minutes” niche. With an e-reader, you need something like Calibre to transfer books on and off, but with Android, I can just transfer files the way I normally would, via sftp or similar. I don’t have any kind of synchronized system for managing those books spanning multiple devices.

    I use an Android tablet sometimes, almost always when I want to cuddle up on a couch or just want a larger display or want to watch videos. Same kind of management/use case. I think fbreader was the last thing I used to read an epub there. I’ve switched among various comics- and manga-viewing software, and am not particularly tied to any one. There’s a family of manga-viewing software that downloads manga from websites that host it; I can’t recall the most-recent one I’ve used, but in my limited experience, they all work vaguely the same way.

    I’ve increasingly been just using GNU/Linux systems for more stuff, as long as space permits; I’d rather limit my Android exposure, as I’d rather be outside the Google ecosystem, and the non-Google non-Apple mobile and tablet world isn’t all that extensive or mature. For laptops, higher power consumption, but also vastly larger battery, and much more capable. On desktop, it’s nice to have a really large screen to read with. For comics — and I haven’t been reading graphic novels or comics in some time, so I’m kind of out of date — I use mcomix. For reading epubs, I use foliate in dark mode. I have, in the past, written some scripts to convert long text files into LaTeX and from thence into pretty-formatted PDFs; I’ll occasionally use those when reading long text files, as I have a bunch of prettification logic that I’ve built into those over the years.
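    For anyone curious, the core of that kind of text-to-LaTeX script is tiny. This is a stripped-down sketch, not my actual script, which has a lot more prettification logic; it just escapes TeX specials and wraps the text in a bare document you can feed to pdflatex:

    ```python
    # Minimal text -> LaTeX wrapper, a stripped-down sketch of that approach.
    # Escapes the common TeX special characters and wraps the result in a
    # bare article document; prettification (headings, widow control, fonts)
    # is where the real work goes.

    TEX_SPECIALS = {c: "\\" + c for c in "&%$#_{}"}

    def escape_tex(text: str) -> str:
        return "".join(TEX_SPECIALS.get(ch, ch) for ch in text)

    def text_to_latex(body: str) -> str:
        return "\n".join([
            r"\documentclass{article}",
            r"\begin{document}",
            escape_tex(body),
            r"\end{document}",
        ])
    ```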

    I don’t have any kind of system to synchronize material across devices or track reading in various things. Just hasn’t really come up. If I’m reading something on two different devices, I’ll just be reading two different books at the same time. Probably have some paper books and magazines that I’m working on at the same time too.


  • Just to be clear, I’m pretty sure that they don’t have a no-DRM-across-the-board policy, though, so if you’re going there for DRM-free ebooks, you probably want to pay attention to what you’re buying.

    checks

    Yeah, they have a specific category for DRM-free ebooks:

    https://www.kobo.com/us/en/p/drm-free

    I’ll also add that independent of their store, I rather like their hardware e-readers, have used them in the past, and if I wasn’t trying to put a cap on how many electronic devices I haul around and wanted a dedicated e-reader, the Kobo devices would probably be pretty high on my list. When I used them, I just loaded my own content onto them with Calibre, not stuff from the Kobo store.


  • If you use keys or strong passwords, it really shouldn’t be practical for someone to brute-force.

    You can make it more-obnoxious via all sorts of security-through-obscurity routes like portknocking or fail2ban or whatever, or disable direct root login via PermitRootLogin, but those aren’t very effective compared to just using strong credentials.
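    For reference, the relevant sshd_config lines look like this. Test from a second session before you disconnect, since a typo here can lock you out:

    ```
    # /etc/ssh/sshd_config — key-only authentication
    PubkeyAuthentication yes
    PasswordAuthentication no
    PermitRootLogin no
    ```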





  • I agree that it’s less-critical than it was at one point. Any modern filesystem, including ext4 and btrfs, shouldn’t suffer filesystem-level corruption from sudden power loss, and a DBMS like PostgreSQL or MySQL should handle it at an application level. That being said, there is still other software out there that may take issue with being interrupted. Doing an apt upgrade is not guaranteed to handle power loss cleanly, for example. And I’m not too sanguine about hardware not being bricked if I lose power while fwupd is updating the firmware on attached hardware. Maybe a given piece of hardware has a safe, atomic upgrade procedure…and maybe it doesn’t.

    That does also mean, if there’s no power backup at all, that one won’t have the system available for the duration of the outage. That may be no big deal, or might be a real pain.


  • Yeah, I listed it as one possibility, maybe the best I can think of, but also why I’ve got some issues with that route, why it wouldn’t be my preferred route. Maybe it is the best generally available right now.

    The “just use a UPS plus a second system” route makes a lot of sense with diesel generator systems, because there the hardware physically cannot come up to speed in time. A generator cannot start in 10ms, so you need a flywheel or battery or some other kind of energy-storage system in place to bridge the gap…but that shouldn’t be a fundamental constraint on those home large-battery backup systems. They don’t have to be equipped with an inverter able to come online in 10ms…but they could. In the generator scenario, it’s simply not an option.
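    The gap-bridging energy involved is tiny, which is the point. With assumed numbers, say a 500 W server load and a generous 10 seconds for a generator to come up to speed:

    ```python
    # Back-of-envelope: how much stored energy bridges a generator start?
    # Assumed figures: 500 W load, 10 s until the generator can take over.

    load_watts = 500.0
    gap_seconds = 10.0

    bridge_joules = load_watts * gap_seconds   # energy needed to cover the gap
    bridge_wh = bridge_joules / 3600.0         # convert joules to watt-hours

    print(f"{bridge_wh:.2f} Wh")  # prints "1.39 Wh"
    ```

    Even a small lead-acid UPS stores orders of magnitude more than that; the binding constraint on a home battery system is the inverter’s switchover speed, not its capacity.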

    I’d like to, if possible, have the computer have a “unified” view of all of the backing storage systems. In the generator case, the “time remaining” is a function of the fuel in the tank, and I’m pretty sure that it’s not uncommon to have some kind of secondary fuel storage that the system can’t measure; I remember reading about an employee in New Orleans during Hurricane Katrina who stayed behind to keep a datacenter functioning, mostly by hauling drums of diesel up the stairs to the generator. But that’s not really a fundamental issue with those battery backup systems, not unless someone is planning on hauling more batteries in.

    If one gets a UPS and then backs it with a battery backup system, then there are two sets of batteries — one often lead-acid, with a shorter lifespan — and multiple inverters and battery charge controllers layered in the system. That’s not the end of the world, more a “throw some extra money at it” issue, but one is having to buy redundant hardware.


  • I’ll add one other point that might affect people running low-power servers, which I believe some people here are running for low-compute-load stuff like home automation: my past experience is that low-end, low-power computers often have (inexpensive) power supplies that are especially intolerant of wall power issues. I have had multiple consumer broadband routers and switches get into a wonky, manual-reboot-requiring state after brownouts or power loss, even when other computers in the house continued to function without issue. I’d guess that those might be particularly-sensitive to a longer delay in switching over to a backup power source. I would guess that Raspberry Pi-class machines might have power supplies vulnerable to this. I suppose that for devices with standard barrel connectors and voltage levels, one could probably find a more-expensive power supply that can handle dirtier power.

    If you run some form of backup power system that powers them, have you had issues with Raspberry Pis or consumer internet routers after power outages?



  • I use gdb myself.

    I don’t know exactly what you’re after. From the above, I see:

    “easy to use”

    “the mouse is faster, not slower”

    You don’t specify a language, so I’m assuming you’re looking for something low-level.

    You don’t specify an editor, so I’m assuming that you want something stand-alone, not integrated with an editor.

    There are a number of packages that use gdb internally, but put some kind of visualization on it. I’ve used emacs’s frontend before, though I’m not particularly married to it — I mainly found it interesting as a way to rapidly move up and down frames in a stack — but I’m assuming that if you want something quick to learn, you’re not looking for emacs either.

    Maybe seer? That’d be a stand-alone frontend on gdb with a GUI. Haven’t used it myself.

    EDIT: WRT alternatives, the major one to gdb that I can think of is dbx, and that’s also a CLI tool and looks dead these days. gdb is pretty dominant, so if you want something mouse-oriented, you’re probably going to have some form of frontend on gdb.

    There are other important debugging tools out there, stuff like valgrind, but in terms of a tool to halt and step through a program, view variables, etc, you’re most-likely looking at gdb, one way or another, unless you’re working in some sort of high-level language that has its own debugger. If you want a GUI interface, it’s probably going to be some sort of frontend to gdb.

    EDIT2: Huh. Apparently llvm has its own debugger, lldb. Haven’t used it, and it’s probably not what you want anyway, since it’s also a CLI-based debugger. I’m also sure that it has far fewer users than gdb. But just for completeness…though I guess you already looked at that, since you mentioned it in your comment.
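    For anyone who does end up in plain gdb, a minimal session looks like this; myprog and counter are placeholder names:

    ```
    $ gdb ./myprog
    (gdb) break main        # stop at the start of main()
    (gdb) run               # start the program
    (gdb) next              # step over one source line
    (gdb) print counter     # inspect a variable
    (gdb) backtrace         # show the call stack
    (gdb) continue          # resume until the next breakpoint or exit
    ```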