• 0 Posts
  • 27 Comments
Joined 2 years ago
Cake day: June 6th, 2023




  • When you implement the functionality of a piece of hardware in software, the software is said to “emulate” the hardware. The emulators you’re used to are called emulators not because they emulate a console (e.g. the N64), but because they emulate the hardware that was used to build that console (e.g. a MIPS processor). That said, console emulators often need to account for specific quirks/bugs that exist because of choices the console designers made. For example, maybe the specific processor and memory used in the N64 have a weird interaction that game devs at the time abused, and if your emulation doesn’t reproduce that quirk, some games won’t run.
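    To make that first sentence concrete, here’s a minimal sketch: a toy “CPU” with a made-up three-instruction ISA, emulated as a plain Python loop. Everything here (the instruction names, the registers) is invented for illustration; a real emulator does the same fetch-decode-execute dance for a real ISA like MIPS, just with vastly more detail.

    ```python
    def run(program):
        """Fetch-decode-execute loop for a hypothetical 3-instruction ISA."""
        regs = {"a": 0, "b": 0}
        pc = 0  # program counter
        while pc < len(program):
            op, *args = program[pc]
            if op == "load":    # load an immediate value into a register
                regs[args[0]] = args[1]
            elif op == "add":   # add source register into destination register
                regs[args[0]] += regs[args[1]]
            elif op == "halt":  # stop execution
                break
            pc += 1
        return regs

    # "Software" written for our fake hardware: compute 2 + 3 in register a.
    program = [("load", "a", 2), ("load", "b", 3), ("add", "a", "b"), ("halt",)]
    print(run(program)["a"])  # prints 5
    ```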

    At the risk of adding unnecessary detail, a VM might use emulation or it might not. The QEMU package is often used for virtualization, but despite its name (Quick Emulator), if the system you’re virtualizing matches the architecture of the system you’re running on, no emulation is needed.

    1a) In this case, it is RISC-V hardware running software (built for RISC-V) that emulates x86_64 hardware so that it can run an x86_64 binary.

    1b) A compatibility layer is less well defined, but in general it refers to whatever environment is needed to get a binary running that was originally built for a different environment. This usually includes a set of libraries that the binary assumes are loaded, as well as accounting for any other unique attributes in the structure of the executable. But if the binary, the compatibility layer, and the CPU are all x86_64, then there’s no emulation involved.

    2) To get a binary built for x86_64 Windows running on RISC-V Linux, you will need both emulation and a compatibility layer. In theory those two don’t need to be developed in tandem, or even know about each other at runtime, but I expect there may be performance optimizations possible if they are aware of each other.

    I mentioned QEMU because my first thought when reading this was: isn’t this a prime use case for QEMU?


  • As a technical user, I think of WSL as almost exclusively for technical users. It’s not really intended to enable normal users to run Linux programs; it’s more an excuse to convince companies to keep developing on Windows. If the devs say “we need to write backend code for Linux servers, so we need our dev machines to run Linux”, then management sets them up with Linux while the rest of the company uses Windows. But if MSFT says “hey look, you can develop code for Linux in Windows, and you can even deploy it in Windows on our Azure servers”, then management says “great, everyone can use Windows” and keeps buying those licences.




  • An artist produces content. They offer the ability to view the content in exchange for money. They rely on this income to make a living. Instead, you find a way to view the content without giving them money. A portion of their income that they would have otherwise received exists in your pocket instead of theirs.

    Maybe it will help to think of it as a service: if you get a haircut, and then leave without paying, have you stolen anything?

    Look, I’m not saying that stealing is always unethical. Robin Hood is a story of someone who steals from the rich to give to the poor, and only temporarily embarrassed Prince Johns would say he’s not the good guy in that story. I’m just saying let’s be honest about it. Call a spade a spade.

    If you deliberately execute only the half of a transaction that is favorable to you, that’s stealing. If you sneak into a movie theater without paying, you’re stealing. If you download music without paying for it, you’re stealing. If a corporation takes art without paying to train a machine to produce facsimiles of that art to make money, they are stealing.

    Honestly, if we still disagree, fine. This discussion feels like one of semantics, completely tangential to the point I was making. Cheers.



  • I know it’s a popular meme to say, “if buying isn’t owning, then piracy isn’t stealing”, but that is brainrot. It’s not even consistent with fair labor practices. It would be like a company saying “if your work doesn’t produce value for me, then the time and effort you put in should not be compensated”. That’s not the deal.

    Artists should be paid, and pirating art is stealing. It’s just that, in the name of equity and the love of art, they might be OK with it if someone who can’t afford it doesn’t pay. But speaking on behalf of every artist ever: when a corporation who absolutely can afford it doesn’t pay, it’s stealing, and the artists want their damn compensation.





  • It’s not sunk cost, dude. We agreed that $120 will get them 5 years of service that meets their needs. Even if they switch to Jellyfin after 5 years, they still got their money’s worth.

    It’s only sunk cost if they end up worse off than if they had switched earlier. I guess if you’re arguing that they would still have $120 if they switched today, I would argue they should still put that $120 toward Jellyfin’s development. And that’s assuming they have time to switch to Jellyfin AND it fits 100% of their use cases, either of which could be untrue.


  • Or Plex currently does everything they need it to, and $120 for 5+ years of keeping that going without any interruption of service is very reasonable. In the meantime, Jellyfin will only get better, and there might even be other options available by then.

    Stop trying to make the issue black and white, one-size-fits-all. There are perfectly legitimate reasons for people to use both Plex and Jellyfin.




  • Afaik the cookie policy on your site is not GDPR compliant, at least as it is currently worded. If all cookies are “technically necessary” for the function of the site, then I think all you need to do is say that. (I think for a wiki it’s acceptable to require clients to allow caching of image data, so your server doesn’t have to pay for more bandwidth.)


  • My recommendation would be to have two machines: new hardware for all your services, and the old hardware for your NAS. Each could run whatever OS you’re comfortable with. Almost everything on the services machine could live in Docker configs, including network mount points to the NAS. You might be able to get away with using the 1080 Ti in the services box, depending on what you want to do (AI workloads or newer stream-transcoding requirements may need newer hardware).
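    As a sketch of what “services in Docker configs with network mount points to the NAS” could look like (the service, the NAS address, and the export path are all hypothetical examples, not a prescription), a compose file on the services box might mount an NFS share from the NAS like this:

    ```yaml
    # docker-compose.yml on the services machine (hypothetical example).
    # Assumes the NAS at 10.0.0.2 exports /mnt/tank/media over NFS.
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"
        volumes:
          - media:/media:ro   # read-only mount of the NAS share

    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: addr=10.0.0.2,ro
          device: ":/mnt/tank/media"
    ```

    The nice part of this layout is that the compose files fully describe the services box, so rebuilding or replacing it later is mostly a matter of copying those configs over.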

    Moving the data from the old NAS to a new one without new disks will be a challenge, yes.

    I have a TrueNAS box and used jails for services. I recently set up a Debian box separately, and am switching from jails on TrueNAS to Docker on Debian. Wish I had done this from the start.