• 0 Posts
  • 94 Comments
Joined 2 years ago
Cake day: December 29th, 2023

  • the vuln afaik is remote code execution via a mechanism that’s basically a transparent RPC to the server (think: you just write frontend code with a “getUsers” function and it automatically retrieves and deserializes the results so you can render the UI without worrying about how that data got into the browser)

    i’m not a front end engineer, and haven’t used react server components, but i am a principal software engineer, i do react for personal projects, and have written react professionally

    i can’t think of a way it’d be exploitable via purely client-side means

    i THINK what they mean is that you can use some of the RSC stuff without the RPC-style interfaces, and in that case they say the server component is still vulnerable, but you still need react things running on your server

    a huge majority of react code is client-side only, with server-side code written in other languages/frameworks and interfaced via something like REST or GraphQL (or even RPC of course)
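    to make the mechanism concrete, here’s a minimal sketch of that RPC-style flow in typescript… everything here (getUsers, callServer, the wire format) is made up for illustration, not the actual react/RSC API:

```typescript
// hypothetical mock of an RSC-style "server function" call
type User = { id: number; name: string };

// "server side": the real implementation, which would normally run in the
// server process, never in the browser
const serverHandlers: Record<string, (...args: unknown[]) => unknown> = {
  getUsers: () => [{ id: 1, name: "alice" }, { id: 2, name: "bob" }],
};

// "client side": the framework serializes the call, ships it over the wire,
// and deserializes the response, so the component just sees plain data
function callServer<T>(fn: string, ...args: unknown[]): T {
  const request = JSON.stringify({ fn, args });      // what goes over the wire
  const { fn: name, args: a } = JSON.parse(request); // server deserializes the call
  const response = JSON.stringify(serverHandlers[name](...a));
  return JSON.parse(response) as T;                  // client deserializes the result
}

const users = callServer<User[]>("getUsers");
console.log(users.map((u) => u.name).join(",")); // alice,bob
```

    the security-relevant part is the server-side deserialize-and-dispatch step: the server executes a call built from whatever bytes the client sent, which is exactly the kind of surface where an RCE-class bug lives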



  • most things scale if you throw enough resources at them. we generally say that things don’t scale if the majority case doesn’t scale… it costs far fewer resources to scale with multiple repos than it does to scale a monorepo, thus monorepo doesn’t scale: i’d argue even the google case proves that… they’ve already sunk so much into dev tooling to make it work… it might be beneficial to the culture (in that they like engineers to work across the entire google codebase), but it’s not a decision made because it scales: scale is an impediment


  • that’s a good and bad thing though…

    it’s easy to reference code, so it leads to tight coupling

    it’s easy to reference code, so let’s pull this out into a separately testable, well-documented, reusable library

    my main reason for ever using a monorepo is to separate out a bunch of shared libraries into real libraries, and still be able to have eg HMR
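    as a sketch of what that looks like with pnpm (package names and paths here are made up):

```yaml
# pnpm-workspace.yaml — hypothetical layout: deployable apps plus shared libraries
packages:
  - "apps/*"   # eg apps/web, apps/admin
  - "libs/*"   # real, separately testable, documented libraries
```

    an app then depends on a library via the workspace protocol (eg `"@acme/ui": "workspace:*"` in apps/web/package.json), and since that resolves to the library source on disk, dev servers can still do HMR across the package boundary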



  • i’d say it’s less that it’s inadequate, and more that it’s complex

    for a small team, build a monolith and don’t worry

    for a medium team, you’ll want to split your code into discrete parts (libraries shared across different parts of your codebase, services with discrete test boundaries, etc)… but you still need coordination of changes across all those things, and team members will probably be touching every part of the codebase at some point

    for large teams, you want to take those discrete parts and make them fairly independent, and able to be managed separately: different languages, different deployment patterns, different test frameworks, heck even different infrastructure

    a monorepo is a shit version of real, robust tooling in many categories… it’s quick to set up, and allows you a path to easily change to better tooling when it’s needed


  • You should really not need to do a PR across multiple repos.

    different ways of treating PRs… it’s a perfectly valid strategy to say “a PR implements a specific feature”, in which case you might work in a backend, a frontend, and a library… of course, those PRs aren’t intrinsically linked (though they do have dependencies between them… heck i wouldn’t even say it’d be uncommon or wrong for the library to have schemas that do require changes in both the frontend and backend)

    if you implement something in eg the backend, and then get retasked with something else, or the feature gets dropped, then sure it’s “working” still, but to leave unused code like that would be pretty bad… backend and frontend PRs tend to be fairly closely tied to each other

    a monorepo does far more than i think you think it does… it’s a relatively low-infrastructure way of adding internal libraries shared across different parts of your codebase, external libraries without duplication (and ensuring versions are consistent, where required), and coordinating changes, and plenty more

    can these things be achieved with build systems and deployment tooling? absolutely… but if you’re just a small team, a monorepo could be the right call

    of course, once the team grows in size it’s no longer the correct option… real tooling is probably going to be faster and better in every way… but a monorepo allows you to choose when to replace different parts of the process… it emulates an environment with everything very separated


  • i’d say they’re pretty equivalent

    a monorepo is far easier to develop in for a single-language, fairly monolithic codebase (ie where you need the whole application to develop any part)

    (though as soon as you start adding multiple languages or it gets big enough that you need to work on parts without starting other parts of the application it starts to break down rather significantly)

    but as soon as your app becomes less of a cohesive thing and more separated it becomes problematic… especially when it comes to deployments: a push to a repo doesn’t mean “deploy changes to everything” or “build everything” any more

    i think the best solution (as with most things) is somewhere in the middle: perhaps several different repos, and a “monorepo” that’s mostly a bunch of subtrees or submodules… you can coordinate changes by committing to the monorepo (and changes are automatically duplicated), or just work on individual parts (tricky with pnpm since the workspace file would be in the monorepo)… but i’ve never really tried this: just had the thought for a while



  • right? like yeah i remember XMPP being cool n all, but all the experiences suuuuucked, not to mention (back in the day… i think it’s fixed now?) figuring out how the hell to get video calling working… “what extension does your client support?” is not a question a lay-person will ask: centralised systems don’t have extensions… they have “the way it’s done” and that’s it


  • inefficient in the sense that

    • traffic goes over the internet rather than internal networks, which means the routing is much longer, over slower links
    • not to mention that in distributed systems information frequently is duplicated many times rather than referenced on some internal system (sending out an email to 20 people duplicates that email 20 times across many providers rather than simply referencing an internal ID… you can just centralise content and send out a small notification message, but that’s generally not what people are talking about when they’re talking about modern distributed systems)
    • each system can’t trust any other, so there’s a lot more processing that each node has to do in order to maintain a consistent internal state: validating and transforming raw data for itself - not usually a particularly big task, but multiplied by millions per second it adds up fast
    • hardware scaling is simply not as easy either… with centralised systems you have, say, 1000 servers at 95% capacity (whatever that means): you can run them close to capacity because your traffic is generally insulated from load spikes due to volume, and generally you wouldn’t get 5% more load faster than you can scale up another server. in distributed systems (or rather smaller systems, because that’s implicit here unless you’re just running the hardware and software to duplicate the whole network, which would take more servers anyway due to the other inefficiencies and now you’re multiplying them) you need to have much more “room to breathe” to absorb load spikes
    • things like spares and redundancy for outage mitigations also become more expensive: if you have 1000 servers, having a couple of hot spares (either parts or entire systems depending on system architecture and uptime requirements) isn’t that big of a deal, but in a distributed system all of a sudden every instance needs those hot spares somewhere (though this can be seen as similar to the traffic issue: spares of all kinds are just unused capacity, so the higher your ratio the more under-utilised your hardware)
    • this is all without getting into the human effort of building systems… instance owners all need to manage their infrastructure which means that the mechanisms to handle things like upgrade without downtime, scaling, spam protection, bots, etc have all been built many many times
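    to put rough numbers on the duplication point (all figures here are made up for illustration):

```typescript
// rough cost of fan-out duplication vs centralised reference-sharing
const messageBytes = 75_000; // one email with a small attachment (made-up size)
const recipients = 20;

// centralised: store the content once, send each recipient a tiny reference
const referenceBytes = 200;
const centralised = messageBytes + recipients * referenceBytes;

// distributed: each recipient's provider stores its own full copy
const distributed = recipients * messageBytes;

console.log(centralised); // 79000
console.log(distributed); // 1500000, roughly 19x the storage
```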

    NONE of this is to say that they’re worse. in many ways they have a lot of advantages, but it’s not a clear-cut win in a lot of cases either… as with most things in life, “it depends”. distributed systems are resistant to whole-network outages (at the expense of many more partial network outages), they’re resistant to censorship, and they implicitly have a machine-to-machine interface, so the network as a whole is implicitly automatable (that might be a bad thing for things like spam, privacy, bots, etc), but people tend to generally be pro-bots and pro-3rd-party apps


  • Pup Biru@aussie.zone to Privacy@programming.dev · Delta chat criticism against Signal
    edited · 2 months ago

    this seems needlessly combative… prevailing opinions are exactly as signal says… think differently? great! let’s do it, talk about it, see how it goes, and when the solution has scaled in the real world to what it’s competing against then you can feel superior as the one that had the vision to see it

    but scaling is hard, and distributed tech is hugely inefficient

    there are so many unknowns

    anyone can follow a random “getting started with web framework X” guide to make a twitter clone… making a twitter clone that handles the throughput that twitter does, that takes legitimately hard computer science (fuck twitter, but it remains both a good and common example)

    heck, even lemmy has huge issues with sync at its current tiny scale when there’s any reasonable latency involved… i remember only months ago when aussie.zone was getting updates days late because of a 300ms latency to EU/US and lemmy’s sequential handling of outboxes (afaik)
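    a back-of-envelope for why a sequential outbox falls behind at that latency (the daily activity count is a made-up figure, not lemmy’s real volume):

```typescript
// each federated delivery is one round-trip; done strictly sequentially,
// throughput is capped by latency no matter how fast the servers are
const rttSeconds = 0.3;           // ~300ms aussie.zone -> EU/US
const activitiesPerDay = 500_000; // hypothetical federation volume

const sendTimeSeconds = activitiesPerDay * rttSeconds;
const hoursOfSendingPerDay = sendTimeSeconds / 3600;
console.log(hoursOfSendingPerDay > 24); // true: the queue falls further behind every day

// even modest concurrency (eg 10 parallel deliveries) clears the same load in ~4 hours
console.log(sendTimeSeconds / 10 / 3600 < 5); // true
```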



  • Pup Biru@aussie.zone to ADHD memes@lemmy.dbzer0.com · An experiment
    edited · 2 months ago

    also puppy play! i like to say puppy play is like active mindfulness: instead of focusing on nothing in order to exist in the moment, you pretend you’re a dog and focus on that… dogs don’t pay rent, have jobs, worry about politics, etc… dogs just play, so just play and be in the moment

    you get drawn into it, and it works incredibly well for people that get bored with things like meditation

    the more you do it, the easier it is, and the main thing that breaks the headspace is feeling self-conscious, but that’s freeing too! once you realise it’s fine to be ridiculous - that nobody cares - it helps with so many other parts of enjoying life (including other kinds of BDSM: people often say puppy play is a gateway kink exactly for this reason)







  • the thing that everyone always glosses over is that jellyfin should not be run on a public network: it has known security vulnerabilities… that includes putting it behind a remote proxy, so now you have to have external users on your actual VPN, and if that’s the case then plex will work fine because it’s “local”, and has a lot more features

    (and my main issue: media segments don’t work on swiftfin)