• 0 Posts
  • 110 Comments
Joined 2 years ago
Cake day: June 21st, 2023



  • In Zig, we would just allocate the list with an allocator, store pointers into it for the tag index, and mutate freely when we need to add or remove notes. No lifetimes, no extra wrappers, no compiler gymnastics, that’s a lot more straightforward.

    What happens to the pointers into the list when an “add” exceeds its capacity and the list has to reallocate its backing buffer?

    What Rust’s borrow checker enforces isn’t just a “Rust-ism”. The same problems exist in all low-level languages, and often in higher-level languages too. Zig doesn’t let you ignore what Rust is protecting against, it just checks it differently and puts more of the responsibility on the developer.
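
    To make that concrete, here’s a minimal Rust sketch of the notes-and-tags scenario (the data model is invented for illustration). Holding a `&String` into the Vec across a `push` is exactly what the borrow checker rejects, because the push may reallocate the backing buffer; storing indices in the tag index sidesteps the problem in either language.

    ```rust
    use std::collections::HashMap;

    fn main() {
        let mut notes: Vec<String> = vec!["buy milk".into(), "call mom".into()];

        // The tag index stores positions, not pointers. Holding a `&String`
        // into `notes` across the `push` below would be rejected by the
        // borrow checker (E0502); in Zig or C the equivalent pointer would
        // silently dangle if the push triggers a reallocation.
        let mut tags: HashMap<&str, Vec<usize>> = HashMap::new();
        tags.entry("todo").or_default().extend([0, 1]);

        // Adding a note may reallocate the backing buffer; indices stay valid.
        notes.push("water plants".into());
        tags.entry("todo").or_default().push(2);

        for &i in &tags["todo"] {
            println!("{}", notes[i]);
        }
    }
    ```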




  • Storing UI assets in a database is unusual because assets aren’t data; they’re part of your UI. This is of course assuming a website - an application may choose to save assets in a local SQLite database or similar for convenience.

    It’s the same reason I wouldn’t store static images in a database though - there’s no reason to do so. Databases provide no additional value over just storing the images next to the code, and the same goes for localizations.

    User-generated content changes things because that data is now dynamically generated, not static assets for a frontend.


  • I know I probably sound like an ass but it really is that bad

    Nah I work in shitty codebases on a regular basis, and the less I need to touch them, the happier I am.

    With regards to other localization changes, it’s not important to localize everything perfectly, but it’s good to be aware of what you can improve and what might cause some users to be less comfortable with the interface. That way you’re informed and can properly justify a sacrifice (like “it’d cost us a lot of time to support RTL interfaces but only 0.1% of users would use them”) rather than be surprised that there even is one being made.

    Also, user-generated content explains why these are in a DB, and now it makes a lot more sense to me. User-generated translations used as-is make more sense than trying to force Project Fluent (or other similar tools) into it.


  • Localization is a hard problem, but storing your translations in the DB is a bit unusual unless you’re trying to translate user data or something.

    I’d recommend looking into Project Fluent or similar tools that are designed around translation (there’s a rough sketch at the end of this comment).

    As for the schema you have, if you’re sticking with it, I would store the language as an IETF language tag or similar instead. The important part is that it distinguishes language variants. For example, US English and British (or international) English have differences, Brazilian Portuguese and Portugal Portuguese have differences, Mexican Spanish and Spain Spanish have differences, etc.

    Using an ID instead of the text content itself as part of the PK should be a no-brainer. Languages evolve over time, and translations change; PKs should not. Your choice of PK = (TextContentId, Language) is the most reasonable to me, though I still think translations should live as assets in your application to better integrate with existing localization tools.

    One last thing: people tend to believe that translating is enough to localize. It is not. For example, RTL languages often flip the entire UI direction to RTL, not just the text direction. Also, different cultures sometimes expect different colors and icons.
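
    Since I’m recommending Fluent anyway, here’s a rough Rust sketch of what “translations as application assets” can look like, using the fluent and unic-langid crates with bundles keyed by IETF language tags. The message IDs and FTL strings are made up, and the exact API may differ between crate versions, so treat this as an outline rather than a drop-in.

    ```rust
    use fluent::{FluentBundle, FluentResource};
    use unic_langid::LanguageIdentifier;

    // Build a bundle for one language tag from an FTL string. In a real app
    // the FTL would ship as one asset file per language (en-US.ftl, pt-BR.ftl, ...).
    fn bundle_for(tag: &str, ftl: &str) -> FluentBundle<FluentResource> {
        let langid: LanguageIdentifier = tag.parse().expect("invalid language tag");
        let resource = FluentResource::try_new(ftl.to_string()).expect("invalid FTL");
        let mut bundle = FluentBundle::new(vec![langid]);
        bundle.add_resource(resource).expect("duplicate message IDs");
        bundle
    }

    fn main() {
        // Full tags keep variants separate: pt-BR and pt-PT would be distinct bundles.
        let en_us = bundle_for("en-US", "greeting = Hello!");
        let pt_br = bundle_for("pt-BR", "greeting = Olá!");

        for bundle in [&en_us, &pt_br] {
            let msg = bundle.get_message("greeting").expect("missing message");
            let pattern = msg.value().expect("message has no value");
            let mut errors = vec![];
            println!("{}", bundle.format_pattern(pattern, None, &mut errors));
        }
    }
    ```

    Even if you keep the DB instead, the same idea carries over: identify a piece of text by a stable ID plus a full language tag, never by the text itself.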


  • TehPers@beehaw.org to Programming@programming.dev · AI Coding

    it is the same code as you produce manually.

    LLMs do not create the same code that I would, nor do they produce code at the same level that I would. Additionally, LLMs are not deterministic (normally - there are ways to manually seed some but it’s rare). Determinism has a very specific meaning. Compilers supporting reproducible builds are deterministic. LLMs producing a different output each time are not.

    it is a task of a programmer to review it before publishing it.

    Tell that to my coworkers. The code I have to review and contribute to is honestly insulting. Having used these tools, I’m better off writing the code myself.








  • Next.js is a highly opinionated framework. “Our way or the highway” is what should be expected going in. Good luck if your requirements change later on, and I hope your code is transferable to a new framework if needed.

    Unfortunately, I’ve never been able to just follow “our way” because my projects are more complex than whatever basic blog setup they document. I always end up building my own stack around Vite. I’m also not much of a fan of fighting against my tools when what I need isn’t something the tool devs already thought of.




  • For programming languages? I don’t need many features as long as what exists is enough to do everything I need. In fact, the fewer, the better (or you end up with C++'s regex, Python’s urllibN, etc.).

    I guess that means I’d end up more on the documentation side, though that’s not because I want the most documented language of all time, but because I want the fewest built-in features.

    This is why I mostly write Rust when given the option. I write a lot of Python, but I hate the standard library so much. There’s the urllib stuff, plus there’s a bunch of deprecated stuff in the base64 module, plus I can’t stand Python’s implementation of async (coroutines are cool but asyncio is miserable to use imo).

    Edit: Oh, and nobody’s giving only integers when nuanced answers are more interesting to discuss.