As a Java engineer in the web development industry for several years now, having heard multiple times that X is good because of SOLID principles or Y is bad because it breaks SOLID principles, and having to memorize the “good” ways to do everything before an interview etc, I find it harder and harder to do when I really start to dive into the real reason I’m doing something in a particular way.

One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

Also the more I get into languages like Rust, the more these doubts are increasing and leading me to believe that most of it is just dogma that has gone far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

What are your opinions on this?

  • Log in | Sign up@lemmy.world · 1 hour ago

    The promise of oop is that if you thread your spaghetti through your meatballs and baste them in bolognese sauce before you cook them, it’s much simpler and nothing ever gets tangled up, so that when you come to reheat the frozen dish a month later it’s very easy to swap out a meatball for a different one.

    It absolutely does not even remotely live up to its promise, and if it did, no one in their right mind would be recommending an abstract singleton factory, and there wouldn’t be quite so many shelves of books about how to do oop well.

  • Jankatarch@lemmy.world · 1 hour ago

    What’s wrong with making a public static singleton class “isEven” and inheriting it to accomplish your goal class, “isOdd.”

  • JakenVeina@midwest.social · 3 hours ago

    One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

    That one is indeed objective horse shit. If your interface has only one implementation, it should not be an interface. That being said, a second implementation made for testing COUNTS as a second implementation, so context matters.
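    A minimal sketch of that point, with invented names (Clock, FixedClock, ExpiryChecker): the interface only earns its keep because the test double is a genuine second implementation.

```java
import java.time.Instant;

// Hypothetical example: the interface exists because a test double implements it too.
interface Clock {
    Instant now();
}

// Production implementation: real wall-clock time.
class SystemClock implements Clock {
    public Instant now() { return Instant.now(); }
}

// Test double: the "second implementation" that justifies the interface.
class FixedClock implements Clock {
    private final Instant fixed;
    FixedClock(Instant fixed) { this.fixed = fixed; }
    public Instant now() { return fixed; }
}

// Logic under test depends on the interface, not on the system clock.
class ExpiryChecker {
    private final Clock clock;
    ExpiryChecker(Clock clock) { this.clock = clock; }
    boolean isExpired(Instant deadline) { return clock.now().isAfter(deadline); }
}

class Demo {
    public static void main(String[] args) {
        Clock frozen = new FixedClock(Instant.parse("2024-01-02T00:00:00Z"));
        ExpiryChecker checker = new ExpiryChecker(frozen);
        System.out.println(checker.isExpired(Instant.parse("2024-01-01T00:00:00Z"))); // true
    }
}
```

    If `FixedClock` never existed, `Clock` would be pure ceremony and `ExpiryChecker` could just call `Instant.now()` directly.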

    In general, I feel like OOP principles like these are indeed used as dogma more often than not, in Java-land and .NET-land. There’s a lot of legacy applications out there run by folks who’ve either forgotten how to apply these principles soundly, or were never taught to in the first place. But I think it’s more of a general programming trend than any problem with OOP or its ecosystems in particular. Betcha we see similar things with Rust, when it reaches the same age.

    • boonhet@sopuli.xyz · 1 hour ago

      Yeah… Interfaces are great, but not everything needs an interface.

      I ask myself: How likely is this going to have an alternative implementation in the future?

      If the answer is “kinda likely”, it gets an interface. If the answer is “idk, probably not? Why would it?” then it does not get an interface.

      Of course these days it’s more likely to be an unnecessary trait than an unnecessary interface. For me, I mean.

  • Beej Jorgensen@lemmy.sdf.org · 5 hours ago

    I’m a firm believer in “Bruce Lee programming”. Your approach needs to be flexible and adaptable. Sometimes SOLID is right, and sometimes it’s not.

    “Adapt what is useful, reject what is useless, and add what is specifically your own.”

    “Notice that the stiffest tree is most easily cracked, while the bamboo or willow survives by bending with the wind.”

    And some languages, like Rust, don’t fully conform to a strict OO heritage like Java does.

    "Be like water making its way through cracks. Do not be assertive, but adjust to the object, and you shall find a way around or through it. If nothing within you stays rigid, outward things will disclose themselves.

    “Empty your mind, be formless. Shapeless, like water. If you put water into a cup, it becomes the cup. You put water into a bottle and it becomes the bottle. You put it in a teapot, it becomes the teapot. Now, water can flow or it can crash. Be water, my friend.”

    • Frezik@lemmy.blahaj.zone · 4 hours ago

      It’s been interesting to watch how the industry treats OOP over time. In the 90s, JavaScript was heavily criticized for not being “real” OOP. There were endless flamewars about it. If you didn’t have the sorts of explicit support that C++ provided, like a class keyword, you weren’t OOP, and that was bad.

      Now we get languages like Rust, which seems completely uninterested in providing explicit OOP support at all. You can piece together support on your own if you want, and that’s all anyone cares about.

      JavaScript eventually did get its class keyword, but now we have much better reasons to bitch about the language.

  • Windex007@lemmy.world · 6 hours ago

    Whoever is demanding every class be an implementation of an interface started their career in C#, guaranteed.

      • Windex007@lemmy.world · 4 hours ago

        In my professional experience working with both, Java shops don’t blindly enforce this, but C# shops tend to.

        Striving for loosely coupled classes is objectively a good thing. Using dogmatic enforcement of interfaces even for single implementors is a sledgehammer to pound a finishing nail.

  • Feyd@programming.dev · 8 hours ago

    If it makes the code easier to maintain it’s good. If it doesn’t make the code easier to maintain it is bad.

    Making interfaces for everything, or making getters and setters for everything, just in case you change something in the future makes the code harder to maintain.

    This might make sense for a library, but it doesn’t make sense for application code that you can refactor at will. Even if you do have to change something and it means a refactor that touches a lot, it’ll still be a lot less work than bloating the entire codebase with needless indirections every day.
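    The getters-and-setters half of that can be sketched in a few lines (names invented for illustration): the "just in case" version versus what the code actually needs today.

```java
// Before: mutable class with accessors nobody asked for, "just in case".
class PointClassic {
    private int x;
    private int y;
    PointClassic(int x, int y) { this.x = x; this.y = y; }
    int getX() { return x; }
    void setX(int x) { this.x = x; }
    int getY() { return y; }
    void setY(int y) { this.y = y; }
}

// After: a record carries the same data, immutably, in one line.
// If a real invariant shows up later, refactoring it into a class is cheap.
record Point(int x, int y) {}
```

    The record version is easier to maintain precisely because there is less of it; the speculative flexibility of the classic version is pure carrying cost until the day it is actually needed.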

    • ugo@feddit.it · 3 hours ago

      I call it Mario driven development, because oh no! The princess is in a different castle.

      You end up with seemingly no code doing any actual work.

      You think you found the function that does the thing you want to debug? Nope, it defers to a different function, which calls a method of an injected interface, which creates a different process calling into a virtual function, which loads a dll whose code lives in a different repo, which runs an async operation deferring the result to some unspecified later point.

      And some of these layers silently catch exceptions eating the useful errors and replacing them with vague and useless ones.

    • Mr. Satan@lemmy.zip · 9 hours ago

      Yeah, this. Code for the problem you’re solving now, think about the problems of the future.

      Knowing OOP principles and patterns is just a tool. If you’re driving nails you’re fine with a hammer, if you’re cooking an egg I doubt a hammer is necessary.

    • Valmond@lemmy.world · 9 hours ago

      I remember the recommendation to use a typedef (or #define 😱) for integers, like INT32.

      Useful if you recompile it on a weird CPU or something, I guess. What a stupid idea. At least where I worked it was dumb; if someone knows any benefits, I’d gladly hear them!

      • Hetare King@piefed.social · 3 hours ago

        If you’re directly interacting with any sort of binary protocol, e.g. file formats, network protocols, etc., you definitely want your variable types to be unambiguous. For future-proofing, yes, but also because I don’t want to go confirm whether I remember correctly that long is the same size as int.

        There’s also clarity of meaning; unsigned long long is a noisy monstrosity, uint64_t conveys what it is much more cleanly. char is great if it’s representing text characters, but if you have a byte array of binary data, using a type alias helps convey that.

        And then there are type aliases that are useful because they have different sizes on different platforms like size_t.

        I’d say that generally speaking, if it’s not an int or a char, that probably means the exact size of the type is important, in which case it makes sense to convey that using a type alias. It conveys your intentions more clearly and tersely (in a good way), it makes your code more robust when compiled for different platforms, and it’s not actually more work; that extra #include <cstdint> you may need to add pays for itself pretty quickly.

        • Valmond@lemmy.world · 2 hours ago

          So we should not have #defines in the way, right?

          Like INT32, instead of “int”. I mean if you don’t know the size you probably won’t do network protocols or reading binary stuff anyways.

          uint64_t is good IMO, a bit long (why the _t?) maybe, but it’s not one of the atrocities I’m talking about where every project had its own defines.

          • xthexder@l.sw0.com · 30 minutes ago

            I’ve seen several codebases that have a typedef or using keyword to map uint64_t to uint64 along with the others, but _t seems to be the convention for built-in std type names.

      • SilverShark@programming.dev · 8 hours ago

        We had it because we needed to compile for Windows and Linux on both 32 and 64 bit processors. So we defined all our Int32, Int64, uint32, uint64 and so on. There were a bunch of these definitions within the core header file with #ifndef and such.

        • Valmond@lemmy.world · 7 hours ago

          But you can use a 64-bit int on 32-bit Linux, and vice versa. I never understood the benefits of tagging the stuff. You’d have to go pretty far back in time to find a platform where an int isn’t compiled to a 32-bit signed int. There were also already long long and size_t… why make new ones?

          Readability maybe?

          • Consti@lemmy.world · 2 hours ago

            Very often you need to choose a type based on the data it needs to hold. If you know you’ll need to store numbers of a certain size, use an integer type that can actually hold them; don’t make it dependent on a platform definition. Always using int can lead to really insidious bugs where a function may work on one platform and not on another due to overflow.

            • Valmond@lemmy.world · 2 hours ago

              Show me one.

              I mean, I have worked on 16-bit platforms, but nobody would use that code straight out of the box on some other incompatible platform; it doesn’t even make sense.

          • SilverShark@programming.dev · 3 hours ago

            It was a while ago indeed, and readability does play a big role. Also, it becomes easier to just type it out. Of course auto complete helps, but it’s just easier.

  • melfie@lemy.lol · 7 hours ago

    Like anything else, it can be useful in the right context if not followed too dogmatically, and instead is used when there is a tangible benefit.

    For example, I nearly always dependency inject dependencies with I/O, because I can then inject test doubles with no I/O for fast and stable integration tests. Sometimes this also improves re-usability; for example, a client for one vendor’s API can be substituted with another. But this benefit doesn’t materialize that often.

    I rarely dependency inject dependencies with no side-effects, because it’s rare that any tangible benefit materializes, and everyone deals with the additional complexity for years for no reason.

    With just I/O dependencies, I’ve generally found no need for a DI container in most codebases, but codebases that dependency inject everything make a DI container basically mandatory, and it’s usually extra overhead for nothing, IMO. There may be codebases where dependency injecting everything makes perfect sense, but I haven’t found one yet.
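    A rough sketch of that approach, with made-up names (OrderStore, OrderService): only the I/O boundary gets an interface, the test double lives in memory, and everything is wired by hand with no DI container.

```java
import java.util.ArrayList;
import java.util.List;

// The I/O boundary: in production this would talk to a database.
interface OrderStore {
    void save(String order);
    List<String> all();
}

// Test double with no I/O: integration tests stay fast and deterministic.
class InMemoryOrderStore implements OrderStore {
    private final List<String> orders = new ArrayList<>();
    public void save(String order) { orders.add(order); }
    public List<String> all() { return List.copyOf(orders); }
}

// Hand-wired constructor injection; pure logic is just called directly,
// not hidden behind its own interface.
class OrderService {
    private final OrderStore store;
    OrderService(OrderStore store) { this.store = store; }
    void place(String item) {
        if (item.isBlank()) throw new IllegalArgumentException("empty order");
        store.save(item.trim());
    }
}
```

    In a test you construct `new OrderService(new InMemoryOrderStore())`; in production you pass the real store. No container, one interface, and only at the point where side effects actually happen.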

  • iii@mander.xyz · 9 hours ago

    Yes, OOP and all the patterns are more often than not bullshit. Java is especially well known for that. “Enterprise Java” is a well-known meme.

    The patterns and principles aren’t useless. It’s just that in practice most of the time they’re used as hammers even when there’s no nail in sight.

    • SinTan1729@programming.dev · 3 hours ago

      As an amateur with some experience in the functional style of programming, anything that does SOLID seems so unreadable to me. Everything is scattered, and it just doesn’t feel natural. I feel like you need to know how things are named, and what the whole thing looks like, before anything makes any sense. I thought SOLID was supposed to make code more local. But at least to my eyes, it makes everything a tangled mess.

      • Matty Roses@lemmygrad.ml · 6 minutes ago

        It’s not supposed to make it more local, it’s supposed to conform to a single responsibility, and allow encapsulation of that.

      • iii@mander.xyz · 3 hours ago

        Especially in Java, it relies extremely heavily on the IDE to make sense, to me.

        If you’re a minimalist like me, and prefer your text editor to be separate from the linter, compiler, and linker, it’s not feasible, because everything is so verbose, spread out, and coupled based on convention.

        So when I do work in Java, I reluctantly bring out Eclipse. It just doesn’t make any sense without it.

        • SinTan1729@programming.dev · 3 hours ago

          Yeah, same. I like to code in Neovim, and OOP just doesn’t make any sense in there. Fortunately, I don’t have to code in Java often. I had to install Android Studio just because I needed to make a small bugfix in an app; it was so annoying. The fix itself was easy, but I had to spend around an hour trying to figure out where exactly the relevant code was.

        • iii@mander.xyz · 8 hours ago

          Can I bring my own AbstractSingletonBeanFactoryManager? Perhaps through some runtime dependency injection? Is there a RuntimePluginDiscoveryAndInjectorInterface I can implement for my AbstractSingletonBeanFactoryManager?

  • masterspace@lemmy.ca · 6 hours ago

    The SOLID principles are just that: principles, not rules.

    As someone else said, you should always write your code to be maintainable first and foremost, and extra code is extra maintenance work, so should only really be done when necessary. Don’t write an abstract interface unless multiple things actually need to implement it, and don’t refactor common logic until you’ve repeated it ~3 times.

    The DRY principle is probably the most overused one, because engineers default to thinking that less code = less work, and it’s a fun logic puzzle to figure out common logic and abstract it. But in reality many of these abstractions create more coupling and make your code less readable. Dan Abramov (of the React team) has a really good presentation on it that’s worth watching in its entirety.

    But I will say that sometimes these irritations are truly just language issues at the end of the day. Java was written in an era when the object-oriented paradigm was king. These days, functional programming is often described as what OO programming looks like if you actually follow all the SOLID principles, yet Java still isn’t a first-class functional language and probably never will be, because it has to maintain backwards compatibility. This is partly why more modern Java-compatible languages like Kotlin were created.

    A language like C# on the other hand is more flexible since it’s designed to be cross paradigm and support first class functions and objects, and a language like JavaScript is so flexible that it has evolved and changed to suit whatever is needed of it.

    Flexibility comes with a bit of a cost, but I think a lot of corporate engineers are over fearful of new things and change and don’t properly value the hidden costs of rigidity. To give it a structural engineering analogy: a rigid tree will snap in the wind, a flexible tree will bend.

  • aev_software@programming.dev · 9 hours ago

    The main lie about these principles is that they would lead to less maintenance work.

    But go ahead and change your database model. Add a field. Then add support for it to your program’s code base. Let’s see how many parts you need to change of your well-architected enterprise-grade software solution.

    • justOnePersistentKbinPlease@fedia.io · 9 hours ago

      Sure, it might be a lot of places, it might not (well-designed microservice arch says hi).

      What proper OOP design does is make the required changes predictable and easily documented, which in turn can make a many-step process faster.

      • Log in | Sign up@lemmy.world · 1 hour ago

        I have a hard time believing that microservices can possibly be a well designed architecture.

        We take a hard problem like architecture and communication and add to it networking, latency, potential calling protocol inconsistency, encoding and decoding (with more potential inconsistency), race conditions, nondeterminacy and more.

        And what do I get in return? json everywhere? Subteams that don’t feel the need to talk to each other? No one ever thinks about architecture ever again?

        I don’t see the appeal.

      • aev_software@programming.dev · 9 hours ago

        I guess it’s possible I’ve been doing OOP wrong for the past 30 years, knowing someone like you has experienced code bases that uphold that promise.

        • calliope@retrolemmy.com · 8 hours ago

          Right, knowing when to apply the principles is the thing that comes with experience.

          If you’ve literally never seen the benefits of abstraction doing OOP for thirty years, I’m not sure what to tell you. Maybe you’ve just been implementing boilerplate on short-term projects.

          I’ve definitely seen lots of benefits from some of the SOLID principles over the same time period, but I was using what I needed when I needed it, not implementing enterprise boilerplate blindly.

          I admit this is harder with Java because the “EE” comes with it but no one is forcing you to make sure your DataAccessObject inherits from a class that follows a defined interface.

  • brian@programming.dev · 5 hours ago

    most things should have an alternate implementation, just in the unit tests. imo that’s the main justification for most of SOLID.

    but also I’ve noticed that being explicit about your interfaces does produce better thought out code. if you program to an interface and limit your assumptions about implementation, you’ll end up with easier to reason about code.

    the other chunk is that consistency is the most important thing in a large codebase. some of these rules are followed too closely in places, but if I’m working my way through an unfamiliar area of the code, I can assume that it is structured based on the corporate conventions.

    I’m not really an oop guy, but in an oop language I write pretty standard SOLID style code. in rust a lot of idiomatic code does follow SOLID, but the patterns are different. writing traits for everything instead of interfaces isn’t any different but is pretty common

  • gezero@lemmy.bowyerhub.uk · 8 hours ago

    If you are creating interfaces for classes that will never have a second implementation, that sounds suspicious. What kind of classes are you abstracting? Are they classes representing data? I would be against creating interfaces for data classes; I would use records, and interfaces only in rare circumstances.

    Or are you complaining about abstracting classes with logic, as in services/controllers? Are you creating tests for those? Are you mocking external dependencies for your tests? Because mocks could also be considered different implementations of your abstractions.

    Some projects I saw definitely had taken the SOLID principles and made them SOLID laws… Sometimes it’s an overzealous architect, sometimes it’s a long-lasting project with no original devs left… The fact that you are thinking about it already puts you in front of many others…

    SOLID principles are principles for Object Oriented programming so as others pointed out, more functional programming might give you a way out.

  • ravachol@lemmy.world · 9 hours ago

    My opinion is that you are right. I switched to C from an OOP and C# background, and it has made me a happier person.

  • alexc@lemmy.world · 9 hours ago

    SOLID is generally speaking a good idea. In practice, you have to know when to apply it.

    It sounds like your main beef in Java is the need to create interfaces for every class. This is almost certainly over-engineering, especially if you are not using dependency inversion, which IMHO is the main point of SOLID. For the most part your inversions need interfaces, and that allows you to create simple, performant unit tests.

    You also mention OOP. It has its place, but I would also suggest you look at functional programming, too. IMHO, OOP should be used sparingly, as it creates its own form of coupling, especially if you use “Base” classes to share functionality; such sharing is usually better approached using composition. Put another way: in a mature project, if you cannot add a feature while reusing a large portion of the existing code unmodified, you have a code smell.
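    The composition point can be sketched in a few lines (names invented for illustration): both services *have* the shared behavior rather than inheriting from a base class, so neither is coupled to the other’s hierarchy.

```java
// Shared functionality lives in its own small class.
class Auditor {
    String stamp(String message) { return "[audited] " + message; }
}

// Composition: each service holds an Auditor instead of extending one.
class PaymentService {
    private final Auditor auditor = new Auditor();
    String pay(String account) { return auditor.stamp("paid " + account); }
}

class RefundService {
    private final Auditor auditor = new Auditor();
    String refund(String account) { return auditor.stamp("refunded " + account); }
}
```

    With a `BaseAuditedService` parent instead, every subclass would inherit the whole base surface and any future change to it; with composition, each service depends only on the one method it actually calls.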

    To give you an example, I joined a company about a year ago that coded the way you are describing. Since I joined, we’ve been able to move towards a more functional approach. Our code is now significantly smaller, has gone from about 2% to 60% unit-testable, and our velocity is way faster. I’d also suggest that for most companies, this is what they want, not what they currently have. There are far too many legacy projects out there.

    So, yes - I very much agree with SOLID but like anything it’s a guideline. My suggestion is learn how to refactor towards more functional patterns.

    • aev_software@programming.dev · 9 hours ago

      In my experience, when applying functional programming to a language like java, one winds up creating more interfaces and their necessary boilerplate - not less.