

C sucks too. Also, stop misgendering yourself; when you respect yourself more, you’ll respect others more, and then you’ll stop saying that people are cancer.


Weren’t you taught not to use dehumanizing language when you were a child?


I’d like you to go write kernel code for a few years first. But we go to Lemmy with the machismo we have, not the machismo we wish we had. Write a JSON recognizer; it should have the following signature and correctly recognize ECMA-404 JSON texts, returning 0 on success and 1 on failure.
```c
int recognizeJSON(const char*);
```
I estimate that this should take you about 120 lines of code. My prior estimated defect rate for C programs is about one per 60 lines. So, to get under par, your code should have fewer than two bugs.
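For concreteness, here is the rough shape I have in mind: a recursive-descent recognizer where every production takes a cursor into the text and returns the cursor just past whatever it matched, or NULL on failure. This is a sketch rather than an entry in the challenge; the helper names are mine, and I make no claim that it comes in under par on defects.

```c
/* Sketch of an ECMA-404 recognizer.  Each parse* helper takes a cursor and
 * returns the cursor just past what it matched, or NULL on failure. */
#include <string.h>

static const char *parseValue(const char *s);

static const char *skipWS(const char *s) {
    while (*s == ' ' || *s == '\t' || *s == '\n' || *s == '\r') s++;
    return s;
}

static const char *parseString(const char *s) {
    if (*s++ != '"') return NULL;
    while (*s != '"') {
        if ((unsigned char)*s < 0x20) return NULL;  /* controls and NUL are forbidden */
        if (*s != '\\') { s++; continue; }
        s++;                                        /* escape sequence */
        if (*s && strchr("\"\\/bfnrt", *s)) { s++; continue; }
        if (*s != 'u') return NULL;
        s++;
        for (int i = 0; i < 4; i++, s++)            /* \uXXXX */
            if (!*s || !strchr("0123456789abcdefABCDEF", *s)) return NULL;
    }
    return s + 1;
}

static const char *parseDigits(const char *s) {
    if (*s < '0' || *s > '9') return NULL;
    while (*s >= '0' && *s <= '9') s++;
    return s;
}

static const char *parseNumber(const char *s) {
    if (*s == '-') s++;
    if (*s == '0') s++;                             /* a leading zero stands alone */
    else if (!(s = parseDigits(s))) return NULL;
    if (*s == '.' && !(s = parseDigits(s + 1))) return NULL;
    if (*s == 'e' || *s == 'E') {
        s++;
        if (*s == '+' || *s == '-') s++;
        if (!(s = parseDigits(s))) return NULL;
    }
    return s;
}

/* Shared skeleton for objects and arrays: comma-separated items in brackets. */
static const char *parseSeq(const char *s, char open, char close,
                            const char *(*item)(const char *)) {
    if (*s++ != open) return NULL;
    s = skipWS(s);
    if (*s == close) return s + 1;
    for (;;) {
        if (!(s = item(s))) return NULL;
        s = skipWS(s);
        if (*s == close) return s + 1;
        if (*s++ != ',') return NULL;
        s = skipWS(s);
    }
}

static const char *parseMember(const char *s) {
    if (!(s = parseString(s))) return NULL;
    s = skipWS(s);
    if (*s++ != ':') return NULL;
    return parseValue(skipWS(s));
}

static const char *parseValue(const char *s) {
    switch (*s) {
    case '{': return parseSeq(s, '{', '}', parseMember);
    case '[': return parseSeq(s, '[', ']', parseValue);
    case '"': return parseString(s);
    case 't': return strncmp(s, "true", 4)  ? NULL : s + 4;
    case 'f': return strncmp(s, "false", 5) ? NULL : s + 5;
    case 'n': return strncmp(s, "null", 4)  ? NULL : s + 4;
    default:  return parseNumber(s);
    }
}

int recognizeJSON(const char *text) {
    const char *end = parseValue(skipWS(text));
    return (end && *skipWS(end) == '\0') ? 0 : 1;
}
```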


They had you right the first time. You have a horde of accounts and your main approach is to post Somebody Else’s Opinion for engagement. You have roughly the political sophistication of a cornstalk and you don’t read the articles that you submit. You don’t engage on anything you’ve posted except to defend your style of posting. There’s no indication that you produce Free Software. You use Lemmy like Ghislaine Maxwell used Reddit.


RPython, the toolchain used to build JIT compilers like PyPy, supports both Windows and non-Windows interpretations of standard Python int. This leads to an entire module’s worth of specialized arithmetic. In RPython, the usual approach to handling the size of ints is to immediately stop worrying about it and let the compiler tell you if you got it wrong; an int will have at least seven-ish bits but anything more is platform-specific. This is one of the few systems I’ve used where I have to cast from an int to an int: the compiler can’t prove that the two ints are the same size, so a runtime cast might be needed, but it can’t tell me whether it actually is.
Of course, I don’t expect you to accept this example, given what a whiner you’ve been down-thread, but at least you can’t claim that nobody showed you anything.


Java is bad but object-based message-passing environments are good. Classes are bad, prototypes are also bad, and mixins are unsound. That all said, you’ve not understood SOLID yet! S and O say that just because one class is Turing-complete (with general recursion, calling itself) does not mean that a single class is the optimal design; they can be seen as opinions rather than hard rules. L is literally a theorem of any non-shitty type system; the fact that it fails in Java should be seen as a fault of Java. I is merely the idea that a class doesn’t have to implement every interface or be coercible to any type; that is, there can be non-printable non-callable non-serializable objects. Finally, D is merely a consequence of objects not being functions; when we want to apply a function f to a value x but both are actually objects, both f.call(x) and x.getCalled(f) open a new stack frame with f and x local, and all of the details are encapsulation details.
So, 40%, maybe? S really is not that unreasonable on its own; it reminds me of a classic movie moment from “Meet the Parents” about how a suitcase manufacturer may have produced more than one suitcase. We do intend to allocate more than one object in the course of operating the system! But also it perhaps goes too far in encouraging folks to break up objects that are fine as-is. O makes a lot of sense from the perspective that code is sometimes write-once immutable such that a new version of a package can add new classes to a system but cannot change existing classes. Outside of that perspective, it’s not at all helpful, because sometimes it really does make sense to refactor a codebase in order to more efficiently use some improved interface.


This is too facile. First, in terms of capability maturity, management is not the goal of a fully-realized line of industry. Instead, the end is optimization, a situation where everything is already repeatable, defined, and managed; in this situation, our goal is to increase, improve, and simplify our processes. In stark contrast, management happens prior to those goals; the goal of management is to predict, control, and normalize processes.
Second, management is the only portion of a business which is legible to the government. The purpose of management is to be taxable, accountable, and liable, not to handle the day-to-day labors of the business. The Iron Law insists that the business will divide all employees into the two camps of manager and non-manager based solely on whether they are employed in pursuit of this legibility.
Third, consider labor as prior to employment; after all, sometimes people do things of their own volition without any manager telling them what to do. So, everybody is actually a non-manager at first! It’s only in the presence of businesses that we have management, and only in the presence of capitalism that we have owners. Consider that management inherits the same issues of top-down command-and-control hierarchy as ownership or landlording.


Look, just because you don’t click bluelinks doesn’t imply that anybody using them is a bot. Sometimes Wikipedia really does have useful information. If you don’t want to get talked to in a condescending manner, don’t reply to top-level posts with JAQs or sealions.


Y’know, knowing that you live in DACH, I can’t help but read this as sour grapes: if only you were allowed to be more fascist, but those mean old online communists just won’t let you!


Given that I’ve never seen you in the Ruby, Rails, or Sinatra communities, I’m going to guess that you aren’t actually part of this conversation. Also, you’ve been fairly obvious in your cryptofascism since this Lemmy instance was set up; you’re one of several users that have ensured that programming.dev has a fairly bad federated reputation, and I’m not sure that anybody really cares whether you’re included given that you don’t appear to publish Free Software or anything else useful.


Weird way to say that you haven’t heard of yinglets.


No, this is an explanation of dataflow programming. Functional programming is only connected to dataflow programming by the fact that function application necessarily forces data to flow. Quoting myself on the esolang page for “functional paradigm”:
The functional paradigm of language design is the oldest syntactic and semantic tradition in computer science, originating in the study of formal logic. Features of languages in the functional paradigm are not consistent, but often include:
- The syntactic traditions of combinatory logic and lambda calculus, carried through the Lisp, ML, and APL families
- Applicative trees and combining forms
- A single unified syntax for expressions, statements, declarations, and other parts of programs
- Domain-theoretic semantics which admit an algebra of programs
- Deprecation or removal of variables, points, parameters, and other binders in favor of point-free/tacit approaches
This definition comes from a famous 1970s lecture, Backus’s 1977 Turing Award lecture on liberating programming from the von Neumann style. The author of the explanation above is a Scala specialist and likely doesn’t realize that Scala is only in the functional paradigm to the extent that it inherits from Lisps and MLs; from that perspective, functional programming might appear to be a style of writing code rather than a school of programming-language design.


Not at the moment, no. EU law doesn’t have anything like the First Amendment guaranteeing a right to speech, which means that there can’t be a court case like Bernstein v. United States (DJB’s case) serving as a permanent obstruction. Try seating more Pirates first.
You have no idea what an abstraction is. You’re describing the technological sophistication that comes with maturing science and completely missing out on the details. C was a hack because UNIX’s authors couldn’t fit a Fortran compiler onto their target machine. Automatic memory management predates C. Natural-language processing has been tried every AI summer; it was big in the 60s and big in the 80s (and big in the 90s in Japan) and will continue to be big until AI winter starts again.
Natural-language utterances do not have an intended or canonical semantics, and pretending otherwise is merely delaying the painful lesson. If one wants to program a computer — a machine which deals only in details — then one must be prepared to specify those details. There is no alternative to specification and English is a shitty medium for it.


Haskell isn’t the best venue for learning currying, monads, or other category-theoretic concepts because Hask is not a category. Additionally, the community carries lots of incorrect and harmful memes. OCaml is a better choice; its types don’t yield a category, but ML-style modules certainly do!
@thingsiplay@beehaw.org and @Kache@lemmy.zip are oversimplifying; a monad is a kind of algebra carried by some endofunctor. All endofunctors are chainable and have return values; what distinguishes a monad is a particular signature along with some algebraic laws that allow for refactoring inside of monad operations. Languages like Haskell can’t state or enforce those laws; for a Haskell-like example where the laws are actually part of the definition, check out 1lab’s Cat.Diagram.Monad in Agda.
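To spell out that signature and those laws: a monad is an endofunctor T equipped with natural transformations η : Id ⇒ T (Haskell’s return/pure) and μ : T∘T ⇒ T (join) satisfying

$$\mu \circ T\mu = \mu \circ \mu T \qquad \mu \circ T\eta = \mu \circ \eta T = \mathrm{id}_T$$

The first equation is what justifies reassociating nested binds; the second says that returning and then immediately joining is a no-op.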


I’m most familiar with the now-defunct Oregon University System in the USA. The topics I listed off are all covered under extras that aren’t included in a standard four-year degree; some of them are taught at an honors-only level and others are only available to graduate students. Every class in the core was either teaching a language, applying a language, or covering discrete maths; and the language selections were industry-driven: C, Java, Python, and Haskell were all standard teaching languages, and I also recall courses in x86 assembly, C++, and Scheme.


The typical holder of a four-year degree from a decent university, whether it’s in “computer science”, “datalogy”, “data science”, or “informatics”, learns about 3-5 programming languages at an introductory level and knows about programs, algorithms, data structures, and software engineering. Degrees usually require a bit of discrete maths too: sets, graphs, groups, and basic number theory. They do not necessarily know about computability theory (models and limits of computation), information theory (thresholds, tolerances, entropy, compression, machine learning), or the foundations of graphics, parsing, cryptography, and other essentials for the modern desktop.
For a taste of the difference, consider English WP’s take on computability vs my recent rewrite of the esoteric-languages page, computable. Or compare WP’s page on Conway’s law to the nLab page which I wrote on Conway’s law; it’s kind of jaw-dropping that WP has the wrong quote for the law itself and gets the consequences wrong.


Indeed, the best attribution gives it to Upton Sinclair in 1917, and it likely reflected the anxieties of WW1, not WW2; Sinclair wasn’t saying it themselves, but attributing it to a government employee. This doesn’t disconnect them, but it shows that WW1 was the common factor.


My $HOME is recreated on boot and lives in RAM. I don’t care what gets written there; unless I know about something and deliberately save it to disk, it won’t survive a reboot. It would be nice if tools were not offenders here, but that doesn’t mean that we can’t defend ourselves somewhat.
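For anyone who wants the same setup, one way to do it is a tmpfs mount in /etc/fstab; the mountpoint, uid/gid, and size below are placeholders for your own:

```
tmpfs  /home/alice  tmpfs  mode=0700,uid=1000,gid=1000,size=4G  0  0
```

Anything worth keeping then has to be copied or symlinked onto real storage on purpose.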
Your analogy is bogus because this is the Fediverse and we can defederate from tankies without giving them money. The entire topic revolves around how Framework spends money. Whataboutism in this context is a classic defense of fascism, for what it’s worth.