• 0 Posts
  • 19 Comments
Joined 28 days ago
Cake day: February 10th, 2025

  • I grew up in the age of Internet forums, in the ancient days of the late '90s and early '00s, before the smartphone (our Eternal September) dumped every human being onto the landscape.

    Having small communities is so much better. I often hear people complain that Lemmy isn’t big because it has no communities with 3 million people the way some subreddits do. Much of the reason Reddit is shit is how big it is.

    On the old Internet, you could know the people who were part of the community. I have old friends, people I’ve known for 20+ years, whom I met playing MUDs on BBSes. Now I couldn’t tell you the name of a single person I’ve interacted with on social media in the past year.

    Digg and Reddit came on the scene and pulled a huge crowd because we didn’t yet have The Algorithm to recommend content; those link-aggregation sites were the first time people got a taste of that ‘see all of the newest things from every corner of the Internet in a single place, curated by a process that produces good results’ experience that we now just expect from recommendation algorithms.

    The old communities were essentially starved of population. Nobody wants to put in the social effort required to become part of a community when they can just scroll Reddit mindlessly.

    Very few people even had a chance to experience the magic of spontaneous communities full of people working together.


    If you still want a taste, check out the Something Awful forums.

    The barrier to entry is higher: you have to learn the rules (read the rules) and the social norms, and there’s a one-time $10 fee, so getting banned has some sting to it (read the rules).

    In exchange, you get an actual community of people. Many of the people posting there (or in the various Discords now, because that’s a thing) have been on SA since they were edgy teenagers and are now professionals with careers. That isn’t to say there are no trolls and assholes (those exist in any community), but the ratio of good to bad posters is much higher.

    One of their interesting moderation decisions is that rule-breaking posts are rarely ever deleted. If a person is probated (temp-banned) or banned, their comment stays up with “(User was Probated/Banned for this post)” edited into the post, so you can see, and hopefully learn from, the bad behavior. In addition, there’s a ‘Wall of Shame’ section where you can see everyone who’s been actioned against, which moderator did it, and the reason.

    I’ve always hated the fact that comments on Reddit just disappear. You can never see what a mod removed or why it was removed. That opens the door to all kinds of bad and manipulative behavior by people with moderation access.


  • Immigrant advocates certainly think so. Catalyze/Citizens, a pro-immigration group, said the change would “weaponize digital platforms” against immigrants. “This is not immigration policy—it is authoritarianism and undemocratic surveillance,” Beatriz Lopez, the group’s executive director, said in an emailed statement. “Trump is turning online spaces into surveillance traps, where immigrants are forced to watch their every move and censor their speech or risk their futures in this country. Today it’s immigrants, tomorrow it’s U.S. citizens who dissent with Trump and his administration.”

    The US has already turned online spaces into surveillance traps where people are forced to watch their every move and censor their speech or risk their futures.

    There is already a legally defined class of people who are forced to register their social media accounts for monitoring by the government, and people cheered when those laws were implemented.

    But you did not speak up, because you were not a sex offender.

    Now, they’re using that same surveillance apparatus to target immigrants.

    And people will not speak up, because they’re not immigrants.

    ~~Protesters~~ antisemitic rioting antifa terrorists will be next.


  • I used 3.7 on a project yesterday (refactoring to use a different library). I provided the documentation and examples in the initial context, and it refactored the code correctly. The agent took about 20 minutes to complete the rewrite, and it took me about 2 hours to review the changes. Doing the changes manually would have taken me the entire day. The cost was about $10.

    It was less successful when I attempted to YOLO the rest of my API credits by giving it a large project (using langchain to create an input device that uses local AI to dictate as if it were a keyboard). Some parts of the code are correct; the langchain stuff is set up as I would expect. Other parts are simply incorrect and unworkable: it assumes it can bind global hotkeys in Wayland, configuration requires editing Python files instead of pulling from a configuration file (a sketch of what I expected is at the end of this comment), it created install scripts instead of PKGBUILDs, etc.

    I liken it to having an eager newbie. It doesn’t know much, makes simple mistakes, but it can handle some busy work provided that it is supervised.

    I’m less worried about AI taking my job than about my job turning into being a middle-manager for AI teams.
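
    For the config complaint above, here’s the kind of handling I expected instead of hard-coded constants. This is just a sketch of the pattern; every name and path in it is made up, not the generated project’s actual code:

    ```python
    # Hypothetical sketch: read user settings from a config file
    # instead of making people edit constants in the Python source.
    import tomllib  # stdlib since Python 3.11
    from pathlib import Path

    DEFAULTS = {"model": "whisper-small", "hotkey": "ctrl+alt+d"}
    CONFIG_PATH = Path("~/.config/dictate/config.toml").expanduser()

    def load_config(path: Path = CONFIG_PATH) -> dict:
        """Merge user settings over defaults; use defaults if no file exists."""
        if path.exists():
            with path.open("rb") as f:  # tomllib requires binary mode
                return {**DEFAULTS, **tomllib.load(f)}
        return dict(DEFAULTS)

    config = load_config()
    ```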


  • I’m carrying on multiple conversations in this thread, so I’ll just copy what I said in a different thread:

    Of course people like these features, these algorithms are literally trained to maximize how likable their recommendations are.

    It’s like how people like heroin because it perfectly fits our opioid receptors. The problem is that you can’t simply trust that the person giving you heroin will always have your best interests in mind.

    I understand that the vast majority of people are simply going to follow the herd and use the thing that is most like Twitter, recommendation feed and all. However, I also believe that it’s a bad decision on their part, and that the companies taking all of these people into their alternative social networks are just going to be part of the problem in the future.

    We, as the people who are actively thinking about this topic (as opposed to the people just moving to the blue Twitter because it’s the current popular meme in the algorithm), should be considering the difference between good recommendation algorithm use and abusive use.

    Having social media be controlled by private entities which use black box recommendation algorithms should be seen as unacceptable, even if people like it. Bluesky’s user growth is fundamentally due to people recognizing that Twitter’s systems are being used to push content that they disagree with. Except they’re simply moving to another private social media network that’s one sale away from being the next X.

    It’d be like living under a dictatorship and deciding that you’ve had enough so you’re going to move to the dictatorship next door. It may be a short-term improvement, but it doesn’t quite address the fundamental problem that you’re choosing to live in a dictatorship.


  • It also means decoupling the recommendation system from people’s feeds.

    Having a “you may like this” section is a lot less abusable than “the next item in your doomscroll is <recommendation>”.

    Bluesky is just another Twitter. Everything that happened to Twitter can happen to Bluesky. It’s not fundamentally changing anything except trading Elon for a different owner.

    It’s not a bad change, people want Twitter after all… but it isn’t fixing any problems in the underlying incentive structures or algorithm control.

    The core problem is that curated feeds allow the owner to substitute their own recommendations for the ones that would actually interest you.

    Until the owner can’t do that, the social network is always one sale away from being the next Twitter/Truth Social.

    Bluesky is fixing social media by changing the owner, Mastodon/ActivityPub is fixing social media by getting rid of the owner.

    I think the latter is the better choice for how to structure these things.


  • They’re good at predicting what people want to see, yes. But that isn’t the real problem.

    The problem isn’t that they predict what you want to see; it’s that they use that prediction to serve a feed that’s 90% what you want to see and 10% what the owner of the algorithm wants you to see (there’s a toy sketch of this at the end of this comment).

    X uses that to mix in alt-right feeds. Google uses it to mix in messages from the highest bidder on their ad network, and Amazon uses it to mix in recommendations for their own products.

    You can’t know what they’re adding to the feed, or how much of it is real recommendations based on your needs and wants versus artificially boosted content based on the needs and wants of the algorithm’s owner.

    Is your next TikTok really the highest-ranked piece of recommended content, or is it something being boosted on behalf of someone else? You can’t know.

    This has become an incredibly important topic since people are now using these systems to drive political outcomes which have real effects on society.
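
    As a toy sketch of that 90/10 mix (every name and number below is invented; the point is that real platforms never let you see this code or the blend ratio it uses):

    ```python
    # Toy model of an owner-skewed feed. Both branches look identical
    # to the user scrolling; only the operator knows the blend.
    import random

    def next_item(organic, boosted, boost_rate=0.10):
        """Return the next feed item: mostly predicted user interest,
        sometimes the owner's pick, with no visible difference."""
        if boosted and random.random() < boost_rate:
            return random.choice(boosted)  # owner's pick, shown as organic
        return organic.pop(0)              # genuinely relevant item

    organic = ["cat video", "woodworking clip", "local news"]
    boosted = ["sponsored gadget", "owner-aligned politics"]
    print([next_item(organic, boosted) for _ in range(3)])
    ```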


  • Some things are incredibly appealing to everyone and also bad for society. We have to treat those things responsibly.

    Recommendation algorithms can be useful for helping you discover content, but only as a tool that you can choose to use. If I can select a person I like listening to and get a list of other people I may be interested in (assuming the algorithm is simply matching me to similar peers and not also adding in some “Elon/Bezos/whoever really wants you to see these guys” skew), that would be a useful tool; there’s a sketch of what I mean at the end of this comment.

    However, recommendation algorithms should not be making the second-by-second decision about what you see next. The next item in your feed should always be there because of a decision you made, not as a means of “maximizing engagement” plus whatever skew the owner wants to add.

    Of course people like these features, these algorithms are literally trained to maximize how likable their recommendations are.

    It’s like how people like heroin because it perfectly fits our opioid receptors. The problem is that you can’t simply trust that the person giving you heroin will always have your best interests in mind.

    Recommendation algorithms are a useful tool, but only when used in moderation. Attaching one directly to your brain via a curated content feed is incredibly unhealthy for both the individual and society.
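
    For the “matching me to similar peers” part, something this simple would do. A bare-bones sketch with invented data; no engagement loop, no owner skew:

    ```python
    # Opt-in "people similar to this creator" lookup based on plain
    # co-follow overlap. You ask for it; it never decides your feed.
    follows = {
        "alice": {"creator_a", "creator_b", "creator_c"},
        "bob":   {"creator_a", "creator_b", "creator_d"},
        "carol": {"creator_e"},
    }

    def similar_creators(user, creator, follows):
        """Rank creators by how often they're co-followed with `creator`,
        skipping anyone the user already follows."""
        scores = {}
        for other, followed in follows.items():
            if other != user and creator in followed:
                for c in followed - {creator} - follows[user]:
                    scores[c] = scores.get(c, 0) + 1
        return sorted(scores, key=scores.get, reverse=True)

    print(similar_creators("alice", "creator_a", follows))  # -> ['creator_d']
    ```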


  • For stuff like Twitter-likes and TikTok-likes I want an algorithm.

    Until recommendation algorithms are transparent and auditable, choosing to use a private service with a recommendation algorithm is giving some random social media owner the control of the attention of millions of people.

    Curate your own feed, subscribe to people that you find interesting, go and find content through your social contacts.

    Don’t fall into the trap of letting someone (e.g., Elon Musk) choose 95% of what you see and hear.

    Algorithmic recommendations CAN be good. But when they’re privately owned and closed to public inspection, there is no guarantee that they’re working in your best interest.


  • I grew up when the Internet was essentially a bunch of forum communities and 10k people was a lot. Something Awful felt massive with 300k registered users.

    You don’t need 150,000,000 people on a subreddit to have a good community.

    Communities are far better when you can recognize people’s names and remember them from previous interactions. On Reddit, you’ll probably never talk to the same person twice.

    You can’t have a community full of bots if there are only a few hundred people who all know each other.