The HackerOne report that does not even apply has 44 upvotes.
What do upvotes mean on HackerOne?
I guess, at least here, they’re mindless “looks interesting” or “looks well worded” votes or something?
“One way you can tell is it’s always such a nice report. Friendly phrased, perfect English, polite, with nice bullet-points … an ordinary human never does it like that in their first writing,”
Damn straight.
I shoot for this but am detectable by constantly making edits to make my point more understandable, adding something relevant that I thought of later (literally editing this post right now to include “adding something relevant that I thought of later”), or to correct typos.
Stenberg, saying that he’s “had it” and is “putting my foot down on this craziness,” suggested that every suspected AI-generated HackerOne report will have its reporter asked to verify if they used AI to find the problem or generate the submission. If a report is deemed “AI slop,” the reporter will be banned. “We still have not seen a single valid security report done with AI help,” Stenberg wrote.
I appreciate this because I’d hate to get my issue removed as AI slop because I wasn’t enough of an asshole and didn’t make enough English mistakes. I’m all for rejecting AI slop, but it’d feel bad to be the false positive deemed “not human enough” and have my efforts tossed out too.
I may or may not be one of those autistic people who tried to compensate for my social deficiencies and inability to read the room by doing my best to be polite, nice, and inoffensive. (It helps that those qualities do not conflict with who I want to be at all.) And “nice and inoffensive” helps you easily subclass/multiclass into corpo dialect…
I find it really easy to tell the difference between a human being polite, neat and well-spoken, and an AI being the same (but soulless). I don’t know if I could put it into words though, there’s just something about AI that lacks subjectivity? A human would phrase something in a certain way and stick with it, because that’s the way they experience it, while the AI takes a phrasing at random, only caring about gaining lexical and grammatical points.
I also think humans overestimate their ability to write clearly and correctly. There’s always some noise in there, even if they’re going full corpo-speak. Unless it’s written-by-committee meaningless corpo, but then I don’t even read it beyond the first sentence. It’s very obvious when someone has tried to strip all meaning from a sentence and the result is not far from AI.
Oh yeah, I’m in the same boat. I’ll go back to an issue I opened and keep adding context to make sure it’s as fleshed out as possible, because English isn’t my first language. Plus AuDD in my case.
For what it’s worth, if you didn’t tell me English wasn’t your first language, I would not have known from this comment.
Unless you’re an Autist who has heavily specced into the Corpo dialect/persona/mask.
Not autistic, but I write like ChatGPT. And I really like formatting.
Without getting into a massive discussion about self-diagnosis and the validity of various tests across demographics and whatnot…
https://embrace-autism.com/raads-r/
If your total score is 65 or over on this, you may wanna look into a formal diagnosis.
I caught a stray bullet reading that.
I feel this is one of the few instances where I can say ‘takes one to know one’ and not mean it in some kind of rude or belittling way.
Also: Etiquette!
That’s the word I couldn’t think of; it’s used in Shadowrun to describe the … set of vocabulary and base cultural knowledge that functionally constitutes a social class within those games.
Linus was ahead of his time in the human-identifiability stakes.
I remembered seeing a post on Mastodon a while ago about an AI-generated vulnerability report, and this article reminded me of that. Turns out, that old one was also about curl. He has been dealing with this bullshit for a while now. https://mastodon.social/@bagder/111245232072475867
On that old one, the indignant attitude of the guy who “reported” the vulnerability still irritates me. He admits that he used AI (this was when Google’s AI was called Bard, so that’s what he means by “I have searched in the Bard”), and still has this tone of “how dare you not have fixed this by now!”
Those who use AI to report to open source projects and flood the volunteer devs who keep the world going should be disqualified from using those open source projects to begin with (even though that’s not feasible).
Consider that it may not be intended to be helpful, but could actually be a malicious DDoS attempt. If it slows devs down from fixing real vulnerabilities, it empowers those holding zero-days for a widely used package (like curl).
Those who use AI to report to open source projects and flood the volunteer devs who keep the world going should be disqualified from using those open source projects
I propose a GPL-noAI licence with this clause inserted.
so not GPL at all, then
Public with conditions on behaviour which can lead to your licence being revoked, just like the current GPL. 🤷‍♂️
The license doesn’t get revoked. It does not apply to things it does not allow in the first place.
Some kinds of restrictions are easier to describe and assess than others.
I doubt someone who generates AI slop reports would care about the restrictions anyway.
I still don’t get it, like, why tf would you use AI for this kind of thing? It can barely write a basic Python script, let alone actually handle a proper codebase or detect a vulnerability, even if it’s the most obvious vulnerability ever.
It’s simple, actually: curl has a bug bounty program where reporting even a minor legitimate vulnerability can land you a minimum of $540.
If they ever actually identify one, make a very public post stating that, as it was identified using AI, no bounty will be paid.
What are the odds that you’re actually going to get a bounty out of it? Seems unlikely that an AI would hallucinate an actually correct bug.
Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am but it’s possible that there’s some more malicious idea behind it.
The user who submitted the report that Stenberg considered the “last straw” seems to have a history of getting bounty payouts; I have no idea how many of those were AI-assisted, but it’s possible that by using an LLM to automate making reports, they’re making some money despite having a low success rate.
Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am
Yes. That is the problem being reported in this article. There are many, many people with complete and unblemished optimism about how useful LLMs are, to the point where they don’t understand that it’s optimism and don’t understand why other people won’t take them seriously.
Some of them are professionals in related fields.
Scenario: I wanna land a sweet security job, but I don’t want to have to work for it.
We have seen several scientific articles published and later found to have been generated via AI.
If somebody is willing to ruin their academic reputation, something that takes years to build, don’t you think people are also using AI to cheat at job interviews and land high-paying IT jobs?
I think it might be the developers of that AI, letting their system make bug reports to train it, see what works and what doesn’t (as is the way with training AI), and not caring about the people hurt in the process.
Just use AI to remove all the AI slop!
But if we use AI to fight the AI, then what do we use to fight the AI we released to fight the AI‽
Cats. If you don’t believe me, ask Australians. Cats all the way.
More AI 😈
Like turtles, it’s AI all the way down.
Behold: Perverse AI Incentive
I have a dream that one day it will be possible to identify which AI a given piece of slop came from, so the owners of said slop generator can be charged for releasing such a defective product on the world, uncontrolled.
Ooh, embedded steganography!
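If that ever becomes possible, it would presumably be via statistical watermarks embedded in the generated text. Here’s a toy sketch (purely illustrative, not any vendor’s real scheme) of a “green-list” style detector in the vein of Kirchenbauer et al.’s LLM watermarking proposal: the generator quietly prefers tokens from a pseudorandom half of the vocabulary, and the detector checks for a statistically implausible excess of those tokens.

```python
# Toy sketch of statistical watermark detection (hypothetical scheme,
# loosely modelled on Kirchenbauer et al.'s "green list" idea).
# A watermarked generator would bias its sampling toward "green"
# tokens; the detector counts green tokens and runs a z-test.
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    # Deterministically put ~half of all (prev, token) pairs on the
    # green list; a real scheme would key this on a secret.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(tokens: list[str]) -> float:
    # Without a watermark each pair is green with p = 0.5, so the
    # green count is Binomial(n, 0.5). A large z-score means the text
    # is suspiciously rich in green tokens, i.e. likely watermarked.
    n = len(tokens) - 1
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

sample = "I have searched in the Bard and it reports a vulnerability".split()
print(f"z = {watermark_z_score(sample):.2f}")  # ~0 for unwatermarked text
```

Plain human text should score near zero, while heavily watermarked output pushes the z-score way up, which is how you’d tie the slop back to a specific generator (assuming the vendor cooperates, which is doing a lot of work in this dream).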
On a barely related note:
It would be funny to watch Markiplier try to take out a Tesla Bot, then ASIMO, and then a humanoid Boston Dynamics robot, in hand-to-hand combat.
Oddly specific, but I agree
I mean… the thumbnail looks almost exactly like Markiplier to me.
All these years later, still can’t get his damn voice out of my head, purely from clicking on ‘really great vid’ links from randos on Discord… bleck.
Just rewrite curl in Rust so you can immediately close any AI slop reports talking about memory safety issues. /s