• zygo_histo_morpheus@programming.dev · 1 day ago

    What are the odds that you’re actually going to get a bounty out of it? It seems unlikely that an AI would hallucinate a genuinely valid bug.

    Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am, but it’s possible that there’s some more malicious idea behind it.

    • psivchaz@reddthat.com · 5 hours ago

      AI could probably find the occasional real bug. If you use AI to file 500 bug reports in the time it takes a researcher to find and report one, and only 2 of them pay out, you’ve still come out ahead.

      But in the process, you’ve wasted tons of time for the developers, who have to sort through and read the reports and verify the validity of each issue. I think that’s part of the problem: even if it sometimes finds a legitimate issue, these people are making it someone else’s problem to do the real work.

    • BatmanAoD@programming.dev · 17 hours ago

      The user who submitted the report that Stenberg considered the “last straw” seems to have a history of getting bounty payouts; I have no idea how many of those were AI-assisted, but it’s possible that by using an LLM to automate making reports, they’re making some money despite having a low success rate.

    • CandleTiger@programming.dev · 21 hours ago

      > Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am

      Yes. That is the problem being reported in this article. There are many, many people with complete, unblemished optimism about how useful LLMs are, to the point where they don’t realize it’s optimism and don’t understand why other people won’t take them seriously.

      Some of them are professionals in related fields.