Hey fellow Lemmings,

I’m thrilled to announce the launch of AI News Summary Bot, a project that brings you news summaries! The bot is now live in our community at !news_summary@lemmy.dbzer0.com.

The bot is still in its early stages, and I’m excited to hear your feedback and suggestions on how to improve it. Feel free to share your thoughts and ideas.

Repository: If you’re interested in contributing or exploring the code behind the bot, you can find the repository at https://github.com/muntedcrocodile/ai_news_bot.

Donations: If you’d like to donate so I can spend more time on development, please do: monero:8916FjDhEqXJqX9Koec9WaZ4QBQAa6sgW6XhQhXSjYWpQiWB42GsggEh73YAFGF86GU2gEE1TTRdWSspuMgpWGkiPHkgBTX

Stay informed, and let’s build this community together!

EDIT: grammar

    • rompe@lemm.ee · 3 days ago

      It clearly states what it is and it stays in its own community. I don’t see a problem here.

        • secret300@lemmy.sdf.org · 2 days ago

          Ye no shit. I’m just saying that journalists already write slop, and it would be done better by AI.

          The AI summarizing their slop just sounds better because half of journalists think they’re Stephen King instead of writing concise informational pieces. The AI will cut out the shit and just give the info in the summary.

      • Admiral Patrick@dubvee.org · 3 days ago

        The concept is inherently flawed when you introduce an aspect (LLM) that can and will hallucinate (read: make shit up) when it’s trying to present reality.

        As far as I’m concerned, there is no place for that anywhere remotely close to news.

        • Staden_ スタデン@pawb.social · 3 days ago

          Correct, but humans also exaggerate and lie a lot in the news, so maybe this AI could look through different sources and identify inaccuracies.

          I haven’t looked at the source code tho…

          • Staden_ スタデン@pawb.social · 3 days ago

            After checking the source code, well… it just summarizes the posts. Doesn’t help much with the human error problem.

            But as mentioned by OP, it’s in an early stage of development, and they plan to add features to “find the missing perspectives on an issue” and analyze political alignment information. So in the future maybe it could become a useful tool.

        • The model I have used gives a summary that is 60% identical to one provided by a human, with an overall conceptual accuracy of >95%. I was very careful with my model selection and implementation to ensure hallucinations are extremely rare, if they occur at all. I’m not just feeding “summarise this: <text>” into a general-purpose LLM (known for hallucinations); I break the article into chunks at sentence breaks, then summarise each chunk by passing it directly to a purpose-built summarisation model.
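A rough sketch of the chunk-at-sentence-breaks approach described above (not the bot's actual code — the function names, the character budget, and the injected summariser are my own illustration; the real bot would plug in its purpose-built summarisation model where `summarise_chunk` is called):

```python
import re
from typing import Callable

def split_sentences(text: str) -> list[str]:
    # Naive splitter: break after ., ! or ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def chunk_at_sentence_breaks(text: str, max_chars: int = 500) -> list[str]:
    # Group whole sentences into chunks no longer than max_chars,
    # so no chunk ever cuts a sentence in half.
    chunks, current = [], ""
    for sentence in split_sentences(text):
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

def summarise_article(text: str, summarise_chunk: Callable[[str], str]) -> str:
    # summarise_chunk would wrap the purpose-built summarisation model;
    # it is injected here so the chunking logic stands on its own.
    return " ".join(summarise_chunk(c) for c in chunk_at_sentence_breaks(text))
```

Because each chunk is a run of complete sentences, the summarisation model never sees a truncated thought, which is one way to reduce the hallucinations that plague naive "summarise this whole article" prompts.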