Instructions here: https://github.com/ghobs91/Self-GPT

If you’ve ever wanted a ChatGPT-style assistant but fully self-hosted and open source, Self-GPT is a handy script that bundles Open WebUI (chat interface front end) with Ollama (LLM backend).

  • Privacy & Control: Unlike ChatGPT, everything runs locally, so your data stays with you—great for those concerned about data privacy.
  • Cost: Once set up, self-hosting avoids monthly subscription fees. You’ll need decent hardware (ideally a GPU), but there’s a range of model sizes to fit different setups.
  • Flexibility: Open WebUI and Ollama support multiple models and let you switch between them easily, so you’re not locked into one provider.
  • The Hobbyist@lemmy.zip · 4 months ago

    What's great is that with ollama and open-webui you can just as easily run it all locally on one computer, using the open-webui pip package, as on a remote server, using the container version of open-webui.
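
    A minimal sketch of both routes (commands as documented by Open WebUI; the port, volume name, and image tag may differ by version):

      # local route: the pip package
      pip install open-webui
      open-webui serve              # UI on http://localhost:8080 by default

      # server route: the container image
      docker run -d -p 3000:8080 \
        -v open-webui:/app/backend/data \
        --name open-webui \
        ghcr.io/open-webui/open-webui:main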

    I've run both, and the webui is really well done. It offers a number of advanced options, like the system prompt, but also memory features, document upload for RAG, and even a built-in Python IDE for when you want to execute Python functions. You can even enable web browsing for your model.
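
    For what it's worth, the system prompt you set in the webui ends up (as far as I understand it) as a request against the local Ollama API, which you can also hit directly; a sketch, with the model tag only as an example:

      curl http://localhost:11434/api/chat -d '{
        "model": "llama3.1:8b",
        "messages": [
          {"role": "system", "content": "Answer tersely, in bullet points."},
          {"role": "user", "content": "What is retrieval-augmented generation?"}
        ],
        "stream": false
      }'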

    I'm personally very pleased with open-webui and ollama; they work wonders together. Highly recommend it! The latest llama3.1 (in 8B and 70B variants) and llama3.2 (in 1B and 3B variants) work very well, the latter even on CPU only. Give it a shot, it is so easy to set up :)
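
    If you want to try those exact models, the tags in the Ollama library look like this (sizes approximate):

      ollama pull llama3.2:3b        # small enough for CPU-only use
      ollama run llama3.2:3b "Summarise this paragraph: ..."
      ollama pull llama3.1:8b        # happier with a GPU
      ollama pull llama3.1:70b       # needs serious (V)RAM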

    • Tobberone@lemm.ee · 4 months ago

      Do you know of any nifty resources on how to set up RAG using ollama/webui (or even fine-tuning)? I've tried to set it up, but the documents I provide don't seem to be analysed properly.

      I'm trying to get the LLM to read/summarise a certain type of (wordy) file, and it seems the query prompt is limited to about 6k characters.
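
      For context, that ~6k character ceiling sounds like the default Ollama context window (2,048 tokens) rather than a hard webui limit; one thing worth trying, as an untested sketch with an example model and size, is building a variant with a larger num_ctx and selecting it in Open WebUI:

        # Modelfile
        FROM llama3.1:8b
        PARAMETER num_ctx 16384

        # build the variant, then pick "llama3.1-16k" as the model in the webui
        ollama create llama3.1-16k -f Modelfile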

    • jonno@discuss.tchncs.de · 4 months ago

      Are you running these LLMs in containers completely cut off from the internet? My understanding was that the “local first” LLMs aren't truly offline and only try to answer basic queries offline before contacting their provider for support, which would invalidate the privacy argument.

      • The Hobbyist@lemmy.zip · 4 months ago

        The interface, open-webui, can run in a container, but ollama runs as a service on your system, from my understanding.
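
        That split looks roughly like this (a sketch; OLLAMA_BASE_URL plus the host-gateway mapping is the documented way to point a containerised open-webui at a host ollama, though flags may differ by version):

          # ollama usually already runs as a system service on localhost:11434; if not:
          ollama serve

          # open-webui in a container, pointed at the host's ollama
          docker run -d -p 3000:8080 \
            --add-host=host.docker.internal:host-gateway \
            -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
            -v open-webui:/app/backend/data \
            --name open-webui \
            ghcr.io/open-webui/open-webui:main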

        The models are local and only answer queries locally by default; it all happens on your system without any additional tools. Now, if you want to give them internet access, you can. It is an option you have to set up, and open-webui makes that possible, though I have not tried it myself; I have only seen the option.

        I have never heard of any LLM “answering basic queries offline before contacting its provider for support”. It's almost impossible for the LLM to do that by itself without you setting things up for it that way.