@Talismanic
Created November 5, 2024 20:23

Running Llama3.2 with Ollama and Open WebUI

Run Ollama Docker

docker run -d -v /my/ollama/local/directory:/root/.ollama -p 11434:11434 --name my-ollama ollama/ollama
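Optionally, check that the container is up and that Ollama's API answers on the mapped port (it should reply with a short "Ollama is running" message):

docker ps --filter name=my-ollama
curl http://localhost:11434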

Run Open-WebUI for Ollama

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
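If the Web UI does not pick up the Ollama container automatically, one option is to point it at the host-mapped port via the OLLAMA_BASE_URL environment variable (an Open WebUI setting), for example:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main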

Download and run llama3.2

docker exec -it my-ollama ollama run llama3.2
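The first run downloads the model, which can take a while. You can confirm it is available inside the container afterwards:

docker exec my-ollama ollama list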

Exit Llama from the CLI

Play around in the CLI a bit if you want; I prefer this. In the CLI, type /bye to exit Llama 3.2. If you don't want this step at all, just remove the -it part from the docker exec command in the previous step.
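Equivalently, ollama pull just downloads the model without opening a chat session:

docker exec my-ollama ollama pull llama3.2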

Sign Up for Open WebUI

  1. Open http://localhost:3000
  2. Press Sign Up
  3. Create an Account
  4. Login

Choose Model

From the top-left corner of the Web UI, you can now choose the llama3.2 model.

Use of Other Models

If you want to use other models, just run the following command to bring them into Ollama, then reload the Web UI:

docker exec -it my-ollama ollama run {model_name}
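For example, to add Mistral (the model name here is only an illustration; any model from the Ollama library works the same way):

docker exec -it my-ollama ollama run mistral

Pulling alone (ollama pull mistral) is usually enough for the model to show up in the Web UI after a reload, without starting an interactive session.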
@Talismanic (Author)

Thanks a ton @abidkhan484. Just to add: if you want to control the resource limits, you can set them in the docker-compose file with the deploy section. It will look like below:

deploy:
  resources:
    limits:
      cpus: '6'
      memory: 16G
    reservations:
      cpus: '1'
      memory: 4G
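For context, a minimal docker-compose.yml using the deploy section with the same image might look like this (the service name and resource numbers are just examples):

services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - /my/ollama/local/directory:/root/.ollama
    deploy:
      resources:
        limits:
          cpus: '6'
          memory: 16G
        reservations:
          cpus: '1'
          memory: 4G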

@abidkhan484

Thanks @Talismanic Bhai. Learned something new and unlocked the power of Docker Compose. 🚀
