docker run -d -v /my/ollama/local/directory:/root/.ollama -p 11434:11434 --name my-ollama ollama/ollama
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
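To sanity-check the setup before moving on (these verification commands are generic Docker/Ollama checks, not part of the original gist):

```bash
# Both containers should show up as running
docker ps

# Ollama's API should answer on the mapped port with "Ollama is running"
curl http://localhost:11434
```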
docker exec -it my-ollama ollama run llama3.2
Play around in the CLI for a bit if you want; I prefer this. In the CLI, type /bye
to exit Llama 3.2. If you don't want this interactive step, just remove the -it
part from the docker exec command in the previous step.
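If you only want the model downloaded without opening a chat session at all, `ollama pull` does that non-interactively (an alternative to stripping -it):

```bash
docker exec my-ollama ollama pull llama3.2
```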
- Open http://localhost:3000
- Press Sign Up
- Create an account
- Log in
From the top-left corner of the WebUI, you can now choose the llama3.2 model.
If you want to use other models, use the following command to bring them into the WebUI, then reload the WebUI:
docker exec -it my-ollama ollama run {model_name}
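For example, to fetch the qwen2.5-coder:0.5b model used in the compose section below and confirm it is available (ollama list is the standard subcommand for this):

```bash
# Pull and start the model; type /bye to leave the chat prompt
docker exec -it my-ollama ollama run qwen2.5-coder:0.5b

# Confirm the model now shows up locally
docker exec -it my-ollama ollama list
```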
Docker Compose version
This is a Docker Compose version of the gist, using the qwen2.5-coder:0.5b model so that it runs on low-resource machines. Note: the qwen2.5-coder:32b model can be used if resources are available.
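As a rough illustration only, a compose file wiring the two containers together could look like the sketch below; the service names, named volumes, and the OLLAMA_BASE_URL environment variable are assumptions based on the docker run commands above, not taken verbatim from the gist.

```yaml
services:
  ollama:
    image: ollama/ollama
    container_name: my-ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama        # persist models, as in the docker run example
    restart: always

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    environment:
      # Point the WebUI at the ollama service on the compose network (assumed setting)
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    restart: always

volumes:
  ollama:
  open-webui:
```

Once saved as docker-compose.yml, `docker compose up -d` brings both services up. If you want to experiment with different models, the command below is useful.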