This guide helps you set up a local, ChatGPT-style chat interface (Open WebUI) backed by LLaMA 3.2, accessible from macOS or from a tablet on your local network.
Prerequisites
- Docker is installed on your machine.
- Ollama is installed.
-
Pull the LLaMA 3.2 Model using Ollama
Open a terminal and run:
ollama pull llama3.2
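After the pull completes, you can confirm the model is available locally. A quick check (assuming the ollama CLI is on your PATH):

```shell
# List locally available models and look for llama3.2;
# print a hint if the model is missing.
ollama list 2>/dev/null | grep llama3.2 || echo "llama3.2 not found - run: ollama pull llama3.2"
```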
-
Start the Ollama Service
Open WebUI connects to Ollama running on the host, so make sure Ollama is running before proceeding. If it is not already running, start it:
ollama serve
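To verify that Ollama is up before launching the container, you can probe its API (this assumes Ollama's default port, 11434):

```shell
# Probe Ollama's version endpoint; a short timeout keeps the
# check fast when nothing is listening on port 11434.
if curl -s --max-time 2 http://localhost:11434/api/version >/dev/null; then
  echo "Ollama is reachable"
else
  echo "Ollama is not reachable - start it with: ollama serve"
fi
```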
-
Run the Open WebUI Docker Container
Now start the Open WebUI container. This maps port 3000 on your machine to port 8080 inside the container, persists data in a named volume, and lets the container reach Ollama on the host:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
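Once the command returns, it can help to confirm the container actually started (the container name open-webui comes from the command above):

```shell
# Show the container's status if it exists; otherwise print a hint.
docker ps --filter name=open-webui --format '{{.Names}}: {{.Status}}' 2>/dev/null | grep open-webui \
  || echo "open-webui container not running - check: docker logs open-webui"
```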
-
Find Your Local IP Address
Once the container is running, get your local IP address and print the access URL (en0 is typically the Wi-Fi interface on a Mac):
echo "http://$(ipconfig getifaddr en0):3000"
This displays the link to your local Open WebUI instance. Open it in a browser on your Mac or on a tablet connected to the same network.
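On Macs where Wi-Fi is not en0, the command above prints a URL with an empty host. A slightly more defensive sketch (the interface names en0/en1 are typical but not guaranteed):

```shell
# Try Wi-Fi (en0) first, then Ethernet (en1); fall back to
# localhost so the printed URL is always usable on this machine.
ip="$(ipconfig getifaddr en0 2>/dev/null || ipconfig getifaddr en1 2>/dev/null || true)"
echo "http://${ip:-localhost}:3000"
```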
-
Register
Open the link and follow the on-screen instructions to create an account; the first account registered becomes the admin. Then select llama3.2 as the model and start chatting with your local instance.