@AvasDream
Created October 9, 2024 07:39

Local ChatGPT Setup with LLaMA 3.2

This guide walks you through setting up a local ChatGPT-style instance with LLaMA 3.2 on macOS, accessible from your browser or from a tablet on the same network.

Requirements

  1. Docker is installed on your machine.
  2. Ollama is installed.
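
Before continuing, you can sanity-check both requirements from a terminal. A minimal sketch (the `docker` and `ollama` CLI names are standard; the exact status messages printed here are this sketch's own wording):

```shell
# Check that the docker and ollama CLIs are on PATH.
for tool in docker ollama; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT found - install it before continuing"
  fi
done

# `docker info` fails unless the daemon is reachable
# (e.g. Docker Desktop is actually open, not just installed).
if docker info >/dev/null 2>&1; then
  echo "Docker daemon: running"
else
  echo "Docker daemon: not reachable"
fi
```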

Installation Steps

  1. Pull the LLaMA 3.2 Model using Ollama

    Open a terminal and run:

    ollama pull llama3.2
  2. Ensure the Ollama Service Is Running

    Open WebUI talks to Ollama on the host, so Ollama must be running. If it is not already running (for example via the macOS menu bar app), start it in a terminal:

    ollama serve
  3. Run the Open WebUI Docker Container

    Start the container with the following command, which maps host port 3000 to the container's port 8080 and lets the container reach Ollama on the host via host.docker.internal:

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  4. Find Your Local IP Address

    Once the container is running, print the URL using your Mac's local IP address (the en0 interface). Note that Open WebUI serves plain HTTP on this port:

    echo "http://$(ipconfig getifaddr en0):3000"

    This prints the link to your local ChatGPT instance. Open it in your browser, or on a tablet connected to the same network.

  5. Register

    Follow the on-screen instructions to register and start using your local ChatGPT with LLaMA 3.2.
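
Putting the steps together, a quick health check can confirm everything is wired up. This is a sketch assuming the defaults used above (container name open-webui, host port 3000, model llama3.2); the fallback to 127.0.0.1 when `ipconfig getifaddr` returns nothing is this sketch's own choice:

```shell
# Build the WebUI URL from the Mac's en0 address; fall back to
# localhost if `ipconfig getifaddr` is unavailable or empty.
ip="$(ipconfig getifaddr en0 2>/dev/null || true)"
ip="${ip:-127.0.0.1}"
url="http://${ip}:3000"
echo "Open WebUI URL: ${url}"

# Is the llama3.2 model present? (non-fatal if ollama is not running)
ollama list 2>/dev/null | grep -q 'llama3.2' \
  && echo "model: llama3.2 pulled" \
  || echo "model: llama3.2 not listed"

# Is the container up, and does the UI answer over plain HTTP?
docker ps --filter name=open-webui --format '{{.Names}}: {{.Status}}' 2>/dev/null
curl -fsS -o /dev/null "${url}" \
  && echo "Open WebUI: responding" \
  || echo "Open WebUI: not reachable yet"
```

If the last line reports "not reachable yet", give the container a few seconds to finish starting, then rerun the check.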
