Ollama is an engine that allows you to run large language models (LLMs) such as Deepseek R1 on your local machine.
- Download Ollama from the Ollama Download Page.
- Follow the installation instructions specific to your operating system.
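To confirm the installation worked, print the installed version from a terminal:

ollama --version

If a version number appears, Ollama is ready to use.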
To start the Deepseek R1 model, open your terminal:
- macOS: Open Terminal
- Windows: Open Command Prompt (CMD)
Deepseek R1 comes in several sizes. The 7B model (a 4.7GB download) is recommended for most home computers. Run the following command:
ollama run deepseek-r1:7b
For other model versions, please refer to the Deepseek R1 Documentation.
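For example, assuming the size tags currently published in the library, a lighter or heavier variant can be started the same way; smaller models need less memory, while larger ones give better answers but run slower:

ollama run deepseek-r1:1.5b
ollama run deepseek-r1:14b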
Ollama will download and set up the model, which may take some time depending on your internet speed.
Once the installation is complete, you can start interacting with Deepseek R1 through the terminal. If you prefer a web user interface (UI) similar to ChatGPT, follow the optional steps below.
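Once the model loads, Ollama drops you into an interactive prompt. A session looks roughly like this (the question is just a placeholder); type /bye to end the chat:

>>> Why is the sky blue?
(the model streams its answer here)
>>> /bye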
To set up a web UI for Deepseek R1, you'll first need Docker.
- Go to the Docker Download Page and select your operating system to install Docker.
- Sign up for a Docker account and ensure you select Docker Personal (not Business).
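Before moving on, you can verify that Docker is installed and its background service is running:

docker --version
docker info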
To run the Open WebUI with Docker, execute the following command in your terminal:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
This command sets up the web-based interface for Deepseek R1.
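In brief, here is what each part of the command does:
- -d runs the container in the background (detached).
- -p 3000:8080 publishes the container's internal port 8080 on port 3000 of your machine, which is why the UI is reached on port 3000.
- --add-host=host.docker.internal:host-gateway lets the container reach Ollama running on your host machine.
- -v open-webui:/app/backend/data stores chats and settings in a named Docker volume so they survive container restarts.
- --name open-webui gives the container a name that later docker commands can refer to.
- --restart always restarts the container automatically whenever Docker starts.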
- Wait approximately 2-3 minutes for the Open WebUI to start; you can check on its progress with the commands shown after this list.
- Open your web browser and navigate to localhost:3000. Note that the command above maps the container's port 8080 to port 3000 on your machine, so the UI is served on port 3000, not 8080.
- Create an admin account when prompted.
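If the page does not load, you can inspect the container from the terminal; docker ps lists running containers and docker logs prints Open WebUI's startup output:

docker ps
docker logs open-webui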
You can now chat with Deepseek R1 through a ChatGPT-like interface provided by the Open WebUI!
If you restart your computer, follow these steps to get the model and UI running again:
- Open your terminal and execute:
ollama run deepseek-r1:7b
- Open the Docker application and confirm the Open WebUI container is running. Because the container was created with --restart always, it should come back up on its own once Docker starts; if you prefer the terminal, see the command after this list.
- Launch your browser and visit localhost:3000.
- Enjoy using Deepseek R1 locally on your computer!
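If you would rather skip the Docker application entirely, the container can also be started by name from the terminal:

docker start open-webui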
If you're interested in using a different model version, check out the Deepseek R1 Library for available options.
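Ollama can also show you which models are already on disk, and remove one to free space before pulling a different size:

ollama list
ollama rm deepseek-r1:7b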
Now you have Deepseek R1 running locally with a web UI!