
@iam-veeramalla
Last active February 26, 2026 03:01
Claude Code integration with Ollama to use local models

Run Claude with the power of Local LLMs using Ollama

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh
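Not part of the gist, but a quick sanity check after the installer finishes — a minimal sketch assuming the install script put `ollama` on your PATH (open a new shell first if the command is not found yet):

```shell
# Confirm the ollama binary is reachable and report its version
if command -v ollama >/dev/null 2>&1; then
  ollama_status="installed: $(ollama --version 2>/dev/null)"
else
  ollama_status="ollama not found on PATH"
fi
echo "$ollama_status"
```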

Pull the Model

ollama pull glm-4.7-flash # or gpt-oss:20b (for better performance)
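To verify the pull succeeded, you can ask the local Ollama server which models it has. This is an optional check, assuming the server is running on its default port 11434; the fallback keeps the command from failing when the server is not up yet:

```shell
# List locally available models via Ollama's REST API (default port 11434)
tags=$(curl -fsS http://localhost:11434/api/tags 2>/dev/null) || tags='{"models":[]}'
echo "$tags"
```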

Install Claude

curl -fsSL https://claude.ai/install.sh | bash
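As with Ollama, you can optionally confirm the Claude Code CLI landed on your PATH before moving on — again assuming a fresh shell picks up the installer's changes:

```shell
# Confirm the claude CLI is reachable and report its version
if command -v claude >/dev/null 2>&1; then
  claude_status="installed: $(claude --version 2>/dev/null)"
else
  claude_status="claude not found on PATH"
fi
echo "$claude_status"
```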

Run Claude with Ollama

ollama launch claude --model glm-4.7-flash # or ollama launch claude --model gpt-oss:20b
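Before wiring the model into Claude, a one-shot smoke test can confirm the model actually answers. This sketch assumes you pulled `glm-4.7-flash`; `ollama run MODEL PROMPT` prints a single reply and exits:

```shell
# Smoke-test the pulled model with a one-shot prompt
model="glm-4.7-flash"   # swap for gpt-oss:20b if that's what you pulled
reply=$(ollama run "$model" "Reply with the single word: ready" 2>/dev/null) \
  || reply="(could not reach the model; is the Ollama server running?)"
echo "$reply"
```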

@chinthakindi-saikumar

Here are the clear steps, @vinoth-6.

1. Install Ollama
Open CMD/terminal and run the command below:
curl -fsSL https://ollama.com/install.sh | sh

2. Pull the Model
Pull a model based on your system configuration using one of the commands below:
ollama pull glm-4.7-flash # or gpt-oss:20b (for better performance)
ollama pull gemma:2b
Optional: run ollama run gemma:2b and work with it locally.

3. Install Claude
Install Claude using the command for your platform.
macOS, Linux, WSL:
curl -fsSL https://claude.ai/install.sh | bash
Windows CMD:
curl -fsSL https://claude.ai/install.cmd -o install.cmd && install.cmd && del install.cmd

4. Run Claude with Ollama
Launch Claude using the command below:
ollama launch claude --model glm-4.7-flash # or ollama launch claude --model gpt-oss:20b
