
@chinthakindi-saikumar
Forked from iam-veeramalla/claude_with_ollama.md
Last active February 24, 2026 12:58
Claude Code integration with Ollama to use local models

Run Claude with the power of Local LLMs using Ollama

Install Ollama

  1. Open a terminal (CMD on Windows) and run the command below: `curl -fsSL https://ollama.com/install.sh | sh`
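Before moving on, it can help to confirm the installer actually put `ollama` on your PATH. A minimal sketch, assuming the default install location; it prints the version if the CLI is found, or a hint to re-run the script if not:

```shell
# Check whether the ollama CLI is available, without failing the shell either way.
check_ollama() {
  if command -v ollama >/dev/null 2>&1; then
    ollama --version
  else
    echo "ollama not found; re-run the install script above" >&2
  fi
}
check_ollama
```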

Pull the Model

  1. Pull a model based on your system configuration using one of the commands below: `ollama pull glm-4.7-flash`, `ollama pull gpt-oss:20b` (for better performance), or `ollama pull gemma:2b`
  2. Optional: run `ollama run gemma:2b` to chat with the model locally
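Once a model is pulled, you can sanity-check it over Ollama's local REST API before wiring up Claude. A hedged sketch, assuming Ollama is serving on its default port 11434 and that you pulled `gemma:2b` (swap in whichever model you chose); `build_payload` just assembles the JSON body that `/api/generate` expects:

```shell
# Build the JSON body for Ollama's /api/generate endpoint.
# stream=false asks for a single response object instead of a token stream.
build_payload() {
  model="$1"
  prompt="$2"
  printf '{"model": "%s", "prompt": "%s", "stream": false}' "$model" "$prompt"
}

# Send the prompt to the local Ollama server (requires ollama to be running).
curl -s http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d "$(build_payload gemma:2b 'Say hello in one word.')"
```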

Install Claude

  1. Install Claude using the command below for your platform:
  • macOS, Linux, WSL: `curl -fsSL https://claude.ai/install.sh | bash`
  • Windows CMD: `curl -fsSL https://claude.ai/install.cmd -o install.cmd && install.cmd && del install.cmd`
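The two install commands above can be selected automatically from `uname`. A small sketch that only *prints* the matching command (the `uname` detection is an assumption; run the printed line yourself):

```shell
# Echo the Claude install command matching the current OS; does not execute it.
claude_install_cmd() {
  case "$(uname -s)" in
    Linux*|Darwin*)
      echo 'curl -fsSL https://claude.ai/install.sh | bash' ;;
    CYGWIN*|MINGW*|MSYS*)
      echo 'curl -fsSL https://claude.ai/install.cmd -o install.cmd && install.cmd && del install.cmd' ;;
    *)
      echo "unsupported platform: $(uname -s)" >&2 ;;
  esac
}
claude_install_cmd
```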

Run Claude with Ollama

  1. Launch Claude using one of the commands below: `ollama launch claude --model glm-4.7-flash`, or `ollama launch claude --model gpt-oss:20b` for the larger model
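Before launching, it is worth confirming that the name you pass to `--model` matches a model you actually pulled. A hedged sketch that queries Ollama's `/api/tags` endpoint (default port 11434 assumed) and extracts the model names; `extract_names` is a small helper defined here, not an Ollama command:

```shell
# Pull the "name" fields out of /api/tags JSON, one per line.
# A rough grep-based extraction; a JSON tool like jq would be more robust.
extract_names() {
  grep -o '"name":"[^"]*"' | sed 's/"name":"\(.*\)"/\1/'
}

# List locally available models (requires ollama to be running).
curl -s http://localhost:11434/api/tags | extract_names
```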