# An ollama Dockerfile with a bunch of models built in.
# Useful for when you want to work offline.
# You can use this with bionic-gpt - when adding the model, the name should match the model name,
# e.g. 'llama3.1:8b-instruct-q4_0'.
# Build with: docker build -t ollama_multiple:latest .
FROM ollama/ollama

# download.sh is assumed to sit alongside this Dockerfile; it is not part of
# the ollama/ollama base image.
COPY download.sh /download.sh

# Add the classic llama3 model
RUN /download.sh 'llama3:8b'
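The `/download.sh` helper invoked above is not shipped with the `ollama/ollama` base image, so it has to be provided alongside the Dockerfile. A minimal sketch of such a script, assuming it starts the server in the background, pulls the requested model, and then shuts the server down so the `RUN` step exits cleanly:

```shell
#!/bin/sh
# download.sh - hypothetical helper for baking a model into the image at
# build time; not part of the ollama/ollama base image.
# Usage: /download.sh 'llama3:8b'
set -e

main() {
    MODEL="$1"

    # Start the ollama server in the background so we can pull against it
    ollama serve &
    SERVER_PID=$!

    # Give the server a moment to come up before pulling
    sleep 5

    # Pull the model; its weights land under /root/.ollama in this layer
    ollama pull "$MODEL"

    # Stop the server so the build step finishes
    kill "$SERVER_PID"
}

# Only run when a model name was given on the command line
if [ "$#" -ge 1 ]; then
    main "$1"
fi
```

Each `RUN /download.sh '<model>'` line then adds one model to the image, at the cost of one image layer per model.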
Gist by @danishcake, last active August 4, 2025.